This repository has been archived by the owner on Dec 4, 2018. It is now read-only.

Some cryptography notes #10

Open
lodewijkadlp opened this issue Jun 6, 2017 · 4 comments

Comments

@lodewijkadlp

First, Go is not the go-to language for cryptography, and BLAKE2 is relatively novel.

If any attack based on recursive hashing patterns becomes possible (slightly more likely with BLAKE2 than with SHA-2), this will fail gruesomely.

For most process lifetimes the padding bits will be all zeroes because of your extra counter, which makes such an attack extra easy. After that, the value of the extra counter is still very predictable (even over the network). Initialise the counters randomly: seeding them from a hash of the original entropy is still better than fixed starting values, and just /copying/ the original entropy in is better too!

"little risk of overflow": fastrand.go:76 is still a race condition. Why not alternate incrementing the two counters? (Not that I think our computers can overrun a 128-bit counter anytime soon; I wouldn't bother if I were you.)
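For context, the scheme under discussion, as I understand it, keeps a 128-bit counter as two atomic uint64s (illustrative names, not the library's actual code); the window between the low word wrapping and the high word being bumped is where the race lives:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Two 64-bit words form one 128-bit counter. The low word is bumped
// atomically on every draw; the high word is bumped only when the low
// word wraps to zero. Between those two increments, another goroutine
// could observe a repeated (hi, lo) pair; that brief window is the race.
var counterLo, counterHi uint64

func next() (hi, lo uint64) {
	lo = atomic.AddUint64(&counterLo, 1)
	if lo == 0 { // the low word just wrapped
		atomic.AddUint64(&counterHi, 1)
	}
	return atomic.LoadUint64(&counterHi), lo
}

func main() {
	for i := 0; i < 3; i++ {
		hi, lo := next()
		fmt.Println(hi, lo)
	}
}
```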

"The counter ensures that the result is unique to this thread." Then use the thread ID, not a massive counter...

Instead of using two huge counters, it would be better to use some of that space to add extra entropy throughout (as little as 1 bit). You may request extra random bits in another thread and just hash them in. By adding a little entropy like that, you compensate for the reduced security involved in producing more hashes. And if you ever do overrun a 128-bit counter, well, it's fine!

Actually, when you add entropy like that, I would just choose a 64-bit counter and allow it to overrun.
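A sketch of that idea (a hypothetical refresh step, again with SHA-256 standing in for BLAKE2b; this illustrates the proposal above, not what the library does):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// refresh folds a few fresh random bytes into the existing hash state,
// so the generator's security does not rest solely on the entropy it
// started with. Even a little extra entropy per refresh changes every
// future output.
func refresh(state [32]byte) [32]byte {
	extra := make([]byte, 4)
	if _, err := rand.Read(extra); err != nil {
		panic(err)
	}
	return sha256.Sum256(append(state[:], extra...))
}

func main() {
	var state [32]byte // initial state; would be seeded elsewhere in practice
	state = refresh(state)
	fmt.Printf("refreshed state: %x\n", state)
}
```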

Hashes sure are amazing, to allow this sort of operation, aren't they? 👍

@DavidVorick
Member

initialise the counters randomly

Easy enough. Though, it doesn't matter unless blake2b breaks, at which point you'd want to change the base hashing algorithm for this anyway.

Why not alternate incrementing the two counters?

Not sure what you mean, but just alternating the counters only gives you 65 bits total, because you'll hit the '0-0' starting state again after just 2^65 operations (each counter gets every other operation and wraps after 2^64 increments of its own). Maybe you had some other staggering technique in mind though?

The big issue here is that if we switch to mutexes instead of atomics, our total throughput gets obliterated. The race condition involves a small number of repeating values, and can only happen if you've got 2^63 atomic operations queued and for some reason the scheduler has starved out the one that is going to increment the second counter.

It's unlikely that you get to 2^63 anyway: a 32-core 4 GHz processor drawing one round of entropy per clock cycle would take over 2 years to get that far. Seems unlikely to me; the extra counter was really more of a cryptographic nuance than something intended for a practical situation.
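The arithmetic checks out, give or take rounding:

```go
package main

import "fmt"

func main() {
	// 32 cores at 4 GHz, one round of entropy per clock cycle:
	opsPerSecond := 32.0 * 4e9 // 1.28e11 draws per second
	const secondsPerYear = 365.25 * 24 * 3600
	// Time to perform 2^63 draws:
	years := float64(uint64(1)<<63) / opsPerSecond / secondsPerYear
	fmt.Printf("%.1f years\n", years) // prints 2.3 years
}
```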

Instead of using two huge counters, it would be better to use some of that space to add extra entropy throughout (as little as 1 bit). You may request extra random bits in another thread and just hash them in. By adding a little entropy like that, you compensate for the reduced security involved in producing more hashes. And if you ever do overrun a 128-bit counter, well, it's fine!

The extra entropy added throughout execution could in theory be manipulated by an attacker; it's better to keep the same entropy once you've established enough.

Hashes sure are amazing, to allow this sort of operation, aren't they? 👍

Can't argue there. Hashes are beautiful.

@lukechampine
Member

"The counter ensures that the result is unique to this thread." then use the threadID, not a massive counter...

Go doesn't have the equivalent of thread IDs for goroutines. (Well, technically it does, but there's no way to access them without runtime hacks and the Go authors clearly went through pains to keep people from using them.) I wouldn't characterize a 128-bit counter as "massive" either; it's small enough to have negligible footprint and large enough to dispel any unease about overflow scenarios.

@lodewijkadlp
Author

lodewijkadlp commented Jul 13, 2017 via email

@DavidVorick
Member

Ehh. That's not really how it works, though. Lowering your entropy progressively makes things more guessable. And it's not unheard of for software to run for several months; we have several cores, and I believe this is multithreaded (is it? I don't remember), so "2 years" is not actually that compelling :(

Your servers are also not drawing 1 round of entropy per clock cycle, unless you've got an ASIC. Standard CPUs take something like 1000 cycles to do a hash, so 2 years turns into 2,000 years.

But please, make this library throw an exception after a few months / many calls (or just reinitialise).

Maybe that is the best option. I'll consider it.
