Some cryptography notes #10
> initialise the counters randomly

Easy enough. Though, it doesn't matter unless blake2b breaks, at which point you'd want to change the base hashing algorithm for this anyway.

> Why not alternate incrementing the two counters?

Not sure what you mean, but just alternating the counters only gives you 65 bits total, because you'll hit the '0-0' starting point again after just 2^65 operations. Maybe you had some other staggering technique in mind though? The big issue here is that if we switch to mutexes instead of atomics, our total throughput gets obliterated. The race condition involves a small number of repeating values, and can only happen if you've got 2^63 atomic operations queued and for some reason the scheduler has starved out the one that is going to increment the second counter. It's unlikely that you get to 2^63 anyway; a 32-core 4 GHz processor drawing one round of entropy per clock is going to take 2 years to get that far. Seems unlikely to me; the extra counter was really more of a cryptographic nuance than it was intended for a practical situation.

> Instead of using two huge counters, it would be better to use some of that space to add extra entropy throughout (as little as 1 bit).

The extra entropy added throughout execution could in theory be manipulated by an attacker; it's better to keep the same entropy once you've established enough.

> Hashes sure are amazing, to allow this sort of operation, aren't they?

Can't argue there. Hashes are beautiful.

> "The counter ensures that the result is unique to this thread." Then use the threadID, not a massive counter...

Go doesn't have the equivalent of thread IDs for goroutines. (Well, technically it does, but there's no way to access them without runtime hacks, and the Go authors clearly went through pains to keep people from using them.) I wouldn't characterize a 128-bit counter as "massive" either; it's small enough to have a negligible footprint and large enough to dispel any unease about overflow scenarios.
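For concreteness, here is a minimal sketch of the two-word counter scheme being discussed: the low word is bumped atomically on every call, and the high word only on overflow. The names are made up and this is not the actual fastrand source; it just illustrates where the (benign) race lives, and why atomics rather than a mutex keep this lock-free and fast.

```go
// Hypothetical sketch of the two-word counter; not the fastrand source.
package main

import (
	"fmt"
	"sync/atomic"
)

var counterLow, counterHigh uint64

// nextCounter returns a (high, low) pair without taking a lock. The low
// word is incremented atomically on every call; the high word is only
// incremented when the low word wraps, which is the rare window where
// the race discussed above can produce a small number of repeat values.
func nextCounter() (hi, lo uint64) {
	lo = atomic.AddUint64(&counterLow, 1)
	if lo == 0 {
		// the low word just wrapped: carry into the high word
		atomic.AddUint64(&counterHigh, 1)
	}
	return atomic.LoadUint64(&counterHigh), lo
}

func main() {
	hi, lo := nextCounter()
	fmt.Printf("counter = (%d, %d)\n", hi, lo)
}
```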
On 13 Jul 2017, at 05:09, David Vorick ***@***.*** wrote:
>> initialise the counters randomly

> Easy enough. Though, it doesn't matter unless blake2b breaks, at which point you'd want to change the base hashing algorithm for this anyway.

It's much less likely to be broken if you don't seed it with a large prefix of 0s. Initialising the counters randomly is a quick and simple fix. Who's against extra entropy anyway ;)
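A quick sketch of what that could look like, assuming crypto/rand as the entropy source (the function name is made up for illustration):

```go
// Sketch of randomly initialising the two counter words at startup.
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// randomCounters draws 16 bytes of OS entropy and splits them into the
// two 64-bit counter words, so the hash input never starts as all zeros.
func randomCounters() (hi, lo uint64, err error) {
	var buf [16]byte
	if _, err := rand.Read(buf[:]); err != nil {
		return 0, 0, err
	}
	return binary.LittleEndian.Uint64(buf[:8]), binary.LittleEndian.Uint64(buf[8:]), nil
}

func main() {
	hi, lo, err := randomCounters()
	if err != nil {
		panic(err)
	}
	fmt.Printf("starting at (%d, %d)\n", hi, lo)
}
```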
>> Why not alternate incrementing the two counters?

> Not sure what you mean, but just alternating the counters only gives you 65 bits total, because you'll hit the '0-0' starting point again after just 2^65 operations. Maybe you had some other staggering technique in mind though?

Rather than incrementing one counter and letting it overflow into the next, increment the counters alternately. This would reduce the size of the initial 0-padding.
> The big issue here is that if we switch to mutexes instead of atomics, our total throughput gets obliterated. The race condition involves a small number of repeating values, and can only happen if you've got 2^63 atomic operations queued and for some reason the scheduler has starved out the one that is going to increment the second counter.

I'm not sure if this is what I was talking about, but I agree a mutex is not desirable. Ah, I see. I mostly suggested avoiding the existence of any race condition by having a bit, left = true/false: if true, it increments the leftmost counter; if false, the rightmost. Such a binary fork in the road should be well predicted by the CPU, and not cost anything extra.
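A single-goroutine illustration of that idea (names made up; as written this is NOT safe for concurrent use, which is exactly the trade-off being debated):

```go
// Sketch of the alternating-increment idea, single-goroutine view only.
package main

import "fmt"

var (
	c0, c1 uint64
	left   bool
)

// bump increments one counter per call, alternating between the two, so
// neither word sits at zero for long and the initial 0-padding shrinks.
func bump() (uint64, uint64) {
	if left {
		c0++
	} else {
		c1++
	}
	left = !left // a cheap, well-predicted branch
	return c0, c1
}

func main() {
	for i := 0; i < 4; i++ {
		fmt.Println(bump())
	}
}
```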
> It's unlikely that you get to 2^63 anyway; a 32-core 4 GHz processor drawing one round of entropy per clock is going to take 2 years to get that far. Seems unlikely to me; the extra counter was really more of a cryptographic nuance than it was intended for a practical situation.

Ehh. That's not really how it works though. Lowering your entropy progressively makes things more guessable. And it's not unheard of for software to run for several months; we have several cores, and I believe this is multithreaded (is it? don't remember), so "2 years" is not actually that compelling :(

It's just unwise to test the boundaries of your hash like this. Certainly you should expect an attacker to do better than most, crack at least a few rounds of your hash, and have a great understanding of how to recover random seeds. It's not a known-plaintext attack, but the less entropy you have, the more it looks like one.
>> Instead of using two huge counters, it would be better to use some of that space to add extra entropy throughout (as little as 1 bit). You may request extra random bits in another thread, and just hash them in. By adding a little bit of entropy like that, you compensate for the reduced security involved in producing more hashes. And, if you ever do overrun a 128-bit counter, well, it's fine!

> The extra entropy added throughout execution could in theory be manipulated by an attacker; it's better to keep the same entropy once you've established enough.

Not just sort-of-unlikely, but also a bit hypocritical, considering you'd have to break the hash! There's no kind of bits the attacker could provide that /reduces/ the entropy.

You could solve your entropy-depletion risks by just adding a bit of new randomness every so many rounds, and, while doing it, also solve your "nonce returns to 0" issue. And reduce the 0-padding weakness.

I'd do it, that's all I'm saying =)
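A rough sketch of that periodic-reseed idea, assuming golang.org/x/crypto/blake2b for the hash; the names and the reseed interval are made up for illustration:

```go
// Sketch: hash the state forward each call, folding in fresh OS entropy
// every so many rounds. Not the library's actual design.
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/blake2b"
)

const reseedInterval = 1 << 20 // hypothetical: fresh bits every ~1M draws

var (
	state [32]byte
	calls uint64
)

// next advances the state by hashing it; on every reseedInterval-th call
// it also hashes in a few fresh random bytes first.
func next() [32]byte {
	calls++
	if calls%reseedInterval == 0 {
		var fresh [8]byte
		if _, err := rand.Read(fresh[:]); err != nil {
			panic(err) // fail loudly rather than run on stale entropy
		}
		state = blake2b.Sum256(append(state[:], fresh[:]...))
	}
	state = blake2b.Sum256(state[:])
	return state
}

func main() {
	fmt.Printf("%x\n", next())
}
```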
I was never a big fan of entropy-stretching. There are going to be exposed patterns; the question is, how exposed? And the cryptanalysis we have doesn't really help you figure that out, because we'd reject a hash if it didn't look good enough to, well, not expose patterns.

But please, make this library throw an exception after a few months / many calls (or just reinit).
Your servers are also not drawing 1 round of entropy per clock cycle, unless you've got an ASIC. Standard CPUs take something like 1000 cycles to do a hash, so 2 years turns into 2,000 years.
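As a back-of-envelope check using the figures above: 2^63 ≈ 9.2×10^18 hashes, and 32 cores × 4×10^9 cycles/sec ÷ ~1,000 cycles per hash ≈ 1.3×10^8 hashes/sec, so 9.2×10^18 ÷ 1.3×10^8 ≈ 7×10^10 seconds, which is on the order of 2,000 years.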
Maybe that is the best option. I'll consider it.
First, Go is not the go-to language for crypto. And BLAKE2 is relatively novel.
If any recursive-hashing-patterns-based attack becomes possible, which with BLAKE is slightly more likely than with SHA2, this will fail gruesomely.
For most lifetimes the padding bits will be all 0s because of your extracounter, so it will be extra easy. After that, the value of extracounter is very predictable (even over the network). initialise the counters randomly: counters just hashed out of the original entropy are still better than all-zero ones, and just /copying/ the original entropy is better too!
"little risk of overflow" fastrand.go:76 is still a race condition. Why not alternate incrementing the two counters? (I don't think our computers can overrun a 128 bit counter anytime soon, I wouldn't bother if I were you)
"The counter ensures that the result is unique to this thread." then use the threadID, not a massive counter...
Instead of using two huge counters, it would be better to use some of that space to add extra entropy throughout (as little as 1 bit). You may request extra random bits in another thread, and just hash them in. By adding a little bit of entropy like that, you compensate for the reduced security involved in producing more hashes. And, if you ever do overrun a 128-bit counter, well, it's fine!
Actually, when you add entropy like that, I would just choose a 64-bit counter and allow it to overrun.
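For what it's worth, in Go that overrun is well defined: unsigned integers simply wrap around to zero, so "allowing it" costs nothing. A tiny illustration:

```go
// Unsigned overflow is defined behaviour in Go: the counter wraps to 0.
package main

import (
	"fmt"
	"math"
)

func main() {
	var c uint64 = math.MaxUint64
	c++            // wraps: no panic, no undefined behaviour
	fmt.Println(c) // prints 0
}
```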
Hashes sure are amazing, to allow this sort of operation, aren't they? 👍