Why is `global-context-less-secure` a feature? #713
Agreed, let's drop the feature. There are a few solutions:
I am strongly leaning toward the latter solution. After the first rerandomization we don't actually need any randomness because we can pilfer some from secret key data. So the loss of security (which, to reiterate from other issues about this, is purely defense-in-depth and has no practical value that we are aware of) would only affect users who were constantly restarting the software and then constantly doing exactly the same operations after restart.
I happened to look at the code of
This goes directly against recommendation of
It's not possible to detect the state of a feature in a foreign crate.
Oh, I didn't know that. But doesn't it mean we can always use secret key data? Why would it be a different scenario? Anyway, given your argument about restarting, I think silently skipping is fine.
The premise behind the randomization API is that you start from a fresh random state on startup, and from that point onward you can "reseed" the RNG using secret key data (since this data is conveniently available on exactly the set of operations that would necessitate rerandomizing). If you want to be thorough, you should also rerandomize on forks.

If we exclusively use secret key data (and we should also use message hashes and anything else available), as I'm proposing, then we have a situation where the first signature is always using the same context state. Which for applications like HWWs, which are likely to be turned on before each signature and turned off after, means we gain nothing by the randomization. Even if we rerandomize before signing, this doesn't help us much if the rerandomization is always the same. The exact random state isn't that important to an attacker; what's important is that the random state stays the same across very many signatures.

(Upstream, we have been planning to change the signing code to rerandomize the context using only a single bit of the secret key, since one bit per signature is more than enough drift to thwart any timing attack, but is cheap enough that we could do it unconditionally ...... if only we could figure out the mutability/multithreaded issues with mutating the context at all.)

This is a somewhat practical concern. An attacker who had a working sidechannel attack and physical access to a HWW could power it on, sign a fixed message, power off, and repeat many thousands of times. So we really do want some initial randomness, if we want randomness at all.
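To make the "one bit of secret key per signature" drift concrete, here is a minimal sketch of a running rerandomization state that absorbs single bits. The `RerandState` type and its methods are hypothetical illustrations, not upstream's API, and `DefaultHasher` stands in for a real cryptographic mixing function purely so the example is self-contained:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical running rerandomization state. Each signing operation folds
// one bit of secret key data into the state, so the context state drifts
// across signatures even without an external RNG.
struct RerandState {
    seed: u64,
}

impl RerandState {
    // `initial_entropy` models the startup seed the attacker can't control.
    fn new(initial_entropy: u64) -> Self {
        Self { seed: initial_entropy }
    }

    // Fold a single secret-key bit into the state. NOTE: DefaultHasher is
    // NOT cryptographic; a real implementation would use a proper hash.
    fn absorb_bit(&mut self, bit: bool) {
        let mut h = DefaultHasher::new();
        self.seed.hash(&mut h);
        bit.hash(&mut h);
        self.seed = h.finish();
    }

    fn seed(&self) -> u64 {
        self.seed
    }
}
```

The point of the sketch is the restart problem described above: two devices starting from the same `initial_entropy` and signing the same sequence walk through identical states, which is why some attacker-unknown startup randomness is still wanted.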
For something like this I'm kinda tempted to try using the clock time or CPU jitter or something. We really only need a few bits of entropy to ruin the day of an attacker who can't learn those bits.
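A rough sketch of what "a few bits from CPU jitter" could look like, assuming only `std::time::Instant` is available. This is illustrative and deliberately crude: the entropy quality is weak and unquantified, and a real implementation would need careful analysis; the function name is my own invention:

```rust
use std::time::Instant;

// Collect `n` low-quality entropy bits from scheduling/timing jitter.
// Each iteration busy-waits ~1 microsecond and keeps the parity of how
// many loop iterations fit in that window, which varies with CPU state.
fn jitter_bits(n: usize) -> u64 {
    let mut acc = 0u64;
    for _ in 0..n {
        let start = Instant::now();
        let mut spins = 0u64;
        while start.elapsed().as_nanos() < 1_000 {
            spins = spins.wrapping_add(1);
        }
        acc = (acc << 1) | (spins & 1);
    }
    acc
}
```

Even if only a handful of these bits are unpredictable to the attacker, that may be enough to break the "identical state across restarts" attack described above.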
OK, so if I understand correctly we actually don't need to depend on an external RNG at all if we can figure out how to seed the context on startup - e.g. by providing an API that users can use, e.g. by having a method on

One thing I'm unsure about is how the global context really works. It's a static, so immutable, but if there's no rerandomization isn't it always the same, thus allowing an attacker to sign the same thing multiple times with the same context? Or does it internally use atomics to update some random state?
Right now the global context just never gets rerandomized. We have this enormous issue #388 about this. |
And you are correct that we don't even need an external RNG -- what we need is some seed on startup that an attacker can't control. |
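One possible shape for "some seed on startup that an attacker can't control" is a one-shot install API. This is a hypothetical sketch (the names `set_startup_seed`/`startup_seed` are not a real proposal from the thread), using `std::sync::OnceLock` so the seed can be set exactly once, before or during first use:

```rust
use std::sync::OnceLock;

// Hypothetical one-time startup seed for the global context.
static STARTUP_SEED: OnceLock<[u8; 32]> = OnceLock::new();

/// Install the startup seed. Returns true if this call installed it,
/// false if a seed was already set (the first writer wins).
fn set_startup_seed(seed: [u8; 32]) -> bool {
    STARTUP_SEED.set(seed).is_ok()
}

/// Read the startup seed, if one has been installed.
fn startup_seed() -> Option<&'static [u8; 32]> {
    STARTUP_SEED.get()
}
```

The `OnceLock` makes the install race-free across threads without requiring the context itself to be mutable; whether the seed then feeds context creation or rerandomization is a separate design question.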
OK, it looks like I've misunderstood the whole issue when suggesting the solution of rerandomizing on each operation, and I probably overcomplicated it a lot. Do we have any API from upstream to use secret key data for rerandomization, or do we need to write it on our own?
I did a bunch of digging and given that rerandomization is only required at the beginning and for each public key and the cost of rerandomization is larger than cost of context creation I think we can do the following:
Of course we need the stack allocation helper from upstream to do this. It could also be helpful if upstream exposed symbols for the context size and alignment, which we could read in the build script to create a true static (without heap allocation) for all cases except dynamically linking a system library, which reasonably requires an allocation (there we don't even need the
That all sounds good to me except for the panic in the no-std-no-rand case. If it weren't for rerandomization then it'd be possible and easy to use a static global context. So it sucks to require initialization/construction. I am tempted to instead go for a model where we
And then finally we have an explicit method to re-seed with new randomness, which we strongly recommend nostd users call. But in practice I'm not optimistic, no matter what API we decide on, that they'll actually do so. Even if we require them to call it, somehow, they may just call it with a fixed string or something, which isn't much good.
I'm not happy about it either. I was once bitten by effectively the same thing, but that code was C++ and just called
I hope we don't have to do that since it may affect performance, doing so many atomic operations each time we need the seed. Two

But we could also have two seeds and an atomic with four states:
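To illustrate the "atomic with four states" idea, here is a minimal sketch. The concrete state names and transitions are my assumption of how such a scheme might look, not a design from the thread; the point is that a single `AtomicU8` with compare-and-swap can coordinate seeding and seed-slot switching without locks:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Hypothetical four-state coordinator for a shared seed.
const UNSEEDED: u8 = 0; // no seed installed yet
const SEEDING: u8 = 1;  // one thread is writing the initial seed
const SEED_A: u8 = 2;   // seed slot A is current
const SEED_B: u8 = 3;   // seed slot B is current

static STATE: AtomicU8 = AtomicU8::new(UNSEEDED);

/// Claim the exclusive right to install the initial seed.
/// Exactly one caller ever gets `true`.
fn try_begin_seeding() -> bool {
    STATE
        .compare_exchange(UNSEEDED, SEEDING, Ordering::AcqRel, Ordering::Acquire)
        .is_ok()
}

/// Publish the freshly written seed (slot A) to readers.
fn finish_seeding() {
    STATE.store(SEED_A, Ordering::Release);
}

/// Flip from slot A to slot B after a rerandomization wrote slot B.
fn swap_seed_slot() {
    let _ = STATE.compare_exchange(SEED_A, SEED_B, Ordering::AcqRel, Ordering::Acquire);
}

fn current_state() -> u8 {
    STATE.load(Ordering::Acquire)
}
```

Readers load the state once per operation (a single atomic load), so the per-use cost stays small; only seeding and slot swaps pay for a compare-and-swap.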
If we have a global
Isn't it super inefficient without providing much additional benefit? IIRC 2x slowdown.
I consider this painfully obvious and implied.
There's a big difference between forgetting or not noticing that there's such a thing and intentionally bypassing a security feature. If someone wants to do that we might as well keep
I'm pretty sure atomics are trivial in WASM. It has only one thread so atomics can be implemented identically to non-atomic types. I don't know why this wouldn't be supported.
Sure, we could try to do some cross-thread coordination this way to be a bit more efficient.
(a) the performance should be identical between re-randomizing an existing context and randomizing a fresh one, and (b) even if it wasn't, who cares? Already if users are randomizing then they are willing to take a perf hit for defense-in-depth. We should provide non-randomizing versions of the signing methods for users who need the performance.
Sure. But then let's not turn it on by default if it results in a huge ergonomic hit for users in exchange for an intangible hypothetical benefit which can only be realized by non-std-non-
Of course but then you have to deal with the variant of WASM that actually needs real atomics but the spec is not finalized. (And if you don't deal with it and someone compiles your code using it you get UB.) Anyway, I still haven't had time to check what the exact issue was.
In bitcoin-core/secp256k1#1141 @real-or-random argued that rerandomization between signing doesn't help so it's just wasted performance. Do you think his analysis is incorrect?
That's literally all HWWs. Who is building a
This seems like a general Rust problem. We have
His argument is that synthetic nonces "do the same thing and are simpler". I believe he is correct. Though API-wise I think it doesn't change things much for us: synthetic nonces involve us taking some extra entropy and feeding it into the signing function, versus context rerandomization which would involve us taking extra entropy and feeding it into the context-rerandomize function. I guess the benefit of using synthetic nonces is that there's zero perf hit relative to not using them, so we don't need to provide non-rerandomizing variants. But I would still like to update our global/thread-local random state on signing operations, since these are operations in which we have access to secret random data, so we ought to take advantage of that.
Do we actually need

I think many HWWs actually do have
Let me make another proposal: in the non
That sounds like a cool way to move forward! We can always add the API back if we figure out a solution.
Oh, true, I misunderstood it previously. FYI I have this weird use case for deterministic nonces: generate them using a different technique based on whether the signed transaction is some kind of advanced contract (LN or PayJoin), to prevent accidentally RBF-ing from a restored wallet.
I think you can do this while still getting "synthetic nonces" which are sufficient to rerandomize the computation. Basically, you'd generate a normal RFC6979 or whatever deterministic nonce. Then you would xor in a 256-bit value where half the bits (say, the even positions) are uniformly random and the other half encode a 128-bit message of your choice.
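A sketch of that bit-interleaving idea, assuming even bit positions carry randomness and odd positions carry the 128-bit message (the function names and the even/odd assignment are my own choices for illustration):

```rust
// Build a 256-bit mask whose even-position bits come from `random_bits`
// and whose odd-position bits encode `message_bits` (128 bits each).
fn interleave_mask(random_bits: u128, message_bits: u128) -> [u8; 32] {
    let mut mask = [0u8; 32];
    for i in 0..128 {
        let r = ((random_bits >> i) & 1) as u8;
        let m = ((message_bits >> i) & 1) as u8;
        let even = 2 * i; // even bit position: randomness
        let odd = even + 1; // odd bit position: message
        mask[even / 8] |= r << (even % 8);
        mask[odd / 8] |= m << (odd % 8);
    }
    mask
}

// Xor the mask into a deterministic (e.g. RFC6979) nonce to get a
// "synthetic" nonce that still carries the embedded message bits.
fn randomized_nonce(deterministic: [u8; 32], mask: [u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = deterministic[i] ^ mask[i];
    }
    out
}
```

Xoring the same mask again recovers the original deterministic nonce, so anyone who knows the mask can check the construction; the attacker, who doesn't know the 128 random bits, cannot predict the nonce.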
It's supposedly used to solve the situation when `rand` panics. However, IIUC this panic really comes from `getrandom` always returning a failure. This should never happen unless the OS is broken (then panicking might be appropriate) or someone used the `custom` feature of `getrandom` and implemented the feature incorrectly.

But even if it's justifiable to have this on some super exotic architectures (which ones?), it looks like it should be a `cfg` rather than a feature. It being a feature causes `cargo test --all-features` to not test rerandomization, which is weird. We could also use `RngCore::try_fill_bytes` and just ignore the errors during rerandomization, so the `cfg` wouldn't be needed (unless people need it for performance).
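The "try and silently skip on failure" idea can be sketched without depending on `rand` itself. Here the closure parameter stands in for `RngCore::try_fill_bytes` (with a simplified error type); the function name and shape are hypothetical, illustrating only the fallback behavior:

```rust
// Attempt to rerandomize `seed` from a fallible entropy source.
// On failure we keep the old seed and return false instead of panicking,
// which is what makes a cfg/feature gate unnecessary.
fn rerandomize_if_possible<F>(seed: &mut [u8; 32], try_fill: F) -> bool
where
    F: FnOnce(&mut [u8]) -> Result<(), ()>,
{
    match try_fill(seed) {
        Ok(()) => true,
        Err(()) => false, // RNG unavailable: silently skip rerandomization
    }
}
```

This keeps the failure purely a defense-in-depth downgrade (the context just stays in its previous state) rather than a hard error, matching the "silently skipping is fine" conclusion earlier in the thread.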