transports/webrtc: Implement deterministic certificate generation #12
base: anton/webrtc-transport
Conversation
///
/// let transport = Transport::new(id_keys, certificate);
/// ```
- pub fn new(id_keys: identity::Keypair, certificate: RTCCertificate) -> Self {
+ pub fn new(id_keys: identity::Keypair, certificate: Certificate) -> Self {
Don't think we need to expose Certificate at all if we're able to derive it from id_keys
We could do that but then it would always be deterministic and would not allow users to put a random one in if they want.
Actually, I think I am coming around to this view. I don't think it is necessary for users to put a random certificate in. The more I think about it, the more I'd consider this a footgun, actually. Making this an implementation detail gives us more control over the certificate generation and also makes sure we use a proper HKDF to seed the RNG.
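A minimal sketch of that derivation step, assuming the `hkdf` crate alongside the `sha2` and `rand_chacha` dependencies already used here; the function name, salt and info labels are made up for illustration, not this PR's code:

```rust
use hkdf::Hkdf;
use rand::SeedableRng;
use rand_chacha::ChaCha20Rng;
use sha2::Sha256;

/// Derive a deterministic RNG from (a stable serialization of) the identity key.
fn certificate_rng(identity_key_material: &[u8]) -> ChaCha20Rng {
    // HKDF-SHA256 turns the key material into a fixed-size, uniformly distributed seed.
    let hkdf = Hkdf::<Sha256>::new(Some(&b"libp2p-webrtc"[..]), identity_key_material);
    let mut seed = [0u8; 32];
    hkdf.expand(b"certificate-seed", &mut seed)
        .expect("32 bytes is a valid output length for HKDF-SHA256");

    // Same key material in, same RNG stream out.
    ChaCha20Rng::from_seed(seed)
}
```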
I can't find any mention of
I've found https://docs.rs/x509-certificate/0.15.0/x509_certificate/index.html but it doesn't allow to configure the
My investigation so far is that:
Just found https://github.com/zartarn15/simple_x509 which looks promising from an API PoV.
I did it! We have deterministic ECDSA certificates 🥳
Damn, false alarm! For some reason there is still some hidden randomness somewhere; I just messed up the test.
This won't work until briansmith/ring@fe9e4d0 is released.
I started by sending a PR to
I haven't really reviewed, but we should be extremely careful that the deterministic way in which the certificate is built doesn't change between two versions of libp2p or one of its direct or indirect dependencies. Since the certificate hash is in the multiaddr, and you're hardcoding multiaddrs as bootnodes when releasing a version of a peer-to-peer software, the effect of this hash changing would be that someone downloading the software for the first time can't connect anymore. IMO this needs at least a unit test that compares a generated certificate with an expected one, with a huge all-caps comment saying "this test shouldn't be removed and do not modify this expected certificate, otherwise code sitting on top of us will break".
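A sketch of what such a pinned-fixture test could look like; `fixture_keypair`, `to_der`, and the fixture file are placeholders rather than this PR's actual API:

```rust
#[cfg(test)]
mod regression_tests {
    use super::*;

    // DO NOT REMOVE THIS TEST AND DO NOT UPDATE THE EXPECTED BYTES.
    // Published multiaddrs embed the certhash of certificates generated by this
    // code; if the output changes, code sitting on top of us will break.
    const EXPECTED_CERT_DER: &[u8] = include_bytes!("fixtures/deterministic-cert.der");

    #[test]
    fn certificate_generation_is_stable_across_releases() {
        // Hypothetical helper that rebuilds the same keypair from hardcoded bytes on every run.
        let id_keys = fixture_keypair();

        let certificate = Certificate::new(&id_keys).unwrap();

        // `to_der` stands in for however the final API exposes the encoded certificate.
        assert_eq!(certificate.to_der(), EXPECTED_CERT_DER);
    }
}
```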
For what it's worth, what I initially had in mind when I proposed deterministically-generated certificates for the spec was to do something like:

```rust
let mut certificate = HARDCODED_CERT_PREFIX.to_vec();
certificate.extend_from_slice(private_key); // (derived from the libp2p public key in the original idea, but here would be derived from an actual private key)
certificate.extend_from_slice(HARDCODED_CERT_SUFFIX);
```

Where the prefix and suffix are in the spec. Of course it doesn't actually need to be so strict here, because there's no need for all libp2p implementations to agree on how the certificate generation is done, but it would make sure that it's deterministic.
I did have that in a previous iteration, but it wasn't useful in figuring out which part of my generation code was not deterministic. I fully agree with you that we need to assert against a hardcoded certificate, and I am planning on adding such an assertion as soon as we actually have the deterministic generation working!
I am not sure I follow how this is meant to work. Can you explain? Would this certificate be entirely static, with the private key known to everyone?
The struggle is that ECDSA signatures require a random nonce for each signature. If you sign multiple messages with the same private key and don't change the nonce, you will eventually leak your private key. If Schnorr hadn't patented his signature algorithm, we wouldn't be in this mess, but such is life :D EdDSA also doesn't have this problem, but it operates on an entirely different curve and is not required to be supported by browsers.
Long story short: to deterministically produce an ECDSA signature, you need full control over the randomness source that is used to generate the nonce, and that is only possible as of ring 0.17, which is not released yet.
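To make the nonce dependency concrete, here is a minimal illustration of ring's signing API as I understand it (the `sign` signature and the `Signature`/`Unspecified` types are from ring's docs, not from this PR): the randomness source is an explicit parameter of the signing call, and it is where the per-signature nonce comes from.

```rust
use ring::error::Unspecified;
use ring::rand::SystemRandom;
use ring::signature::{EcdsaKeyPair, Signature};

// The `rng` argument is where ring draws the per-signature nonce `k` from.
// With `SystemRandom`, signing the same message twice yields two different
// (but both valid) signatures; a deterministic certificate needs a randomness
// source we control instead.
fn sign(key_pair: &EcdsaKeyPair, message: &[u8]) -> Result<Signature, Unspecified> {
    let rng = SystemRandom::new();
    key_pair.sign(&rng, message)
}
```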
Sorry, I got confused. I didn't mean to include the private key in the certificate, but the public key and signature.
In what I had originally suggested for the WebRTC libp2p spec, yes, the private key would be known to everyone. But that's not what I meant here.
There is not much that is mandatory for it to be valid. I think it is only issuer and validity dates. Is your idea to use hardcoded bytes as a form of template so we don't need to deal with certificate generation libraries? That could work actually! Unfortunately, if we want to use
Most importantly, it would guarantee determinism without relying on the implementation details of
How is it silent if we have a test that asserts the bytes?
The test will fail only if you update
Right, thanks.
Stupid me, I already replaced
Oh I remember why. It is because
The latest patch proves that the deterministic certificate generation works, and it turns out we don't need to patch
This ensures a patch-bump of the library doesn't accidentally break our deterministic certificates.
I've pinned the dependency version. I toyed around with actually hardcoding the bytes, but ASN.1 isn't exactly friendly for that due to its TLV structure. ECDSA public keys and signatures can vary in length, so I would have to compute these lengths and insert them in the correct ASN.1 encoding, then DER-encode the entire thing, run the signature algorithm, build the rest of the X.509 ASN.1 structure and serialize all of that to DER. We could use an ASN.1 library like
Ironically, this requires us to change the assertion but the code isn't shipped yet anyway :D
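To illustrate the TLV length bookkeeping described above: every variable-length field (public key, signature) changes the length octets of each enclosing structure, so a hardcoded template would have to recompute them on the fly. A toy encoder for just the DER length octets, following the standard DER rules rather than any code in this PR:

```rust
/// Encode a DER definite-form length: short form for lengths below 128,
/// long form (0x80 | number-of-length-bytes, then the big-endian length) otherwise.
fn der_length(len: usize) -> Vec<u8> {
    if len < 128 {
        vec![len as u8]
    } else {
        // Strip leading zero bytes of the big-endian representation.
        let bytes: Vec<u8> = len
            .to_be_bytes()
            .iter()
            .copied()
            .skip_while(|b| *b == 0)
            .collect();
        let mut out = vec![0x80 | bytes.len() as u8];
        out.extend(bytes);
        out
    }
}

#[test]
fn der_length_examples() {
    assert_eq!(der_length(70), vec![0x46]);               // short form
    assert_eq!(der_length(201), vec![0x81, 0xC9]);        // long form, 1 length byte
    assert_eq!(der_length(500), vec![0x82, 0x01, 0xF4]);  // long form, 2 length bytes
}
```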
I thought more about this. Should we introduce some kind of identifier / version that users can pass in for how they want their certificate to be generated? In case ring ever ships RFC6979, it would be nice to "upgrade" the certificate generation to use that instead of seeding an RNG. With a bit more effort, we could probably get rid of the
That would be a much cleaner implementation, but it would yield a different certificate. Without some kind of version / identifier, we won't be able to provide an upgrade path to users and are thus forever locked into the way we generate certificates. Thoughts?
I think the upgrade path here is to operate old bootnodes for a while, until new versions are shipped that point to new bootnodes that use a different certhash in their multiaddr. As a result, I don't think we need to ship an upgrade mechanism in code here; we can leave it up to the application.
I agree. We can just add another method like
Thanks for the feature @thomaseizinger.
I still have to dive deeper into the cryptographic primitives used here.
In the meantime, a couple of comments.
@MarcoPolo given your experience on libp2p/go-libp2p#1833, would you mind taking a look as well?
let certificate = simple_x509::X509::builder()
    .issuer_utf8(Vec::from(ORGANISATION_NAME_OID), "rust-libp2p")
    .subject_utf8(Vec::from(ORGANISATION_NAME_OID), "rust-libp2p")
Do we need to set these? If I am not mistaken, Go doesn't do it for WebTransport.
pub fn new(id_keys: identity::Keypair) -> Result<Self, CertificateError> {
    let certificate = Certificate::new(&id_keys)?;
Related past discussion #12 (comment)
I am not yet decided on whether we should only support deterministic certificate generation. Rationale:
- The certificate is valid till the year 3000.
- In case the TLS private key is compromised (leaked, insufficiently secure, ...) an attacker can impersonate the local peer after the Noise handshake with a remote.
- Thus the impersonation attack surface is increased from 1 (libp2p private key) to 2 (libp2p private key + TLS private key).
Related section in the specification:
> Why exchange fingerprints in an additional authentication handshake on top of an established WebRTC connection? Why not only exchange signatures of ones TLS fingerprints signed with ones libp2p private key on the plain WebRTC connection?
>
> Once A and B established a WebRTC connection, A sends `signature_libp2p_a(fingerprint_a)` to B and vice versa. While this has the benefit of only requiring two messages, thus one round trip, it is prone to a key compromise and replay attack. Say that E is able to attain `signature_libp2p_a(fingerprint_a)` and somehow compromise A's TLS private key, E can now impersonate A without knowing A's libp2p private key.
>
> If one requires the signatures to contain both fingerprints, e.g. `signature_libp2p_a(fingerprint_a, fingerprint_b)`, the above attack still works, just that E can only impersonate A when talking to B.
>
> Adding a cryptographic identifier of the unique connection (i.e. session) to the signature (`signature_libp2p_a(fingerprint_a, fingerprint_b, connection_identifier)`) would protect against this attack. To the best of our knowledge the browser does not give us access to such identifier.

https://github.com/libp2p/specs/tree/master/webrtc#faq
This might be overly cautious. One could argue that people should rotate their libp2p private keys regularly and thus also automatically rotate the TLS private keys.
Also related, people interested in running WebRTC on their bootstrap nodes could as well use the `dnsaddr` feature. One would only hardcode the bootstrap node libp2p public key in the client. The client would learn the current TLS certificate fingerprint via the `dnsaddr` TXT record.
What do folks think? In addition to the deterministic certificate generation, should we also offer a random certificate code path? Which one should be the default?
I think providing an option of having a deterministic certificate is good. In general there would be two options: 1) generate a new certificate and (optionally) save/load it from disk; 2) derive a certificate from the libp2p private key. Users will be free to choose whichever option they see fit. We could add more docs explaining the downsides of each approach.

> What do folks think? In addition to the deterministic certificate generation, should we also offer a random certificate code path? Which one should be the default?
If that would simplify UX, sure 👍
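A rough sketch of how those two code paths could sit side by side, reusing the `Transport::new` / `Certificate::new` signatures from this PR's diff; `with_certificate` and the struct fields are hypothetical names, not the actual API:

```rust
impl Transport {
    /// Deterministic path: derive the certificate from the libp2p identity key,
    /// as this PR's `Transport::new` does.
    pub fn new(id_keys: identity::Keypair) -> Result<Self, CertificateError> {
        let certificate = Certificate::new(&id_keys)?;
        Ok(Self { id_keys, certificate })
    }

    /// Bring-your-own-certificate path (hypothetical): the application generates a
    /// random certificate itself and, if it wants stable multiaddrs, persists it to disk.
    pub fn with_certificate(id_keys: identity::Keypair, certificate: Certificate) -> Self {
        Self { id_keys, certificate }
    }
}
```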
serde = { version = "1.0", features = ["derive"] }
sha2 = "0.10.6"
simple_x509 = "=0.2.2" # Version MUST be pinned to ensure a patch-update doesn't accidentally break deterministic certs.
This crate has 0 stars on GitHub, 983 total downloads (none this month), and I don't see any popular repository in the author's GitHub.
I am not saying that this makes a crate untrustworthy by itself. I do believe we should have some indicator of trust for all dependencies that we use. Do we have any trust anchor for this crate?
> Do we have any trust anchor for this crate?
The fact that we're pinning the concrete version, and that the library is small enough for us to review it? Alternatively, we can fork it into rust-libp2p if that would increase trust.
use rand::{CryptoRng, Rng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use ring::signature::{EcdsaKeyPair, KeyPair, ECDSA_P256_SHA256_FIXED_SIGNING};
use ring::test::rand::FixedSliceSequenceRandom;
Is this meant for production code?
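For context on how these imports presumably fit together: as I read ring 0.16's `test::rand` docs (the public `bytes`/`current` fields and the strict length check are an assumption based on those docs, not on this PR), `FixedSliceSequenceRandom` replays a fixed sequence of byte slices through the `SecureRandom` interface, so one can pre-draw the bytes from a deterministically seeded RNG and hand them to ring. A minimal sketch, not the PR's actual wiring:

```rust
use core::cell::UnsafeCell;
use rand::{Rng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use ring::rand::SecureRandom;
use ring::test::rand::FixedSliceSequenceRandom;

fn main() {
    // Draw the bytes we want ring to "randomly" produce from a deterministically
    // seeded RNG (in the real code the seed would come from the identity key).
    let mut chacha = ChaCha20Rng::from_seed([0u8; 32]);
    let mut predrawn = [0u8; 32];
    chacha.fill(&mut predrawn[..]);

    // Replay them through ring's fixed-output SecureRandom. Each `fill` call must
    // request exactly the length of the next slice, which is why any change in how
    // often or how much randomness ring asks for would break the generated output.
    let slices: [&[u8]; 1] = [&predrawn[..]];
    let ring_rng = FixedSliceSequenceRandom {
        bytes: &slices[..],
        current: UnsafeCell::new(0),
    };

    let mut out = [0u8; 32];
    ring_rng.fill(&mut out).unwrap();
    assert_eq!(out, predrawn);
}
```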
Friendly ping @thomaseizinger. What are the next steps here? Would be unfortunate for the work here to never make it anywhere.
I think we're still waiting for a new ring version (briansmith/ring#1551), which is taking ages 😞 That said, maybe we can merge this code somehow behind a compilation flag or something like that. Wdyt?
I think the proper way of resolving this issue is to use RFC6979, likely by implementing it for ring. That way, we can actually get rid of the randomness altogether and make the signatures properly deterministic. Anything else I'd consider a hack that is prone to break as soon as implementation details change.
Once ring supports RFC6979 and all dependencies are updated to that ring version, the API we can expose is simply one that takes a byte slice as seed (and not an RNG). It is then up to the user to fill that byte slice with random data, use an HKDF, provide a fixed byte array, or do whatever else may come to mind.
Any updates or news on this, or would it be best to wait on RFC6979? /CC @thomaseizinger
EDIT: We could also open a new issue over at rust-libp2p about this (if one doesn't exist already) so there is a consistent point of reference.
0.17.0 has been released.
Description
This PR implements deterministic certificate generation by means of `ring` and the `simple_x509` library.

Closes libp2p#3049.
Links to any relevant issues
Open tasks
- `ring` 0.17: rustls/rcgen#166
- `ring` 0.17: webrtc-rs/webrtc#335

Change checklist