Add RSA-PSS support to chrome.certificateProvider.
Comment 1 by davidben@chromium.org, Dec 5 2017

Issue description:

TLS 1.3 mandates RSA-PSS for client certificates. The chrome.certificateProvider API for Chrome OS is missing this, which means that client certificate deployments using Chrome OS will not be able to update their servers to TLS 1.3.

Unfortunately, the certificateProvider API only advertises supportedHashes, and RSASSA-PSS-SHA256 isn't a hash. One possibility is to add supportedAlgorithms with "RSASSA-PKCS1-v1_5-SHA1", etc., and convert supportedHashes to supportedAlgorithms. Another is to just jam everything in as a "hash".

Another dimension is whether the API passes pre-hashed digests or the unhashed payload to be signed. At the time chrome.certificateProvider was designed, an SSLPrivateKey took the former. Now SSLPrivateKey takes the latter, so we could pass the unhashed payload to the extension if we like. It depends on what the smartcard's capabilities are. (Perhaps talk to some of the folks we talked to when the API was designed?) For instance, the Java cryptography API only accepts the unhashed input, so a hypothetical smartcard that only exposed that API would require taking the unhashed payload. (One can compute the pre-hashed digest from the unhashed payload, but not vice versa.) This second question could probably be treated orthogonally to the first.

Note: there is some discussion going on at the TLS WG that might result in the RSA-PSS code points being split in two. This is largely because RFC 4055 is a disaster and defines two SPKI types. We'll see how that ends up going and whether we wish to allow the smartcard to advertise the two separately. It might not be worth bothering, since which SPKI is used is a property of the certificate anyway.

emaxx: Thoughts? Would you be able to own this one? We don't know much about that code but can help with the TLS needs. It should just be plumbing. It's not particularly dire (I expect uptake of RSA-PSS in the smartcard world to be glacial), but we will want it so we're not the blocker here.
Comment 2 by emaxx, Dec 20 2017
Thanks for the detailed description of the problem.
I'm happy to own the API part of the transition, although at the moment the task contains many uncertainties.
RE "supportedHashes" vs. "supportedAlgorithms":
I agree that the existing field name "supportedHashes" doesn't scale well. It seems more robust to add a new field "supportedAlgorithms". More thought has to be put into the exact format, though: whether it should be a list of strings (like ["RSASSA-PSS-SHA256"]) or a list of objects (like [{"algorithm":"RSASSA-PSS", "hash":"SHA256"}]).
But in any case we'd have to continue supporting the old "supportedHashes" parameter in the API - breaking backwards compatibility unless absolutely required would be quite bad. How "weird" this compatibility shim gets depends, again, on whether we can assume that the separate "hash" parameter is always present (under that assumption, the new API could be viewed as adding an optional "algorithm" field that defaults to RSASSA-PKCS1-v1_5).
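To make the two candidate formats concrete, here is a sketch of what a CertificateInfo could look like under each option. The supportedAlgorithms field and its values are the proposal under discussion, not part of the current API; supportedHashes and certificate match the existing API.

  // Option A: flat strings, one code point per supported algorithm.
  var certInfoA = {
    certificate: certificateDer,  // ArrayBuffer, DER-encoded X.509 certificate
    supportedHashes: ['SHA256'],  // existing field, kept for old Chrome
    supportedAlgorithms: ['RSASSA-PKCS1-v1_5-SHA256', 'RSASSA-PSS-SHA256']
  };

  // Option B: decomposed objects.
  var certInfoB = {
    certificate: certificateDer,
    supportedHashes: ['SHA256'],
    supportedAlgorithms: [
      {algorithm: 'RSASSA-PKCS1-v1_5', hash: 'SHA256'},
      {algorithm: 'RSASSA-PSS', hash: 'SHA256'}
    ]
  };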
RE pre-hashed vs. unhashed:
It's good to know that the internal implementation now allows for more freedom here.
But my feeling is that the existing situation, where the data is pre-hashed by the API implementation, is fine. I'm unaware of any API feature requests asking for unhashed signing input. (Quick googling did reveal that there were some old cards that only supported ECDSA with unhashed input, which is surprising given the performance cost. Also, my understanding of the Windows Smart Card Minidriver spec is that such cards won't be supported by Windows.)
But I'll double-check this question with our third-party contact that specializes in smart card middleware.
I think there's one more question - should the PSS salt be passed through the API?
The option of not passing it sounds safer, considering that this parameter should generally be randomly generated (perhaps by the card itself, if it supports that).
But we should double-check whether any protocol we care about would require using a specific salt.
(Not directly related, but in the TPM 1.2 specification I've come across one procedure that used some non-random, actually meaningful, data as the OAEP seed, which is normally defined to be just random.)
Comment 3 by davidben@chromium.org, Jan 17
Coming back to this now that TLS 1.3 is shipped. (Sorry, I somehow missed you had questions for me a year ago. :-( )
> I agree that the existing field name "supportedHashes" doesn't scale well. It seems more robust to add a new field "supportedAlgorithms". More thought has to be put into the exact format, though: whether it should be a list of strings (like ["RSASSA-PSS-SHA256"]) or a list of objects (like [{"algorithm":"RSASSA-PSS", "hash":"SHA256"}]).
It should be strings like "RSASSA-PSS-SHA256" or so. A list of objects means a mess of complexity... if the CertificateInfo advertised support for {"algorithm":"RSASSA-PSS", "hash":"SHA256", "supportedSaltLengths": [27]} or {"algorithm":"RSASSA-PSS"}, does that count as support for passing {"algorithm":"RSASSA-PSS", "hash":"SHA256"} in SignRequest?
One of the lessons learned from TLS 1.2, and applied in TLS 1.3, is that decomposing algorithms like this is a bad idea. It makes parameter negotiation extremely complicated. It ends up being simpler (and actually more flexible, due to Cartesian-product problems) to bundle all the parameters into individual code points. So, for instance, the TLS code point rsa_pss_rsae_sha256 means:
- id-rsaEncryption key type in the SubjectPublicKeyInfo
- RSASSA-PSS algorithm
- Input is hashed with SHA-256
- MGF function is MGF-1, also using the same hash as the message
- Salt length is the size of the hash
If later we decide we really wanted a different MGF function or salt length in TLS, we'll define a different code point and advertise that separately.
This API should match. It doesn't necessarily need to use the same code points and names (rsa_pss_rsae_sha256 vs rsa_pss_pss_sha256 is irrelevant to this API), but the bundling vs decomposition lesson holds universally.
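For reference, here is a sketch of the bundled approach as TLS 1.3 does it; each SignatureScheme constant from RFC 8446 pins down every parameter at once, and a hypothetical supportedAlgorithms list would bundle in the same way.

  // TLS 1.3 SignatureScheme code points (RFC 8446, section 4.2.3).
  // Each value fixes SPKI type, padding, hash, MGF, and salt length together.
  var SignatureScheme = {
    rsa_pkcs1_sha256: 0x0401,     // id-rsaEncryption, PKCS#1 v1.5, SHA-256
    rsa_pss_rsae_sha256: 0x0804,  // id-rsaEncryption, PSS, SHA-256, MGF-1(SHA-256), salt = 32
    rsa_pss_rsae_sha384: 0x0805,
    rsa_pss_rsae_sha512: 0x0806,
    rsa_pss_pss_sha256: 0x0809    // same parameters, but id-RSASSA-PSS in the SPKI
  };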
> But in any case we'd have to continue supporting the old "supportedHashes" parameter in the API - breaking backwards compatibility unless absolutely required would be quite bad.
Agreed. Compatibility should be smooth enough. Since we wouldn't be able to encode algorithms in the existing SignRequest.hash, I would suggest the compatibility story be:
Chrome:
If the extension includes CertificateInfo.supportedAlgorithms, Chrome ignores CertificateInfo.supportedHashes. When it asks to sign something, it will fill in SignRequest.algorithm and leave SignRequest.hash empty (it doesn't work in general anyway). Including supportedAlgorithms in CertificateInfo means the extension promises it understands SignRequest.algorithm.
Otherwise, we use supportedHashes and fill in SignRequest.hash as before. We leave SignRequest.algorithm empty (or fill it in if we feel like it) since it's not terribly useful.
Extension:
If you only support new Chrome, just fill in CertificateInfo.supportedAlgorithms, read SignRequest.algorithm, and forget the whole hashes business.
If you want to support both old and new Chrome, fill in both CertificateInfo.supportedAlgorithms and CertificateInfo.supportedHashes. When processing a SignRequest, first check SignRequest.algorithm. Use that if present. Otherwise, check SignRequest.hash.
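A sketch of the dual-path extension code under the story above. SignRequest.algorithm is the proposed field, not part of the current API, and the two sign helpers are hypothetical placeholders for the card-specific signing code (each assumed to return a Promise of an ArrayBuffer signature):

  chrome.certificateProvider.onSignDigestRequested.addListener(
      function(request, reportCallback) {
        if (request.algorithm) {
          // New Chrome: the proposed field bundles padding and hash together.
          signWithAlgorithm(request.algorithm, request.digest)
              .then(reportCallback);
        } else {
          // Old Chrome: SignRequest.hash implies RSASSA-PKCS1-v1_5 over a
          // pre-hashed digest.
          signPkcs1WithHash(request.hash, request.digest)
              .then(reportCallback);
        }
      });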
WDYT?
> RE pre-hashed vs. unhashed
> [...]
> But I'll double-check this question with our third-party contact that specializes on smart card middleware.
I think either's fine. The main considerations are:
- If we use unhashed and the card expects pre-hashed (probably common), the extension needs extra code to hash it itself. But we do provide WebCrypto, so it shouldn't be too bad (see the sketch after this list).
- Algorithms like Ed25519 only support unhashed inputs.
- If we switch to unhashed now, we can integrate it into the supportedAlgorithms vs supportedHashes compatibility story, rather than adding yet another mode in future.
- But switching means more work to do.
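On the WebCrypto point in the first bullet: computing the pre-hashed digest from an unhashed payload is a one-liner (sketch, assuming the payload arrives as an ArrayBuffer):

  // Hash the unhashed signing input with WebCrypto before handing it to a
  // card that only accepts pre-hashed digests.
  function digestPayload(payload) {
    return crypto.subtle.digest('SHA-256', payload);  // Promise<ArrayBuffer>
  }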
(Any news from the contact?)
> I think there's one more question - should the PSS salt be passed through the API?
> But we should double-check whether any protocol we care about would require using a specific salt.
The salt itself should not be passed in. PSS does not work that way. :-)
But note that, since the algorithm ID should encompass all parameters, we do require the salt length to equal the hash length. TLS 1.3 and QUIC both do this and, now that that precedent is set, hopefully the IETF will stick with it going forward. Having that be a parameter was a mistake.
That said, we should only consider TLS's usage here. These keys should not be used outside TLS anyway. It is not safe to reuse keys between different protocols unless the protocols were expressly designed to support this. It's especially dangerous to do this with TLS keys because TLS 1.2 didn't include a context string. TLS 1.3 had to go out of its way to ensure domain separation in spite of that.
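In WebCrypto terms, the fixed salt-length convention means the RSA-PSS sign parameters are fully determined by the algorithm ID; e.g. for RSASSA-PSS-SHA256 (sketch; privateKey is assumed to have been imported as an RSA-PSS key with hash SHA-256):

  // saltLength is in bytes, so "salt length equals hash length" is 32 for
  // SHA-256. This matches TLS 1.3's rsa_pss_*_sha256 code points.
  function signPss(privateKey, data) {
    return crypto.subtle.sign(
        {name: 'RSA-PSS', saltLength: 32}, privateKey, data);
  }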