Security: HSTS Bypass via flooding of the HSTS policy file
Reported by
a3135...@gmail.com,
Mar 8 2017
Issue description

VULNERABILITY DETAILS
When the TransportSecurity file is very large, HSTS does not protect the first site Chrome loads — for example, the user's homepage, or a URL clicked in another application when Chrome is the default browser. A patient attacker can easily make the victim's TransportSecurity file large by getting the victim to visit lots of different HSTS-enabled sites. I believe this issue is caused by asynchronous file I/O: the navigation can proceed before the persisted HSTS state has been read from disk. In addition, is the unlimited size of the TransportSecurity file itself a bug?

VERSION
Chrome Version: 56.0.2924.87
Operating System: Only tested on Windows 10 and Windows 8.1, but all platforms are probably affected.

REPRODUCTION CASE
I tested on Windows 10 and Windows 8.1.
1. Make the TransportSecurity file large (200 MB is enough on my machine, which has an SSD). To achieve this, you can create a TransportSecurity file manually (fast) or visit my PoC website https://ascii0x03.cn/webworker_hsts.html (patience is needed).
2. Make sure the TransportSecurity file is not cached in RAM. Logging out or restarting the system may help; I think the system cache sometimes affects the attack.
3. While starting Chrome, open an HSTS-enabled site. There are two ways:
   (1) Start Chrome with a homepage that is an HSTS-enabled site.
   (2) Click an HSTS-enabled site URL without "https://" from another application (Chrome being the default browser).

Here are some confusing results I got.

Clicked from Microsoft Word:
domain                 attack result
www.baidu.com          ok
www.google.com         no (influenced by preload?)
www.paypal.com         ok
www.alipay.com         ok (not in preload?)
http://www.paypal.com  ok

Clicked from QQ:
domain                 attack result
www.baidu.com          ok
www.google.com         no
www.alipay.com         ok
www.paypal.com         no (WHY?)

NOTE: make sure the TransportSecurity file is not in RAM when starting Chrome. Using Wireshark, I observed plain HTTP requests to HSTS-enabled sites this way.
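A minimal sketch of the suspected race, assuming the persisted HSTS state is loaded asynchronously at startup (hypothetical simplified C++; the class and method names are invented, not Chromium's actual code):

// Hypothetical simplification: if the persisted HSTS state is loaded
// asynchronously, a navigation issued before the load completes sees an
// empty in-memory state and is not upgraded to HTTPS.
#include <string>
#include <unordered_set>

class TransportSecurityStateSketch {
 public:
  // Called once at startup; with a 200 MB TransportSecurity file this read
  // and JSON parse can take seconds.
  void LoadFromDiskAsync();

  // Called on every navigation. Until the async load finishes,
  // |dynamic_entries_| is empty, so this returns false even for hosts that
  // previously set Strict-Transport-Security.
  bool ShouldUpgradeToSSL(const std::string& host) const {
    return dynamic_entries_.count(host) > 0;
  }

 private:
  std::unordered_set<std::string> dynamic_entries_;
};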
,
Mar 10 2017
Lucas, can you have a look at this one? Sleevi noted that it's plausible because our TransportSecurityState::Persister isn't very optimized.
,
Mar 10 2017
Today's TSS::Persister just uses a JSON-backed pref store, which means it's going to be gated by disk load times. There's no intrinsic reason for this, other than that the prefs file was all the rage for things like this. I don't think we'll find a good architectural solution using that backend, because we don't facilitate streaming decodes of JSON.

Switching to something like LevelDB or SQLite may be appropriate - I'm CC'ing shess@ for his advice on the 'recommended storage system' for something like this, as his guidance is extremely useful. I'm not sure how other UAs are backing their dynamic HSTS/HPKP pins, but I wouldn't be surprised if there are similar inefficiencies. And cc estark@ as FYI for all things Expect-*.
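For illustration, a hedged sketch of what a keyed backend could look like (this assumes LevelDB; the function and key names are invented): a lookup touches only the blocks containing the key, so a bloated store no longer requires deserializing one giant JSON blob before the first navigation.

#include <string>
#include "leveldb/db.h"

bool LookupHstsEntry(leveldb::DB* db, const std::string& hashed_host,
                     std::string* serialized_entry) {
  // Point lookup against the on-disk index instead of parsing the entire
  // persisted state up front.
  leveldb::Status s =
      db->Get(leveldb::ReadOptions(), hashed_host, serialized_entry);
  return s.ok();
}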
,
Mar 12 2017
I know that Firefox stores HSTS/HPKP pins in the SiteSecurityServiceState.txt text file, and that file only holds the first 1024 sites visited with HSTS headers, to avoid long disk load times. Besides, it uses a visiting-frequency score to make the "flooding attack" difficult: the score is initialized to 0 the first time a domain is visited and is incremented by one (and only by one, regardless of how many days have passed between visits) for each subsequent day that the website is visited, comparing the current system date and time against the stored value. The attacker must be patient for many days!

Reference paper: http://link.springer.com/chapter/10.1007/978-3-319-48965-0_12

I don't think it's a nice way, but it is better than Chrome's. By the way, I don't understand why Chrome must calculate the SHA hash. How would an attacker be able to get the file?
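A sketch of the scoring rule described above, as I understand it from the paper (the field and function names are hypothetical): the score increments at most once per calendar day the host is seen, so an attacker must spread visits over many real days to outrank a frequently visited site.

#include <cstdint>

struct HstsEntry {
  uint32_t score = 0;          // eviction priority: higher survives longer
  int64_t last_visit_day = 0;  // days since epoch of the last score bump
};

void RecordVisit(HstsEntry* entry, int64_t today_in_days) {
  if (today_in_days > entry->last_visit_day) {
    entry->score += 1;  // +1 regardless of how many days have elapsed
    entry->last_visit_day = today_in_days;
  }
}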
,
Mar 13 2017
Waaaaait... are you saying Firefox caches HSTS entries by frequency of use? That sounds awfully unsafe, especially if only the 1024 most frequent are saved.

I checked against the spec, which does not seem to be explicit on whether the client MUST store the Known HSTS Host for the lifetime of the max-age if the site is always sending an HSTS header (section 6.1.1 doesn't use the word MUST [1], and section 8.1.1 only talks about what happens once the expiry date has passed [2]), but it does clearly state what to do if there is a dynamic max-age stored and then the site doesn't send an HSTS header [3]:

> If a UA receives HTTP responses from a Known HSTS Host over a secure
> channel but the responses are missing the STS header field, the UA
> MUST continue to treat the host as a Known HSTS Host until the
> max-age value for the knowledge of that Known HSTS Host is reached.

[1] https://tools.ietf.org/html/rfc6797#section-6.1.1
[2] https://tools.ietf.org/html/rfc6797#section-8.1.1
[3] https://tools.ietf.org/html/rfc6797#section-8.6

Unfortunately, I don't (directly) have access to that paper.
,
Mar 13 2017
> Unfortunately, I don't (directly) have access to that paper. (Never mind, I found the magic login link for Googlers to access Springer papers.)
,
Mar 13 2017
If I understand HSTS right, it's kind of like a cookie? Like it's a little piece of info which has to be served by the target itself, and carries forward with an expiry time?

Preferences is a terrible way to store things. It is way past the point where other people's poor decisions are likely to mess up your local priorities. The OP mentioned 200MB - that's just challenging. SQLite and LevelDB may or may not be great choices, if they result in disk accesses gating page fetches. I think over time we've found that disk I/O is surprisingly expensive, often more expensive than network I/O.

The most similar thing I can think of offhand is the safe-browsing malware/phishing database, which is ~13MB (but was more like 5MB back when these decisions were being made). Initially, bare lookups by key were used, but those were far too slow. So an in-memory bloom filter was introduced, with hits falling back to database lookups to verify; those were also too slow, so it ended up with the bloom filter followed by a server ping. I _think_ the page fetch proceeds in parallel, but use is gated on the safe-browsing results.

[ObDisclosure: We've evolved from spinning disks being dominant to SSDs being dominant, but AFAICT in the fleet SSDs are often differently slow. You can't rely on them behaving like slower RAM, I think.]

Even a bloom filter or prefix set (a filter based on a Rice-encoding variant) probably won't help for 200MB attacks, though, unless you can use some interesting encoding or filtering system. Safe browsing stored 32-bit hashes with about 32 bits of metadata apiece, which left the filter around 1/4 the raw database size, but 50MB or even 20MB is really a lot of memory for this! Though I suppose if the attacks don't work, nobody will employ them, so maybe you don't have to worry about that kind of edge case.

There may be other safe-browsing-like possibilities, though. Since the server itself has to serve the tidbit, there's perhaps a limit to how many TLDs and/or IPs can be involved. So perhaps there could be a multi-level filter, where one level says "This is a high-collision space", so you can segregate the high-cost queries on disk from the low-cost queries in memory. Or perhaps this could all feed into the safe-browsing machinery to put domains in a penalty box - not to deny them, but to confine them to a slow path to protect the fast-path domains.
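For concreteness, a hedged sketch of the safe-browsing-style layering described above (the names and sizes are hypothetical; a real deployment would tune the filter and use proper hash functions): an in-memory approximate filter answers most queries, and only possible hits fall through to the slow on-disk store, so the common case never blocks on disk.

#include <bitset>
#include <functional>
#include <string>

class HstsBloomFilter {
 public:
  void Add(const std::string& hashed_host) {
    bits_.set(Hash1(hashed_host));
    bits_.set(Hash2(hashed_host));
  }

  // False positives are possible (and must be verified against the on-disk
  // store); false negatives are not, so a miss here is a definitive "no".
  bool MightContain(const std::string& hashed_host) const {
    return bits_.test(Hash1(hashed_host)) && bits_.test(Hash2(hashed_host));
  }

 private:
  static constexpr size_t kBits = 1 << 20;  // 128 KB of filter memory
  size_t Hash1(const std::string& s) const {
    return std::hash<std::string>{}(s) % kBits;
  }
  size_t Hash2(const std::string& s) const {
    return std::hash<std::string>{}(s + "#2") % kBits;
  }
  std::bitset<kBits> bits_;
};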
,
Mar 13 2017
> Since the server itself has to serve the tidbit, there's perhaps a limit to how many TLDs and/or IPs can be involved. The solution that other browsers used for Issue 178980 (localStorage flooding using subdomains) is to apply a resource cap per eTLD+1 (#bytes on that bug, but here it would presumably be # of entries). I wouldn't have strong objections to such an approach, but I believe our hash-based storage format might not make it easy to look for all the *subdomains* of a given eTLD+1?
,
Mar 13 2017
@Comment 12: Right, but it's trivial to bypass those mechanisms with a pull request, so I don't know how much we should rely on them.
,
Mar 14 2017
@Comment 9: Hah, copyright sometimes brings me trouble too. :) Saving only the 1024 most-frequent HSTS hosts in Firefox really does sound awful; however, I don't think most users have visited 1024 HSTS hosts. And if I have visited paypal.com on 30 different days, the attacker needs at least 30 days to flood out its HSTS entry. That's not easy.
,
Mar 14 2017
Limiting the number of HSTS domains per eTLD+1 sounds like a good solution; if lots of HSTS subdomains are needed, the right way is the "includeSubDomains" directive. There are two questions:

1. As mentioned in issue 178980 comment 45, it would allow me.github.io to "starve" you.github.io, with no recourse for you.github.io. https://bugs.chromium.org/p/chromium/issues/detail?id=178980#c45

2. Does the solution open another door for attackers to sniff the user's browsing history? (Assume we limit bank.com to 100 entries; the attacker can set 99 entries for *.bank.com, leaking whether the user has visited some *.bank.com before those entries expire.)

Maybe the questions above are negligible.
,
Mar 28 2017
lgarron: Uh oh! This issue is still open and hasn't been updated in the last 14 days. This is a serious vulnerability, and we want to ensure that there's progress. Could you please leave an update with the current status and any potential blockers? If you're not the right owner for this issue, could you please remove yourself as soon as possible or help us find the right one? If the issue is fixed or you can't reproduce it, please close the bug. If you've started working on a fix, please set the status to Started. Thanks for your time! To disable nags, add the Disable-Nags label. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
,
Apr 5 2017
Friendly sheriff ping on this one. :)
,
Apr 10 2017
Can we apply cookie-split-loading? Cookie I/O is better optimized. https://www.chromium.org/developers/design-documents/cookie-split-loading
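A hedged sketch of how split loading might translate to HSTS (a hypothetical API modeled on the linked design doc, not an existing Chromium interface): load only the entries for the eTLD+1 being navigated to before unblocking the request, then stream in the rest of the store in the background.

#include <functional>
#include <string>

class SplitHstsStore {
 public:
  // Fast path: read just the keyed block for this registrable domain and
  // run |done| as soon as it is in memory, so the navigation only waits on
  // a small read.
  void LoadEntriesForDomain(const std::string& etld_plus_one,
                            std::function<void()> done);

  // Slow path: load everything else without blocking navigations.
  void LoadRemainingEntriesInBackground();
};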
,
Apr 11 2017
lgarron: Uh oh! This issue is still open and hasn't been updated in the last 28 days. This is a serious vulnerability, and we want to ensure that there's progress. Could you please leave an update with the current status and any potential blockers? If you're not the right owner for this issue, could you please remove yourself as soon as possible or help us find the right one? If the issue is fixed or you can't reproduce it, please close the bug. If you've started working on a fix, please set the status to Started. Thanks for your time! To disable nags, add the Disable-Nags label. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
,
Apr 21 2017
I am sorry, but I'll be sharing some details of this bug at a security conference, in a poster, on May 22nd, 2017. I think it's probably hard to carry out such an attack in reality. Sorry for leaving just one month to fix this bug before it becomes public.
,
Nov 15 2017
a3135134@, have you shared the details at a conference? I guess we can remove view restrictions then? Lucas, could you please either post an update or re-assign this if you aren't working on it anymore? Your last comment is 8 months old.
,
Nov 21 2017
Yes, but it was not as detailed as this issue report. You can probably remove the view restrictions to help get this bug fixed faster.
,
Feb 20 2018
I don't think we have any inexpensive solution for the issue identified here. The issue was made public last year and no longer needs view restrictions.
,
May 1 2018
We have not made any progress here, but the upcoming changes around the Network Service may have an impact here.
,
Jun 5 2018
Perhaps we could limit the # of HSTS entries any given eTLD+1 can store? This would also be a defense against HSTS super-cookies, if we could make the limit low enough. "32 explicitly-named subdomains should be enough for anybody"; past that, sites should use includeSubDomains? Would that work? Or am I crazy again? Any interest in picking this bug up?
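A hedged sketch of enforcing such a cap (the constant and names are hypothetical; the per-domain idea echoes the kDomainMaxCookies / kMaxCookies split that cookies use in net/cookies/cookie_monster.h): refuse to persist a new dynamic HSTS entry once a registrable domain already holds the maximum.

#include <map>
#include <string>

constexpr size_t kMaxEntriesPerEtldPlusOne = 32;  // hypothetical limit

bool TryAddEntry(std::map<std::string, size_t>* counts_by_etld_plus_one,
                 const std::string& etld_plus_one) {
  size_t& count = (*counts_by_etld_plus_one)[etld_plus_one];
  if (count >= kMaxEntriesPerEtldPlusOne)
    return false;  // cap reached: sites should use includeSubDomains instead
  ++count;
  return true;
}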
,
Jun 6 2018
If there is some consensus on the best way forward, I can take this. If we go with the eTLD+1 option, we'll have to change the JSON format, because it's not possible to extract that information from the current format. //net OWNERS haven't been very keen on CLs that change the JSON format in the past, though. But if we're going to change the format for this, maybe we could clean it up and address some of the other issues at the same time?
,
Jun 6 2018
I'll defer to net OWNERS, agl, and estark. I think a bypass of HSTS is worse than the loss of a weak defense against a forensic attacker that the current format provides. (agl devised the hostname hashing scheme to avoid leaking browsing history to a forensic attacker, if I recall correctly.) It's a bit of a micro-optimization, but I can also imagine a much more compact format than JSON.
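For reference, a hedged sketch of the hashing scheme as I understand it (the exact canonicalization is an assumption; crypto::SHA256HashString is Chromium's helper in crypto/sha2.h): hostnames are stored as one-way hashes so that someone reading the file offline cannot directly enumerate visited sites, which is also why grouping entries by eTLD+1 is hard with the current format.

#include <string>
#include "crypto/sha2.h"

std::string StorageKeyForHost(const std::string& canonicalized_host) {
  // One-way: given the key you cannot recover the hostname, only test a
  // guessed hostname against it.
  return crypto::SHA256HashString(canonicalized_host);
}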
,
Jun 7 2018
Using public suffix + 1 as a storage-restriction mitigation sadly falls apart, especially since it's easy to get added to the public suffix list. That is, a single pull request adding "*.evil.attacker.example.com" to the PSL means that every subdomain of .evil.attacker.example.com will constitute a public suffix, and the attacker can just explore 32 subdomains at a time.

So some of the fundamental challenges:
1) Is there a more efficient storage format that allows quicker loading?
2) How do we limit the effective size?
   a) No limit
   b) Limit by some variation of domain
   c) Limit by number of entries

My understanding is that Mozilla chose 2c and chose the format in accordance with that. We don't have an overall cookie storage limit (AFAIK), just a per-domain limit, and while the format (SQLite) doesn't block loading, it's allowed to grow.
,
Jun 8 2018
My understanding is that we have both per-domain limits for cookies (kDomainMaxCookies) and an overall limit (kMaxCookies) in net/cookies/cookie_monster.h.
,
Jun 11 2018
I'm glad to see this old bug has been picked up again. The summary in comment 37 is great. I want to say:
(1) As I mentioned in comment 18, there is an efficient loading approach.
(2) HSTS is a security mechanism. If security is more important than efficiency, we should delay the browser's startup until the security config files are loaded. If we don't delay startup, there will always be risks.
(3) I don't think the current hashed format is good; palmer's suggestion is more reasonable.
Here's the link to my short paper: https://www.ieee-security.org/TC/SP2017/poster-abstracts/IEEE-SP17_Posters_paper_12.pdf
,
Aug 7
Ryan, would you mind owning this?
,
Aug 7
Sorry, I'm not a good person to own this.
,
Aug 23
martijnc@ -- per #c35, do you want to own this?
,
Aug 23
nharper@ said he's interested so assigning it to him for now.
Comment 1 by elawrence@chromium.org, Mar 8 2017
Summary: Security: HSTS Bypass via flooding of the HSTS policy file (was: Security: HSTS Bypass)