
Issue 699461

Starred by 3 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Linux, Android, Windows, Chrome, Mac, Fuchsia
Pri: 1
Type: Bug-Security





Security: HSTS Bypass via flooding of the HSTS policy file

Reported by a3135...@gmail.com, Mar 8 2017

Issue description

VULNERABILITY DETAILS
When the TransportSecurity file is very large, HSTS does not protect the first site Chrome loads, for example the user's homepage, or a URL clicked in another application (with Chrome as the default browser).
A patient attacker can easily make the victim's TransportSecurity file large by getting the victim to visit many different HSTS-enabled sites.

I think this issue is caused by asynchronous file IO.

In addition, is the unlimited size of the TransportSecurity file itself a bug?
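The race the reporter describes can be sketched in a few lines: if lookups do not block on the asynchronous load of the persisted store, navigations issued during startup see an empty store. A toy Python model (names hypothetical, not Chromium code):

```python
import threading
import time

class HstsStore:
    """Toy model: persisted HSTS entries load asynchronously from disk."""

    def __init__(self, load_seconds):
        self._entries = set()
        self._load_seconds = load_seconds

    def start_loading(self, persisted_entries):
        def load():
            time.sleep(self._load_seconds)  # stands in for reading a huge file
            self._entries.update(persisted_entries)
        threading.Thread(target=load, daemon=True).start()

    def should_upgrade(self, host):
        # The query does not block on the pending load, so early
        # navigations see an empty store and go out over plain HTTP.
        return host in self._entries

store = HstsStore(load_seconds=0.1)
store.start_loading({"www.example.com"})

early = store.should_upgrade("www.example.com")  # queried during startup
time.sleep(0.5)                                  # let the load finish
late = store.should_upgrade("www.example.com")
print(early, late)
```

The larger the file, the longer `load_seconds` effectively is, and the wider the window in which the first navigation is not upgraded.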

VERSION
Chrome Version: [56.0.2924.87]
Operating System: Only tested on Windows 10 and Windows 8.1, but all platforms may be affected.

REPRODUCTION CASE
I tested on Windows 10 and Windows 8.1.
1. Make the TransportSecurity file large (200 MB is enough on my computer, which has an SSD).
To achieve this, you can create a TransportSecurity file manually (fast) or visit my PoC website https://ascii0x03.cn/webworker_hsts.html (patience is needed).
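The manual (fast) route amounts to writing a large file of fake entries. The sketch below only mimics the general shape (a JSON dict with one record per host); the random keys and field names are placeholders, not Chrome's exact schema:

```python
import json
import os
import secrets
import tempfile

def write_flooded_file(path, n_entries):
    """Write a TransportSecurity-like JSON file with many fake entries.

    The real file keys records by a hash of the hostname; the random hex
    keys and the field names here are placeholders for illustration.
    """
    entries = {}
    for _ in range(n_entries):
        key = secrets.token_hex(32)  # stand-in for a hashed-hostname key
        entries[key] = {
            "expiry": 4102444800.0,          # far-future timestamp
            "mode": "force-https",
            "sts_include_subdomains": False,
        }
    with open(path, "w") as f:
        json.dump(entries, f)
    return os.path.getsize(path)

path = os.path.join(tempfile.mkdtemp(), "TransportSecurity")
size = write_flooded_file(path, 10_000)
print(f"{size} bytes for 10,000 entries")
```

Scaling the entry count up gives files in the hundreds-of-megabytes range the report mentions.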

2. Make sure the TransportSecurity file is not in RAM.
Logging out or restarting the system may help; I think the system cache sometimes affects the attack.

3. While starting Chrome, open an HSTS-enabled site.
There are two ways:
(1) Start Chrome with a homepage that is an HSTS-enabled site.
(2) Click an HSTS-enabled site URL without "https://" from another application (with Chrome as the default browser).
Here are some confusing results I got.

Clicked from Microsoft Word:
domain                 attack result
www.baidu.com          ok
www.google.com         no (influenced by preload?)
www.paypal.com         ok
www.alipay.com         ok (not in preload?)
http://www.paypal.com  ok

Clicked from QQ:
domain                 attack result
www.baidu.com          ok
www.google.com         no
www.alipay.com         ok
www.paypal.com         no (WHY?)


NOTE: make sure the TransportSecurity file is not in RAM when starting Chrome.
Using Wireshark, I saw plain HTTP requests to HSTS-enabled sites issued this way.

 
Attachments:
111.png (186 KB)
222.png (118 KB)
Components: Internals>Network>DomainSecurityPolicy
Summary: Security: HSTS Bypass via flooding of the HSTS policy file (was: Security: HSTS Bypass )
Owner: lgar...@chromium.org
Status: Assigned (was: Unconfirmed)
Lucas, can you have a look at this one? Sleevi noted that it's plausible because our TransportSecurityState::Persister isn't very optimized.

Comment 3 by sleevi@google.com, Mar 10 2017

Cc: sh...@chromium.org est...@chromium.org
Today's TSS::Persister just uses a JSON-backed pref-store, which means it's going to be gated by disk load times. There's no intrinsic reason for this, other than that the prefs file was the rage for things like this.

I don't think we'll find a good architectural solution here using that backend, because we don't facilitate streaming decodes of JSON. Switching to something like LevelDB or SQLite may be appropriate - I'm CC'ing shess@ for his advice on the 'recommended storage system' for something like this, as his guidance is extremely useful.

I'm not sure how other UAs are backing their dynamic HSTS/HPKP pins, but I wouldn't be surprised if there are similar inefficiencies.

And cc estark@ as FYI for all things Expect-*

Comment 4 by sleevi@google.com, Mar 10 2017

Cc: rsleevi@chromium.org
Labels: Security_Severity-Medium Security_Impact-Stable OS-Android OS-Chrome OS-Linux OS-Mac OS-Windows
Project Member

Comment 6 by sheriffbot@chromium.org, Mar 11 2017

Labels: M-57
Project Member

Comment 7 by sheriffbot@chromium.org, Mar 11 2017

Labels: Pri-1

Comment 8 by a3135...@gmail.com, Mar 12 2017

I know that Firefox stores HSTS/HPKP pins in SiteSecurityServiceState.txt text file. And the Firefox file only holds the first 1024 sites visited with HSTS headers to avoid long disk load time. Besides, it uses a visiting frequency score to make the "flooding attack" difficult. It is initialized at 0 the first time a domain is visited and is incremented by one (and only by one, regardless of the days that have passed between visits) for each subsequent day that the website is visited, taking as a reference the current system date and time in contrast to the value stored. The attacker must be patient for many days!
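The per-day scoring rule described above can be sketched as follows (an interpretation of the comment, not Firefox's actual code): the score grows by at most one per calendar day visited, regardless of the gap between visits, so an attacker cannot out-score a long-established entry quickly.

```python
from datetime import date

class ScoredEntry:
    """Score starts at 0 on the first visit and gains at most
    +1 per subsequent calendar day on which the site is visited."""

    def __init__(self):
        self.score = 0
        self.last_visit_day = None

    def record_visit(self, today):
        if self.last_visit_day != today:
            if self.last_visit_day is not None:
                self.score += 1  # +1 per new day, regardless of the gap
            self.last_visit_day = today

entry = ScoredEntry()
entry.record_visit(date(2017, 3, 1))   # first visit: score stays 0
entry.record_visit(date(2017, 3, 1))   # same day: no change
entry.record_visit(date(2017, 3, 2))   # new day: +1
entry.record_visit(date(2017, 3, 9))   # a week later: still only +1
print(entry.score)
```

Under this rule, displacing an entry a user has visited on 30 distinct days takes the attacker at least 30 days of induced visits.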

Reference Paper:http://link.springer.com/chapter/10.1007/978-3-319-48965-0_12

I don't think it's a nice way, but it is better than chrome's.
By the way, I don't understand why Chrome must calculate the SHA hash. How is an attacker able to get the file?
Waaaaait... are you saying Firefox caches HSTS entries by frequency of use? That sounds awfully unsafe, especially if only the 1024 most frequent are saved.

I checked against the spec, which does not seem to be explicit on whether the client MUST store the Known HSTS Host for the lifetime of the max-age if the site is always sending an HSTS header (section 6.1.1 doesn't use the word MUST [1], and section 8.1.1 only talks about what happens if the expiry date has passed), but does clearly state what to do if there is a dynamic max-age stored and then the site doesn't send an HSTS header [3]:

>    If a UA receives HTTP responses from a Known HSTS Host over a secure
>    channel but the responses are missing the STS header field, the UA
>    MUST continue to treat the host as a Known HSTS Host until the
>    max-age value for the knowledge of that Known HSTS Host is reached.

[1] https://tools.ietf.org/html/rfc6797#section-6.1.1
[2] https://tools.ietf.org/html/rfc6797#section-8.1.1
[3] https://tools.ietf.org/html/rfc6797#section-8.6

Unfortunately, I don't (directly) have access to that paper.
> Unfortunately, I don't (directly) have access to that paper.

(Never mind, I found the magic login link for Googlers to access Springer papers.)

Comment 11 by sh...@chromium.org, Mar 13 2017

If I understand HSTS right, it's kind of like a cookie?  Like it's a little piece of info which has to be served by the target itself, and carries forward with an expire time?

Preferences is a terrible way to store things.  It is way past the point where other people's poor decisions are likely to mess up your local priorities.  The OP mentioned 200MB - that's just challenging.

SQLite and leveldb may-or-may-not be great choices, if they result in disk accesses gating page fetches.  I think over time we've found that disk I/O is surprisingly expensive, often more expensive than network I/O.  The most similar thing I can think of offhand is safe-browsing malware/phishing database, which is ~13MB (but was more like 5MB back when these decisions were being made).  Initially, bare lookups by key were used, but those were far too slow.  So an in-memory bloom filter was introduced, with hits falling back to database lookups to verify, and those were too slow, so it ended up with the bloom filter followed by a server ping.  I _think_ the page fetch proceeds in parallel, but use is gated on the safe-browsing results.

[ObDisclosure: We've evolved from spinning disks being dominant to SSDs being dominant, but AFAICT in the fleet SSDs are often differently slow.  You can't rely on them having reliable RAM-but-slower access, I think.]

Even a bloom filter or prefix set (filter based on a rice encoding variant) probably won't help for 200MB attacks, though, unless you can use some interesting encoding or filtering system.  Safe browsing stored 32-bit hashes with about 32 bits of metadata apiece, which left the filter around 1/4 the raw database size, but 50MB or even 20MB is really a lot of memory for this!  Though I suppose if the attacks don't work, nobody will employ them, so maybe you don't have to worry about that kind of edge case.

There may be other safe-browsing-like possibilities, though.  Since the server itself has to serve the tidbit, there's perhaps a limit to how many TLDs and/or IPs can be involved.  So perhaps there could be a multi-level filter, where one level says "This is a high-collision space", so you can segregate the high-cost queries on disk from the low-cost queries in memory.

Or perhaps this could all feed into the safe-browsing machinery to put domains in a penalty box.  Not to deny them, but to confine them to a slow path to protect the fast-path domains.
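The safe-browsing-style layering described above (a small in-memory filter answering "definitely not stored" cheaply, with only hits falling through to the slow on-disk store) can be sketched with a minimal Bloom filter; the sizes and hash counts here are illustrative, not tuned values:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, rare false positives."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

disk_store = {"secure.example.com"}   # stand-in for the slow on-disk backend
bf = BloomFilter()
for host in disk_store:
    bf.add(host)

def is_hsts(host):
    if not bf.might_contain(host):
        return False                  # misses never touch the disk
    return host in disk_store         # verify hits against the backend
```

As the comment notes, even at a few bits per entry the filter for a 200 MB store would be large, so this alone does not defeat a flooding attack.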
> Since the server itself has to serve the tidbit, there's perhaps a limit to how many TLDs and/or IPs can be involved. 

The solution that other browsers used for Issue 178980 (localStorage flooding using subdomains) is to apply a resource cap per eTLD+1 (#bytes on that bug, but here it would presumably be # of entries). I wouldn't have strong objections to such an approach, but I believe our hash-based storage format might not make it easy to look for all the *subdomains* of a given eTLD+1?
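A sketch of the per-eTLD+1 cap discussed above, under the assumption that plaintext hostnames are available (which, as noted, Chrome's hash-keyed format does not directly allow); the cap value and the naive registrable-domain helper are placeholders:

```python
from collections import defaultdict

MAX_ENTRIES_PER_SITE = 32  # illustrative cap, not a shipped value

def registrable_domain(host):
    # Hypothetical stand-in for a real eTLD+1 lookup (which needs the
    # public suffix list); here we naively take the last two labels.
    return ".".join(host.split(".")[-2:])

class CappedHstsStore:
    """Sketch of a per-eTLD+1 cap on dynamic HSTS entries."""

    def __init__(self):
        self.entries = set()
        self.per_site = defaultdict(int)

    def add(self, host):
        if host in self.entries:
            return True
        site = registrable_domain(host)
        if self.per_site[site] >= MAX_ENTRIES_PER_SITE:
            return False  # flooding attempt hits the cap
        self.entries.add(host)
        self.per_site[site] += 1
        return True

store = CappedHstsStore()
added = sum(store.add(f"sub{i}.attacker.example") for i in range(1000))
print(added)
```

A thousand attacker subdomains under one registrable domain collapse to at most the cap, which bounds how much any single site can inflate the store.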
@Comment 12: Right, but it's trivial to bypass those mechanisms with a pull request, so I don't know how much we should rely on them.

Comment 14 by a3135...@gmail.com, Mar 14 2017

@Comment 9:
Hah, copyright sometimes brings me trouble too. :)
Saving only the 1024 most frequent HSTS hosts in Firefox really does sound awful; however, I don't think most users have visited 1024 HSTS hosts. And if I have visited paypal.com for 30 days, the attacker needs at least 30 days to flood the HSTS store. It's not easy.

Comment 15 by a3135...@gmail.com, Mar 14 2017

Limiting the number of HSTS domains per eTLD+1 sounds like a good solution. If lots of HSTS subdomains are needed, the right way is the "includeSubDomains" directive.

There are two questions:
1. As mentioned in issue 178980 comment 45, it would allow me.github.io to "starve" you.github.io, with no recourse for you.github.io.
https://bugs.chromium.org/p/chromium/issues/detail?id=178980#c45
2. Does the solution open another door for attackers to sniff the user's browsing history? (Assume we limit bank.com to 100 entries; the attacker can set 99 entries for *.bank.com, leaking the information that the user has visited *.bank.com within the expiry time.)

Maybe the questions above are negligible. 
Project Member

Comment 16 by sheriffbot@chromium.org, Mar 28 2017

lgarron: Uh oh! This issue still open and hasn't been updated in the last 14 days. This is a serious vulnerability, and we want to ensure that there's progress. Could you please leave an update with the current status and any potential blockers?

If you're not the right owner for this issue, could you please remove yourself as soon as possible or help us find the right one?

If the issue is fixed or you can't reproduce it, please close the bug. If you've started working on a fix, please set the status to Started.

Thanks for your time! To disable nags, add the Disable-Nags label.

For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Friendly sheriff ping on this one. :)

Comment 18 by a3135...@gmail.com, Apr 10 2017

Can we apply cookie-split-loading? Cookie IO is more optimized.

https://www.chromium.org/developers/design-documents/cookie-split-loading
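The split-loading idea referenced above, transplanted to HSTS (hypothetical; Chrome's cookie-split-loading applies to the cookie store), would mean priority-loading the entries for hosts needed at startup and streaming the rest of the file in afterwards, so startup is not gated on the full load:

```python
class SplitLoadingStore:
    """Sketch: small keyed reads satisfy startup queries, the bulk
    of the persisted store is loaded later in the background."""

    def __init__(self, persisted):
        self._persisted = persisted   # stand-in for an on-disk keyed index
        self._loaded = {}

    def load_for_host(self, host):
        # Priority load: fetch just this host's entry (cheap keyed read).
        if host in self._persisted:
            self._loaded[host] = self._persisted[host]

    def load_rest(self):
        # Background bulk load of everything else.
        self._loaded.update(self._persisted)

    def should_upgrade(self, host):
        return self._loaded.get(host, False)

persisted = {"www.example.com": True, "other.example.org": True}
store = SplitLoadingStore(persisted)
store.load_for_host("www.example.com")  # done before the first request
print(store.should_upgrade("www.example.com"))
store.load_rest()
print(store.should_upgrade("other.example.org"))
```

Note this requires a storage format that supports keyed reads, which the single JSON blob does not.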
Project Member

Comment 19 by sheriffbot@chromium.org, Apr 11 2017

lgarron: Uh oh! This issue still open and hasn't been updated in the last 28 days. This is a serious vulnerability, and we want to ensure that there's progress. Could you please leave an update with the current status and any potential blockers?

Project Member

Comment 20 by sheriffbot@chromium.org, Apr 20 2017

Labels: -M-57 M-58

Comment 21 by a3135...@gmail.com, Apr 21 2017

I am sorry, but I will share some details of this bug in a poster at a security conference on May 22nd, 2017. I think it is probably hard to carry out such an attack in reality. Sorry for leaving just one month to fix this bug before it becomes public.
Project Member

Comment 22 by sheriffbot@chromium.org, Jun 6 2017

Labels: -M-58 M-59
Project Member

Comment 23 by sheriffbot@chromium.org, Jul 26 2017

Labels: -M-59 M-60
Project Member

Comment 24 by sheriffbot@chromium.org, Sep 6 2017

Labels: -M-60 M-61
Project Member

Comment 25 by sheriffbot@chromium.org, Oct 18 2017

Labels: -M-61 M-62
a3135134@, have you shared the details at a conference? I guess we can remove view restrictions then?

Lucas, could you please either post an update or re-assign this if you aren't working on this anymore? Your last comment is 8 months old.

Comment 27 by a3135...@gmail.com, Nov 21 2017

Yes, but it was not as detailed as this issue report. You can probably remove the view restrictions to help get this bug fixed faster.
Owner: elawrence@chromium.org
Project Member

Comment 29 by sheriffbot@chromium.org, Dec 7 2017

Labels: -M-62 M-63
Project Member

Comment 30 by sheriffbot@chromium.org, Jan 25 2018

Labels: -M-63 M-64
Cc: nhar...@chromium.org
Labels: -Restrict-View-SecurityTeam -M-64 M-66 allpublic
I don't think we have any inexpensive solution for the issue identified here.

The issue was made public last year and does not need view-restrictions.
Owner: ----
Status: Available (was: Assigned)
We have not made any progress here, but the upcoming changes around the Network Service may have an impact here.
Project Member

Comment 33 by sheriffbot@chromium.org, May 30 2018

Labels: -M-66 M-67
Labels: OS-Fuchsia
Perhaps we could limit the # of HSTS entries any given eTLD+1 can store? This would also be a defense against HSTS super-cookies, if we could make the limit low enough. "32 explicitly-named subdomains should be enough for anybody"; past that, sites should use includeSubDomains? Would that work? Or am I crazy again.

Any interest in picking this bug up?
If there is some consensus on the best way forward I can take this.

If we go with the eTLD+1 option we'll have to change the JSON format because it's not possible to extract that information with the current format. //net OWNERS haven't been very keen on CLs that change the JSON format in the past though. But if we're going to change the format for this maybe we could clean up the format and address some of the other issues at the same time?
Cc: agl@chromium.org
I'll defer to net OWNERS, agl, and estark. I think a bypass of HSTS is worse than the loss of a weak defense against a forensic attacker that the current format provides. (agl devised the hostname hashing scheme to avoid leaking browsing history to a forensic attacker, if I recall correctly.)

It's a bit of a micro-optimization, but I can also imagine a much more compact format than JSON.
Using public suffix + 1 as a storage restriction mitigation sadly falls apart, especially since it's easy to get added to the public suffix list. That is, a single pull request adding "*.evil.attacker.example.com" to the PSL means that every subdomain of .evil.attacker.example.com will constitute a public suffix, and they can just explore 32 subdomains at a time.

So some of the fundamental challenges:
1) Is there a more efficient storage format that can allow quicker loading
2) How do we limit the effective size
  a) No limit
  b) Limit by some variation of domain
  c) Limit by number of entries

My understanding is that Mozilla chose 2.c) and chose the format in accordance with that. We don't have an overall cookie storage limit (AFAIK), just a per-domain limit, and while the format (SQLite) doesn't block loading, it's allowed to grow.
My understanding is that we have both per-domain limits for cookies (kDomainMaxCookies) and an overall limit (kMaxCookies) in net/cookies/cookie_monster.h.
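The cookie-style dual limit mentioned above (a per-domain cap plus a global cap, as in CookieMonster's kDomainMaxCookies/kMaxCookies) could be sketched like this; the constants and the LRU-style eviction policy are illustrative assumptions, not Chrome's actual values:

```python
from collections import OrderedDict

PER_DOMAIN_MAX = 3   # illustrative, cf. kDomainMaxCookies
GLOBAL_MAX = 5       # illustrative, cf. kMaxCookies

class DualLimitStore:
    """Sketch: enforce a per-domain cap and a global cap, evicting
    the oldest entry (insertion order as a crude LRU) on overflow."""

    def __init__(self):
        self._entries = OrderedDict()

    def add(self, domain, name):
        key = (domain, name)
        self._entries.pop(key, None)  # re-adding refreshes position
        # Per-domain cap: evict this domain's oldest entry first.
        same_domain = [k for k in self._entries if k[0] == domain]
        if len(same_domain) >= PER_DOMAIN_MAX:
            del self._entries[same_domain[0]]
        # Global cap: evict the globally oldest entry.
        if len(self._entries) >= GLOBAL_MAX:
            self._entries.popitem(last=False)
        self._entries[key] = True

    def count(self, domain=None):
        if domain is None:
            return len(self._entries)
        return sum(1 for k in self._entries if k[0] == domain)

store = DualLimitStore()
for i in range(10):
    store.add("a.test", f"n{i}")  # capped at PER_DOMAIN_MAX
print(store.count("a.test"), store.count())
```

Applied to HSTS, the global cap alone would already bound the TransportSecurity file's size, which addresses the reporter's question about the file being unlimited.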

Comment 39 by a3135...@gmail.com, Jun 11 2018

I'm glad to see this old bug picked up again.
The summary in #comment 37 is great. I want to say:
(1) As I mentioned in #comment 18, there is an efficient loading approach.
(2) HSTS is a security mechanism. If security is more important than efficiency, we should delay the browser's startup while loading security config files. If we don't delay the startup, there will always be risks.
(3) I don't think the current md5 format is good, and palmer's suggestion is more reasonable.

Here's the link to my short paper:
https://www.ieee-security.org/TC/SP2017/poster-abstracts/IEEE-SP17_Posters_paper_12.pdf


Project Member

Comment 40 by sheriffbot@chromium.org, Jul 25

Labels: -M-67 Target-68 M-68
Cc: -rsleevi@chromium.org
Owner: rsleevi@chromium.org
Status: Assigned (was: Available)
Ryan, would you mind owning this?
Cc: rsleevi@chromium.org
Owner: ----
Status: Untriaged (was: Assigned)
Sorry, I'm not a good person to own this.
Status: Available (was: Untriaged)
Owner: marti...@chromium.org
Status: Assigned (was: Available)
martijnc@ -- per #c35, do you want to own this?
Cc: marti...@chromium.org
Owner: nhar...@chromium.org
nharper@ said he's interested so assigning it to him for now.
Project Member

Comment 46 by sheriffbot@chromium.org, Sep 5

Labels: -M-68 M-69 Target-69
Project Member

Comment 47 by sheriffbot@chromium.org, Oct 17

Labels: -M-69 Target-70 M-70
Project Member

Comment 48 by sheriffbot@chromium.org, Dec 5

Labels: -M-70 Target-71 M-71
