Starred by 967 users
Status: Assigned
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Bug

issue 379588

  • Only users with EditIssue permission may comment.

localStorage bug allows sites to fill up hard disk / crash Chrome
Reported by, Feb 28 2013
Chrome Version       : 25.0.1364.99
URLs (if applicable) :
Other browsers tested:
  Safari 6: Fail
  Firefox 18: Pass
  IE 10: Fail

What steps will reproduce the problem?
1. Visit
2. Chrome crashes around 2GB.
3. Or, even if Chrome didn't crash, it's still really bad that sites can fill up your hard disk.

What is the expected result?

The spec suggests this:

"User agents should guard against sites storing data under the origins of other affiliated sites, e.g. storing up to the limit in a1.example.com, a2.example.com, a3.example.com, etc, circumventing the main example.com storage limit. A mostly arbitrary limit of five megabytes per origin is recommended."

What happens instead?

A single domain is allowed to fill up the user's hard disk / crash Chrome by using many subdomains.
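The mechanics can be sketched with a toy simulation (hypothetical code, not Chrome's implementation): the per-origin quota is enforced, but nothing bounds the number of origins, so total usage scales linearly with the number of subdomains an attacker mints.

```javascript
// Hypothetical model of the bug: each origin gets its own independent 5 MB
// localStorage allotment, so a site that generates fresh subdomains grows its
// total footprint without bound.
const PER_ORIGIN_LIMIT = 5 * 1024 * 1024; // 5 MB, as the spec recommends

function totalBytesUsed(subdomainCount) {
  // Per-origin quota is respected, but no cap exists across origins.
  let total = 0;
  for (let i = 0; i < subdomainCount; i++) {
    total += PER_ORIGIN_LIMIT; // each sub<i>.attacker.example fills its quota
  }
  return total;
}

// A few hundred subdomains already reach the ~2 GB range where crashes
// were reported.
console.log(totalBytesUsed(410));
```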

Comment 1 Deleted
Comment 2 Deleted
Comment 3 Deleted
Comment 4 Deleted
Comment 5 Deleted
Comment 6 Deleted
Comment 7 Deleted
Comment 8 Deleted
Comment 9 by Deleted ...@, Feb 28 2013
I read a suggestion that the user should be prompted to confirm the use of local storage in the case where the host had never been visited affirmatively (direct input or clicking a link).

That seems like a good policy, given that .tk domains are available for free.
Comment 10 by, Feb 28 2013
I think Firefox's approach (as recommended by the spec) is the most reasonable: simply give all * origins a single quota to share.
Don't think we should prompt for local storage use at all, but definitely put a cap on this. Cheers.
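A minimal sketch of that shared-quota idea, assuming a hypothetical quotaKey helper and a tiny stand-in for the real Public Suffix List: all subdomains reduce to the same registrable domain (eTLD+1), so they would all be charged against one quota.

```javascript
// Toy stand-in for the Public Suffix List; the real list is much larger.
const PUBLIC_SUFFIXES = new Set(['com', 'org', 'co.uk']);

// Hypothetical helper: reduce a hostname to its eTLD+1 ("registrable domain"),
// the key under which a shared quota would be tracked.
function quotaKey(host) {
  const labels = host.split('.');
  for (let i = 0; i < labels.length; i++) {
    const suffix = labels.slice(i + 1).join('.');
    if (PUBLIC_SUFFIXES.has(suffix)) {
      return labels.slice(i).join('.'); // one label plus the suffix
    }
  }
  return host; // no known suffix; fall back to the full host
}

console.log(quotaKey('a1.attacker.com'));  // "attacker.com"
console.log(quotaKey('users.site.co.uk')); // "site.co.uk"
```

With this keying, a1.attacker.com and a2.attacker.com would draw from the same 5 MB pool instead of 5 MB each.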
Not only should this be fixed, but users should also be able to specify exactly how much space they're willing to allow a website to use on their own machine. For web applications where caching data makes lots of sense, it should be up to me to determine exactly how much space I'm willing to reserve for that cache.
Comment 13 Deleted
Comment 14 by, Feb 28 2013
Labels: -Area-Undefined Area-WebKit WebKit-Storage OS-All
Status: Untriaged
There is a SHOULD recommendation in the HTML specification suggesting that UAs guard against this behavior. Firefox seems to implement this, we do not.
Comment 15 by, Feb 28 2013
One option would be to hook DOM storage up to the quota API's backend and treat it as temporary quota.

I vaguely remember Kinuko having good reasons for not doing that, however.
Comment 16 Deleted
Putting the whole quota on * will make localStorage incredibly unstable with the likes of GitHub Pages or other types of hosting where different users have different subdomains. How about allowing a certain high number of different subdomains, but low enough that you can't fill up the disk? 100 subdomains, in a FIFO queue?
Why not set a global maximum, the same way that Internet Explorer handles its cache (Temporary Internet Files)?

Ironically, Microsoft doesn't seem to apply the limit in its own implementation of localStorage.
#17: is the instability for subdomains a problem in Firefox? If not, it probably wouldn't be in Chrome either.
Comment 20 Deleted
Comment 21 Deleted
Comment 22 Deleted
Comment 23 Deleted
Comment 24 by, Feb 28 2013
Labels: Restrict-AddIssueComment-Commit
Thanks for the feedback, everyone. I think we have a good handle on the scope of the problem, and some ideas for possible solutions.

In the interests of keeping this thread focused on a technical solution to the issue, I'm closing comments for non-committers. Please do star the bug if you'd like to follow along.
Comment 25 Deleted
mkwst et al.: this is similar enough to the problem of subdomains possibly setting an infinite number of cookies that you might want the solution here to be consistent with how we addressed this problem there.  (I don't recall the precise behavior offhand anymore, but I believe we do a simple quota based on eTLD+1, and garbage collect once we get too far over the quota.)  It would be nice to have cookies and local storage not behaving *too* radically differently.

Note that using the eTLD list leaves open the possibility for sites that want to treat subdomains as distinct sites -- e.g. -- making that clear (although this would require additional implementation).  Contact pamg or me if you're not familiar with the discussions we've had with the Mozilla folks on this topic over time.
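The cookie-style approach described here (a shared quota per eTLD+1 group, with garbage collection once the group drifts over quota) might look roughly like the sketch below. All class and field names are hypothetical, not Chromium's.

```javascript
// Rough sketch: track usage per eTLD+1 group and, once the group exceeds its
// quota, evict the least-recently-used origins until it fits again.
const GROUP_QUOTA = 5 * 1024 * 1024; // shared 5 MB for the whole group

class GroupStorage {
  constructor() {
    this.origins = new Map(); // origin -> { bytes, lastUsed }
  }

  write(origin, bytes, now) {
    this.origins.set(origin, { bytes, lastUsed: now });
    this.collectIfOverQuota();
  }

  totalBytes() {
    let total = 0;
    for (const { bytes } of this.origins.values()) total += bytes;
    return total;
  }

  collectIfOverQuota() {
    // Garbage-collect least-recently-used origins until under quota.
    while (this.totalBytes() > GROUP_QUOTA) {
      let oldest = null;
      for (const [origin, meta] of this.origins) {
        if (oldest === null || meta.lastUsed < this.origins.get(oldest).lastUsed) {
          oldest = origin;
        }
      }
      this.origins.delete(oldest);
    }
  }
}
```

Under this policy, a burst of writes from fresh subdomains evicts the group's stalest data instead of growing the disk footprint.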
Status: Assigned
I agree with pkasting in #26: we should solve this in the same way we solve cookie explosion attacks.

Note that no solution, including Firefox's (I think they use the eTLD + 1 policy for this, right?), can fully solve the problem: the attacker could just register a lot of eTLD + 1 attack sites and do the same thing as with subdomains. It would cost more, but if the attacker was determined to perform this low-impact attack, it would still work.

Mike, can I just give this one to you? Feel free to farm it out to someone else if there is someone better. Although we are no longer tagging it as a security bug, I feel a duty to keep it moving anyway. :)
Kinuko pointed out that there are two things going bad.

1) Chrome browser process memory usage just goes up and never down as more local storage areas (one-per-origin) are filled to their 5M limit in a browsing session.

2) Disk usage goes up and never down because local storage never expires or is evicted.

There's a TODO in there related to the first.

void DomStorageNamespace::CloseStorageArea(DomStorageArea* area) {
  AreaHolder* holder = GetAreaHolder(area->origin());
  DCHECK_EQ(holder->area_.get(), area);
  // TODO(michaeln): Clean up areas that aren't needed in memory anymore.
  // The in-process-webkit based impl didn't do this either, but would be nice.

... when open_count_ is zero, there's no document anywhere in the system that has the area open. They're not dropped from memory now, so a navigation from pageA to pageB on the same origin will not incur the cost of priming the mem cache when getting to page B. But they're never dropped from memory, which is part of the problem.

There's the DomStorageContext::PurgeMemory() function, which is also related. Calling that on a timer would defend against rampant memory use in practice. That's not really a good answer because it will aggressively drop stuff from the cache even if pages which utilize the storage area are open; just pointing out this related logic.

Some kind of periodic purge of areas that aren't open and haven't been accessed in a few seconds/minutes could help in the real world, but not in the malicious filldisk case.

I wonder if we can defend against the malicious filldisk case by cutting things off upstream in renderer processes, before many of the malicious requests to open/fill localStorage even make it to the browser process. Some generous limit on the number of areas any renderer can have open: if a renderer tries to exceed that number, kill that process.
Another idea for not overflowing memory: an upper limit on the number of areas we hold in memory might help, without regard for how large an area really is. Upon opening the <limit>+1th, purge one of them. Say 100 areas max?
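A minimal sketch of that cap, with hypothetical names (the real DomStorageNamespace bookkeeping is more involved): a Map doubles as an LRU list, and opening area <limit>+1 purges the least recently used one.

```javascript
// Illustrative sketch, not Chromium code: keep at most N storage areas cached
// in memory; purge the least recently used one when admitting area N+1.
// The 100-area limit is the number floated in the comment, not a tuned value.
const MAX_CACHED_AREAS = 100;

class AreaCache {
  constructor() {
    // Map iteration order doubles as LRU order (oldest entry first).
    this.areas = new Map(); // origin -> area
  }

  open(origin) {
    if (this.areas.has(origin)) {
      const area = this.areas.get(origin);
      this.areas.delete(origin); // refresh this area's LRU position
      this.areas.set(origin, area);
      return area;
    }
    if (this.areas.size >= MAX_CACHED_AREAS) {
      // Purge the least recently used area before admitting a new one.
      const lru = this.areas.keys().next().value;
      this.areas.delete(lru);
    }
    const area = { origin, values: new Map() };
    this.areas.set(origin, area);
    return area;
  }
}
```

This bounds browser-process memory regardless of how many origins a page touches, at the cost of re-priming a purged area's cache on its next open.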
As Michael writes, we should probably take several shots, like:
1) take a (temporary) protective action to avoid too aggressive localStorage usage from a renderer, which could be triggered with a reasonable threshold
2) purge memory for areas that are not used, or do so more aggressively when the usage is getting high, and
3) have proper Quota-like mechanism (possibly with GC) for managing the total amount of on-disk storage.

Fixing 1) would be effective for this particular case, but we'll eventually fix all these three (or more).
Hey Chris, I'm probably the wrong person to actually solve the problem in the ways Peter and others have outlined. Michael and Kinuko actually know things about the DOM storage architecture. I can take a look at the temporary solution Michael outlined, however.
I just looked at it; too funny, the cats and the music, especially the music :)

Anyway, the malicious hackery doesn't open lots of areas at any one time, it fills one area at a time.
That's kind of what I was afraid of, and the reason I was going to take the first shot at browser-side storage size. Mike, let me see if I can make a patch proposal for that particular part in Tokyo time, and share what I have with you when you're up.
Project Member Comment 35 by, Mar 10 2013
Labels: -Area-WebKit -WebKit-Storage Cr-Content Cr-Content-Storage
Comment 36 by, Mar 13 2013
Labels: -Restrict-AddIssueComment-Commit Restrict-AddIssueComment-EditIssue
Project Member Comment 37 by, Apr 5 2013
Labels: -Cr-Content Cr-Blink
Project Member Comment 38 by, Apr 6 2013
Labels: -Cr-Content-Storage Cr-Blink-Storage
Just doing some reassigning; I'd like to write code to delete non-special site data that hasn't been accessed in 6 months or so.
Labels: -Cr-Blink-Storage Cr-Blink-Storage-DOMStorage
Comment 43 by, Aug 12 2015
Is this issue actively being worked on?

I found it is still very easy to crash Chrome because of it.

Nobody is actively working on this atm. The change referenced in #39 should have addressed the crashing-from-OOM issue? Do you have a repro case? Maybe it's crashing in a different way.
Comments on include the suggestion that quota be managed at an eTLD+1 level, i.e. "", "" etc. share *quota*, even though they don't share *storage*.

Not sure how feasible that is, and it would allow one subdomain to "starve" the others, with no ability for them to reclaim the space.

 Issue 82885  has been merged into this issue.
Labels: -Cr-Blink Cr-Blink-Storage-Quota
Blocking: 379588
Comment 50 by, Jan 23 2017
@jsbell - I just noticed this is currently the second-highest-starred bug on Blink right now. Should we find someone to own fixing this in Q1/Q2?
It's a little unclear whether lots of stars means "this is a bit silly, it should probably be fixed" or if it's actually being exploited and causing pain for users. Is there evidence of the latter?

Note that site engagement (implemented in the time since comment 31) allows us to evict all these subdomains first if we run out of space, but I don't know if it's hooked up to site storage in a way that mitigates this issue.

Seems the OOM crash is no longer a problem, though [1].

> It's a little unclear whether  ...

That's a good question; I think it's up to the storage team to try to answer it. If you believe the problem has been mitigated such that what remains is much less important, then I'd call this bug 'Fixed' and file a new one for the remaining issues. If the star count climbs a lot on the new bug, then that would suggest you might have been misinterpreting what developers are saying :-)
jsbell@, what are your thoughts on this? Does this seem like what remains is not high priority? Did you want to take Rick's suggestion in c#52?
Given that we're close to shipping a new localStorage implementation, let me punt it over to mek@ :)

We currently don't impose quota (and hence LRU eviction) on localStorage, but could (see #15); whether we keep it per-origin or make it eTLD+1 is a nicety. The memory concerns in #29 are the big issue we should make sure is addressed in the rewrite: cap the memory used when an arbitrary number of origins are producing data. The new implementation is also where we could introduce per-origin LRU/eviction, with or without quota integration.

Thanks Josh! As part of the predictability program, we have an OKR to resolve or have a plan in place (with a NextAction date) for top-starred bugs, and this is the second most starred bug in Blink. mek@, could you please update with the current plan for this issue?