
Issue 688268


Issue metadata

Status: Duplicate
Owner: ----
Closed: Feb 2017
Components:
EstimatedDays: ----
NextAction: ----
OS: Mac
Pri: 2
Type: Bug




CacheStorage.keys() always fails with "operation too large"

Reported by con...@superhuman.com, Feb 3 2017

Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36

Steps to reproduce the problem:
1. Fill your cache storage with a large number of entries (I noticed this in production with ~28k)
2. Run cache.keys()
3. It will throw an "operation too large" error.

You can test this here (click the button and wait 30 seconds while it adds 30,000 entries to the cache and then calls cache.keys()): https://rawgit.com/ConradIrwin/d1f30f90cb5e9f77856e8600bac07583/raw/d1221654fa8af9f33faea6eb6aaca5e9a9d62412/storage.html

What is the expected behavior?
It should return an array of the keys in CacheStorage.

What went wrong?
There's no way to iterate over the keys in large caches.

Did this work before? N/A 

Does this work in other browsers? N/A

Chrome version: 56.0.2924.87  Channel: stable
OS Version: OS X 10.12.2
Flash Version: Shockwave Flash 24.0 r0

I'm not sure whether this is a bug in Chrome or a bug in the spec, as there seems to be no provision for pagination in the .keys() API, and I can imagine the need to put some limit on the number of keys returned, since they're fairly heavyweight JavaScript objects. Filed as https://github.com/w3c/ServiceWorker/issues/1066 for discussion.
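Since the spec offers no pagination, one possible application-level workaround is to keep your own index of request URLs alongside each put(), so listing keys never requires calling cache.keys() on a huge cache. This is only a sketch: `IndexedCache` is a hypothetical wrapper, and the cache is mocked in-memory so the example runs outside a browser (in a page, the wrapped object would come from caches.open()).

```javascript
// Hypothetical wrapper that tracks its own key index (not part of the Cache API).
class IndexedCache {
  constructor(cache) {
    this.cache = cache;    // a real Cache in the browser; mocked below
    this.urls = new Set(); // our own key index
  }
  put(url, response) {
    this.urls.add(url);
    return this.cache.put(url, response);
  }
  keys() {
    // Resolves from the local index, independent of cache size.
    return Promise.resolve([...this.urls]);
  }
}

// Minimal in-memory stand-in for a Cache object so the sketch runs anywhere.
const mockCache = {
  store: new Map(),
  put(url, res) { this.store.set(url, res); return Promise.resolve(); },
};

const indexed = new IndexedCache(mockCache);
const writes = [];
for (let i = 0; i < 30000; ++i) {
  writes.push(indexed.put('https://example.com/entry-' + i, 'ok'));
}
const done = Promise.all(writes).then(() => indexed.keys());
done.then(keys => console.log(keys.length)); // 30000
```

The index could itself be persisted (e.g. in IndexedDB) so it survives page reloads; the sketch keeps it in memory for brevity.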

 

Comment 1 by bke...@mozilla.com, Feb 3 2017

There is a bug in your test case. You are really measuring the time it takes to write 30k requests into the cache. Each Cache.put() is an async operation, and you don't wait for the returned promise to resolve, so when you start your cache.keys() you are still waiting for all those disk writes to run.

You probably want to do a Promise.all() on the cache.put() promises and then start the cache.keys(). That way you can isolate the read time from the write time.

Comment 2 by bke...@mozilla.com, Feb 3 2017

This isolates the read time from the write time:

function fill(cache) {
  let list = [];
  for (let i = 0; i < 30000; ++i) {
    list.push(cache.put('https://example.com/probably-crash-' + i, new Response('ok')));
  }
  return Promise.all(list);
}

function read(cache) {
  let start = performance.now();
  return cache.keys().then(keys => {
    let end = performance.now();
    console.log(end - start);
  });
}

caches.open('foo').then(cache => fill(cache).then(_ => read(cache)));

Thanks for pointing that out! In this case the primary problem is the crash rather than the timing: as this is a very rare event and an async operation, the time is less relevant.

I've published an updated test here: https://rawgit.com/ConradIrwin/a634359060cbbde0e90a30e5744aa1a1/raw/94c524dcc451f7104b5fdf8999a56cc70db86ff2/storage.html

It still reliably crashes with "Operation too large".

Components: -Blink>Storage Blink>Storage>CacheStorage
"crashes" - actual browser or renderer crash, or just an exception?
Just an exception, sorry for the confusing. It correctly returns

    query_cache_context->callback.Run(CACHE_STORAGE_ERROR_QUERY_TOO_LARGE,
                                      std::unique_ptr<QueryCacheResults>());

which is surfaced as a JavaScript exception, "Operation too large".
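Since the failure surfaces as a rejected promise rather than a real crash, page code can at least observe it with .catch(). A minimal sketch; the cache is mocked here so the example runs anywhere (in Chrome the rejection arrives as an exception whose message is "Operation too large"), and `safeKeys` is a hypothetical helper:

```javascript
// Mocked cache that simulates the browser-side CACHE_STORAGE_ERROR_QUERY_TOO_LARGE
// result reaching page script as a rejected promise.
const mockCache = {
  keys() {
    return Promise.reject(new Error('Operation too large'));
  },
};

// Hypothetical helper: log the rejection and fall back to an empty list
// instead of letting the error propagate.
function safeKeys(cache) {
  return cache.keys().catch(err => {
    console.log('keys() rejected:', err.message);
    return [];
  });
}

const result = safeKeys(mockCache);
result.then(keys => console.log('got', keys.length, 'keys')); // got 0 keys
```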
That error is an unfortunate hack to protect the browser from using too much memory. Until we rewrite CacheStorage on the browser side to stream responses back to the renderer, we simply limit the number of responses we process on the browser side so that it doesn't get too big. :[
Mergedinto: 638195
Status: Duplicate (was: Unconfirmed)
Note that I'm referring to the size of the response headers being an issue, we *do* stream the response body bytes.
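Assuming the limit applies per keys() call on a single cache (my reading of the comment above, not a documented guarantee), another possible workaround is to shard entries across several named caches so each per-shard keys() stays small. A sketch with hypothetical names (`shardFor`, `allKeys`); `openCache` stands in for caches.open() and is mocked so the example runs outside a browser:

```javascript
const SHARDS = 8;

// Cheap deterministic hash of a URL into a shard index.
function shardFor(url) {
  let h = 0;
  for (const c of url) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % SHARDS;
}

// Minimal in-memory stand-ins for Cache objects.
function makeMockCache() {
  const store = new Map();
  return {
    put: (url, res) => { store.set(url, res); return Promise.resolve(); },
    keys: () => Promise.resolve([...store.keys()]),
  };
}
const shards = Array.from({ length: SHARDS }, makeMockCache);
const openCache = i => Promise.resolve(shards[i]); // stand-in for caches.open('shard-' + i)

function put(url, res) {
  return openCache(shardFor(url)).then(cache => cache.put(url, res));
}

// Enumerate each shard separately; each keys() call sees only its own slice.
function allKeys() {
  const reads = shards.map((_, i) => openCache(i).then(c => c.keys()));
  return Promise.all(reads).then(lists => lists.flat());
}

const writes = [];
for (let i = 0; i < 1000; ++i) writes.push(put('https://example.com/e' + i, 'ok'));
const done = Promise.all(writes).then(allKeys);
done.then(keys => console.log(keys.length)); // 1000
```

The shard count would need to be chosen so that entries-per-shard stays comfortably under whatever internal limit the browser enforces.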
Thanks! What's the target issue? It's unfortunately hidden :(
[Attachment: Screen Shot 2017-02-03 at 19.24.08.png]
Same thing as I described in comment 6 basically.
