Consider compressing service worker script and code cache on disk
Issue description

On Windows devices with spinning media we currently see much slower service worker startup at the tail end. Internal link: https://uma.googleplex.com/p/chrome/timeline_v2?sid=4be2c4fb91d2eebb0e6d688bfceed1e6

To reduce disk IO and pressure on the operating system disk cache we could try compressing the service worker script bytes. Currently they are stored in the simple disk_cache in uncompressed form.

In addition to compressing the JavaScript, we should also consider compressing the code cache. Using gzip with default settings on a code cache file I see a 58% reduction in size. (The JavaScript file saw a 68% reduction.) While that quick test was done with gzip from the command line, I think we would probably want to use snappy in the browser. It's designed to be low overhead, even on slower devices. It may not get the full compression possible with gzip, but it should still give significant gains over uncompressed data.

Note, it might be reasonable to ask whether disk_cache should do this compression at the storage layer. That is somewhat difficult given that disk_cache provides a general API which includes sparse reads and writes in order to support range requests. It would be hard to enable compression in general while also supporting those features. At the service worker layer, however, we know we don't need range requests, sliced data, or random access, so it's an easier fit to add compression there and see if we can measure some impact.

If this is successful we could then try to roll out compression to other code cache paths. In general none of those paths require out-of-order disk access, so they could all be compressed. We could even experiment with compressing at the V8 layer itself, so the memory cache holds compressed bytes and saves runtime memory.

Anyway, exploring this in the service worker script cache layer might be the easiest place to start, with the opportunity to see some measurable wins.

+kinuko and +falken for service worker architecture. +pwnall for snappy expertise.
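To make the shape of the change concrete, here is a rough sketch of where snappy could sit on the write/read path. This is illustrative only: the function names are placeholders, not the actual service worker storage interfaces; the only real API used is snappy's public C++ one.

#include <string>

#include <snappy.h>  // snappy's public C++ API (third_party/snappy in Chromium)

// Compress script (or code cache) bytes before handing them to disk_cache.
std::string CompressForDiskCache(const std::string& script_bytes) {
  std::string compressed;
  snappy::Compress(script_bytes.data(), script_bytes.size(), &compressed);
  return compressed;
}

// Decompress after reading the entry back from disk_cache. Returns false if
// the stored bytes are corrupt, in which case we would treat it as a cache
// miss and refetch.
bool DecompressFromDiskCache(const std::string& stored_bytes,
                             std::string* script_bytes) {
  return snappy::Uncompress(stored_bytes.data(), stored_bytes.size(),
                            script_bytes);
}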
Jan 15
In my experience it's really hard to measure the impact in local tests. Any repeated operation tends to prime the operating system disk cache, and once the bits are in memory, decompressing is almost always going to be neutral or slightly worse. It can have big wins in the cold disk cache case, though; it's just hard to reliably measure that in the lab. We probably need a Finch experiment to evaluate it.
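For a Finch trial we would presumably gate the new path behind a base::Feature so only the experiment arm compresses. A minimal sketch, with a made-up feature name and referring to the hypothetical helper above:

#include "base/feature_list.h"

// Hypothetical feature name, purely illustrative.
const base::Feature kServiceWorkerScriptCompression{
    "ServiceWorkerScriptCompression", base::FEATURE_DISABLED_BY_DEFAULT};

// On the write path, compress only when the experiment arm enables it:
//   if (base::FeatureList::IsEnabled(kServiceWorkerScriptCompression))
//     bytes = CompressForDiskCache(bytes);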
Jan 15
wanderview@, bashi@: Thank you very much for thinking about this! Strong +1 on using snappy here. Here are some recent benchmarks:

https://ci.appveyor.com/project/pwnall/snappy/builds/21477333/job/ep6dj47450a4bb7t
https://travis-ci.org/google/snappy/jobs/477071391
https://travis-ci.org/google/snappy/jobs/477071387

I expect snappy to compress JS on the order of 100 MB/s even on slower machines. The CPU overhead should be significantly smaller than gzip/zlib. Please let me know if there's anything I can do to help.
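For a rough local sanity check of those throughput and ratio numbers, a tiny standalone harness against snappy's public C++ API would do. Nothing here is Chromium-specific; the sample file is whatever JS or code cache blob you have on hand.

// Build: g++ -O2 snappy_bench.cc -lsnappy
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

#include <snappy.h>

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: snappy_bench <sample_file>\n";
    return 1;
  }
  std::ifstream file(argv[1], std::ios::binary);
  std::ostringstream contents;
  contents << file.rdbuf();
  const std::string input = contents.str();

  // Compress repeatedly so the timing is stable; this measures the pure
  // CPU cost with the input already warm in memory.
  const int kIterations = 100;
  std::string compressed;
  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIterations; ++i)
    snappy::Compress(input.data(), input.size(), &compressed);
  const auto end = std::chrono::steady_clock::now();

  const double seconds =
      std::chrono::duration<double>(end - start).count() / kIterations;
  std::cout << "compressed to " << (100.0 * compressed.size() / input.size())
            << "% of original, " << (input.size() / seconds / 1e6)
            << " MB/s\n";
  return 0;
}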
Comment 1 by bashi@chromium.org, Jan 14