FileReader.readAsArrayBuffer() - Reading large files causes memory leak
Reported by gjerm...@gmail.com, Dec 16 2016
Issue description

UserAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36

Steps to reproduce the problem:
1. Go to https://jsfiddle.net/andy250/pjt9udeu/
2. Choose a 10 GB test file
3. Keep an eye on the memory consumption. It will go through the roof and kill the browser.

What is the expected behavior?
Memory consumption should stay between 100-200 MB, as it does in IE11 and Firefox.

What went wrong?
Chrome has a memory leak (or GC problems). It consumes several gigabytes of memory and eventually crashes. FileReader.readAsDataURL() shows the same problem when dealing with large files.

Did this work before? N/A
Chrome version: 55.0.2883.87 Channel: stable
OS Version: 10.0
Flash Version: Shockwave Flash 24.0 r0
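The fiddle's code isn't quoted in the thread, but the manual-chunking pattern it reportedly uses (slice the file, read one slice at a time with readAsArrayBuffer) looks roughly like this hypothetical sketch. All names here (CHUNK_SIZE, chunkRanges, readInChunks) are illustrative reconstructions, not code from the fiddle:

```javascript
// Hypothetical reconstruction of the chunked-read pattern from the fiddle.
const CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB per slice (assumed)

// Compute [start, end) byte ranges covering a file of the given size.
function chunkRanges(fileSize, chunkSize = CHUNK_SIZE) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

// Browser-side driver: read one slice at a time so that each chunk's
// ArrayBuffer becomes unreachable (and collectable) before the next read.
function readInChunks(file, onChunk, onDone) {
  const ranges = chunkRanges(file.size);
  let i = 0;
  const reader = new FileReader();
  reader.onload = () => {
    onChunk(reader.result, ranges[i]); // process the chunk, keep no reference
    i += 1;
    next();
  };
  function next() {
    if (i >= ranges.length) { onDone(); return; }
    const [start, end] = ranges[i];
    reader.readAsArrayBuffer(file.slice(start, end));
  }
  next();
}
```

If the GC keeps up, live memory with this pattern should stay bounded by roughly one chunk; the report says Chrome on Windows instead grows to several gigabytes.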
Dec 16 2016
Tested this on OS X Sierra now - there it works fine in both Safari 9.1.3 and Chrome 54.0.2840.98 (64-bit). So this seems to be a Windows-specific issue.
Dec 16 2016
Seems to work fine for me on Windows (64-bit) and Ubuntu as well. I also tried backgrounding the tab, since I wondered whether throttled timers in background tabs were the cause (you are holding onto the chunk in a setTimeout), but it works there too. I find that memory peaks around 600-700 MB, then a GC kicks in and reduces the tab to ~60 MB. Perhaps you can provide a memory trace of the issue? See https://chromium.googlesource.com/chromium/src/+/master/docs/memory-infra/README.md
Dec 19 2016
Using IE/Firefox on Windows, and also Chrome on OS X, memory never increases above ~150-200 MB when doing the same operation. Try to open a 10 GB file with this updated JSFiddle: https://jsfiddle.net/v0cu3npf/2/ (I increased the chunk size to 200 MB). It crashes the browser a few seconds after reaching 4 GB of consumed RAM.
Dec 20 2016
Memory sheriffs PTAL.
May 4 2017
Another demo of this: http://jsbin.com/cekequd/quiet. Give the top input a large file and watch the console. Firefox completes in ~500 ms for a 900 MB file, whereas Chrome chugs along and then crashes. Notably, Firefox doesn't offer result in progress events, which may be our downfall here. If we are going to offer progress events, we should allocate the whole array buffer before the first progress event, and write bytes into it for each progress event.
May 4 2017
Related spec issue https://github.com/w3c/FileAPI/issues/79
May 10 2017
As far as I can see, Chrome "deopts" if there's an onprogress listener. At least that makes it easy to put a use-counter on.
Sep 25 2017
Looking at the code, in the data URL case we already shouldn't be giving out partial data URLs (but instead potentially give empty strings, which wouldn't be spec-compliant either...). But yeah, it should be easy to use-count reading the result attribute before load has finished.
Sep 25 2017
Actually, it seems jake and the FileAPI spec issue are describing a very different situation from the original bug. The original bug had no progress events and did manual chunking, while jake's code does have a progress event and explicitly holds on to all the partial array buffers, and I'm not sure how that could do anything other than the described behavior. Of course we could make it harder to shoot yourself in the foot by making the mentioned FileAPI change, but I'm not sure there is really anything objectively wrong here.
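To see why retaining every partial array buffer behaves this way, a quick back-of-envelope: if a progress event fires every step bytes and each partial result is kept, live memory is the sum of all the partial sizes, which is quadratic in the number of events. retainedBytes below is purely illustrative arithmetic, not code from either demo:

```javascript
// Back-of-envelope arithmetic (illustrative): if a progress event fires
// every `step` bytes and every partial result is retained, the retained
// total is the sum of all partial sizes (step + 2*step + 3*step + ...).
function retainedBytes(fileSize, step) {
  let total = 0;
  for (let end = step; end <= fileSize; end += step) {
    total += end; // each event's partial buffer is `end` bytes long
  }
  return total;
}

// E.g. a 900 MB file with an event every 9 MB (100 events) retains
// about 45 GB worth of partial buffers if none are released.
```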
Sep 25 2017
And since I can't actually reproduce the originally reported issue anymore (memory usage stays low, even with several-GB files), marking this one as WontFix. @jake: feel free to file a separate bug for your issue, although I'm not quite convinced that's actually a bug (other than that apparently other browsers don't behave the same).
Comment 1 by gjerm...@gmail.com, Dec 16 2016
Attachments: 10.6 KB, 57.3 KB