Memory leak occurs when setting img.src to a data URI
Reported by sa03031...@gmail.com, Mar 30 2016
Issue description

UserAgent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36

Example URL:

Steps to reproduce the problem:
1. Open Windows Task Manager or another tool for checking the memory commit charge.
2. Load the attached HTML page.
3. Click 'ok' on the alert box (it exists only to make a page reload noticeable).
4. Monitor commit charge changes for the Chrome processes.

What is the expected behavior?
The canvas and image are filled randomly and memory usage stays roughly constant.

What went wrong?
Memory usage keeps growing, and the page reloads once available memory is exhausted.

Does it occur on multiple sites: N/A
Is it a problem with a plugin? No
Did this work before? N/A
Does this work in other browsers? Yes

Chrome version: 49.0.2623.110  Channel: stable
OS Version: 5.1 (Windows XP)
Flash Version: Shockwave Flash 21.0 r0

This bug looks awfully familiar to this one: https://bugs.chromium.org/p/chromium/issues/detail?id=309543
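For reference, a minimal sketch of the kind of page that reproduces this. The actual attachment is not shown in this report, so the canvas size, timer interval, and JPEG quality below are illustrative assumptions:

// Hypothetical repro sketch, not the attached test page. Each frame converts
// a randomly filled canvas to a JPEG data URI and assigns it to img.src,
// producing a new multi-megabyte URL string every time.
const canvas = document.createElement('canvas');
canvas.width = 1280;   // illustrative; the attachment's dimensions are unknown
canvas.height = 720;
const ctx = canvas.getContext('2d');
const img = document.createElement('img');
document.body.appendChild(img);

function frame() {
  // Random pixels guarantee every generated data URI is unique,
  // so nothing can be reused by URL.
  const pixels = ctx.createImageData(canvas.width, canvas.height);
  for (let i = 0; i < pixels.data.length; i++) {
    pixels.data[i] = Math.floor(Math.random() * 256);
  }
  ctx.putImageData(pixels, 0, 0);
  img.src = canvas.toDataURL('image/jpeg', 0.9);
}

alert('page loaded');        // step 3: the alert makes a reload easy to notice
setInterval(frame, 100);     // illustrative interval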
Mar 30 2016
The real problem arises when a JPEG pseudo-video stream is transmitted over a WebSocket and displayed on the canvas. I think this bug is critical for a lot of projects.
Apr 5 2016
I suspect this may have to do with how the resource cache works. japhet: Any insight on how data URLs are handled by the resource cache? I am concerned that although decoded image data is tracked and purged by the cache as needed, there may be long-lived copies of the URL strings (used as cache indexes?). In the case of a data URL, these strings can be problematically large.
Apr 5 2016
MemoryCache does use data URLs as keys, and will keep the URL string alive as a result. However, the URL is included in the estimated size of the Resource, which is used to determine prune/evict timing and ordering in MemoryCache. I would have thought that should shield us from leaving these URLs lying around too long, since we should be biasing toward removing large resources first.
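As a toy illustration of what that means for data URLs (this is not Blink's actual MemoryCache code; all names below are made up): if each entry's size estimate includes its URL key and pruning removes the largest entries first, a handful of multi-megabyte data URLs should be evicted quickly once the capacity is hit.

// Toy model only, not Blink's MemoryCache. It keys entries by URL string,
// counts the key's length toward the entry's estimated size, and prunes
// largest-first once over capacity.
class ToyMemoryCache {
  constructor(capacityBytes) {
    this.capacityBytes = capacityBytes;
    this.entries = new Map();   // url -> { size }
    this.totalBytes = 0;
  }

  add(url, decodedSize) {
    // The URL string itself counts, which matters for multi-MB data: URLs.
    const size = url.length + decodedSize;
    const previous = this.entries.get(url);
    if (previous) this.totalBytes -= previous.size;   // replacing an entry
    this.entries.set(url, { size });
    this.totalBytes += size;
    this.pruneIfNeeded();
  }

  pruneIfNeeded() {
    if (this.totalBytes <= this.capacityBytes) return;
    // Bias toward evicting the largest entries first.
    const bySizeDesc = [...this.entries].sort((a, b) => b[1].size - a[1].size);
    for (const [url, entry] of bySizeDesc) {
      if (this.totalBytes <= this.capacityBytes) break;
      this.entries.delete(url);
      this.totalBytes -= entry.size;
    }
  }
}

If the leak persists despite that, it would suggest either that pruning is not being triggered for these entries, or that copies of the URL strings outlive the cache entries, as the earlier comment speculates.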
Apr 5 2016
I guess we need to investigate more to see if memory usage grows indefinitely or if it stops when the cache limit has been reached. Is there a deterministic cache limit?
Apr 5 2016
We set a global capacity across all renderer processes, which is then apportioned among them using some heuristics I'm not familiar with. The global limit is set via: https://code.google.com/p/chromium/codesearch#chromium/src/components/web_cache/browser/web_cache_manager.cc&l=39
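A rough illustration of the idea only, not the actual heuristics in web_cache_manager.cc (which this thread does not describe); an even split of the global budget is assumed here for simplicity:

// Illustrative only. The real manager weights renderers using heuristics
// this thread does not spell out; an even split is a placeholder assumption.
function allocateCacheCapacity(globalCapacityBytes, rendererIds) {
  const count = Math.max(rendererIds.length, 1);
  const perRenderer = Math.floor(globalCapacityBytes / count);
  return new Map(rendererIds.map(id => [id, perRenderer]));
}

// Example: a 64 MB global budget spread over three renderer processes.
const capacities = allocateCacheCapacity(64 * 1024 * 1024, ['r1', 'r2', 'r3']);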
Apr 6 2016
Yeah... this is definitely a real leak. Grows until OOM. @xlai: Please take a look.
Apr 8 2016
This is a regression. I did a local bisect between Chrome 48.0.2564.0 (which does not have this bug) and Chrome 49.0.2623.112 (which does) and found that the bug appears in the following SVN range: http://test-results.appspot.com/revision_range?start=365481&end=365487

I highly suspect that this commit (https://codereview.chromium.org/1428383002) caused the regression. japhet@: Assigning the issue to you; please take a look.

As a side note, the way I distinguish a bad build from a good one is to run "watch -n 5 free -m" to monitor memory usage on my Linux machine and record the free memory every 30 seconds after opening the HTML page above and clicking "ok". In a good build, the free memory drops a little initially but then stays constant; in a bad build, it keeps dropping.
Apr 8 2016
I've had a trunk build running for 5-10 minutes. The free memory is varying in a ~200MB range, but it isn't really dropping per se. Should I expect to see a drop in that amount of time if the leak is present on trunk? For that matter, have we verified this is happening in M50? It's possible that Oilpan fixed this.
Apr 8 2016
Good point. I am pretty sure I was testing in M49 when I looked at this, and I saw the process size go from less than 100MB to beyond 800MB before I killed the tab (this is from memory, so not 100% reliable). Seems things are better in ToT based on your observations. Perhaps re-validate for yourself before closing the issue?
Apr 8 2016
japhet@: The bug does not exist in 51.0.2702.0 any more.
Jun 1 2016
Moving this nonessential bug to the next milestone. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Jul 14 2016
This issue is Pri-1 but has already been moved once. Lowering the priority and moving to the next milestone. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Oct 5 2016
Closing per Comment #12