
Issue 468718

Starred by 9 users

Issue metadata

Status: Duplicate
Merged: issue 375297
Closed: Sep 2016
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Bug-Regression


Blob Objects are not getting cleared once the blob object is de-referenced

Reported by, Mar 19 2015

Issue description

UserAgent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.50 Safari/537.36

Steps to reproduce the problem:
1. Open the attached HTML file.
2. Click the "click to create blob object" button to create a blob object.
3. Click the "click to delete blob object" button to delete the blob object.
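The attached repro page is not reproduced here, but its two buttons can be sketched roughly as follows (variable and function names are assumptions, not the attachment's actual code):

```javascript
// Hypothetical reconstruction of the repro page's script: one handler
// creates a blob, the other drops the only reference to it.
let blob = null;

function createBlob() {
  // Any small payload works; the original attachment was under 1 KB.
  blob = new Blob(['hello blob'], { type: 'text/plain' });
  return blob.size;
}

function deleteBlob() {
  // Dropping the only reference makes the blob eligible for GC, which is
  // when its entry should disappear from chrome://blob-internals.
  blob = null;
}
```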

What is the expected behavior?
The blob object should be removed from chrome://blob-internals once it is de-referenced.

What went wrong?
The blob object still persists in chrome://blob-internals and its Refcount value remains 1 after the blob object is de-referenced.

Did this work before? Yes, in Chrome 39.

Chrome version: 42.0.2311.50  Channel: beta
OS Version: 6.1 (Windows 7, Windows Server 2008 R2)
Flash Version: Shockwave Flash 17.0 r0

In earlier versions of Chrome, the blob object was cleared immediately after being de-referenced.
In the current version, the blob object is only cleared after the page reloads or after a long delay.
Attachment: 877 bytes
Labels: Cr-Blink
Labels: Needs-Bisect M-43
Status: Untriaged
Tested the issue on Windows 7 using the latest Chrome version 41.0.2272.101 and
Canary 43.0.2338.2 with the steps below:

1. Opened the attached HTML file.
2. Clicked the "click to create blob object" button to create a blob object.
3. Clicked the "click to delete blob object" button to delete the blob object.
4. Went to chrome://blob-internals.
5. The reference ID is still present in chrome://blob-internals.

Able to reproduce the issue on Windows. Working on the bisect; will provide the bisect
information soon, along with behavior on other OSes.
Labels: -Needs-Bisect Cr-Internals
Status: Assigned
Able to reproduce this issue on all OSes.
Please find the bisect information below (tried on Windows):

Narrow bisect:
Good: 41.0.2217.0 (official build 303606)
Bad:  41.0.2218.0 (official build 303817)


Blink ChangeLog:


Unable to find the exact suspect in the change log,
but there is a suspect in the OmahaProxy change log:

fmalita@, could you please look into this issue if it's related to your change; otherwise, feel free to reassign it to the appropriate developer.

Owner: deals with text rendering and is not related to JS/blobs.

The repro test is super finicky for me (takes quite a few create/delete blob cycles to get one to "stick"), so I'm not sure that the bisect range is 100% accurate. @kavvaru can you confirm that the repro is reliable for you?

FWIW I'm not convinced that the expectation of blobs being immediately GC-ed is founded. When forcing an explicit GC (see attached test - requires launching with --js-flags="--expose-gc"), the blob entries do get cleared.
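The explicit-GC check described above can be sketched as follows; `gc()` is only defined when the page is launched with --js-flags="--expose-gc" (or `node --expose-gc`), so this sketch feature-detects it:

```javascript
// Drop the only reference to a blob, then force a full GC so its entry
// should leave chrome://blob-internals immediately rather than whenever
// the collector next runs.
let blob = new Blob([new Uint8Array(1024 * 1024)]); // ~1 MB blob
blob = null; // drop the only reference

if (typeof gc === 'function') {
  gc(); // full collection; the blob's backing storage can now be freed
} else {
  console.warn('gc() not exposed; relaunch with --expose-gc');
}
```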

I see a V8 roll in that range which may have introduced changes in GC behavior.

Punting to someone more familiar with this area.
Attachment: 996 bytes
CC our memory sheriffs

Comment 6 by, Mar 24 2015

This probably has to do with V8, as the lifetime is handled by the BlobData.cpp object, which is managed by V8.

So maybe the V8 roll changed this behavior. Is this incorrect behavior? I guess it would be nice to free blob resources immediately, as they can be connected to rather large data in the browser.

Hey Adam, do you know who I could talk to about this, and if there's a way to mark certain objects as gc-aggressively?
Labels: -OS-Windows OS-All
Based on comment #3, changing the label to 'OS-All'.

Comment 8 by, Mar 25 2015

I don't see anything in that v8 roll that looks related to GC behavior. fmalita said above that he wasn't able to reliably reproduce the issue, and so I'm inclined to discount the bisect results. And I also agree with his point that expecting Blobs to be freed immediately upon being deref'd by Blink doesn't match my expectation of how this should work.

I think the thing to do on the Blink side is verify that there's actually a bug here (which I'm not at all sure about, given that fmalita was able to kill the blobs by forcing a full GC). If there is, assign back to hpayer on v8 with a more detailed explanation of the expected behavior and how it goes wrong.
Blobs are subject to GC; I don't think that's really a bug, but it might be nice if the collector could prioritize their disposal.

The explicit Blob.close() method is supposed to help with this. I think we've still got that behind an experimental API. 

    [RaisesException, CallWith=ExecutionContext, RuntimeEnabled=FileAPIBlobClose] void close();

We may want to promote that?
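Since close() is behind an experimental flag, any page using it should feature-detect the method rather than assume it exists; a minimal sketch:

```javascript
// Hedged use of the experimental Blob.close() discussed above. The
// method only exists when the FileAPIBlobClose runtime flag (or the
// experimental-web-platform-features flag) is enabled, so feature-detect.
const blob = new Blob(['payload']);

if (typeof blob.close === 'function') {
  blob.close(); // releases the blob's backing storage immediately
}
// Without close(), the backing storage is released only when the
// collector reclaims the blob.
```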
That seems like a good approach, I'll look at starting to ship that.
That is included in the new File API up for review. Once that gets approved (see here), this function will be part of the standard.

Thread here:
I can also confirm: after creating a lot of blobs (>100 MB),
calling close() and nulling any references, blob-internals still lists Refcount as 1 and the Chrome Task Manager still shows those blobs occupying memory, even though by closing them I can continue creating new blobs after more than the hard limit of 500 MB has been created previously.
Labels: Hotlist-Recharge
This issue likely requires triage. The current issue owner may be inactive (i.e. hasn't fixed an issue in the last 30 days). Thanks for helping out!

Components: -Blink Blink>FileAPI
Labels: -Hotlist-Recharge -M-43 m-43 hotlist-recharge
Hello. On Chrome 49, we make repeated calls to the new MediaRecorder API and construct a blob from the accumulated data chunks, similar to this link. We make repeated calls for 10-second record intervals, with 2 seconds between 'stop' and 'start' calls on the MediaRecorder object. This produces a steady accumulation of blobs that are still visible in chrome://blob-internals, even after calling close() on these blobs. Chrome continues to accumulate data and does not relinquish memory until it hits the 2 GB limit on 32-bit Chrome. At that point I can't make any more blobs that are saveable or even viewable via the URL.createObjectURL method. Thus, blobs need to be garbage collected in a TIMELY manner when used in conjunction with new APIs that produce blobs in rapid succession.
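The repeated-record pattern described above can be sketched as follows; the key point is clearing the chunk array after each stop so the accumulated chunk blobs can be collected. MediaRecorder is browser-only, so the sketch is guarded, and `stream`/`onBlob` are assumptions:

```javascript
// Sketch of a repeating MediaRecorder cycle that drops its chunk
// references after each stop, so the per-chunk blobs become collectable.
function record(stream, onBlob) {
  if (typeof MediaRecorder === 'undefined') return null; // not a browser
  let chunks = [];
  const rec = new MediaRecorder(stream);
  rec.ondataavailable = (e) => chunks.push(e.data);
  rec.onstop = () => {
    onBlob(new Blob(chunks, { type: rec.mimeType }));
    chunks = []; // drop references so the chunk blobs can be GC-ed
  };
  rec.start();
  return rec;
}
```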

Comment 16 by, Apr 19 2016

Hello, I use a WebSocket to receive JPEG file binary data, but the blob is not freed after the websocket.onmessage callback (chrome://blob-internals shows the JPEG blob with Refcount: 1).

Is this a Chrome bug, or is there some rule for freeing blobs that I'm not aware of?

Attachment: 3.3 KB
I've seen some behavior around keeping the most recently read blob around; I'll be investigating more. Just to confirm: you're definitely revoking those URLs?
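The revocation point matters because an object URL keeps its blob alive until it is revoked; a minimal sketch of the expected pairing:

```javascript
// An object URL pins its blob in the blob registry, so pair every
// createObjectURL with a revokeObjectURL once the URL is no longer
// needed; only then can dropping the JS reference free the blob.
let blob = new Blob(['jpeg bytes here']);
const url = URL.createObjectURL(blob); // pins the blob

// ...use the URL, e.g. as an <img src>...

URL.revokeObjectURL(url); // releases the pin
blob = null;              // now the blob itself is collectable
```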
Do we have an ETA for the fix?
The investigation resulted in an unrelated issue; I can't repro this. The annoying part is that DevTools messes with us a little, keeping references to some requests (at least the very last request), so sometimes blobs stick around far longer than they normally would.
Just wanted to chime in on this issue as I seemed to be seeing this.

I'm building what is essentially a recording app with blobs from MediaRecorder.

I'm storing these blobs with indexeddb before sending them to the server (acting like a cache to prevent data loss). What I noticed was that after around 45-55 minutes of recording, saving new blobs resulted in the transaction returning an "Uncaught exception in event handler" error (even though the type of the event was success). 

After a lot of googling and experimentation I eventually stumbled upon this issue report. Since 45-55 minutes of recording equates to roughly 500 MB, this might have been the issue I was seeing, so it was worth looking into.

I didn't notice RAM usage getting higher, so I'm still not sure how this could be explained, but I tried the blob.close() method both on the blob received from the MediaRecorder events once it was successfully stored in IndexedDB, and in the logic that sends the data to the server (new Blob()) once it was successfully transferred and saved by the server. This actually resolved the issue, and there's no limit to the data size at this point; making recordings multiple hours long works as expected.

I'm using Electron (a desktop Node.js app with Chromium), so in order to use blob.close() I had to set the experimental features flag to true.

Hopefully this proves helpful to someone

Mergedinto: 375297
Status: Duplicate (was: Assigned)
Marking as a duplicate of issue 375297. It sounds like the blobs weren't being garbage collected.
Components: Blink>Storage>FileAPI
Components: -Blink>FileAPI
