
Issue 682188


Issue metadata

Status: Duplicate
Merged: issue 375297
Owner:
Closed: Jan 2017
Components:
EstimatedDays: ----
NextAction: ----
OS: Mac
Pri: 2
Type: Bug




'DataError: Failed to write blobs' when quickly saving large file blobs in small chunks to indexeddb

Reported by j...@zencastr.com, Jan 18 2017

Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36

Steps to reproduce the problem:
1. Open the attached test case in v55

What is the expected behavior?
It should run two tests: save a 300 MB blob as a whole, then save the same blob in 0.5 MB chunks.

What went wrong?
During the second test, saving the blob in chunks starts hitting transaction-abort errors: DataError: Failed to write blobs.

In my tests this happens after saving 209715200 bytes (200 MB).
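The chunked-save pattern the test case exercises can be sketched as follows. This is an illustrative reconstruction, not the attached test case: the store name 'chunks', the key scheme, and the helper names are assumptions. The chunking helper itself is pure arithmetic and runs anywhere; the IndexedDB write loop is browser-only.

```javascript
// Pure helper: compute [start, end) byte ranges for fixed-size chunks.
function chunkOffsets(totalSize, chunkSize) {
  const offsets = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    offsets.push([start, Math.min(start + chunkSize, totalSize)]);
  }
  return offsets;
}

// Browser-only sketch: write each chunk in its own readwrite transaction,
// keyed by its starting byte offset (illustrative store name 'chunks').
async function saveInChunks(db, blob, chunkSize) {
  for (const [start, end] of chunkOffsets(blob.size, chunkSize)) {
    await new Promise((resolve, reject) => {
      const tx = db.transaction('chunks', 'readwrite');
      tx.objectStore('chunks').put(blob.slice(start, end), start);
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error); // the DataError abort surfaces here
    });
  }
}

// With a 300 MB blob and 0.5 MB chunks, 600 writes are issued.
const CHUNK = 512 * 1024;        // 0.5 MB
const TOTAL = 300 * 1024 * 1024; // 300 MB
console.log(chunkOffsets(TOTAL, CHUNK).length); // 600
```

Note each chunk is a separate put in a separate transaction, which is what distinguishes the failing second test from the single whole-blob save.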

Did this work before? No 

Does this work in other browsers? Yes

Chrome version: 55.0.2883.95  Channel: stable
OS Version: OS X 10.12.2
Flash Version: Shockwave Flash 24.0 r0

Related issues: 
https://bugs.chromium.org/p/chromium/issues/detail?id=338800#c22
https://bugs.chromium.org/p/chromium/issues/detail?id=682185
 
index.html (5.9 KB, attached)

Comment 1 by jsb...@chromium.org, Jan 18 2017

Labels: Needs-Feedback
Owner: dmu...@chromium.org
Status: Assigned (was: Unconfirmed)
Given the numbers (300MB + 300MB) this sounds like a duplicate of https://bugs.chromium.org/p/chromium/issues/detail?id=375297

Can the original reporter try reproducing this in Canary, where the 500MB blob limit has been mitigated?

Comment 2 by jsb...@chromium.org, Jan 18 2017

Components: Blink>Storage>IndexedDB

Comment 3 by j...@zencastr.com, Jan 18 2017

The original test case clears the first 300MB blob from the database before it tries to save it again in chunks.  I have attached a new test case that skips the first un-chunked 300MB blob save.  It still starts aborting at 209715200 bytes.

If there is a 500MB limit, why is it not hit when saving one large blob, but hit when saving the same blob in chunks?

The total blob size is 314572800 bytes.
The chunks written before the abort total 209715200 bytes.

314572800 + 209715200 = 524288000 bytes, which is 500 MB on the dot.

So this does somehow seem to be hitting that limit, but only when writing in chunks, and the chunks appear to share the limit with the blob still held in memory. Strange. Perhaps when saving the blob as a whole, it doesn't need to make a copy of the data?

FWIW this does work in Canary without problems.  Any chance of this fix making it into Beta (v56) rather than waiting for v57 to be released in March?

Either way, thanks for the help.


index.html (6.0 KB, attached)

Comment 4 by jsb...@chromium.org, Jan 19 2017

Mergedinto: 375297
Status: Duplicate (was: Assigned)
It's a memory limit; clearing from the database has no effect. See the other bug for context.

No chance of merging to 56. The limit has been in place for years, and eliminating it was a substantial change.
