
Issue 902273

Starred by 3 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Linux, Android, Windows, Chrome, Mac
Pri: 1
Type: Bug-Regression




Blob download fails with net::ERR_INSUFFICIENT_RESOURCES

Reported by mrsk...@gmail.com, Nov 6

Issue description

UserAgent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36

Example URL:
https://stat-info.cz/chrome-range-download.html

Steps to reproduce the problem:
1. Open developer console
2. Hit the "Download with ETag header" button
3. You'll see an error in the console

What is the expected behavior?
The Blob downloads successfully.

What went wrong?
The Blob is not downloaded, and net::ERR_INSUFFICIENT_RESOURCES is shown in the console.

Did this work before? N/A 

Chrome version: 70.0.3538.77  Channel: stable
OS Version: 
Flash Version: 

Looks like this problem is related to 206 responses with an ETag header.

When an ETag header is present, the Content-Length header is replaced with the total-length value from the Content-Range header.

Headers sent from server:
Content-Length: 1048576
Content-Range: bytes 1000-1049575/100000000000

Headers shown by Chrome in Network tab of dev tools:
Content-Length: 100000000000
Content-Range: bytes 1000-1049575/100000000000
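The swap is easy to see by splitting the Content-Range value into its parts. A small Python sketch (illustrative only, not Chrome's parser) of the arithmetic:

```python
import re

def parse_content_range(value):
    """Split a Content-Range header into (first, last, total).

    `total` is None when the entity length is unknown ("*").
    """
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+|\*)", value)
    if not m:
        raise ValueError(f"unparsable Content-Range: {value!r}")
    first, last = int(m.group(1)), int(m.group(2))
    total = None if m.group(3) == "*" else int(m.group(3))
    return first, last, total

first, last, total = parse_content_range("bytes 1000-1049575/100000000000")
# The body of this 206 is last - first + 1 bytes, which matches the server's
# Content-Length of 1048576. The value Chrome shows instead (100000000000)
# is the *total* entity length taken from the Content-Range header.
assert last - first + 1 == 1048576
assert total == 100000000000
```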

So I guess that Chrome stops the request because the new incorrect Content-Length value is too large to be handled in memory?
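For anyone who wants a self-hosted repro instead of the linked page, a minimal 206-with-ETag responder can be sketched with Python's http.server. This is a hypothetical stand-in for the reporter's test page (the ETag value and body are made up); the key detail is a 206 carrying an ETag plus a Content-Range whose total is far larger than the Content-Length:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

CHUNK = b"x" * 1024  # small illustrative body

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 206 partial response with an ETag and a huge total entity size,
        # mirroring the shape of the headers in the report.
        self.send_response(206)
        self.send_header("ETag", '"abc123"')
        self.send_header("Content-Length", str(len(CHUNK)))
        last = 1000 + len(CHUNK) - 1
        self.send_header("Content-Range", f"bytes 1000-{last}/100000000000")
        self.end_headers()
        self.wfile.write(CHUNK)

    def log_message(self, *args):
        pass  # keep the console quiet

def start_server():
    """Serve on an ephemeral port in a background thread."""
    srv = HTTPServer(("127.0.0.1", 0), RangeHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Point an XHR blob download at `http://127.0.0.1:<port>/` from a test page to exercise the same code path.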
 
Components: -Internals>Network Blink>Storage>FileAPI
There are not a whole lot of ways in the network stack itself to get ERR_INSUFFICIENT_RESOURCES in the middle of a request, so I suspect this comes from the blob layer (and the fact that it's a range request does seem likely related). On Canary, with the network service enabled, it fails with ERR_FAILED instead, curiously.

At least this very reliably repros, so it should hopefully be relatively easy to figure out.
Components: Blink>Loader
I used a debugger on content_shell and found the ERR_INSUFFICIENT_RESOURCES is coming from here:
https://cs.chromium.org/chromium/src/content/browser/loader/mojo_async_resource_handler.cc?q=ERR_INSUFFICIENT_RESOURCES+-file:test+-file:src/net&sq=package:chromium&l=277&dr=C
Could you mark this issue as a regression? The last working version is 66.
Labels: Needs-Triage-M70
Labels: -Type-Bug RegressedIn-67 OS-Android OS-Chrome OS-Mac OS-Windows Type-Bug-Regression
Owner: mek@chromium.org
Status: Assigned (was: Unconfirmed)
I bisected.
090b1a4f (#547248) NG
e5d79e6c (#547247) OK


090b1a4fea8bd71d2bd5e3503d3397985ad58022 which landed in 67.0.3385.0 introduced this issue.

mek@
Could you please handle this issue?

Labels: -Pri-2 Triaged-ET Target-70 Target-71 Target-72 M-72 FoundIn-71 FoundIn-70 FoundIn-72 hasbisect Pri-1
Able to reproduce the issue on Windows 10, Mac 10.13.6, and Ubuntu 17.10 using the reported Chrome version 70.0.3538.77 and the latest Canary 72.0.3604.0.

Bisect Information:
=====================
Good build: 67.0.3384.0
Bad Build : 67.0.3385.0

As per comment #5, the possible suspect is as follows:
Change-Id: I258150937c424b72824dadbc2832764ed54c1ca5
Reviewed-on: https://chromium-review.googlesource.com/955957

mek@ - Could you please check whether this is caused by your change? If not, please help us assign it to the right owner.

Thanks...!!
It could be related to my change, insofar as an incorrect Content-Length (which the first comment seems to mention?) would previously just be ignored, while after my change it results in the system giving up in advance, since it knows it is never going to be able to fit all that data... But I wonder if comment #2 is correct in that case, since I would expect the failure in a different place...
Weirdly enough I can't reproduce this though... Actually, scratch that: I can't reproduce it on my workstation, but that's probably because that machine has way too much memory and disk space, so the overly large Content-Length still fits within the (dynamically calculated) blob limits. On my laptop it does reproduce.
Actually, scratch more of what I said. The blob system failing wouldn't result in any error; there is a big TODO in XMLHttpRequest to not silently ignore errors from the blob system (currently it just returns an empty blob if the blob system returns any kind of error). So probably comment #2 is correct, and this is failing somewhere in the loading stack itself. Not sure why it doesn't reproduce on my desktop machine though, as that is going to make debugging a lot harder...
(Sorry for talking to myself.) Ah, it does reproduce on my other desktop, which means I can reproduce it on Linux but not on Windows. I wonder if there is something OS-specific going on... But at least that gives me something I can debug.

Did anybody reproduce this on Windows? The original bug unfortunately doesn't list what OS it was found on.
Ah, looking into this more, it might actually be the blob system again anyway. When it fails, it closes the data pipe. And apparently mojo_async_resource_handler deals with the data pipe it is trying to write to (and from which the blob system was reading) being closed by failing with this net::ERR_INSUFFICIENT_RESOURCES (i.e. the spot identified in comment #2 is a MojoBeginWriteData call that is failing with MOJO_RESULT_FAILED_PRECONDITION).
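The failure mode described here — the consumer closing its end of the data pipe, and the producer's next write surfacing as an error — has a direct analog in any pipe API. A toy Python illustration (an OS pipe, not Mojo itself):

```python
import os

# Toy producer/consumer pipe: the blob layer reads from the data pipe that
# mojo_async_resource_handler writes into. If the reader gives up (e.g. the
# advertised data will never fit its limits) and closes its end, the
# producer's next write fails -- loosely analogous to MojoBeginWriteData
# returning MOJO_RESULT_FAILED_PRECONDITION, which is then surfaced to the
# page as net::ERR_INSUFFICIENT_RESOURCES.
read_fd, write_fd = os.pipe()
os.close(read_fd)  # consumer bails out

try:
    os.write(write_fd, b"more data")
    error = None
except OSError:  # BrokenPipeError on POSIX
    error = "writer sees the closed pipe"
finally:
    os.close(write_fd)
```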
Cc: mek@chromium.org
Components: Internals>Network
Owner: mmenke@chromium.org
And I confirmed (via net log) that the server does indeed send a correct Content-Length response header, yet somehow when Blink and later the blob system look at it they get the invalid 100000000000 value. So this definitely seems to be a bug somewhere much lower in the network/loading stack. I'll try to look into this more, but in the meantime I'm assigning to some arbitrary network person, as the root cause doesn't seem to be anything blob related. The blob system is just getting incorrect data from the loading layer.
And the fact that it only fails with ETags also seems to indicate that it might be the network cache or something that is messing up the Content-Length header?
Cc: morlovich@chromium.org
Components: -Internals>Network Internals>Network>Cache
Owner: ----
Status: Available (was: Assigned)
Only the cache layer does weird stuff in response to range requests, so presumably whatever is modifying the content-length header lives there.
Cc: -morlovich@chromium.org
Owner: morlovich@chromium.org
Status: Assigned (was: Available)
And it does do some fumbling about with Content-Length as well; in particular, it actually stores the total length as Content-Length in the cached headers. I am not sure what's supposed to return the proper length to the client...
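A rough model of that bookkeeping (a hypothetical sketch, not the actual HttpCache code — the function names are made up): the sparse cache entry describes the whole resource, so the total entity length goes into the stored Content-Length, and something on the read path has to rewrite it back to the served range's length before the client sees it:

```python
def store_partial_entry(headers):
    # Cache write path (hypothetical model): the sparse entry describes the
    # whole resource, so the *total* entity length from Content-Range is
    # stored in the cached Content-Length, as described above.
    rng, total = headers["Content-Range"].split()[1].split("/")
    cached = dict(headers)
    cached["Content-Length"] = total
    return cached

def fix_response_headers(cached, first, last):
    # Read path: a FixResponseHeaders-style pass must restore the length of
    # the range actually being served before the headers reach the client.
    # Skipping this step would leak the stored total (the observed bug).
    out = dict(cached)
    out["Content-Length"] = str(last - first + 1)
    out["Content-Range"] = f"bytes {first}-{last}/{cached['Content-Length']}"
    return out
```

With the report's headers, `store_partial_entry` leaves `Content-Length` at 100000000000, and only the fix-up pass brings it back to 1048576.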
Some of our users are seeing this error even without ETag headers while downloading Blobs via XHR. Unfortunately I can't reproduce this issue for now, but here are the headers extracted from a user's HAR file:

Request:
HTTP Version: HTTP/1.1
Request method: GET
Accept */*
Accept-Encoding gzip, deflate, br
Accept-Language pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7
Connection keep-alive
Content-Type application/x-www-form-urlencoded; charset=UTF-8
Host x
Origin x
Referer x
User-Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36

Response:
Access-Control-Allow-Credentials true
Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control, Range
Access-Control-Allow-Methods OPTIONS, GET, POST
Access-Control-Allow-Origin x
Connection close
Content-Length 16777216
Content-Transfer-Encoding binary
Content-Type application/octet-stream
Date Thu, 29 Nov 2018 10:37:12 GMT
Server Apache
Strict-Transport-Security max-age=15552000; includeSubDomains

Chrome error
net::ERR_INSUFFICIENT_RESOURCES
From those headers, that is not only not using ETag headers but also not using Range requests, right? Either way it seems like an unrelated issue (only related by the fact that any failure to create a blob will result in this net::ERR_INSUFFICIENT_RESOURCES error). So without further info I'd speculate that it is caused by some problem on the user's computer. The output from chrome://histograms/Storage.Blob.BuildFromStreamResult after the error happens could help figure out what exactly is going wrong.
Three users confirmed they didn't have enough free disk space. Sorry for the noise.
Hmm, looks like we don't call FixResponseHeaders when giving up on caching because of no-store (though FixResponseHeaders isn't really quite the right thing, either). Probably the same as bug 700197.
