window.fetch - large number of POSTs with increasing size fails after ~600 requests.
Reported by aviadla...@gmail.com, Oct 1 2016
Issue description
UserAgent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36
Example URL:
http://localhost
Steps to reproduce the problem:
1. Run the server_main.ts dummy express.js app below; it serves the client code and swallows the requests.
2. Load the page in the browser so client_main.js (below) runs.

client_main.js:
function faultIt() {
  let nextSize = 0;  // payload size for the next upload
  let n = 0;         // number of completed uploads
  function doOne() {
    window.fetch('/noop', {
      method: 'POST',
      body: new Blob([new Uint8Array(nextSize)])  // fresh blob per request
    }).then(function() {
      n += 1;
      console.log(`${n} uploaded`);
      nextSize += 8000;  // grow the payload by 8 kB per request
      doOne();           // chain the next upload
    }).catch(function(err) {
      console.log("error: ", err);
      debugger;          // break here when fetch rejects
    });
  }
  doOne();
}
faultIt();
server_main.ts (node.js, express.js):
import * as express from "express";
import * as fs from "fs";

let app = express();

// Serve a page that inlines client_main.js.
function serveHome(req: express.Request, res: express.Response) {
  res.send('<html><head><script>' +
    fs.readFileSync('./client_main.js') + '</script>');
}
app.get('/', serveHome);

// Swallow the upload and acknowledge it.
function doNoop(req: express.Request, res: express.Response) {
  res.send({ok: 1});
}
app.post(new RegExp("/noop"), doNoop);  // regex route: matches any path containing /noop

app.listen(3344);
What is the expected behavior?
Uploads complete without problems.
What went wrong?
1. The console spat an error: POST http://localhost:3344/noop net::ERR_FILE_NOT_FOUND
2. The fetch promise rejected with:
error: TypeError: Failed to fetch
    at TypeError (native)
Did this work before? No
Chrome version: 53.0.2785.116 Channel: n/a
OS Version: 10.0
Flash Version: Shockwave Flash 23.0 r0
With a constant buffer size (1 MB) this doesn't happen (see the sketch after these notes).
The request was never sent to the network (as confirmed in Wireshark) - the failure occurred before that.
It also happens with "Connection: close".
Each time this test is run, it fails after a different number of requests, seemingly at random.
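For reference, the constant-size variant that does not fail might look like this (a minimal sketch based on the 1 MB figure above; the function name steadyIt is made up, the rest mirrors client_main.js):

// Hypothetical constant-size variant: same sequential upload loop as
// client_main.js, but every POST body is a fixed 1 MB; per the note
// above, this version keeps uploading without errors.
function steadyIt() {
  const SIZE = 1024 * 1024;  // constant 1 MB payload
  let n = 0;
  function doOne() {
    window.fetch('/noop', {
      method: 'POST',
      body: new Blob([new Uint8Array(SIZE)])
    }).then(function() {
      n += 1;
      console.log(`${n} uploaded`);
      doOne();  // chain the next upload after the previous one resolves
    }).catch(function(err) {
      console.log("error: ", err);
    });
  }
  doOne();
}
steadyIt();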
Comment, Oct 3 2016
To reproduce more easily:
1. Extract the attached archive somewhere.
2. cd into that directory.
3. Install node.js (https://nodejs.org/en/download/).
4. Run: node ./app.js
5. Open Chrome, open DevTools, and go to http://localhost:3344.
Comment, Oct 3 2016
It looks to me like the failing stage has to be where we initialize the UploadDataStream (we have a connected socket and haven't yet started sending headers; that seems to be the only thing in between, and it can fail with file errors). That would be blob code, not network code, so changing labels.
Comment, Oct 4 2016
The last successful upload was 5,512,000 bytes. That's about 690 uploads, incrementing in size by 8 kB each, which gets us about 2 GB of total blobs. Is the script destroying old blobs?
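To spell out the arithmetic behind those figures (a quick check, not part of the original comment):

// 5,512,000 bytes at 8,000-byte increments means roughly 689 successful
// uploads; the cumulative payload is an arithmetic series.
const steps = 5512000 / 8000;                  // 689 uploads
const total = 8000 * steps * (steps + 1) / 2;  // 1,901,640,000 bytes, ~1.9 GB
console.log(steps, total);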
Comment, Oct 4 2016
There probably aren't any memory leaks: the script doesn't hold any explicit references to the created blobs or requests. As far as I understand, once the fetch promise is resolved or rejected, the underlying fetch request should be released as well.
Comment, Oct 4 2016
Confirmed on Chrome Canary (55) on Mac. I modified the example to remove the need to start DevTools, which rules out a memory issue in DevTools itself.

Step size 2,000 - choked at #2094: 4,188,000 / 4,386,930,000
Step size 4,000 - choked at #1273: 5,092,000 / 3,243,604,000
Step size 8,000 - choked at #781: 6,248,000 / 2,442,968,000
Step size 100,000 - choked at #204: 20,400,000 / 2,091,000,000
Step size 10,000,000 - choked at #15: 150,000,000 / 1,200,000,000
Step size 100,000,000 - choked at #4: 400,000,000 / 1,000,000,000
Step size 200,000,000 - choked at #2: 400,000,000 / 600,000,000
Step size 400,000,000 - choked at #2: 800,000,000 / 1,200,000,000
Step size 500,000,000 - choked at #2: 1,000,000,000 / 1,500,000,000
Step size 600,000,000 - choked at #1: 600,000,000 / 600,000,000
Step size 1,000,000,000 - choked at #1: 1,000,000,000 / 1,000,000,000

(Each line gives the size of the failing request / the cumulative bytes attempted.) The lower step sizes show that a global quota is being hit. The higher step sizes demonstrate that individual Blob instances are capped at 512 MB, so the lower step sizes aren't hitting a per-Blob size limit. (I already knew that from the code.)

I added Blob.close() calls and enabled the experimental Web Platform features flag, and the results didn't seem to budge:

Step size 8,000 - choked at #760: 6,080,000 / 2,313,440,000
Step size 100,000 - choked at #216: 21,600,000 / 2,343,600,000

dmurph: This seems somewhat similar to http://crbug.com/554678 -- can you please take a look? Will your Blob rework solve this?
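For reference, the close() modification presumably amounted to something like the following (a sketch only: Blob.close() existed solely behind the experimental Web Platform features flag and was later dropped from the spec, and the exact change isn't attached to this thread):

// Hypothetical doOne() with an explicit Blob.close() call; nextSize and n
// come from the enclosing faultIt() scope in the original example.
function doOne() {
  const blob = new Blob([new Uint8Array(nextSize)]);
  window.fetch('/noop', {method: 'POST', body: blob}).then(function() {
    if (typeof blob.close === 'function') {
      blob.close();  // ask the browser to release the blob's backing store
    }
    n += 1;
    nextSize += 8000;
    doOne();
  }).catch(function(err) {
    console.log("error: ", err);
  });
}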
Comment, Oct 4 2016
I forgot to attach the modified test case. It works when served from any static HTTP file server; no extra setup is needed.
Comment, Oct 5 2016
If this is happening around 500, then I strongly suspect this is due to the memory limit. I find it strange that close() isn't working; I'd like to investigate that.
Comment, Oct 18 2016
Because every invocation chokes at a different point, there is probably a non-deterministic factor at play. My guess is the GC: it fires at different times, producing different patterns of heap fragmentation.
Comment, Feb 13 2017
Can you try again on Canary? I think this should be fixed.
Comment, May 12 2017
This seems better in versions after 57 (likely due to the resolution of the blob memory limit ticket), but it still happens after a while. This needs to be resolved so we can handle larger files: I run into issues with files as small as 2 GB, using a chunk size between 1 MB and 16 MB. The whole point of chunking is that the file doesn't all have to be in memory at once, yet there appears to be some limit even with chunking.
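For context, the chunking pattern being described is roughly the following (a minimal sketch; the /upload endpoint, the function name, and the 8 MB default chunk size are illustrative, not from this thread):

// Hypothetical chunked upload: slice a File into fixed-size pieces and
// POST them sequentially, so only one chunk is in flight at a time.
// File.prototype.slice returns a Blob view without copying the data.
async function uploadInChunks(file, chunkSize = 8 * 1024 * 1024) {
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    const chunk = file.slice(offset, offset + chunkSize);
    const res = await fetch('/upload', {method: 'POST', body: chunk});
    if (!res.ok) throw new Error(`chunk at ${offset} failed: ${res.status}`);
  }
}

Per the comment above, some limit still appears to be hit even though only one chunk should need to be materialized at a time.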
Comment, May 15 2017
Are you doing something that sets responseType: blob?
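(For readers following along: responseType = 'blob' is an XMLHttpRequest setting that makes the browser materialize each response body as a Blob, which would presumably count against the same blob storage. A minimal example, with an illustrative URL:)

// XHR with responseType = 'blob': xhr.response is delivered as a Blob
// rather than text, so each response consumes blob storage.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/some/resource');
xhr.responseType = 'blob';
xhr.onload = () => console.log(xhr.response instanceof Blob);  // true
xhr.send();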
Comment, Jan 2
Closing this as fixed, as the blob limits have been removed. Please re-open if you are still seeing this.