ERR_INCOMPLETE_CHUNKED_ENCODING with large chunked response
Reported by ncorti...@gmail.com, Nov 23 2016
Issue description
Chrome Version: 54.0.2840.98 (Official Build) (64-bit)
Other browsers tested:
Safari: OK
Firefox: OK
What steps will reproduce the problem?
(1) Load a large chunked response (1000000 chunks) in the browser
What is the expected result?
No ERR_INCOMPLETE_CHUNKED_ENCODING error
What happens instead?
ERR_INCOMPLETE_CHUNKED_ENCODING error
node.js server script:

const http = require('http');

const numberOfItems = 1000000;
const data = new Array(1000 + 1).join('D');

const server = http.createServer((request, response) => {
  let count = 0;
  response.statusCode = 200;
  while (count < numberOfItems) {
    count++;
    response.write(data);
  }
  response.end();
});

server.listen(8080, 'localhost', () => {
  console.log('Server running at http://localhost:8080');
});
Nov 28 2016
This is generally caused by not sending an empty terminal chunk, and instead just closing the connection. Are you sending such a chunk? (Sorry, I don't know node.js's API).
Nov 29 2016
I think that the response is properly formatted. According to the node.js documentation (https://nodejs.org/api/http.html#http_request_end_data_encoding_callback), request.end() "Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream. If the request is chunked, this will send the terminating '0\r\n\r\n'." Please take into consideration that the same example works when you either reduce the number of chunks or switch to another browser like Safari or Firefox.
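For reference, the wire framing that the terminating '0\r\n\r\n' belongs to can be sketched in a few lines of node. This is only an illustration of HTTP/1.1 chunked transfer coding, not node's internal implementation; encodeChunk and TERMINAL_CHUNK are illustrative names:

```javascript
// Minimal sketch of HTTP/1.1 chunked transfer framing.
// Each chunk is: <size in hex>\r\n<data>\r\n, and the body must end
// with the zero-length terminal chunk "0\r\n\r\n" that Chrome checks for.
function encodeChunk(data) {
  return Buffer.concat([
    Buffer.from(Buffer.byteLength(data).toString(16) + '\r\n'),
    Buffer.from(data),
    Buffer.from('\r\n'),
  ]);
}

const TERMINAL_CHUNK = Buffer.from('0\r\n\r\n');

// A complete two-chunk body: "4\r\nDDDD\r\n2\r\nDD\r\n0\r\n\r\n"
const body = Buffer.concat([
  encodeChunk('DDDD'),
  encodeChunk('DD'),
  TERMINAL_CHUNK,
]);

console.log(JSON.stringify(body.toString()));
```

A response that closes the connection without the final "0\r\n\r\n" is exactly what Chrome reports as ERR_INCOMPLETE_CHUNKED_ENCODING.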
Nov 29 2016
Neither Safari nor Firefox requires the terminating chunk (which I consider a bug in both browsers, since there's no way to tell whether the file is truncated in that case). The only place that can return this error is https://cs.chromium.org/chromium/src/net/http/http_stream_parser.cc?q=ERR_INCOMPLETE_CHUNKED_ENCODING&sq=package:chromium&dr=C&l=745. Given the very low incidence of this error, I'm almost certain the issue is on the server side. I'd ask for a net-internals log to verify that, but the size, with all those chunks, is just too big. Can you verify via wireshark or something that the terminal chunk is indeed sent?
Nov 29 2016
Ok, I didn't know that. I'll try to get a tcp dump.
Dec 1 2016
Attaching tcp dump captured with wireshark.
Dec 1 2016
Looks to me like that response does not end with an empty chunk; it just cuts off with a CRLF (packet 4004) and then a FIN (packet 4005). Do you see something I'm missing? This may be a node.js issue.
Dec 2 2016
When I was testing it, I managed to sometimes see the empty chunk and sometimes not. There seems to be a race condition somewhere. In fact, if I follow the TCP stream in that pcap, there are only 2140 instances of 3e8 (1000) in hex. So of the 1000000 chunks you are trying to write, Node is only sending about 2140 of them before closing the connection. Are you sure Node is properly draining its write buffer before closing the connection? (PS: I hope your real server does not look like this. If your server writes without backpressure like that, it's pretty easy to DoS.)
Dec 2 2016
Ok, I see what the problem is. The server-side code is overflowing the write buffer and ignoring that fact; eventually node closes the connection. Obviously it is possible to rewrite that code to wait until the write buffer is drained. I was misled by the fact that other browsers did not complain, but it is an error on the server side nonetheless. Thanks for looking into this. Juan
Dec 2 2016
Right, the reason we enforce the required trailing chunk is precisely so we can detect this kind of unintended truncation. :-) This looks like it might be a bug in Node itself though. It appears that response.end() is supposed to drain things before internally closing things? I'm not sure why it isn't. (Whether this is a desirable API is another matter due to the DoS here. A server streaming out data should always pay attention to when writes are flushed and keep a bounded amount of data in the write buffer. This API makes it really easy to buffer unboundedly.)
Comment 1 by manoranj...@chromium.org, Nov 23 2016