ERR_INCOMPLETE_CHUNKED_ENCODING when connection closes without last chunk
Issue description
Chrome version: 50.0.2661.94 (non-regression, it also occurs in 34.0.1847.137).
Steps to reproduce:
1. Download the attached files:
- with-end.http-response - This is a fully standards-compliant chunked response (RFC 7230, section 4.1).
- no-end.http-response - This is the same as the above, except "last-chunk CRLF" ("0\r\n\r\n") is missing.
2. Serve the raw response with netcat.
OS X : nc -l 3333 < with-end.http-response
Linux : nc -l -p 3333 < with-end.http-response
3. Open http://localhost:3333 in Chrome and confirm that you see "abcdef".
4. Now you've verified that the test method works, repeat step 2 with the "no-end.http-response" file.
5. Visit http://localhost:3333 in Chrome, again.
Expected:
- Chrome should display "abcdef", just like step 3.
Actual:
- The page stays blank.
- Open Chrome's devtools and observe that the following error is displayed: net::ERR_INCOMPLETE_CHUNKED_ENCODING
This works as expected in Firefox 46.
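For reference, the two attached files can be reconstructed with a short script. This is a sketch based on the description above: the body "abcdef" and the missing "0\r\n\r\n" come from the report, while the exact status line and headers are assumptions.

```python
# Hypothetical reconstruction of the two test files described above.
# The chunk body "abcdef" and the omitted last-chunk are from the report;
# the status line and headers are assumed for illustration.
RESPONSE_HEAD = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/plain\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
)
CHUNKS = b"6\r\nabcdef\r\n"      # one chunk: size 6 (hex), data "abcdef"
LAST_CHUNK = b"0\r\n\r\n"        # last-chunk + terminating CRLF

with open("with-end.http-response", "wb") as f:
    f.write(RESPONSE_HEAD + CHUNKS + LAST_CHUNK)

with open("no-end.http-response", "wb") as f:
    f.write(RESPONSE_HEAD + CHUNKS)  # same response, last-chunk omitted
```

Serving either file with nc as in step 2 then reproduces the difference.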
Background information:
This is a reduced test case. I develop the PDF Viewer Chrome extension, and a user reported that they couldn't view a PDF file with my extension even though it worked with the built-in extension (PDFium). It turned out that the server (IIS 7.0, ASP.NET 4.0.30319) sent a chunked response without a "0\r\n\r\n" at the end. In Firefox 42, the PDF loaded fine.
The PDF was confidential, so I created a new HTTP response by hand that accurately represents the problem (reduction.http-response).
a. Use the above steps (steps 2 and 3) with reduction.http-response. Chrome's built-in PDF reader opens it just fine (open net-internals-log-pdfium.json at chrome://net-internals#import to see the requests).
b. Install my PDF Viewer (https://chrome.google.com/webstore/detail/pdf-viewer/oemmndcbldboiebfnladdacbdfmadadm), and open chrome-extension://oemmndcbldboiebfnladdacbdfmadadm/http://localhost:3333/ in the browser. The PDF fails to load (via XMLHttpRequest), and Chrome's devtools displays "Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING" (see net-internals-log-pdfjs-extension.json).
The fact that the built-in PDF reader manages to process the stream indicates that Chrome at least knows how to process the response despite the missing last chunk. The type of response seems not uncommon (e.g. https://stackoverflow.com/q/22219565). Since Firefox (and apparently also IE) know how to process this response, Chrome should also be able to process responses that lack the last chunk when the connection closes.
Relevant specifications:
https://tools.ietf.org/html/rfc7230#section-4.1 specifies the format:
chunked-body   = *chunk
                 last-chunk
                 trailer-part ; may be empty
                 CRLF
chunk          = chunk-size [ chunk-ext ] CRLF
                 chunk-data CRLF
chunk-size     = 1*HEXDIG
last-chunk     = 1*("0") [ chunk-ext ] CRLF
chunk-data     = 1*OCTET ; a sequence of chunk-size octets
When the chunked-body is just "*chunk" and the connection closes, Firefox processes the response as if there were an implicit "0 CRLF" (last-chunk CRLF). Chrome, on the other hand, cries net::ERR_INCOMPLETE_CHUNKED_ENCODING and refuses to accept the response (unless the response is handled by the built-in PDF reader...?!?).
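To make the difference concrete, here is a minimal chunked-decoder sketch (hypothetical, not Chromium's or Firefox's actual code; trailers and extensions are not validated) showing where a strict parser fails and where a lenient one treats a clean EOF at a chunk boundary as an implicit last-chunk:

```python
# Sketch of strict vs. lenient chunked decoding. Not real browser code;
# written only to illustrate the behavioral difference described above.
def decode_chunked(raw: bytes, lenient: bool = False) -> bytes:
    body, pos = b"", 0
    while True:
        eol = raw.find(b"\r\n", pos)
        if eol == -1:
            if lenient and pos == len(raw):
                # EOF exactly at a chunk boundary: treat it as an
                # implicit "0 CRLF" last-chunk (Firefox-like behavior).
                return body
            # Strict behavior: missing last-chunk is an error
            # (Chrome's net::ERR_INCOMPLETE_CHUNKED_ENCODING).
            raise ValueError("incomplete chunked encoding")
        size = int(raw[pos:eol].split(b";")[0], 16)  # chunk-size [ chunk-ext ]
        if size == 0:
            return body                              # explicit last-chunk seen
        pos = eol + 2
        body += raw[pos:pos + size]
        pos += size + 2                              # skip chunk-data CRLF
```

With the payload from the repro, `decode_chunked(b"6\r\nabcdef\r\n0\r\n\r\n")` succeeds either way, while `b"6\r\nabcdef\r\n"` succeeds only with `lenient=True`.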
https://tools.ietf.org/html/rfc7230#section-3.4 specifies the behavior for incomplete messages:
A client that receives an incomplete response message, which can
occur when a connection is closed prematurely or when decoding a
supposedly chunked transfer coding fails, MUST record the message as
incomplete.
...
A message body that uses the chunked transfer coding is incomplete if
the zero-sized chunk that terminates the encoding has not been
received. A message that uses a valid Content-Length is incomplete
if the size of the message body received (in octets) is less than the
value given by Content-Length.
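The two incompleteness rules quoted above can be stated as simple predicates (a sketch; the function names are mine, not from any browser's code):

```python
# RFC 7230 section 3.4's two incompleteness rules, as predicates.
# Names are illustrative, not taken from any real implementation.
def chunked_is_incomplete(saw_last_chunk: bool) -> bool:
    # Chunked body: incomplete unless the zero-sized chunk was received.
    return not saw_last_chunk

def content_length_is_incomplete(received_octets: int, content_length: int) -> bool:
    # Content-Length body: incomplete if fewer octets arrived than declared.
    return received_octets < content_length
```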
Chrome is already treating incomplete responses as complete for downloads:
https://chromium.googlesource.com/chromium/src/+/f6c40587bc9ee68ef8102992ea5c52cc8e0c6933/content/browser/download/download_request_core.cc#534
Can this behavior be extended to any other response?
May 8 2016
Also worth noting that papering over this server bug encourages buggy, slow websites. If we supported this behavior, some websites would "work", but only because they hit their connection timeout after we wait however many seconds for them to time out the TCP connection for more chunks. We've seen that behavior before: every chunked response would take something like 30 seconds, but "worked" in Firefox because of this behavior.
May 8 2016
The links at https://crbug.com/501384#c3 refer to evidence that quite a few servers neglect to send the zero-length chunk. Ideally, servers would fix the problem by becoming standards-compliant. But failing that, there is nothing Chrome users can do at the moment to see the content anyway. I tested the above with Firefox 46, Safari 9 and Microsoft Edge 13, and all of them accept closed connections without the final zero-length chunk.

Furthermore, if there is a delay between receiving the chunk and closing the connection, Chrome renders the content before printing ERR_INCOMPLETE_CHUNKED_ENCODING to the console (so there are now three examples where Chrome does show the content; the other two are in my first report: downloads and the PDF reader).

Do you see any issues with accepting a response if the chunked response seems valid (except for the missing last chunk and CRLF) and the connection is closed soon after receiving the last chunk (e.g. within 1 second)? That would cover the case where servers close the connection immediately after sending the last data chunk (without the final zero-length chunk), and address your concern about delaying responses by 30 seconds (to be honest, I don't understand why treating closed connections as complete responses would stall requests for 30 seconds). And to ease debugging, the error could be logged to the JS console and/or the network log (chrome://net-internals).
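The heuristic proposed in that comment could look roughly like this (all names and the 1-second grace period are illustrative, not Chromium code):

```python
# Sketch of the proposed acceptance heuristic: a chunked response missing
# only its last-chunk is accepted if the connection closed promptly after
# the final data chunk. Names and the grace period are hypothetical.
GRACE_SECONDS = 1.0

def accept_on_close(saw_last_chunk: bool, mid_chunk: bool,
                    last_data_at: float, closed_at: float) -> bool:
    if saw_last_chunk:
        return True      # fully standards-compliant response
    if mid_chunk:
        return False     # truncated inside a chunk: a real error
    # Connection closed at a chunk boundary: accept only a prompt close,
    # so slow servers that stall for ~30s still surface an error.
    return (closed_at - last_data_at) <= GRACE_SECONDS
```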
May 8 2016
Hrm...At the HTTP layer, the network stack itself behaves identically, regardless of whether we see the close immediately or not - we read from the socket once and pass along everything we have. Then if we see an error on the next read, we return it. There could be differences at the SSL layer, or it could be higher up the chain, in the ResourceLoader stack...Or it could be in blink. Regardless, doesn't seem to be an issue in the HTTP code.
May 8 2016
And the issue with accepting the response as correct if we don't see the final chunk is that we may have a partial download. I'm not any more concerned about the HTML case than about everything else (partial JS doesn't exactly work well, either). It's an error, and should be surfaced as such.
May 8 2016
OH, and it also shouldn't be cached as a full, correct response, for the same reason.
May 8 2016
The decision seems to be made in HttpStreamParser::DoReadBodyComplete [1] (it is the only source of the error according to codesearch).

> And the issue with accepting the response as correct if we don't see the final chunk is we have a partial download.

I think that this issue is already present in the current version of Chrome because of the special handling for downloads [2].

[1] https://chromium.googlesource.com/chromium/src/+/4cf6e0c8c2c179eaa6ba02583a2861a2f85ca474/net/http/http_stream_parser.cc#683
[2] https://chromium.googlesource.com/chromium/src/+/f6c40587bc9ee68ef8102992ea5c52cc8e0c6933/content/browser/download/download_request_core.cc#534
May 8 2016
You said that if the connection close is delayed, the content is rendered - that decision is certainly not made in HttpStreamParser. HttpStreamParser reads and returns all it can. If the next read sees a connection close, it returns ERR_INCOMPLETE_CHUNKED_ENCODING. It doesn't have anything that would explain rendering partial content only if there's a delay before the error (which I agree is weird).
Jun 22 2016
Davidben has pointed out that 0.00% of main frame and subresource loads result in ERR_INCOMPLETE_CHUNKED_ENCODING. Given that extremely low rate, I'm going to go ahead and close the bug, and it seems like we should remove the handling for it in the downloads code, too.
Jun 22 2016
(There is also something to be said about security consequences for atomically-processed HTTPS resources like JavaScript if we don't do this thanks to close_notify in TLS being hopeless. Although I don't know if we enforce Content-Length being too long or HTTP/2 streams that don't end correctly. If we're in one of the cases where we must rely on transport close, we have no hope.)
Jun 22 2016
Worth noting we have a similar hack for ERR_CONTENT_LENGTH_MISMATCH, which I'm not currently removing.
Jun 23 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/3d551ff2ccf5599ebc08037d34b2c1f3b3038874

commit 3d551ff2ccf5599ebc08037d34b2c1f3b3038874
Author: mmenke <mmenke@chromium.org>
Date: Thu Jun 23 01:21:01 2016

Remove hack to treat ERR_INCOMPLETE_CHUNKED_ENCODING error as success.

Only applies to downloads. We were treating it as an error elsewhere (unless there's also a Content-Length header which has the same value as the number of bytes the response decompresses to... which is probably a bug). 0.00% of main frame loads end with this error, so it seems like this hack isn't needed.

BUG=501384, 610126
Review-Url: https://codereview.chromium.org/2092523002
Cr-Commit-Position: refs/heads/master@{#401499}

[modify] https://crrev.com/3d551ff2ccf5599ebc08037d34b2c1f3b3038874/content/browser/download/download_request_core.cc
Sep 29 2016
Hi, this change affects some of our vendors' websites for their downloads. Can you consider reverting the change for downloads, to allow ERR_INCOMPLETE_CHUNKED_ENCODING responses again? We are unable to get all our vendors to fix their websites, so in the interim we are advising users to use IE or Firefox as an alternative.
Sep 29 2016
Given the general difficulty of getting rid of old broken behavior for downloads, and how few complaints we've had about it since it went to stable a month ago (1, not including yours - issue 649511 ), I don't think we should bring it back. Note that if we brought back the broken behavior, and someone else anywhere on the entire planet added a new server that depended on the broken behavior, we'd be back in the same place with respect to them.
Sep 30 2016
We are working in an enterprise environment supporting over 25,000 users; this report is on behalf of those users. It takes time for user complaints to filter up and for us to locate the cause of the issue. It has also been reported to the vendor, and they have to triage it on their side too. That is why this is the first post, three weeks after the change was released (since 2016-09-07). You can see another report of this issue at https://productforums.google.com/forum/#!topic/chrome/8Aykgxa8kWU

Accepting a connection without the trailing chunk is apparently deemed acceptable in IE and Firefox, likely due to the robustness principle (https://en.wikipedia.org/wiki/Robustness_principle): "Be conservative in what you do, be liberal in what you accept from others" (often reworded as "Be conservative in what you send, be liberal in what you accept").
Sep 30 2016
Unfortunately, in the general case, that requires all browsers to behave the same in unspecified cases, and in practice browsers rarely agree in all corner cases, so it's best to stick to the spec and fail on errors. In this particular case, it can leave users with a corrupted file - we have no way to know the file is complete. Also worth noting that being liberal in what you accept can result in security bugs (and this has happened in the past). Is there a case where an attacker truncating a file being downloaded over a secure connection results in a security issue? I can't tell you, and thinking about it makes my brain hurt.
Feb 15 2017
Issue 649511 has been merged into this issue.
Feb 15 2017
Issue 461213 has been merged into this issue.
Feb 15 2017
Issue 691479 has been merged into this issue.
May 26 2017
Is there any potential to implement a flag allowing those of us who enjoy using Chrome to keep using it with servers that aren't well behaved? In my use case, Amazon (they have a rather small handful of servers under the AWS banner) occasionally gives this error in a situation where no file is being transferred and there is no danger in treating incomplete data as complete when the server closes the connection early. A flag enabling 'magical non-spec-compliant behavior', so users can actually use the same broken servers as other browsers without being forced to switch to those browsers, would probably meet many people's definition of an absolutely necessary feature. There's likely a lack of complaints because people read the chilly and almost hostile reception to these reports and just don't bother. Other than from a principled standpoint, is it very difficult to implement a code path that can treat a suddenly terminated connection as "0\r\n\r\n", based on whether a flag is set?
May 26 2017
If it's Amazon, I imagine they could deploy a fix to their servers relatively quickly. Have you reported a bug with them?
Comment 1 by mmenke@chromium.org, May 8 2016