PUT/DELETE requests are stuck in Pending state
Reported by rob.a.st...@gmail.com, Jul 12 2016
Issue description

UserAgent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36

Example URL:

Steps to reproduce the problem:
We can reproduce this problem very reliably on one of our internal systems; one of our customers also sees the same problem.

What is the expected behavior?
PUT/DELETE requests complete successfully.

What went wrong?
PUT/DELETE requests frequently get stuck in a 'Pending' state. This leads to stuck connections, and eventually the browser runs out of connections to the server. The problem only occurs when running Windows Chrome against a Red Hat server running JBoss with WildFly 8.1. The problem doesn't occur with OSX/Linux Chrome or with other browsers (Firefox, Internet Explorer). I will attach a file showing the trace obtained via chrome://net-internals/#events for a failing and a working DELETE request. The problem seems to be caused by the browser entering the 'HTTP_STREAM_PARSER_READ_HEADERS' state twice and downgrading the connection to HTTP/0.9 when doing so.

Did this work before? N/A

Chrome version: 51.0.2704.106  Channel: stable
OS Version: 6.1 (Windows 7, Windows Server 2008 R2)
Flash Version: Shockwave Flash 22.0 r0

The PUT/DELETE requests are part of a sequence of requests. Currently the only way we can work around the problem is to add delays before sending the PUT/DELETE request; some experimentation shows that a delay of 500ms seems to be enough. This is obviously not an ideal solution; a sketch of the workaround follows.
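For illustration, a minimal sketch of the delay workaround described above, assuming the request sequence is driven from page JavaScript via fetch (the function name and URL are hypothetical, and 500ms is just the empirically sufficient value from our testing):

```ts
// Hypothetical sketch: pause before each PUT/DELETE so the connection from
// the previous response has settled before the next request is issued.
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function deleteWithDelay(url: string): Promise<Response> {
  await sleep(500); // ~500ms was enough to avoid the stuck-Pending state
  return fetch(url, { method: "DELETE" });
}
```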
Jul 13 2016
Some more information now that I have managed to fix the issue properly on our system. I noticed that the common factor before all of the failing/pending requests was that the previous response was a 204. Using Fiddler suggested that occasionally the 204 responses were including spurious content; I'm guessing this content was getting picked up by the NEXT response and causing the duplicate header read (and the drop down to HTTP/0.9). Luckily we have a small proxy between our client and the application server, and I was able to modify it to ensure that 204 responses are now guaranteed to have no content. This stopped the issue with Chrome.

Things I am still unsure of:
1) Why the spurious content existed, and in particular why we only see it when the server is running Linux (although I accept this isn't anything to do with Chrome).
2) Why we only see the problem with Chrome for Windows and not the other OSes? My guess is that the low-level network handling differs across OSes, which might explain it.
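A minimal sketch of such a proxy rule (this is not the reporter's actual proxy; the upstream host and port are assumptions), using Node.js to drain and discard any spurious body on a 204 before relaying the response to the browser:

```ts
import * as http from "http";

const UPSTREAM_HOST = "app-server.internal"; // assumption
const UPSTREAM_PORT = 8080;                  // assumption

http.createServer((clientReq, clientRes) => {
  const upstreamReq = http.request(
    {
      host: UPSTREAM_HOST,
      port: UPSTREAM_PORT,
      method: clientReq.method,
      path: clientReq.url,
      headers: clientReq.headers,
      agent: false, // fresh upstream socket each time, so leftover junk
                    // can't poison a reused upstream connection either
    },
    (upstreamRes) => {
      if (upstreamRes.statusCode === 204) {
        // A 204 must not carry a body (RFC 7230). Strip body framing
        // headers, drain any spurious bytes, and relay an empty response.
        const headers = { ...upstreamRes.headers };
        delete headers["content-length"];
        delete headers["transfer-encoding"];
        clientRes.writeHead(204, headers);
        upstreamRes.resume(); // discard junk instead of forwarding it
        clientRes.end();
      } else {
        clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
      }
    }
  );
  clientReq.pipe(upstreamReq);
}).listen(3000);
```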
Jul 14 2016
Thanks for the followup. It would be nice to know the answer to 2), but since the root cause of the problem seems not to be in Chrome, I'm going to go ahead and mark this issue as archived.
Jul 15 2016
The network stack's handling of responses is the same across all OSes, but there could be differences in platform-specific TCP socket behavior. If we see the extra junk on a socket before we try to reuse the socket, we'll close the connection and create a new one. If we see it after we try to reuse the socket, though, we're in trouble.

There was also some weirdness discovered in Chrome's logic here: if we read all available extra data while reading the HTTP headers, we'd still reuse the socket, instead of throwing it out after processing the response. I.e., if we got:

    HTTP/1.1 200
    Content-Length: 0

    Random junk that's not part of the HTTP response.

we'd reuse the socket if we read that entire response at once, despite knowing we had extra data. And then if we received more extra cruft after that, we'd treat it as an HTTP/0.9 response. If we had unread cruft on the socket when we tried to reuse it, though, we'd correctly throw away the socket. I recently landed code to always throw away the socket in that case, but it's not on stable yet.

Regardless, this is just a mitigation for server bugs, and to make our behavior closer to other browsers. The servers should (obviously) be fixed.
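A toy model of the reuse decision described above (this is not Chromium's actual code; all names are illustrative): any unconsumed bytes past the end of the response should disqualify the socket from reuse, regardless of whether they arrived in the same read as the headers.

```ts
// Toy model of the socket-reuse decision (illustrative, not Chromium code).
interface ResponseReadState {
  contentLength: number;      // body length declared by Content-Length
  bodyBytesConsumed: number;  // body bytes actually consumed
  extraBytesBuffered: number; // bytes read past the end of the response
}

// Old (buggy) behavior: extra bytes that arrived in the same read as the
// headers were not counted here, so the socket was reused and the leftover
// junk was later parsed as a bogus HTTP/0.9 response. The fixed behavior,
// per the comment above, rejects the socket whenever extra data was seen:
function canReuseSocket(state: ResponseReadState): boolean {
  const bodyComplete = state.bodyBytesConsumed === state.contentLength;
  return bodyComplete && state.extraBytesBuffered === 0;
}
```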