Microsoft.com unreachable? (COMPRESSION_ERROR)
Issue description

I've set RESTRICT-VIEW-GOOGLE as my network log was captured with full data.

Chrome Version: 61.3150
OS: Windows 10

What steps will reproduce the problem?
(1) Open https://www.microsoft.com in my normal profile

What is the expected result?
Page loads

What happens instead?
ERR_CONNECTION_RESET

This doesn't reproduce in an Incognito profile, making me wonder whether this is an issue where the server doesn't like how we're encoding cookies?

t=241287 [st=118] HTTP2_SESSION_RECV_GOAWAY
    --> active_streams = 1
    --> debug_data = ""
    --> error_code = "9 (COMPRESSION_ERROR)"
    --> last_accepted_stream_id = 1
    --> unclaimed_streams = 0
Jul 10 2017
Reminds me of issue 725079 and issue 735055. Are you using AVG antivirus by any chance? Do you experience the same error with AVG temporarily disabled?
Jul 10 2017
This is a Windows 10 system with no AV beyond the default Windows Defender.
Jul 10 2017
Thank you. I'll look into the network log and try to replay your request against https://www.microsoft.com to see if I can reproduce this issue.
Jul 18 2017
For what it's worth, if I use curl to replay the request (including the gigantic "cookie" header) against https://www.microsoft.com, it does not send a GOAWAY but responds politely with a 302. Of course one has to keep in mind that curl has its own HPACK encoder. This is just to rule out obvious hiccups like invalid characters in the cookie.
Jul 19 2017
I replayed the exact binary data taken from the network log against https://www.microsoft.com, and got a 200 response without any errors. Possibly they have fixed the server since. Could you please try to reproduce now? In the meantime, I'll check whether the IP address that my client resolves to matches the one from the log, and if not, I'll connect to that specific IP address.
Jul 19 2017
Wild! For me, this still appears to repro in the latest Canary.
Jul 19 2017
Thank you for providing more feedback. Adding requester "bnc@chromium.org" to the cc list and removing "Needs-Feedback" label. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Jul 19 2017
Oops, never mind, I had made a mistake. Now I can reproduce it. I'll be working on figuring out what exactly triggers the error.
Jul 20 2017
What happens is that the server advertises a SETTINGS_MAX_HEADER_LIST_SIZE value of 16 kB, but the request headers are longer than that due to very long cookies. Hence the server is unhappy about the headers, which it expresses by emitting a COMPRESSION_ERROR. Below is a unittest to reproduce the issue.

There is no good way to solve this problem. The server controls both the limit on headers it is willing to accept and the cookies themselves that are making the headers too large. This ultimately should result in a user-facing error. One could argue that an ERR_CONNECTION_RESET, while technically correct, is not particularly revealing. On the other hand, while some kind of clever retry mechanism could be able to paper over this issue (retry with HTTP/1.1 hoping that the server does not mind large headers there; retry without cookies), it would not actually solve the problem.

Therefore I am closing this as WontFix. For your particular situation, I recommend considering manually clearing the cookies for www.microsoft.com.
TEST(RealNetworkTraffic, MicrosoftCompressionError) {
  HttpNetworkSession::Params params;
  HttpNetworkSession::Context context;

  MockCertVerifier cert_verifier;
  cert_verifier.set_default_result(OK);
  context.cert_verifier = &cert_verifier;
  CTPolicyEnforcer ct_policy_enforcer;
  context.ct_policy_enforcer = &ct_policy_enforcer;
  TransportSecurityState transport_security_state;
  context.transport_security_state = &transport_security_state;
  DoNothingCTVerifier cert_transparency_verifier;
  context.cert_transparency_verifier = &cert_transparency_verifier;
  std::unique_ptr<ProxyService> proxy_service = ProxyService::CreateDirect();
  context.proxy_service = proxy_service.get();
  SSLConfigService* ssl_config_service = new SSLConfigServiceDefaults;
  context.ssl_config_service = ssl_config_service;
  HttpServerPropertiesImpl http_server_properties;
  context.http_server_properties = &http_server_properties;
  TestProxyDelegate proxy_delegate;
  context.proxy_delegate = &proxy_delegate;
  MockCachingHostResolver host_resolver;
  host_resolver.rules()->AddIPLiteralRule("www.microsoft.com",
                                          "2600:1402:e:299::747", "");
  context.host_resolver = &host_resolver;
  HttpNetworkSession session(params, context);

  const HostPortPair destination("www.microsoft.com", 443);
  ProxyInfo proxy_info;
  proxy_info.UseDirect();
  HttpRequestInfo request;
  request.method = "GET";
  request.url = GURL("https://www.microsoft.com/");
  SSLConfig server_ssl_config;
  SSLConfig proxy_ssl_config;
  session.GetSSLConfig(request, &server_ssl_config, &proxy_ssl_config);

  ClientSocketHandle connection;
  TestCompletionCallback callback;
  int rv = InitSocketHandleForHttpRequest(
      ClientSocketPoolManager::SSL_GROUP, destination, request.extra_headers,
      request.load_flags, DEFAULT_PRIORITY, &session, proxy_info,
      /* expect_spdy = */ true, server_ssl_config, proxy_ssl_config,
      request.privacy_mode, NetLogWithSource(), &connection,
      OnHostResolutionCallback(), callback.callback());
  rv = callback.GetResult(rv);
  EXPECT_THAT(rv, IsOk());
  EXPECT_TRUE(connection.socket()->IsConnected());
  EXPECT_TRUE(connection.socket()->WasAlpnNegotiated());
  EXPECT_EQ(kProtoHTTP2, connection.socket()->GetNegotiatedProtocol());

  SpdyHeaderBlock headers;
  headers[":method"] = "GET";
  headers[":authority"] = "www.microsoft.com";
  headers[":scheme"] = "https";
  headers[":path"] = "/";
  headers["cookie"] = SpdyString(16164, 'a');
  size_t size = 0;
  for (const auto& kv : headers) {
    size += kv.first.size() + kv.second.size() + 32;
  }
  // One byte longer than what the server is willing to accept.
  EXPECT_EQ(16385u, size);

  HpackEncoder encoder(ObtainHpackHuffmanTable());
  SpdyString encoded_headers;
  EXPECT_TRUE(encoder.EncodeHeaderSet(headers, &encoded_headers));

  // HTTP/2 connection preface.
  std::string raw_data(
      test::a2b_hex("505249202A20485454502F322E300D0A0D0A534D0D0A0D0A"));
  // Initial SETTINGS frame, empty.
  raw_data.append(test::a2b_hex("000000040000000000"));
  // Length of HEADERS frame.
  raw_data.append(
      test::a2b_hex(SpdyStringPrintf("%06x", encoded_headers.size()).c_str()));
  // Rest of HEADERS frame header.
  raw_data.append(test::a2b_hex("010500000001"));
  // HEADERS payload.
  raw_data.append(encoded_headers);

  scoped_refptr<IOBuffer> buffer = new IOBuffer(16 * 1024);
  memcpy(buffer->data(), raw_data.data(), raw_data.size());
  rv = connection.socket()->Write(buffer.get(), raw_data.size(),
                                  callback.callback());
  rv = callback.GetResult(rv);
  EXPECT_EQ(static_cast<int>(raw_data.size()), rv);
  EXPECT_TRUE(connection.socket()->IsConnected());

  rv = connection.socket()->Read(buffer.get(), 16 * 1024, callback.callback());
  rv = callback.GetResult(rv);
  EXPECT_EQ(65, rv);

  // SETTINGS frame header.
  SpdyString expected_result("00001E040000000000");
  // SETTINGS_HEADER_TABLE_SIZE = 0x1000
  expected_result.append("000100001000");
  // SETTINGS_MAX_CONCURRENT_STREAMS = 0x64
  expected_result.append("000300000064");
  // SETTINGS_INITIAL_WINDOW_SIZE = 0xffff
  expected_result.append("00040000FFFF");
  // SETTINGS_MAX_FRAME_SIZE = 0x4000
  expected_result.append("000500004000");
  // SETTINGS_MAX_HEADER_LIST_SIZE = 0x4000
  expected_result.append("000600004000");
  // SETTINGS ACK.
  expected_result.append("000000040100000000");
  // GOAWAY, last stream id = 1, error code = COMPRESSION_ERROR.
  expected_result.append("0000080700000000000000000100000009");
  SpdyString result(buffer->data(), rv);
  EXPECT_EQ(test::a2b_hex(expected_result.c_str()), result);

  rv = connection.socket()->Read(buffer.get(), 16 * 1024, callback.callback());
  EXPECT_THAT(rv, IsOk());
  EXPECT_FALSE(connection.socket()->IsConnected());
}
Jul 20 2017
Fascinating, thanks for looking at this. If the server advertises that it's only going to accept 16 kB of header data, and the net stack thinks it needs to send more than 16 kB of header data, wouldn't it make more sense for Chrome to abort the request with some form of "Request too large" notice to the user? Any objections to making this public now that I've removed my log file?
Jul 20 2017
I have no objections to making this public; in fact, that's preferred in general. Since servers are not mandated to refuse requests that are larger than what they have advertised they would accept, I would prefer not to abort a request before it is sent to the server. However, if the server signals the error with a GOAWAY frame, that frame is not associated with any given stream. And COMPRESSION_ERROR is not specific to the request being too large. So determining when such a specific error message needs to be displayed is complicated. (Not impossible, not even necessarily not worthwhile, but complicated.)
Jul 20 2017
Comment 1 by elawrence@chromium.org, Jul 7 2017