Link preload doesn't load assets with GZIP enabled
Reported by unknown9...@gmail.com, Jun 28 2018
Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0

Steps to reproduce the problem:
1. Open the Network inspector.
2. Go to https://glitch.com/edit/#!/dynamic-diploma?path=server.js:8:23 > Show Live. (I created a demo page which flushes the preload headers, then adds a delay, then flushes the rest of the page.)
3. The waterfall shows the assets loading only once the HTML is fully downloaded.

What is the expected behavior?
Assets should be downloaded as soon as the preload headers are parsed. This happens when the response is not gzipped: if you remix the demo and remove compression, the waterfall is correct.

What went wrong?
Assets are loaded once the HTML has been parsed, not when the preload headers are parsed.

Did this work before? N/A
Does this work in other browsers? No
Chrome version: 67.0.3396.99   Channel: n/a
OS Version: OS X 10.13
Flash Version:

Not sure if this is an Express/Node issue or a browser issue, but if I curl the demo I can see the preload headers > delay > rest of HTML, so content is being sent to the browser.
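For reference, here is a minimal sketch of the kind of server described above (hypothetical; the actual Glitch server.js is not quoted in this report, and the markup and paths are placeholders): Express with the compression middleware, an early write carrying the preload hint, a delay, then the rest of the page.

  // Hypothetical reconstruction of the demo server, for illustration only.
  const express = require('express');
  const compression = require('compression');

  const app = express();
  app.use(compression());

  app.get('/', (req, res) => {
    // Early chunk carrying the preload hint.
    res.write('<!DOCTYPE html><html lang="en"><head>' +
              '<link rel="preload" href="/style.css" as="style">');
    res.flushHeaders(); // sends the HTTP headers; the chunk above still sits in the gzip filter
    setTimeout(() => {
      res.end('</head><body>...</body></html>'); // rest of the page, roughly 2 s later
    }, 2000);
  });

  app.listen(3000);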
Comment 1 by viswa.karala@chromium.org, Jun 28 2018
Jul 2
Could you provide a net-internals log? https://www.chromium.org/for-testers/providing-network-details If possible, please select "Include raw bytes". (Please do not load any other sites in the browser while recording, as all network activity will be recorded.)
Jul 2
Is this what you're after?
Jul 2
Thank you for providing more feedback. Adding the requester to the cc list. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Jul 2
Thank you!
Looking at the 3309 (URL_REQUEST) entry, it looks like 2 seconds pass before receiving the first byte:
...
t= 114 [st= 114] -HTTP_TRANSACTION_READ_BODY
t= 114 [st= 114] URL_REQUEST_JOB_BYTES_READ
--> byte_count = 10
--> hex_encoded_bytes =
1F 8B 08 00 00 00 00 00 00 03 ... .
t= 114 [st= 114] +HTTP_TRANSACTION_READ_BODY [dt=1996]
t=2110 [st=2110] HTTP2_STREAM_UPDATE_RECV_WINDOW
--> delta = -289
--> stream_id = 11
--> window_size = 6291167
t=2110 [st=2110] -HTTP_TRANSACTION_READ_BODY
t=2110 [st=2110] URL_REQUEST_JOB_BYTES_READ
--> byte_count = 289
--> hex_encoded_bytes =
75 51 3D 73 C3 20 0C DD F3 2B 54 16 2F 76 7C DD uQ=s. ...+T./v|.
3A 98 2C FD B8 6E ED 90 0E 1D 15 AC 04 52 0C 3E :.,..n.......R.>
50 EC F8 DF 17 DB 71 92 BB 5E 35 C0 BD 27 F4 9E P.....q..^5..'..
24 AA 87 97 8F E7 ED F7 E7 2B 68 6E EC 66 55 8D $........+hn.fU.
17 58 74 07 29 C8 89 91 20 AC 37 2B 48 51 35 C4 .Xt.)... .7+HQ5.
08 4A 63 88 C4 52 7C 6D DF 8A 27 71 9F 72 D8 90 .Jc..R|m..'q.r..
14 9D A1 BE F5 81 05 28 EF 98 5C 7A DA 9B 9A B5 .......(..\z....
AC A9 33 8A 8A 09 E4 A0 C9 1C 34 2F E4 8C 72 30 ..3.......4/..r0
CE B0 41 5B 44 85 96 E4 63 0E 0D 9E 4D 73 6A 6E ..A[D...c...Msjn
C4 29 52 98 10 EE 12 E1 7C 0E 8B 61 B1 37 2C 95 .)R.....|..a.7,.
EF 28 5C BA 5A CD BD 59 E3 7E 20 90 95 A2 4D A7 .(\.Z..Y.~ ...M.
C7 5A 80 0E B4 97 22 F2 60 69 AD 62 14 80 F1 02 .Z....".`i.b....
97 81 FE 2D 52 D6 A4 99 D6 C7 A5 48 05 D3 72 AA ...-R......H..r.
BA 2B FB A3 3E E9 4C 38 6A A2 B4 19 1E DA B4 29 .+..>.L8j......)
A6 33 97 E3 83 4D 55 DE D6 9C 54 76 BE 1E 16 30 .3...MU...Tv...0
C6 3B 59 EB A1 F7 C1 D6 77 6C 35 7B CF 62 D9 24 .;Y.....wl5{.b.$
76 C4 0E 67 36 83 18 94 CC AE DD 66 C9 63 CE 5C v..g6......f.c.\
5D CA C9 26 59 8F 3F FF 0B E8 4B FA D3 09 02 00 ]..&Y.?...K....
00
t=2110 [st=2110] URL_REQUEST_JOB_FILTERED_BYTES_READ
--> byte_count = 521
--> hex_encoded_bytes =
3C 21 44 4F 43 54 59 50 45 20 68 74 6D 6C 3E 0A <!DOCTYPE html>.
3C 68 74 6D 6C 20 6C 61 6E 67 3D 22 65 6E 22 3E <html lang="en">
0A 3C 68 65 61 64 3E 0A 20 20 20 20 3C 6D 65 74 .<head>. <met
61 20 63 68 61 72 73 65 74 3D 22 55 54 46 2D 38 a charset="UTF-8
...
Adding Internals>Network.
Jul 2
Strange how the waterfall is reporting TTFB at around 100 ms, yet curl is definitely outputting the headers ASAP.
Jul 6
@yhirano: Seems that this could be a dupe of Issue 2016?
Jul 9
> #8 If that were the case we would see the bytes in the net log sooner I think.
Jul 9
@Comment 9: Which bytes would you expect? Looking at this more closely, we see HTTP2_SESSION_RECV_HEADERS at t=109 in event 3270, which maps to the SSL_SOCKET_BYTES_RECEIVED at t=109 in event 3268. It then receives 10 bytes of data (HTTP2_SESSION_RECV_DATA) at t=109 and updates the windows appropriately. However, it isn't until the fin=true at t=2110 in event 3270 (after the 289 bytes are received) that we unblock and send the next request, at t=2122. That's what made me think of Issue 2016: we have headers, but not enough of the body yet. When would you have expected to see the bytes?
Jul 25
Note that the log shows that the link header is only read after the entire body is received. If MIME sniffing were the issue here, the MIME sniffer would read the link header as soon as it was received but not send it to the renderer, waiting for enough data to sniff the content-type, so this is not MIME sniffing.

Looking at the URLRequest layer, we only get any of the compressed body bytes (other than the gzip header) when the request completes. Looking at the SSL layer, it also looks like we only get any of the compressed body out of the SSL layer when the request completes. So the issue is between the SSL layer and the server, inclusive. While I can't map the encrypted bytes received over the network to the body bytes, just comparing each read from the network to each read of decrypted bytes makes me think the body is not being held up at the SSL layer, so the body bytes all seem to be delayed before they reach Chrome's network stack.

[unknown9595]: Are you sure that your server isn't holding back the body at the gzip layer? It could be that the server's gzip code holds back data until it has enough for a full gzip window (since doing otherwise reduces the compression ratio).
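A standalone sketch of the buffering being described here, using Node's zlib directly (independent of the Express middleware; the input string is a placeholder): a small write stays in the compressor until a flush forces it out.

  const zlib = require('zlib');

  const gz = zlib.createGzip();
  gz.on('data', chunk => console.log('compressed chunk:', chunk.length, 'bytes'));

  // Apart from the small gzip header, a short write is typically held in
  // zlib's internal buffer rather than emitted as compressed output.
  gz.write('<!DOCTYPE html><html lang="en"><head>...</head>');

  // A sync flush pushes whatever is buffered out immediately, at some cost
  // to the compression ratio.
  setTimeout(() => {
    gz.flush(zlib.constants.Z_SYNC_FLUSH, () => console.log('flushed'));
  }, 2000);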
Jul 26
It's a standard Express/compression setup; when I curl the response, I see the preload tags > delay > body, so the server is sending data as expected.
Aug 24
I agree with mmenke that it's unlikely to be Issue 2016, so we can clear the Needs-Feedback from yhirano. I also agree with mmenke that this is what's going on.

You can even see this in curl. If you run curl https://dynamic-diploma.glitch.me/ then it behaves as you expect: the pause is where you placed it in the code. But that's not doing gzip, because curl by default doesn't send an Accept-Encoding header. Instead you want curl --compressed https://dynamic-diploma.glitch.me/ and THEN you see something closer to what Chrome's seeing, which is that the gzip filter on the server is waiting until it's gotten a larger chunk of data.

You could flush the gzip stream at that point if that's what you want, though note this is an interactivity vs. compression trade-off. Node appears to provide a low-level zlib.flush function: https://nodejs.org/api/zlib.html#zlib_zlib_flush_kind_callback The compression middleware you're using does appear to call it: https://github.com/expressjs/compression/blob/master/index.js#L70 How you use that, I don't know.

I see you are calling some method named "flushHeaders", but this data is part of the body, not the headers. (Headers are HTTP headers, not the <head> block of your HTML.)

Either way, I think we can close this as it's solidly a server-side bug.
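For completeness, a hedged sketch of what that server-side change could look like, assuming the res.flush() helper that the compression middleware adds to the response (the markup and paths are placeholders):

  const express = require('express');
  const compression = require('compression');

  const app = express();
  app.use(compression());

  app.get('/', (req, res) => {
    res.write('<!DOCTYPE html><html lang="en"><head>' +
              '<link rel="preload" href="/style.css" as="style">');
    res.flush(); // flush the gzip stream so this chunk reaches the client before the delay
    setTimeout(() => {
      res.end('</head><body>...</body></html>');
    }, 2000);
  });

  app.listen(3000);

With a flush at that point, the preloaded assets should be able to start downloading during the delay instead of after the full HTML arrives.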