Starred by 68 users

Issue metadata

Status: Fixed
Last visit > 30 days ago
Closed: Jun 2016
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Launch-OWP
Launch-Accessibility: NA
Launch-Exp-Leadership: ----
Launch-Leadership: ----
Launch-Legal: Yes
Launch-M-Approved: ----
Launch-M-Target: ----
Launch-Privacy: Yes
Launch-Security: Yes
Launch-Test: Yes
Launch-UI: NA

Blocked on:
issue 472009


Issue 452335: Support Brotli as a content-encoding method on HTTPS connections

Reported by, Jan 27 2015 Project Member

Issue description

Brotli is used in WOFF 2.0 web fonts with great success.

This launch-owp bug is about making it available as an HTTP transfer-encoding method (e.g. Accept-Encoding: Brotli).

Advantages of Brotli over gzip:
 - significantly better compression density
 - comparable decompression speed

Disadvantages of Brotli over gzip:
 - significantly slower compression speed making it less useful for serving dynamic assets (note: balancing compression density and speed is possible).

Advantages of Brotli over LZMA:
 - comparable compression density
 - significantly faster decompression speed
 - has a spec on Standards track

Some numbers:
method  decompression speed, compression density
gzip    370.15 MB/s          4.745x
brotli  348.21 MB/s          6.264x
lzma    120.59 MB/s          7.309x
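Density figures like the ones above are simply uncompressed size divided by compressed size, and vary by corpus and settings. A minimal sketch of the computation, using Python's stdlib zlib as a stand-in (brotli is not in the standard library):

```python
import zlib

def density(data: bytes, level: int) -> float:
    """Compression density = uncompressed size / compressed size."""
    return len(data) / len(zlib.compress(data, level))

# A toy, highly repetitive "corpus"; real numbers come from real web content.
sample = b"<html><body><p>hello, compression</p></body></html>\n" * 2000
for level in (1, 6, 9):
    print(f"zlib:{level}  density {density(sample, level):.3f}x")
```

Higher levels trade compression speed for density, which is the same axis the gzip/brotli/lzma comparison in this thread is exploring.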

Comment 1 by, Jan 27 2015

Have other browser vendors expressed an interest in supporting brotli?

Comment 2 by, Jan 27 2015

Labels: OWP-Design-No OWP-Standards-UnofficialSpec OWP-Type-ChangeBehavior
mmenke@: yes, Mozilla is looking into it, see [1].

=== Missing bits for the OWP-Launch bug===
Public standards discussion: 
 Draft IETF spec:
 In ISE Review.

Support in other browsers:
Internet Explorer: no signals
Firefox: showing interest[1], WOFF 2.0 (using Brotli) behind a flag
Safari: no signals


Comment 3 by, Jan 27 2015

Labels: -Cr-Internals-Network Cr-Internals-Network-Filters

Comment 4 by, Jan 27 2015

One other question:  How much memory does it need per stream?  Skimming over the specs, sounds like the sliding window can be up to 16 MB?  Seems like that could be a bit much for mobile.

Comment 5 by, Jan 27 2015

Thanks for the interest.

Here is what I got from the compression team (cc-ed):

"If the decoding client keeps a copy of the bytestream anyways, no other 'sliding window' is needed.

The size of the ringbuffer is configurable from 64 kB to 16 MB, but there is never a reason to make it (much) larger than the data being transferred. If you are sending 100 kB, then a 128 kB ringbuffer is more than enough. For a good compromise on current computers and mobile phones, I'm expecting a 1 MB ringbuffer to be used typically as a maximum. The 16 MB can be more useful for archiving use.

In a streaming compression (where the decoding client doesn't keep a copy), there is an extra need of 64 kB to 16 MB of memory to keep the ringbuffer. I guess we could make a recommendation that if it is a streaming connection between the server and the client, the server should limit the sliding window to 2 MB or so."

We could instrument these opportunities via UMA.
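The sizing advice quoted above can be sketched as a small helper. This is illustrative only, not brotli API; brotli's real window is parameterized in bits, and this just mirrors the 64 kB to 16 MB range and the ~2 MB streaming cap suggested in the quote:

```python
KB, MB = 1 << 10, 1 << 20

def ring_buffer_size(transfer_size: int, streaming: bool = False) -> int:
    """Smallest power-of-two ring buffer covering the payload,
    clamped to brotli's 64 kB..16 MB range (per the advice in c#5)."""
    size = 64 * KB
    while size < transfer_size and size < 16 * MB:
        size *= 2
    if streaming:
        # c#5 suggests streaming servers limit the window to ~2 MB.
        size = min(size, 2 * MB)
    return size

# e.g. a 100 kB response needs only a 128 kB ring buffer
print(ring_buffer_size(100 * KB))
```

So the 16 MB worst case only applies when the sender actually uses the maximum window on a huge payload.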

Comment 6 by, Jan 27 2015

How much more could Brotli compress if we seed the client with content-specific dictionaries (for HTML, CSS, JS and maybe SVG)?  I was planning on running some tests and experiments but maybe someone already has.

For example, with CSS content, could we build a dictionary seed that has all of the css properties or at least a subset from scanning a relatively large corpus? (I use "dictionary" loosely - compression context, initial window, whatever).  Could we get an additional 10%+ on all CSS by adding 10's of KB to the client (and the same for the other common text content)?

It could fall back to straight Brotli if it's not one of the known content types and it might be worth versioning the dictionaries (call it Brot15 or something).

Given that the vast majority of the text resources on the web are JS, CSS and HTML it would be great if we could leverage that to get even better compression (and they are all render-blocking so bytes saved there are way more important than image bytes).
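Brotli itself ships one built-in static dictionary rather than per-content-type ones, but zlib's preset-dictionary feature (available in Python's stdlib) can illustrate the idea being proposed here. The CSS "dictionary" below is a made-up example, not a real tuned corpus:

```python
import zlib

# Hypothetical CSS dictionary: common property names seeded into the
# compressor so occurrences in the payload cost only a back-reference.
CSS_DICT = b"background-color:border-radius:display:flex;margin:0;padding:0;font-family:"

def compress(data: bytes, zdict: bytes = b"") -> bytes:
    c = zlib.compressobj(9, zdict=zdict) if zdict else zlib.compressobj(9)
    return c.compress(data) + c.flush()

def decompress(blob: bytes, zdict: bytes = b"") -> bytes:
    d = zlib.decompressobj(zdict=zdict) if zdict else zlib.decompressobj()
    return d.decompress(blob) + d.flush()

css = b".btn{display:flex;margin:0;padding:0;background-color:#fff}"
plain = compress(css)
seeded = compress(css, CSS_DICT)
assert decompress(seeded, CSS_DICT) == css
print(len(plain), len(seeded))  # seeded should be smaller
```

Both sides must share the exact dictionary bytes, which is why the comment suggests versioning the dictionaries.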

Comment 7 by, Jan 27 2015

I'm a bit confused: skimming the spec, I see a reference to a defined, shared-everywhere static dictionary, but not to content-specific dictionaries.  I didn't read it carefully, so I'm sure I missed something, but where's the protocol support for content-specific dictionaries?

I'm concerned about the combined memory pressure of static dictionaries from SDCH and Brotli, though we could manage that storage so that they interfere with each other rather than with the rest of system memory.  I'm also concerned by the memory pressure implied in c#5; at least in Chrome's current implementation, pretty much all our decoding is streaming (we keep an on-disk copy in the cache, but referencing it might induce latency problems, and the main data is streamed to a separate renderer process, and hence isn't available to browser-side operations such as decoding).

Comment 8 by, Feb 3 2015

[+cbentzel]:  FYI

Comment 9 by, Feb 3 2015

This is listed as started. Who has started it?

Comment 10 by, Feb 3 2015


Comment 11 by, Feb 3 2015

Looking at the numbers as well as the FF bug, it seems like if anything we should be targeting LZMA rather than Brotli. The network would be the bottleneck even at LZMA's 120 MB/s decompression speed, and the data savings are higher.

Comment 12 by, Feb 3 2015

I guess those numbers may be on a high-end machine. Would love to see numbers on a lower-end ARM core to see impact.

Also, I'd like to restrict any potential Accept-Encoding schemes to https only to reduce headaches associated with intermediaries. SDCH is littered with workaround hacks to deal with them, and it would be great to remove the need for those on new encoding schemes.

Comment 13 by, Feb 3 2015

And when there was a bzip2 experiment, it was reverted due to that issue as well.  See  issue 14801 .

Comment 14 by, Feb 4 2015

The numbers for html are better for brotli than for LZMA: 2-8% smaller for brotli. The numbers in the FF bug are for huge asm.js files, and LZMA's statistics get more accurate for larger data (but it will have a bigger working set, too).

Comment 15 by, Feb 4 2015

Here are the benchmark results for two more corpora, the Canterbury corpus and a corpus of ~1000 html files sampled from fetchlogs.

In the table below brotli(6) is the fast implementation of brotli that compresses with roughly the same speed as gzip with quality setting 6, and brotli(11) is a 50-100x slower implementation that gives 10-15% more compression density.

corpus      method      decompression speed, compression density
fetchlogs   gzip(6)     380 MB/s             5.477x
fetchlogs   brotli(6)   315 MB/s             5.781x
fetchlogs   brotli(11)  290 MB/s             6.704x
fetchlogs   lzma         90 MB/s             6.263x
canterbury  gzip(6)     310 MB/s             3.845x
canterbury  brotli(6)   255 MB/s             5.026x
canterbury  brotli(11)  240 MB/s             5.502x
canterbury  lzma         80 MB/s             5.694x

Comment 16 by, Feb 4 2015

Is the current brotli(11) code equivalent to the brotli code on Google's github page for it?  If so, have you tried running it against a corpus of css and js files in addition to html?  How does gzip 9 compare time-wise to brotli(11)?

I took the code on github and ran it against the js and css from the alexa top-300k (basically the pages the http archive fetches) and compared it to gzip setting 9 and I saw 12.2% savings on css (1.2M files) and 9.0% savings on js (2.7M files).

Given the fairly standard attribute and value sets in css (and to some extent js because of DOM methods) I was hopeful that we could improve that further using different dictionaries over the html-based one that is baked into brotli right now.

Comment 17 by, Feb 10 2015

I took a brief look at the Brotli decoder code on github.

2 main concerns (beyond compression rate/speed):
  - Right now it doesn't look to be written in a streaming fashion. This will need to change for it to be used as an HTTP decoder.

  - It looks like there's a minimum of ~100KB of context - 64KB window, ~20KB for huffman tables, ~10KB for bit reader, and probably a bit more once this is in a streaming fashion and has to replace stack with heap. Is there anything which can be done to reduce that?

Comment 18 by, Feb 17 2015

#9: it's started in the sense that this is the OWP-launch tracking bug (not the implementation bug) and I've assigned it to myself to drive consensus and adoption.

#11 re targeting LZMA, one more downside on top of what was mentioned in #14: we had initially thought of using LZMA in WOFF2 but had to give it up for standardization reasons (there isn't a clear spec for it, and no commitment to having one written).

I'll ping the compression team about the extra questions.

Comment 19 by, Feb 17 2015

#17: The decoder is not streaming in the sense of holding its state outside the stack, with the user able to call it repeatedly to get the data available so far -- but it already has another way of streaming: a function pointer gets called every 64 kB or so of output. It requires the use of threads to keep the context around. If you really need this, we will fix it soon. There will be little to no increase of the working set because of this.

#17: Similarly, Deflate64 changes the deflate working set to a similar size (using a 64 kB window), and gives both faster decompression and compression speeds. The working set increases, but overall we get faster compression: there are more copies, which are faster to decode than true entropy-coded literals. Any window size smaller than 64 kB tends to be both slower and less dense than 64+ kB.

#17: It might be possible to use a smaller window. One can read first all Huffman codes that are in the beginning of the block, and then it is possible to compute the maximum length of a backward reference. The differential distances (+1 +2 previous) can make this a bit tricky and might require spec changes. Possibly not worth it.

Comment 20 by, Feb 17 2015

Chrome's networking stack is a single thread event loop. To prevent arbitrary data from being buffered in memory, and to get data to consumers as fast as possible, this will need to be rewritten in a way for the caller to call it repeatedly to get the data out of it.
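The pull-style pattern being asked for here looks roughly like the following. This is a sketch using zlib's stdlib streaming interface as a stand-in for the brotli decoder, not Chromium code:

```python
import os
import zlib

def stream_decode(chunks):
    """Pull-style streaming decode: the caller feeds compressed chunks as
    they arrive and receives whatever output is available so far, without
    the decoder buffering the whole body or needing a separate thread."""
    d = zlib.decompressobj()
    for chunk in chunks:
        out = d.decompress(chunk)
        if out:
            yield out
    tail = d.flush()
    if tail:
        yield tail

body = os.urandom(100_000)            # incompressible, so many chunks below
compressed = zlib.compress(body)
# Feed the decoder 4 kB at a time, as a network stack would.
parts = [compressed[i:i + 4096] for i in range(0, len(compressed), 4096)]
assert b"".join(stream_decode(parts)) == body
```

The key property is that decoder state lives in an object the caller holds between calls, rather than on a stack frame that only a callback (or a second thread) can reach.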

Comment 21 by, Feb 17 2015

#20: Makes sense. We discussed this here, and Lode Vandevenne (of LodePNG, Zopfli and ZopfliPNG fame) will modify the decoder to fit your requirements. He also promised to work on improving the compression ratio on levels 2-9 to be a lot closer to 11. The current version is only optimized for compression levels 1 and 11, with 100+ MB/s at level 1, and extremely slow at level 11. I think we have only open-sourced the level 11 encoder so far, mostly because it seemed the only thing that was relevant for fonts.

Comment 22 by, Feb 25 2015

Sounds like we have a plan then :)

Filed for #20/#21.
Does Lode Vandevenne have a github account?

Comment 23 Deleted

Comment 24 by, Mar 4 2015

#22: I started working on the brotli decoder with streamable output now. I have a github account with username "lvandeve".

-Lode Vandevenne

Comment 25 by, Mar 5 2015

Lode: thanks, I've added you to the Brotli team on github.

Comment 26 by, Mar 5 2015

OK. If possible when adding this I'd like to make it a compile-time option
(that would be on-by-default for Chrome) simply to measure binary size
impact for the cronet target. We already link in the library for Chrome due
to WOFF so no additional overhead there. If this is too much of a pain we
can always run binary size analysis tools to measure impact.

Comment 27 by, Mar 6 2015

#26: It will be very exciting to add Brotli support to Chrome, because it will make the web faster, especially on mobile. I'm making progress with the streaming decoder. Both Jyrki and I will provide support with the integration of Brotli into Chromium, for example if API or performance needs arise, or the need for a context-specific dictionary.

Comment 28 by, Mar 6 2015


Comment 29 by, Mar 13 2015


Comment 30 by, Mar 13 2015

cbentzel@ #26: I'm happy to help with the binary size analysis using our Binary Size Tool:

I did a recent analysis at the end of February and it looks like symbols matching ".*rotli.*" total up to 126K, of which 120K is the kBrotliDictionary, so it's pretty compact right now with a total of 6k of actual code.

Comment 31 by, Mar 13 2015

120K of data may be a hard sell for cronet variant, so I'd like this to be controlled by a compile-time/GYP/GN flag if possible.

Comment 32 by, Mar 16 2015

#31: WOFF2 fonts are significantly smaller, and the savings from fonts can be more than 100x the size of the brotli dictionary. Turning WOFF2 and brotli off would save on binary size (by 126 kB), but the bill would be paid elsewhere in the system by delivering fonts in a less efficient format. I don't know if cronet is always used with fonts. I'd assume that even the simplest messaging device needs a complex font infrastructure nowadays to be able to show messages in Thai, Russian, Chinese, Arabic, etc.

Comment 33 by, Mar 17 2015

I believe that cronet is a network stack centric library. I imagine that web fonts would be for a different layer and out of scope for cronet. 

As for brotli per se, I guess it really depends on who is going to use cronet. If it turns out that they would very much like the ability to use brotli, then I imagine we would change the compile-time flag.

Comment 34 by, Mar 20 2015

Follow up on #27: I'm happy to announce that I just submitted a patch to the brotli github repository adding stackless partial streaming support to the decoder.

There is no measurable performance regression. I'm willing to help with anything to integrate this into Chromium, such as testing, support, ... Any questions or issues, please let me know.

Maybe this could even make it into Chromium 43.

Comment 35 by, Mar 31 2015

Blockedon: chromium:472009

Comment 36 by, Jul 7 2015

Do you guys have any updates on the progress of this bug? Doing some ad-hoc tests on the Javascript and CSS at Facebook we see some substantial improvements. Brotli would save about 17% of CSS bytes and 20% of javascript bytes (compared with zopfli). Resource download times are a big bottleneck, so this would make a big impact for page load times.

Comment 37 by, Jul 7 2015

Thanks Ben: data and developer interest really helps.

Current status:
 - The network team was considering restructuring the Network-Filters code as a prerequisite for Brotli ( issue 488668 ). I've asked the team for an update on this, and whether or not we should revise that decision.
 - From experiments, we initially found out that Brotli decompression speed on Android was relatively slower than on other platforms. The compression team has made several improvements. I'm trying to get hold of the latest.

Comment 38 by, Jul 7 2015

Summary: Support Brotli as a content-encoding method on HTTPS connections (was: Support Brotli as an HTTP transfer-encoding method)
Updated summary per discussion on  issue 472009  re: transfer-encoding vs. content-encoding
Tentatively limiting this to HTTPS for a start per comment #12.

Comment 39 by, Sep 16 2015

Decoding memory use of gzip and brotli comparison for a 114 kB webpage (with a 128 kB buffer):

gzip:1 decompress: Input size: 25503 (24.90K), Peak memory excluding i/o size: 170992 (166.98K)
gzip:9 decompress: Input size: 21777 (21.26K), Peak memory excluding i/o size: 170992 (166.98K)
brotli:1:22 decompress: Input size: 22650 (22.11K), Peak memory excluding i/o size: 171093 (167.08K)
brotli:3:22 decompress: Input size: 21876 (21.36K), Peak memory excluding i/o size: 171093 (167.08K)
brotli:6:22 decompress: Input size: 19698 (19.23K), Peak memory excluding i/o size: 210260 (205.33K)
brotli:9:22 decompress: Input size: 19541 (19.08K), Peak memory excluding i/o size: 210260 (205.33K)
brotli:11:22 decompress: Input size: 17605 (17.19K), Peak memory excluding i/o size: 223297 (218.06K)

The brotli encoder limits the actual window size to just fit the data. Because of this, the brotli decoder does not allocate much more memory to decompress a small file than gzip allocates. Bzip2, lzma and lzham allocated more (581.26K to 4.15M) than gzip and brotli for this file.

The memory-allocation comparison feels somewhat artificial to me and should only be considered a slight indication of multi-processing performance. I think actual multi-processing performance can be impacted more by memory access patterns than by allocation sizes alone.

Comment 40 by, Sep 16 2015

The concern with memory usage isn't performance, it's that memory is a finite resource, and if you run out, something has to go.

Comment 41 by, Sep 17 2015

Since brotli improved a lot in the last half year, I re-ran my benchmark from #15.

The current measurement is done on a different machine, so the absolute numbers for decoding speed are not comparable, but the newest version of brotli:6 decodes 7% faster than zlib:6 instead of 17% slower half a year ago, while the compression density increased by 6.4% on the html corpus. [Note that gzip(6) in #15 was in fact zlib.]

corpus      method      decompression speed, compression density
fetchlogs   zlib:6      440 MB/s             5.477x
fetchlogs   brotli:6    470 MB/s             6.153x
fetchlogs   brotli:11   405 MB/s             6.938x
fetchlogs   lzma:9       97 MB/s             6.263x
canterbury  zlib:6      327 MB/s             3.845x
canterbury  brotli:6    360 MB/s             5.061x
canterbury  brotli:11   318 MB/s             5.636x
canterbury  lzma:9       87 MB/s             5.694x

Comment 42 by, Sep 18 2015


Comment 43 by, Sep 23 2015


Comment 44 by, Sep 23 2015

Labels: Hotlist-Recharge
This issue likely requires triage.  The current issue owner may be inactive (i.e. hasn't fixed an issue in the last 30 days).  Thanks for helping out!


Comment 45 by, Oct 7 2015

Labels: -Hotlist-Recharge M-48
Note: we are settling on a shorter name for the Accept-Encoding value: "br".
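With the "br" token, server-side negotiation ends up looking roughly like the sketch below. This is a hypothetical helper for illustration (q-values and ordering are ignored for brevity), not Chrome or any server's actual code:

```python
def negotiate_encoding(accept_encoding: str, is_https: bool) -> str:
    """Offer brotli ('br') only when the client advertises it and the
    connection is secure (per the HTTPS-only restriction in comment #12),
    else fall back to gzip, else send the body uncompressed."""
    offered = {tok.split(";")[0].strip().lower()
               for tok in accept_encoding.split(",")}
    if is_https and "br" in offered:
        return "br"
    if "gzip" in offered:
        return "gzip"
    return "identity"

print(negotiate_encoding("gzip, deflate, br", True))   # secure: pick br
print(negotiate_encoding("gzip, deflate, br", False))  # plain HTTP: gzip
```

The client side mirrors this: Chrome only appends "br" to Accept-Encoding on HTTPS connections, so plaintext intermediaries never see brotli payloads.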

Comment 46 by, Jan 12 2016

Labels: DevRel-Facebook

Comment 47 by, Jan 13 2016

Labels: -M-48 M-49
Current plan: experiment in M49, monitor key metrics and if no issues then pursue experiment on stable. Eventually, enable at 100% either in M49 or the following milestone.

Remaining items: merge latest brotli decoder (probably already done), add UMA for measuring memory.

Comment 48 by, Jan 13 2016

Should this be planned M-50 launch with experiments in M-49 timeframe?

Comment 49 by, Jan 13 2016

At least for Facebook we'd like to see this launched in the M49 timeframe. Launching with experiments creates a chicken-and-egg problem -- sites are not incentivized to launch brotli if the user population isn't large. Sites are unlikely to implement this overnight anyway, so server-side adoption will provide a natural slow rollout of the feature on its own. Large sites such as Facebook and Google will probably not flip the switch immediately either (as we'll want to make sure we aren't causing reliability issues, and to measure the perf impact of doing this).

Comment 50 by, Jan 15 2016

Intent to ship sent:

Launch plan wise:
I would like to proceed with a M49 gradual launch:
 - smoke test on Canary/Dev at 100%
 - gradual increase on Beta
 - gradual increase on Stable, ideally up to fully enabled

 - Brotli has been deployed in WOFF2 for a while with no issues
 - Firefox is already shipping Brotli (as well as WOFF2)
 - our and their fuzzers have been running for a while with no new issues found.

I will discuss internally with our leads.

Note: The launch plan is independent of the LGTMs on the intent to ship.

Comment 51 by, Feb 17 2016

Is this still planned for M49?
In Canary 50 with chrome://flags/#enable-brotli set to default, brotli is not enabled for me.

Comment 52 Deleted

Comment 53 by, Feb 17 2016

Re #51-- how did you test? Keep in mind that Brotli is only advertised over secure (HTTPS) connections. 

HTTPS:// is served encoded by Brotli.

Comment 54 by, Feb 17 2016

Re #53: You can go to any HTTPS page and observe that the request header `Accept-Encoding` does not advertise `br`.
I checked that in the Network panel of the devtools of course.
Random HTTPS page example:
A page that supports brotli compression:

Comment 55 by, Feb 25 2016

Do we expect this to land before the M50 branch point? A few partners have been asking about it. Thanks!

Comment 56 by, Feb 25 2016

Clarification on this: 'Accept-Encoding: br' is included in Canary/Dev v49/v50. 

When the feature availability is set to 'default' in chrome://flags, its availability is controlled by the experimental ratio while the team looks for any regressions. If no problems are found, the ratio will eventually reach 100% and the feature will subsequently be ramped up in Beta/Release builds.

Comment 57 by, Feb 25 2016

The chrome://flags control for this is also tri-state and early testers can
set to enabled if they want to play with it.


Comment 58 by, Mar 1 2016

Labels: Launch-Accessibility-NA Launch-Legal-Yes Launch-Privacy-Yes Launch-Security-Yes Launch-UI-NA

Comment 59 by, Mar 7 2016

Labels: Launch-Status-Requested Launch-Test-Yes

Comment 60 by, Mar 15 2016

Labels: -M-49 M-50

Comment 61 by, Mar 16 2016

Labels: -Launch-Status-Requested Launch-Status-Approved-Beta

Comment 62 by, Apr 11 2016

Labels: -Launch-Status-Approved-Beta Launch-Status-Approval-Requested
Data looks good (success rate, compression ratio as expected).
Let's ask for permission to pursue the fractional rollout to M50 stable.

Comment 63 by, Apr 12 2016

Labels: -Launch-Status-Approval-Requested Launch-Status-Approved-Stable-Exp
Approved for an experiment over email.

Comment 64 by, Apr 13 2016

To start a Finch experiment, "Type: Launch" is required.

Comment 65 by, Apr 13 2016

Labels: -Type-Launch-OWP Type-Launch
Sad that it doesn't work with Launch-OWP. Switching.

Comment 66 by, Apr 13 2016

Labels: -Type-Launch Type-Launch-OWP
Side effect...

Comment 67 by, Apr 13 2016

Please use this one for the experiment:

Comment 68 by, May 6 2016

Folks, what's the status of this? What was the side effect and has it been resolved?

Comment 69 by, May 9 2016

It's partially enabled for a fraction of our users across channels including Stable (Chrome 50). Users with Brotli enabled will advertise it via the Accept-Encoding: br HTTP header. 

The data looks good so we will increase the population and fully launch (assuming no issues).

Comment 70 by, May 9 2016

Oh, re the "side effect" comment in #66: it was about an issue unrelated to Brotli.

Comment 71 by, May 9 2016

Labels: -OWP-Design-No OWP-Design-Yes

Comment 72 by, May 9 2016

Sounds great, can't wait! Thanks.

Comment 73 by, May 9 2016

One thing I noticed is that this is not enabled by default in either trunk or the release branches. That should ideally change.

Comment 74 by, May 9 2016


Comment 75 by, May 9 2016

CL enabling "brotli-encoding" by default:

Comment 76 by, May 9 2016

Project Member
The following revision refers to this bug:

commit e3118678848807c0ea60ab086e3d803cb1913098
Author: eustas <>
Date: Mon May 09 18:24:11 2016

Change "brotli-encoding" feature to be enabled by default.

BUG= 452335 

Cr-Commit-Position: refs/heads/master@{#392369}


Comment 77 by, May 18 2016

Labels: Merge-Request-51

Comment 78 by, May 18 2016

The change has successfully landed. No problems so far, UMAs are good.
No risks: the feature would be disabled via Finch in case of emergency.

Comment 79 by, May 18 2016

Labels: -Merge-Request-51 Merge-Review-51 Hotlist-Merge-Review
[Automated comment] Less than 2 weeks to go before stable on M51, manual review required.

Comment 80 by, May 18 2016

The only merge needed is the patch in #76, which flips the feature to on-by-default in the release branch.

Comment 81 by, May 18 2016

Before we approve the merge to M51, could you please confirm whether the CL listed in #76 is baked/verified in Canary and safe to merge?

Comment 82 by, May 19 2016

Affirmative. Everything is OK. UMAs are good and only indicate growth of feature usage.

Comment 83 by, May 19 2016

Labels: -Merge-Review-51 Merge-Approved-51
Merge approved for M51 (branch 2704). Please merge in your change ASAP, as we are getting close to cutting a stable candidate.

Comment 84 by, May 19 2016

Comment 85 by, May 20 2016

Project Member
Labels: -merge-approved-51 merge-merged-2704
The following revision refers to this bug:

commit 52b672b2462d6a3751a13187a1003e6fffdfbfbd
Author: eustas <>
Date: Fri May 20 16:15:51 2016

Change "brotli-encoding" feature to be enabled by default.

BUG= 452335 

Cr-Commit-Position: refs/heads/master@{#392369}
(cherry picked from commit e3118678848807c0ea60ab086e3d803cb1913098)

Cr-Commit-Position: refs/branch-heads/2704@{#616}
Cr-Branched-From: 6e53600def8f60d8c632fadc70d7c1939ccea347-refs/heads/master@{#386251}


Comment 86 by, May 24 2016

I have a question. What do "intermediaries" refer to in the reason to restrict this feature to HTTPS only? Caches? If so, this may or may not be solved by adding no-transform to Cache-Control when you send Content-Encoding: br. I don't know.

I think that restricting to HTTPS is regrettable, as what you save with brotli you lose double by not having caching.

Comment 87 by, May 24 2016

Intermediaries (or "middle boxes") refers to companies/infra/software meddling with the data transfer between you (the user) and the webserver.

One example from SDCH that was mentioned to me (the name of the company is not relevant to the discussion so I'm hiding it):

"The most extreme case of middle box was Company AcmeTelecom (fictitious name but true story), that tried to make things better when faced with unknown content encodings, by doing the following things:

a) Remove the unrecognized content encoding
b) Pass the (already compressed!) content through another gzip encoding pass
c) Claim that the content was merely encoded as gzip"
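The failure mode described above can be demonstrated with stdlib gzip. This is a toy client-side sketch of detecting and peeling a middle box's extra gzip layer (a heuristic, the magic-byte check would misfire on a payload that legitimately begins with the gzip magic), not anything Chrome actually does:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def undo_middlebox(body: bytes, declared_encoding: str) -> bytes:
    """A middle box may have re-compressed an already-compressed body and
    relabeled it 'gzip' (the AcmeTelecom story above). Peel gzip layers
    while the payload still starts with the gzip magic bytes."""
    data = body
    if declared_encoding == "gzip":
        while data[:2] == GZIP_MAGIC:
            data = gzip.decompress(data)
    return data

original = b"hello " * 1000
once = gzip.compress(original)   # the server's real encoding
twice = gzip.compress(once)      # middle box re-compresses, claims "gzip"
assert undo_middlebox(twice, "gzip") == original
```

Hacks like this are exactly what the HTTPS-only restriction avoids: over TLS, intermediaries cannot see or rewrite the Content-Encoding at all.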

Comment 88 by, Jun 22 2016

Labels: -Launch-Status-Approved-Stable-Exp Launch-Status-Approved
Status: Fixed (was: Started)
Looks like we are done for the launch part :)

Comment 89 by, Dec 20

It's been 2.5 years since this feature shipped, is the problem with the intermediate boxes still present? It'd probably be worth re-evaluating this on a semi-regular basis to see if it's still a problem, and if not, enabling br support on http.

Comment 90 by, Dec 20

I don't think there's any reason to expect middle boxes to have been fixed - we aren't sending unencrypted brotli to them, and they aren't seeing brotli, so there's no forcing function to get them fixed.
