Issue 452335 Support Brotli as a content-encoding method on HTTPS connections
Starred by 70 users Project Member Reported by, Jan 27 2015
Status: Fixed
Closed: Jun 2016
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Launch-OWP
Launch-Accessibility: NA
Launch-Legal: Yes
Launch-M-Approved: ----
Launch-M-Target: ----
Launch-Privacy: Yes
Launch-Security: Yes
Launch-Status: Approved
Launch-Test: Yes
Launch-UI: NA

Blocked on:
issue 472009

Brotli is used in WOFF 2.0 web fonts with great success.

This launch-owp bug is about making it available as an HTTP transfer-encoding method (e.g. Accept-Encoding: Brotli).

Advantages of Brotli over gzip:
 - significantly better compression density
 - comparable decompression speed

Disadvantages of Brotli over gzip:
 - significantly slower compression speed making it less useful for serving dynamic assets (note: balancing compression density and speed is possible).

Advantages of Brotli over LZMA:
 - comparable compression density
 - significantly faster decompression speed
 - has a spec on Standards track

Some numbers:
method  decompression speed, compression density
gzip    370.15 MB/s          4.745x
brotli  348.21 MB/s          6.264x
lzma    120.59 MB/s          7.309x
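For context on these numbers: compression density is original size divided by compressed size, and decompression speed is decompressed bytes per second. A minimal sketch of how such figures are measured, using Python's stdlib zlib as a stand-in for the codecs above (the input here is synthetic, so the absolute numbers will not match the table):

```python
import time
import zlib

def measure(data: bytes, level: int = 6):
    """Return (compression density, decompression speed in MB/s)."""
    compressed = zlib.compress(data, level)
    density = len(data) / len(compressed)
    start = time.perf_counter()
    zlib.decompress(compressed)
    elapsed = time.perf_counter() - start
    speed = len(data) / (1024 * 1024) / max(elapsed, 1e-9)
    return density, speed

# Highly repetitive input compresses far denser than the ~4.7x gzip
# average in the table; real corpora sit much lower.
density, speed = measure(b"the quick brown fox " * 50_000)
print(f"density {density:.1f}x at {speed:.0f} MB/s")
```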

Comment 1 by, Jan 27 2015
Have other browser vendors expressed an interest in supporting brotli?
Labels: OWP-Design-No OWP-Standards-UnofficialSpec OWP-Type-ChangeBehavior
mmenke@: yes, Mozilla is looking into it; see [1].

=== Missing bits for the OWP-Launch bug ===
Public standards discussion: 
 Draft IETF spec:
 In ISE Review.

Support in other browsers:
Internet Explorer: no signals
Firefox: showing interest[1], WOFF 2.0 (using Brotli) behind a flag
Safari: no signals

Comment 3 by, Jan 27 2015
Labels: -Cr-Internals-Network Cr-Internals-Network-Filters
Comment 4 by, Jan 27 2015
One other question:  How much memory does it need per stream?  Skimming over the specs, sounds like the sliding window can be up to 16 MB?  Seems like that could be a bit much for mobile.
Thanks for the interest.

Here is what I got from the compression team (cc-ed):

"If the decoding client keeps a copy of the bytestream anyways, no other 'sliding window' is needed.

The size of the ringbuffer is configurable from 64 kB to 16 MB, but there is never a reason to make it (much) larger than the data being transferred. If you are sending 100 kB, then a 128 kB ringbuffer is more than enough. For a good compromise on current computers and mobile phones, I'm expecting a 1 MB ringbuffer to be used typically as a maximum. The 16 MB can be more useful for archiving use.

In a streaming compression (where the decoding client doesn't keep a copy), there is an extra need of 64 kB to 16 MB of memory to keep the ringbuffer. I guess we could make a recommendation that if it is a streaming connection between the server and the client, the server should limit the sliding window to 2 MB or so."

We could instrument these opportunities via UMA.
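The sizing rule quoted above can be sketched as picking the smallest ring buffer that covers the payload. This sketch assumes power-of-two buffer sizes between the 64 kB and 16 MB bounds mentioned; the real encoder's window stepping may differ:

```python
MIN_RINGBUFFER = 64 * 1024         # 64 kB lower bound quoted above
MAX_RINGBUFFER = 16 * 1024 * 1024  # 16 MB upper bound quoted above

def ringbuffer_for(payload_size: int) -> int:
    """Smallest power-of-two ring buffer covering the payload."""
    size = MIN_RINGBUFFER
    while size < payload_size and size < MAX_RINGBUFFER:
        size *= 2
    return size

# A 100 kB transfer needs only a 128 kB ring buffer, as noted above.
print(ringbuffer_for(100 * 1024) // 1024, "kB")
```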
How much more could Brotli compress if we seed the client with content-specific dictionaries (for HTML, CSS, JS and maybe SVG)?  I was planning on running some tests and experiments but maybe someone already has.

For example, with CSS content, could we build a dictionary seed that has all of the CSS properties, or at least a subset from scanning a relatively large corpus? (I use "dictionary" loosely - compression context, initial window, whatever.) Could we get an additional 10%+ on all CSS by adding tens of KB to the client (and the same for the other common text content)?

It could fall back to straight Brotli if it's not one of the known content types and it might be worth versioning the dictionaries (call it Brot15 or something).

Given that the vast majority of the text resources on the web are JS, CSS and HTML it would be great if we could leverage that to get even better compression (and they are all render-blocking so bytes saved there are way more important than image bytes).
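The dictionary-seeding idea above can be demonstrated with zlib's preset-dictionary support (zlib stands in for brotli here, and both the CSS dictionary and the sample are made up purely for illustration):

```python
import zlib

# A hypothetical seed dictionary of common CSS property names.
CSS_DICT = b"margin:padding:border:display:position:color:background:font-size:width:height:"

sample = b"div{margin:0;padding:0;color:#333;background:#fff;width:100px}"

# Plain compression, no dictionary.
plain = zlib.compress(sample, 9)

# Same level, seeded with the dictionary.
comp = zlib.compressobj(9, zdict=CSS_DICT)
seeded = comp.compress(sample) + comp.flush()

# Decompression must use the same dictionary.
decomp = zlib.decompressobj(zdict=CSS_DICT)
assert decomp.decompress(seeded) == sample

print(len(plain), "bytes plain vs", len(seeded), "bytes with the seed dictionary")
```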
I'm a bit confused: skimming the spec, I see a reference to a defined, shared-everywhere static dictionary, but not to content-specific dictionaries.  I didn't read it carefully, so I'm sure I missed something, but where's the protocol reference to content-specific dictionaries?

I'm concerned about the combined memory pressure of static dictionaries from SDCH and Brotli, though we could manage that storage so that they interfere with each other rather than with the rest of system memory.  I'm also concerned by the memory pressure implied in c#5; at least in Chrome's current implementation, pretty much all our decoding is streaming (we keep an on-disk copy in cache, though referencing it might induce latency problems, and the main data is streamed to a separate process in the renderer, and hence isn't available to browser operations such as decoding).
[+cbentzel]:  FYI
This is listed as started. Who has started it?
Looking at the numbers as well as the FF bug, it seems like, if anything, we should be targeting LZMA rather than Brotli. The network would be the bottleneck even at LZMA's 120 MB/s decompression speed, and the data savings are higher.

I guess those numbers may be on a high-end machine. Would love to see numbers on a lower-end ARM core to see impact.

Also, I'd like to restrict any potential Accept-Encoding schemes to https only to reduce headaches associated with intermediaries. SDCH is littered with workaround hacks to deal with them, and it would be great to remove the need for those on new encoding schemes. 

And when there was a bzip2 experiment, it was reverted due to that issue as well.  See issue 14801.
The numbers for html are better for brotli than for LZMA: brotli output is 2-8% smaller. The numbers in the FF bug are for huge asm.js files, and LZMA's statistics get more accurate on larger data (but it will have a bigger working set, too).
Here are the benchmark results for two more corpora, the Canterbury corpus and a corpus of ~1000 html files sampled from fetchlogs.

In the table below brotli(6) is the fast implementation of brotli that compresses with roughly the same speed as gzip with quality setting 6, and brotli(11) is a 50-100x slower implementation that gives 10-15% more compression density.

corpus      method      decompression speed, compression density
fetchlogs   gzip(6)     380 MB/s             5.477x
fetchlogs   brotli(6)   315 MB/s             5.781x
fetchlogs   brotli(11)  290 MB/s             6.704x
fetchlogs   lzma         90 MB/s             6.263x
canterbury  gzip(6)     310 MB/s             3.845x
canterbury  brotli(6)   255 MB/s             5.026x
canterbury  brotli(11)  240 MB/s             5.502x
canterbury  lzma         80 MB/s             5.694x
Is the current brotli(11) code equivalent to the brotli code on Google's github page for it?  If so, have you tried running it against a corpus of css and js files in addition to html?  How does gzip 9 compare time-wise to brotli(11)?

I took the code on github and ran it against the js and css from the alexa top-300k (basically the pages the http archive fetches) and compared it to gzip setting 9 and I saw 12.2% savings on css (1.2M files) and 9.0% savings on js (2.7M files).

Given the fairly standard attribute and value sets in css (and to some extent js because of DOM methods) I was hopeful that we could improve that further using different dictionaries over the html-based one that is baked into brotli right now.
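The effort-vs-density tradeoff behind this comparison (gzip at setting 9 versus brotli's much slower quality 11) can be seen in miniature with zlib's own levels. The JS-ish input below is synthetic, and zlib stands in because brotli is not in the Python stdlib:

```python
import zlib

# Repetitive synthetic "JS" content: higher levels spend more effort
# finding matches and emit a denser stream.
data = b"function f(x){return x*x;}\nvar y=f(2);\n" * 2000
sizes = {level: len(zlib.compress(data, level)) for level in (1, 6, 9)}
for level, size in sizes.items():
    print(f"level {level}: {size} bytes")
```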
I took a brief look at the Brotli decoder code on github.

2 main concerns (beyond compression rate/speed);
  - Right now it doesn't look to be written in a streaming fashion. This will need to change for it to be used as an HTTP decoder.

  - It looks like there's a minimum of ~100KB of context - 64KB window, ~20KB for huffman tables, ~10KB for bit reader, and probably a bit more once this is in a streaming fashion and has to replace stack with heap. Is there anything which can be done to reduce that?
#9: it's started in the sense that this is the OWP-launch tracking bug (not the implementation bug) and I've assigned it to myself to drive consensus and adoption.

#11 re targeting LZMA, one more downside on top of what was mentioned in #14: we had initially thought of using LZMA in WOFF2 but had to give it up for standardization reasons (there isn't a clear spec for it and no commitment to having one written).

I'll ping the compression team about the extra questions.
#17: The decoder is not streaming in the sense of holding state outside the stack, with the user able to call it repeatedly to get the data available so far -- but it already has another way of streaming: a function pointer gets called every 64 kB or so of output. That requires the use of threads to keep the context around. If you really need the former, we will fix it soon. There will be little to no increase of the working set because of this.

#17: Similarly, Deflate64 grows the deflate working set to a similar size (using a 64 kB window), and gives both faster decompression and compression speeds. The working set increases, but overall we get faster decoding: there are more copies, which are faster to decode than true entropy-coded literals. Any window smaller than 64 kB tends to be both slower and less dense than 64 kB or more.

#17: It might be possible to use a smaller window. One can read first all Huffman codes that are in the beginning of the block, and then it is possible to compute the maximum length of a backward reference. The differential distances (+1 +2 previous) can make this a bit tricky and might require spec changes. Possibly not worth it.
Chrome's networking stack is a single-threaded event loop. To prevent arbitrary data from being buffered in memory, and to get data to consumers as fast as possible, this will need to be rewritten so that the caller can call it repeatedly to pull the data out of it.
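The shape being asked for here, a decoder the caller pulls repeatedly with no thread holding context, is the same one zlib's streaming objects expose. A sketch of that loop, with zlib standing in for the eventual brotli decoder:

```python
import zlib

def stream_decompress(chunks):
    """Feed network-sized chunks in; yield output as soon as it is ready."""
    decomp = zlib.decompressobj()
    for chunk in chunks:
        out = decomp.decompress(chunk)
        if out:
            yield out  # hand data to the consumer without buffering it all
    tail = decomp.flush()
    if tail:
        yield tail

payload = b"hello streaming world " * 1000
compressed = zlib.compress(payload)
# Simulate arrival in 512-byte network reads.
reads = [compressed[i:i + 512] for i in range(0, len(compressed), 512)]
assert b"".join(stream_decompress(reads)) == payload
```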
#20: Makes sense. We discussed this here, and Lode Vandevenne (of LodePNG, Zopfli and ZopfliPNG fame) will modify the decoder to fit your requirements. He also promised to work on improving the compression ratio on levels 2-9 to be a lot closer to 11. The current version is only optimized for compression levels 1 and 11, with 100+ MB/s at level 1, and extremely slow at level 11. I think we have only open-sourced the level 11 encoder so far, mostly because it seemed the only thing that was relevant for fonts.
Sounds like we have a plan then :)

Filed for #20/#21.
Does Lode Vandevenne have a github account?
Comment 23 Deleted
Comment 24 by, Mar 4 2015
#22: I started working on the brotli decoder with streamable output now. I have a github account with username "lvandeve".

-Lode Vandevenne
Lode: thanks, I've added you to the Brotli team on github.
OK. If possible, when adding this I'd like to make it a compile-time option (that would be on-by-default for Chrome) simply to measure binary size impact for the cronet target. We already link in the library for Chrome due to WOFF so no additional overhead there. If this is too much of a pain we can always run binary size analysis tools to measure impact.
Comment 27 by, Mar 6 2015
#26: It will be very exciting to add Brotli support to Chrome, because it will make the web faster, especially on mobile. I'm making progress with the streaming decoder. Both Jyrki and I will provide support with the integration of Brotli into Chromium, for example if API or performance needs arise, or the need for a context-specific dictionary.
cbentzel@ #26: I'm happy to help with the binary size analysis using our Binary Size Tool:

I did a recent analysis at the end of February, and it looks like symbols matching ".*rotli.*" total up to 126K, of which 120K is the kBrotliDictionary, so it's pretty compact right now, with a total of 6K of actual code.
120K of data may be a hard sell for cronet variant, so I'd like this to be controlled by a compile-time/GYP/GN flag if possible.
#31: WOFF2 fonts are significantly smaller, and the savings from the fonts can be more than 100x the size of the brotli dictionary. Turning WOFF2 and brotli off would save on binary size (by 126 kB), but the bill would be paid elsewhere in the system by delivering the fonts in a less efficient format. I don't know if cronet is always used with fonts. I'd like to assume that even the simplest messaging device needs a complex font infrastructure nowadays to be able to show messages in Thai, Russian, Chinese, Arabic, etc.
I believe that cronet is a network stack centric library. I imagine that web fonts would be for a different layer and out of scope for cronet. 

As for brotli per se, I guess it really depends on who is going to use cronet. If it turns out that they would very much like the ability to use brotli, then I imagine we would change the compile-time flag.
Comment 34 by, Mar 20 2015
Follow up on #27: I'm happy to announce that I just submitted a patch to the brotli github repository adding stackless partial streaming support to the decoder.

There is no measurable performance regression. I'm willing to help with anything to integrate this into Chromium, such as testing, support, ... Any questions or issues, please let me know.

Maybe this could even make it into Chromium 43.
Blockedon: chromium:472009
Do you guys have any updates on the progress of this bug? Doing some ad-hoc tests on the JavaScript and CSS at Facebook we see some substantial improvements. Brotli would save about 17% of CSS bytes and 20% of JavaScript bytes (compared with zopfli). Resource download times are a big bottleneck, so this would make a big impact on page load times.
Thanks Ben: data and developer interest really helps.

Current status:
 - The network team was considering restructuring the Network-Filters code as a prerequisite for Brotli (issue 488668). I've asked the team for an update on this and whether or not we should revise the decision.
 - From experiments, we initially found that Brotli decompression speed on Android was relatively slower than on other platforms. The compression team has made several improvements. I'm trying to get hold of the latest.

Summary: Support Brotli as a content-encoding method on HTTPS connections (was: Support Brotli as an HTTP transfer-encoding method)
Updated summary per discussion on issue 472009 re: transfer-encoding vs. content-encoding
Tentatively limiting this to HTTPS for a start per comment #12.
Comment 39 by, Sep 16 2015
Decoding memory use of gzip and brotli comparison for a 114 kB webpage (with a 128 kB buffer):

gzip:1 decompress: Input size: 25503 (24.90K), Peak memory excluding i/o size: 170992 (166.98K)
gzip:9 decompress: Input size: 21777 (21.26K), Peak memory excluding i/o size: 170992 (166.98K)
brotli:1:22 decompress: Input size: 22650 (22.11K), Peak memory excluding i/o size: 171093 (167.08K)
brotli:3:22 decompress: Input size: 21876 (21.36K), Peak memory excluding i/o size: 171093 (167.08K)
brotli:6:22 decompress: Input size: 19698 (19.23K), Peak memory excluding i/o size: 210260 (205.33K)
brotli:9:22 decompress: Input size: 19541 (19.08K), Peak memory excluding i/o size: 210260 (205.33K)
brotli:11:22 decompress: Input size: 17605 (17.19K), Peak memory excluding i/o size: 223297 (218.06K)

The brotli encoder limits the actual window size to just fit the data. Because of this, the brotli decoder does not allocate much more memory to decompress a small file than gzip does. Bzip2, lzma and lzham allocated more (581.26K to 4.15M) than gzip and brotli for this file.

The memory allocation comparison feels somewhat artificial to me and should only be considered a slight indication of multi-processing performance. I think actual multi-processing performance can be impacted more by memory access patterns than by allocation sizes alone.
The concern with memory usage isn't performance, it's that memory is a finite resource, and if you run out, something has to go.
Since brotli improved a lot in the last half year, I re-ran my benchmark from #15.

The current measurement is done on a different machine, so the absolute numbers for decoding speed are not comparable, but the newest version of brotli:6 decodes 7% faster than zlib:6 instead of 17% slower half a year ago, while the compression density increased by 6.4% on the html corpus. [Note that gzip(6) in #15 was in fact zlib.]

corpus      method      decompression speed, compression density
fetchlogs   zlib:6      440 MB/s             5.477x
fetchlogs   brotli:6    470 MB/s             6.153x
fetchlogs   brotli:11   405 MB/s             6.938x
fetchlogs   lzma:9       97 MB/s             6.263x
canterbury  zlib:6      327 MB/s             3.845x
canterbury  brotli:6    360 MB/s             5.061x
canterbury  brotli:11   318 MB/s             5.636x
canterbury  lzma:9       87 MB/s             5.694x
Labels: Hotlist-Recharge
This issue likely requires triage.  The current issue owner may be inactive (i.e. hasn't fixed an issue in the last 30 days).  Thanks for helping out!

Labels: -Hotlist-Recharge M-48
Note: we are settling on a shorter name for the Accept-Encoding value: "br".
Comment 46 by, Jan 12 2016
Labels: DevRel-Facebook
Labels: -M-48 M-49
Current plan: experiment in M49, monitor key metrics and if no issues then pursue experiment on stable. Eventually, enable at 100% either in M49 or the following milestone.

Remaining items: merge latest brotli decoder (probably already done), add UMA for measuring memory.
Should this be planned M-50 launch with experiments in M-49 timeframe?
At least for Facebook we'd like to see this launched in the M49 timeframe. Launching with experiments creates a chicken and egg problem -- sites are not incentivized to launch brotli if the user population isn't large. Sites are unlikely to implement this quickly overnight and this will provide a natural slow rollout of the feature. Large sites such as Facebook & Google will probably not flip a switch overnight (as we'll want to make sure we aren't causing reliability issues and measure the perf impact of doing this). 
Intent to ship sent:

Launch plan wise:
I would like to proceed with a M49 gradual launch:
 - smoke test on Canary/Dev at 100%
 - gradual increase on Beta
 - gradual increase on Stable, ideally up to fully enabled

 - Brotli has been deployed in WOFF2 for a while with no issues
 - Firefox is already shipping Brotli (as well as WOFF2)
 - our and their fuzzers have been running for a while with no new issues found.

I will discuss internally with our leads.

Note: The launch plan is independent of the LGTMs on the intent to ship.
Is this still planned for M49?
In Canary 50 with chrome://flags/#enable-brotli set to default, brotli is not enabled for me.
Comment 52 Deleted
Re #51-- how did you test? Keep in mind that Brotli is only advertised over secure (HTTPS) connections. 

HTTPS:// is served encoded by Brotli. 
Re #53: You can go to any HTTPS page and observe that the request header `Accept-Encoding` does not advertise `br`.
I checked that in the Network panel of the devtools of course.
Random HTTPS page example:
A page that supports brotli compression:
Do we expect this to land before the M50 branch point? A few partners have been asking about it. Thanks!
Clarification on this: 'Accept-Encoding: br' is included in Canary/Dev v49/v50. 

When the feature availability is set to 'default' in chrome://flags, its availability is controlled by the experimental ratio while the team looks for any regressions. If no problems are found, the ratio will eventually reach 100% and the feature will subsequently be ramped up in Beta/Release builds.
The chrome://flags control for this is also tri-state, and early testers can set it to Enabled if they want to play with it.

Labels: Launch-Accessibility-NA Launch-Legal-Yes Launch-Privacy-Yes Launch-Security-Yes Launch-UI-NA
Labels: Launch-Status-Requested Launch-Test-Yes
Labels: -M-49 M-50
Comment 61 by, Mar 16 2016
Labels: -Launch-Status-Requested Launch-Status-Approved-Beta
Labels: -Launch-Status-Approved-Beta Launch-Status-Approval-Requested
Data looks good (success rate, compression ratio as expected).
Let's ask for permission to pursue the fractional rollout to M50 stable.
Comment 63 by, Apr 12 2016
Labels: -Launch-Status-Approval-Requested Launch-Status-Approved-Stable-Exp
Approved for an experiment over email.
To start the Finch experiment, "Type: Launch" is required.
Labels: -Type-Launch-OWP Type-Launch
Sad that it doesn't work with Launch-OWP. Switching.
Labels: -Type-Launch Type-Launch-OWP
Side effect...
Please use this one for the experiment:
Comment 68 by, May 6 2016
Folks, what's the status of this? What was the side effect and has it been resolved?
It's partially enabled for a fraction of our users across channels including Stable (Chrome 50). Users with Brotli enabled will advertise it via the Accept-Encoding: br HTTP header. 

The data looks good so we will increase the population and fully launch (assuming no issues).
Oh, re the "side effect" comment in #66: it was about an issue unrelated to Brotli.
Labels: -OWP-Design-No OWP-Design-Yes
Comment 72 by, May 9 2016
Sounds great, can't wait! Thanks.
One thing I noticed is that this is not enabled by default in either trunk or the release branches. That should ideally change.
CL enabling "brotli-encoding" by default:
Project Member Comment 76 by, May 9 2016
The following revision refers to this bug:

commit e3118678848807c0ea60ab086e3d803cb1913098
Author: eustas <>
Date: Mon May 09 18:24:11 2016

Change "brotli-encoding" feature to be enabled by default.

BUG= 452335 

Cr-Commit-Position: refs/heads/master@{#392369}


Labels: Merge-Request-51
The change has successfully landed. No problems so far, UMAs are good.
Low risk: the feature can be disabled via Finch in case of emergency.
Comment 79 by, May 18 2016
Labels: -Merge-Request-51 Merge-Review-51 Hotlist-Merge-Review
[Automated comment] Less than 2 weeks to go before stable on M51, manual review required.
The only merge needed is the patch in #76 that flips the feature to on-by-default in the release branch.
Before we approve the merge to M51, could you please confirm whether the CL listed in #76 is baked/verified in Canary and safe to merge?
Affirmative. Everything is OK. UMAs are good and only indicate growth of feature usage.
Labels: -Merge-Review-51 Merge-Approved-51
Merge approved for M51 (branch 2704). Please merge in your change ASAP, as we are getting close to cutting a stable candidate. 
(review required)
Project Member Comment 85 by, May 20 2016
Labels: -merge-approved-51 merge-merged-2704
The following revision refers to this bug:

commit 52b672b2462d6a3751a13187a1003e6fffdfbfbd
Author: eustas <>
Date: Fri May 20 16:15:51 2016

Change "brotli-encoding" feature to be enabled by default.

BUG= 452335 

Cr-Commit-Position: refs/heads/master@{#392369}
(cherry picked from commit e3118678848807c0ea60ab086e3d803cb1913098)

Cr-Commit-Position: refs/branch-heads/2704@{#616}
Cr-Branched-From: 6e53600def8f60d8c632fadc70d7c1939ccea347-refs/heads/master@{#386251}


Comment 86 by, May 24 2016
I have a question. What does intermediates refer to in the reason to restrict this feature to HTTPS only? Caches? If so, this may or may not be solved by adding no-transform to Cache-Control when you send Content-Encoding: br. I don't know.

I think that restricting to HTTPS is regrettable, as what you save with brotli you lose double by not having caching. 
Intermediaries (or "middle boxes") refers to companies/infra/software meddling with the data transfer between you (the user) and the webserver.

One example from SDCH that was mentioned to me (the name of the company is not relevant to the discussion so I'm hiding it):

"The most extreme case of middle box was Company AcmeTelecom (fictitious name but true story), that tried to make things better when faced with unknown content encodings, by doing the following things:

a) Remove the unrecognized content encoding
b) Pass the (already compressed!) content through another gzip encoding pass
c) Claim that the content was merely encoded as gzip"
Labels: -Launch-Status-Approved-Stable-Exp Launch-Status-Approved
Status: Fixed
Looks like we are done for the launch part :)