Net GZIP has no metrics

Comment 1 by noel@chromium.org, Mar 18 2018

Issue description: src/net's GzipFilter has no metrics at all at this time, which is (to me) fascinating considering what we know about gzip content encoding and web page load performance [1]. The src/net BrotliFilter at least records compression percentages: https://cs.chromium.org/search/?q=BrotliFilter.CompressionPercent&sq=package:chromium&type=cs

That would be a good place to start: add the same metric to GzipFilter, and then add more gzip metrics of interest.

[1] http://googlecode.blogspot.com.au/2009/11/use-compression-to-make-web-faster.html
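For concreteness, here is a rough sketch (not actual Chromium code) of what the analogous gzip metric could look like, mirroring the way BrotliSourceStream reports BrotliFilter.CompressionPercent when the stream is destroyed. The consumed_bytes_ / produced_bytes_ counters and the "GzipFilter.CompressionPercent" histogram name are assumptions for the sketch, not existing code:

#include "base/metrics/histogram_macros.h"

// Hypothetical: assumes the stream keeps running totals of compressed
// bytes consumed and decompressed bytes produced while filtering.
GzipSourceStream::~GzipSourceStream() {
  // Skip streams that produced no output, to avoid dividing by zero.
  if (produced_bytes_ > 0 && consumed_bytes_ <= produced_bytes_) {
    // Bytes saved by compression, as a percentage of the uncompressed size.
    UMA_HISTOGRAM_PERCENTAGE(
        "GzipFilter.CompressionPercent",
        static_cast<int>(100 - (consumed_bytes_ * 100) / produced_bytes_));
  }
}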
Comment 2 by xunjieli@, Mar 19 2018
Any metrics added will require monitoring and triage. This part of the code (src/net/filter/gzip_source_stream.h) isn't actively being worked on, and new metrics at this layer are not likely to be meaningful.
Comment 3, Mar 19 2018
Dear xunjieli, thanks for the prompt feedback; that is always helpful. Noel is right: given that the bulk of the web is served as gzipped content, why not collect data on it? The lack of activity on a given file isn't a good reason not to pursue an idea (otherwise we would never have optimized zlib). Plus, I see in the git log that the last meaningful commit to brotli_source_stream.cc was in Dec 2016, or 1 year and 5 months ago. That hardly means we should remove its metrics, right? I think we should reopen this bug and fix this issue.
Comment 4, Mar 19 2018
Re-opening. I think it would be helpful to explain the context for why we would love to have some metrics. We've been working on optimizing zlib recently. Right now we see the performance changes via a PNG performance test, but it would be nice to also know how much this has affected gzipped content. I don't think we are too bothered about the compression percent (although we might as well track it?). We're more interested in seeing what the performance looks like in the wild.
Comment 5, Mar 19 2018
I am fine with adding metrics for one-off measurements. The problem with measuring src/net/filter is that any performance data and error rates are highly dependent on server experiments; we saw this with SDCH and Brotli in the past. Loading metrics at this layer only added noise for our triagers. Brotli is slightly different right now: the Brotli/compression team is committed to maintaining the code and the metrics, there are active server experiments on Brotli, and we care about memory usage on clients (e.g. the compression windows advertised by servers affect the client's memory usage). The last meaningful commit (src/third_party/brotli/) was less than a year ago, not to mention the ongoing standardization work on Shared Brotli.
Comment 6, Mar 20 2018
cblume: If no one is watching a metric, the owner of that metric gets spammed with bugs asking to remove it. I don't think the network stack team currently has any interest in monitoring perf metrics in this space. You're of course welcome to add (and own) metrics here.
Comment 7 by noel@, Mar 22 2018
#5 Thank you for the Brotli details. IIRC, zlib's memory use is quite small compared to Brotli's, and Brotli's decompression speed is about the same as zlib's. However, after our recent work on zlib, its decode speed is now 2.17x faster. I'm pretty sure people are not yet aware of this development, and some metric data about gzip performance / compression rates from our network stack might help better inform the standardization work you mentioned.

#6 If we added metrics here, the zlib team would own them and be responsible for dealing with them and any spam. But perhaps we don't need to add metrics at all if we can address our concern another way: 75% of web server responses are compressed, according to the HTTPArchive. We could (potentially) land a change in zlib that regressed its decode performance, and that would regress perf for 75% of web responses. Are there good metrics that the networking team relies upon to prevent such performance regressions? If so, what are they?
Comment 8, Mar 22 2018
> 75% of web server responses are compressed according to the HTTPArchive. We could (potentially) land a change in zlib that regressed its decode performance, and that'd regress perf for 75% of the web responses.

Thanks, noel@. The use case mentioned would be a good fit for C++ perf tests. How about adding perf tests to "net_perftests"? Metrics from net_perftests are reported to the Chrome perf dashboard (chromeperf.appspot.com) and will be monitored by the team. If you can generalize zlib's use cases into a few C++ perf tests, that will let us keep detection noise to a minimum.

Another idea: if you don't like C++ perf tests, we can talk with the Telemetry team about adding a metric to be monitored by our loading benchmarks, which run Chrome against real-world websites (see https://chromium.googlesource.com/catapult.git/+/master/telemetry/README.md). Telemetry benchmarks are noisier, but they would let you exercise top sites that currently use zlib.
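For concreteness, a standalone sketch of the kind of decode-throughput measurement such a perf test could make. It uses raw zlib and std::chrono rather than the real net_perftests harness and reporting helpers, so the structure here is an assumption, not the actual test. Note also that compress2() emits the zlib wrapper rather than the gzip wrapper, but the inflate core being timed is the same one the gzip filter exercises:

#include <zlib.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
  // Build a compressible payload (~4 MB of repeating text).
  const char kPhrase[] = "the quick brown fox jumps over the lazy dog. ";
  std::vector<unsigned char> original;
  while (original.size() < 4 * 1024 * 1024)
    original.insert(original.end(), kPhrase, kPhrase + sizeof(kPhrase) - 1);

  // Compress once to create the benchmark input.
  uLongf compressed_size = compressBound(original.size());
  std::vector<unsigned char> compressed(compressed_size);
  if (compress2(compressed.data(), &compressed_size, original.data(),
                original.size(), Z_DEFAULT_COMPRESSION) != Z_OK)
    return 1;

  // Time repeated decompression, the operation performed on every
  // compressed response body.
  std::vector<unsigned char> output(original.size());
  const int kIterations = 100;
  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIterations; ++i) {
    uLongf output_size = output.size();
    if (uncompress(output.data(), &output_size, compressed.data(),
                   compressed_size) != Z_OK)
      return 1;
  }
  const auto elapsed = std::chrono::steady_clock::now() - start;

  const double seconds = std::chrono::duration<double>(elapsed).count();
  const double megabytes = kIterations * (original.size() / (1024.0 * 1024.0));
  std::printf("zlib decode throughput: %.1f MB/s\n", megabytes / seconds);
  return 0;
}

(Compiles against system zlib, e.g. g++ decode_perf.cc -lz.)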
Comment 9, Mar 23 2018
Ok, thanks. For adding a perf test to net_perftests, I filed issue 825056. For Telemetry, I filed issue 825061.