
Issue 653723

Starred by 3 users

Issue metadata

Status: Fixed
Owner:
Closed: Dec 2016
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug




Investigate isolate-go performance regression

Reported by mar...@chromium.org (Project Member), Oct 6 2016

Issue description

I highly suspect that issue 644410 was caused by a regression in the ignore-list filter; this caused the archival step to archive three times as much data as it was supposed to.

AIs:
- diagnose the regression observed in issue 644410
- resolve it
- do a new roll of the code in src at https://chromium.googlesource.com/chromium/src/+/master/tools/luci-go
 

Comment 1 by djd@chromium.org, Oct 11 2016

Owner: djd@chromium.org
Status: Started (was: Available)
I've started to look at this (with tansell). Initially, I'm looking at the logs on the chromium.linux builder for known-good and known-bad builds (it seems pretty reliably 70-80s on the old code, and 180-200s on the "new"/bad code).

Any pointers on why you suspect the ignore list filter?

Initial observations so far seem that the total number + size of files in the hits/misses category is roughly equivalent in the two cases, but I'm still finding my way around all this.


Comment 2 by mar...@chromium.org, Oct 11 2016

The suspicion came from the fact that more data was being archived, in the range of >10 GB/build. I could be wrong; I didn't investigate.

Comment 3 by djd@chromium.org, Oct 12 2016

Cc: estaab@chromium.org

Comment 4 by djd@chromium.org, Oct 12 2016

My initial investigations of the isolate test steps from that build range are not showing any (clear) correlation between the increased latency and misses (# files or bytes).

This is based on the linux builder, which does clearly show the latency jump. I can repeat the analysis on other builders if folks think that would be relevant.
https://docs.google.com/a/chromium.org/spreadsheets/d/1rUHYNG8Xp155FQgtaSjI4B_Enh8rvGgsfirpec9HJvQ/edit?usp=sharing

Comment 5 by estaab@chromium.org, Oct 12 2016

Cc: d...@chromium.org iannucci@chromium.org tandrii@chromium.org
Components: Infra>Platform>Swarming
+Dan and Robbie, who were interested in the root cause of the recent regression and in how we'll prevent 6-month deployment latency in the future.

+Andrii, who has done some work with isolate-go IIRC.
I suspect that the version of the Go compiler changed.
The February roll was compiled with 1.4, against which we originally developed and tuned isolate-go. The August roll was eventually compiled with 1.6.2.

IIRC, I was working on rolling Go 1.5 in 2015, but abandoned the effort when my manual isolate-go perf tests showed a slowdown of ~20% or more.

Let me see if I can reproduce that...
tl;dr both old and new isolate-go binaries produce the same (hash-wise) output when run on the same input.

So, thanks to Go's awesome dependency management, even our own infra/infra/go/deps doesn't (easily) allow bisecting this. What is clear is the following:

The two binaries involved in the August roll that was reverted (see https://codereview.chromium.org/2256093002):
good & old: cf7c1fac12790056ac393774827a5720c7590bac
bad  & new: 2a20c5133a61d6637430b016192ebc4c0e2eae43

I took this tryjob from today [1], ran it locally on my workstation, and after compilation with the patch tested both isolate binaries above with the same input, such that there is a 100% hit rate.

[1] https://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_asan_rel_ng/builds/242300
What's weird is that on my machine, 2a20 (the version that was reverted) runs consistently faster than cf7c and uses less system time. But this doesn't take network upload of files into account, because everything is a hit.

Comment 9 by djd@chromium.org, Oct 13 2016

Cc: mcgreevy@chromium.org
@tandrii – that's what mithro and I have observed too. When running isolated on my machine (or a GCE instance) the newer/bad version runs slightly more quickly than the old version when given equivalent workloads. So, I'm still trying to find an isolated reproduction.

However, I've uploaded changes using old vs. new and am seeing the same significant regression on trybot. See https://codereview.chromium.org/2408303004/. PSs 1+3 have the new version and take ~10 minutes. PS2 (run twice) has the old and takes ~4 minutes.

FWIW, yes the new version is Go 1.6.2 and the old is Go 1.4.3 (IIRC – definitely 1.6 vs. 1.4). That might account for a little change in speeds, but definitely not a 3x like we're seeing.

Also, looking at the timings from the build/try bots, it seems the slowness is in the uploading of the blobs. AFAICT, the hashing + querying the server is slightly faster in the new version.
I dug up my previous data (https://bugs.chromium.org/p/chromium/issues/detail?id=523009#c5), and using "-isolate-server fake" I can still reproduce a 3-minute difference in runtime when uploading, albeit to the same process listening on localhost.

# New/Bad

$ /usr/bin/time ~/s/tmp/isol-bug/runner.sh ~/s/tmp/isol-bug/2a20c5133a61d6637430b016192ebc4c0e2eae43
Hits    :   409 (3.15Mib)
4090.11user 2959.27system 10:42.16elapsed 1097%CPU (0avgtext+0avgdata 30720916maxresident)k
336inputs+856outputs (0major+10673848minor)pagefaults 0swaps

# Old/Good

$ /usr/bin/time ~/s/tmp/isol-bug/runner.sh ~/s/tmp/isol-bug/cf7c1fac12790056ac393774827a5720c7590bac
Hits    :    68 (1.26Mib)
3300.92user 273.95system 7:27.72elapsed 798%CPU (0avgtext+0avgdata 35125236maxresident)k
135744inputs+856outputs (38major+7219998minor)pagefaults 0swaps


Observation while running new/bad: two kernel workers consistently show up with 50% CPU load while uploading is in progress. This doesn't happen with the cf7c version.
I managed to reduce this to just one 1 GB file to upload:

# New/Bad

[found] [hashed/size/to hash] [looked up/to lookup] [uploaded/size/to upload/size]
[1] [0/0b/1] [0/0] [0/0b/0/0b] 100ms
8b0a122bf2244c206ae64fb32fc2bec618821396  test.isolate
14:32:49.224930 Looked up 2 items
14:32:49.228634 Uploaded    291b: 2a20c5133a61d6637430b016192ebc4c0e2eae43.isolated
14:36:50.732510 Uploaded 998.7Mib: blink_heap_unittests

Hits    :     0 (0b)
Misses  :     2 (998.7Mib)
Duration: 4m4.185s
236.78user 123.89system 4:04.20elapsed 147%CPU (0avgtext+0avgdata 2162524maxresident)k
0inputs+8outputs (0major+42699minor)pagefaults 0swaps


# Good/old

[found] [hashed/size/to hash] [looked up/to lookup] [uploaded/size/to upload/size]
[1] [0/0b/1] [0/0] [0/0b/0/0b] 100ms
8b0a122bf2244c206ae64fb32fc2bec618821396  test.isolate
14:36:53.481652 http://127.0.0.1:49587/fake/cloudstorage?digest=b930fa18f9ef6451abd9a2e7b14faea2697b598b
14:36:53.483031 Looked up 2 items
14:36:53.485470 Uploaded    291b: cf7c1fac12790056ac393774827a5720c7590bac.isolated
14:39:31.890934 Uploaded 998.7Mib: blink_heap_unittests

Hits    :     0 (0b)
Misses  :     2 (998.7Mib)
Duration: 2m41.133s
188.73user 21.90system 2:41.16elapsed 130%CPU (0avgtext+0avgdata 2125212maxresident)k
0inputs+8outputs (0major+48928minor)pagefaults 0swaps

The file in question is blink_heap_unittests, archived here: https://storage.cloud.google.com/chrome-dumpfiles/yirqavzhxm. I'm making a reproducible test case so I can fill the perf dashboard.
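
For anyone reproducing this locally: a minimal throwaway "discard" server along the following lines is enough to exercise the client's full upload path (hashing, compression, HTTP) without real storage or WAN latency. This is a hypothetical sketch, not the actual fake-server mode used above:

package main

import (
	"io"
	"io/ioutil"
	"log"
	"net"
	"net/http"
)

func main() {
	// Accept any request, drain and discard the body, and log its size.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		n, err := io.Copy(ioutil.Discard, r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Printf("%s %s: %d bytes discarded", r.Method, r.URL.Path, n)
	})
	// Bind an ephemeral localhost port, like the 127.0.0.1:49587 seen in the logs above.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("listening on http://%s", ln.Addr())
	log.Fatal(http.Serve(ln, handler))
}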
Comment 13 by bugdroid1@chromium.org (Project Member), Oct 13 2016

The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/tools/build.git/+/2e8a598ede0aa709fe2b71ff14984e4e443c76bd

commit 2e8a598ede0aa709fe2b71ff14984e4e443c76bd
Author: tandrii <tandrii@chromium.org>
Date: Thu Oct 13 16:06:50 2016

Add new isolate-go perf builder to infra waterfall.

It gets triggered by infra/infra revisions only for now so as to avoid
confusing perf dashboard.

R=sergiyb@chromium.org
BUG= 653723 

Review-Url: https://codereview.chromium.org/2419813002

[modify] https://crrev.com/2e8a598ede0aa709fe2b71ff14984e4e443c76bd/masters/master.chromium.infra/master.cfg
[modify] https://crrev.com/2e8a598ede0aa709fe2b71ff14984e4e443c76bd/masters/master.chromium.infra/slaves.cfg

My CL to add perf to isolate-go: https://chromium-review.googlesource.com/#/c/398039/ 

Using that recipe's benchmark, and looking at all isolate-go binaries in the GS bucket for Linux, I accumulated this: https://paste.googleplex.com/4556915891765248

There are two bumps:

First, there are two quite slow (>200s) builds on 2016-03-03, I think likely because of the initial 1.6 roll (https://chromium.googlesource.com/infra/infra.git/+/5a9e95ff4f1765231d8531d26235b01ffe0ec966), which was reverted almost immediately (https://chromium.googlesource.com/infra/infra.git/+/ce097040f276bcd6940895161ac9cf0f5de61fa9).

The second and final bump is ~March 28th: https://chromium.googlesource.com/infra/infra.git/+/2f1aa30c6d1d1ce9f41e9b6530e97f98f6942e2d

What can we do about it? Since we can't get to 1.7, let's bring back the fast compression library: https://codereview.chromium.org/2422443002
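
(For context: github.com/klauspost/compress/zlib is API-compatible with the standard library's compress/zlib, so the change is essentially an import swap in common/isolated/algo.go. A hedged sketch of its shape; the function names and compression level here are illustrative, not copied from the CL:)

package isolated

import (
	"io"

	// Drop-in replacement for the stdlib "compress/zlib": same API,
	// much faster deflate. This is what the CL above brings back.
	"github.com/klauspost/compress/zlib"
)

// GetCompressor returns a WriteCloser that zlib-compresses everything
// written to it into out.
func GetCompressor(out io.Writer) (io.WriteCloser, error) {
	return zlib.NewWriterLevel(out, 7)
}

// GetDecompressor returns a ReadCloser that inflates in.
func GetDecompressor(in io.Reader) (io.ReadCloser, error) {
	return zlib.NewReader(in)
}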

Comment 15 by djd@chromium.org, Oct 13 2016

Why can't we get to 1.7? I'm missing some context here.
Much of our code must run on App Engine classic, which still only supports 1.6.
(Which is not to say that we CAN'T use 1.7 for this particular binary, but it would mean building a binary outside of our supported toolchain, which is not a cost-free thing. Maybe we could modify our toolchain to support multiple Go runtimes, though, and then make it so that certain binaries can opt in to versions other than the default?)

Comment 18 by djd@chromium.org, Oct 14 2016

Cc: tansell@chromium.org

Comment 19 by djd@chromium.org, Oct 14 2016

Ah, of course. We should see 1.7 coming out for standard in O(days) now, so hopefully that ameliorates this. Decoupling the version of Go for apps vs. cmds (or between binaries) does actually sound like a reasonable step, though.

I'm still trying to work out how this all fits together. Do you use one of the standalone SDKs as a dependency for running/testing apps locally (i.e. the goapp flavour of the go tool), or only the regular one?

Comment 20 by djd@chromium.org, Oct 14 2016

Thanks for the hashes from that paste (comment 14). I reran the try jobs from comment 9 at the two revisions you selected.

I am seeing the bump you saw for the changes around March 28, but it's definitely not as slow as the final regression (good ~4 min, this ~5 min, bad ~10-11 min).

Do you have more data points beyond April (ie. the April-August period)? It might be that there is more than one regressing change we need to track down.
> I'm still trying to work out how this all fits together. Do you use one of the standalone SDKs as a dependency for running/testing apps locally? (ie. the goapp flavour of the go tool) – or only regular one?

We use regular Go 1.6.3 for all unit tests (including code targeting GAE). We use standalone Go GAE SDK for running stuff on GAE dev server and for deploying it to GAE (i.e. we are NOT using gcloud for GAE).

Developers aren't supposed to invoke 'goapp' directly, ever. (I think the dev app server still uses it under the hood, though.)

(1.6.3 is defined here https://chromium.googlesource.com/infra/infra/+/master/go/bootstrap.py#54)

(Also read https://chromium.googlesource.com/infra/infra/+/master/go/README.md if you haven't yet. This environment is used by bots and supposedly by developers, though devs can use whatever they want, as long as the code passes bots).

Comment 22 by djd@chromium.org, Oct 14 2016

Do you have any pointers on how to find the hashes of old versions of stored isolate binaries? Looking at the build bot, the oldest I can find is build 3191 (~Jul 28).
No... I think once it's gone from buildbot, it is lost forever.

Well, theoretically we can enumerate all hashes in the GS bucket, filter by date, and then run some ugly "strings <binary> | grep <something representative>" to figure out which of them are actually 'isolate' ELFs.
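
A rough sketch of that filter (hypothetical; it keys off the usage string the 'isolate' binary prints, see step 6 below):

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// The usage string printed by the isolate client ("isolate.py but faster").
	marker := []byte("isolate.py but faster")
	for _, path := range os.Args[1:] {
		data, err := ioutil.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		// An 'isolate' binary is an ELF file embedding the usage string.
		if bytes.HasPrefix(data, []byte("\x7fELF")) && bytes.Contains(data, marker) {
			fmt.Println(path)
		}
	}
}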

infra.git history has all the information needed to rebuild the binaries though.

Some scripts in the history no longer work on top of the current world, but the Go stuff should be okay.

E.g. here's what I do to build an 11-month-old commit (https://chromium.googlesource.com/infra/infra/+/ba4e5396236871bbb134bd441d75e36dddb99dc6):

1. Start with an up-to-date infra.git gclient solution (https://chromium.googlesource.com/infra/infra/+/HEAD/doc/source.md#Checkout-code). It will install an up-to-date version of the GAE SDK. That's the only relevant part that isn't pinned in the history, but it's not essential for 'isolate'.

2. Checkout old revision of infra.git.
$ cd infra/infra
$ git checkout ba4e5396236871bbb134bd441d75e36dddb99dc6

3. Examine DEPS to see which revision of luci-go was pinned at the time:
$ cat DEPS | grep -A 1 luci-go
  "infra/go/src/github.com/luci/luci-go":
    ("https://chromium.googlesource.com/external/github.com/luci/luci-go"
     "@5c67f7b670b524617f13d752bd64a2540409bcaa"),

4. Checkout that version of luci-go:
$ cd go/src/github.com/luci/luci-go/
$ git checkout 5c67f7b670b524617f13d752bd64a2540409bcaa

5. Bootstrap this ancient Go environment (go 1.4.2!):
$ cd go
$ eval `./env.py`
$ go version
go version go1.4.2 darwin/amd64

6. Compile 'isolate' binary:

$ cd src/github.com/luci/luci-go/client/cmd/isolate
$ go build .
(wow, this was fast on 1.4.2...)

$ ./isolate
isolate.py but faster

----------

Steps 3 and 4 are usually done by gclient itself. But some DEPS scripts aren't hermetic and their old revisions no longer work, so we can't run "gclient sync" and have to do these steps manually.

I hope it helps.
Welp, my commit message on the last good roll of isolate into chromium/src is unfortunate :-/ (https://chromium.googlesource.com/chromium/src/+/792af7ebd374b815b01d2dd0a1e38fdf55c1e370)

Based on the date and commit message, I believe it corresponds to the following infra.git commit: https://chromium.googlesource.com/infra/infra/+/482f8f4805b5360644ae68e257ec489d0762c4ce

Which corresponds to https://github.com/luci/luci-go/commit/5f6fcd4035aac03c9a2c38bbcf070aa900b33911

In particular, it already uses stock zlib (https://github.com/luci/luci-go/blob/5f6fcd4035aac03c9a2c38bbcf070aa900b33911/common/isolated/algo.go), so the switch to 'compress/zlib' wasn't the regression we are looking for.
> Do you have any pointers on how to find the hashes of old versions of stored isolate binaries?

Yes, I made a small script to get some data automatically https://paste.googleplex.com/5807529703505920 (https://storage.cloud.google.com/chrome-dumpfiles/nczr6osdxi is tar.gz archive of script + listing of GS bucket).

Also https://paste.googleplex.com/4556915891765248 is updated with most recent data my script got over night.

> switch to 'compress/zlib' wasn't the regression we are looking for
Yes, it was not. I'm just saying that landing my CLs will easily get us a HUGE performance boost even with Go 1.6, regardless of whatever other change made it slow in the meantime.
Comment 26 by bugdroid1@chromium.org (Project Member), Oct 14 2016

The following revision refers to this bug:
  https://chromium.googlesource.com/infra/infra.git/+/56fd742b79eb2d08be8b485d3f184e008e07784c

commit 56fd742b79eb2d08be8b485d3f184e008e07784c
Author: Andrii Shyshkalov <tandrii@chromium.org>
Date: Fri Oct 14 09:29:45 2016

Add github.com/klauspost/compress to Go deps.

Also adds:
  github.com/klauspost/cpuid
  github.com/googleapis/gax-go
because google cloud update needs it.

R=vadimsh@chromium.org,maruel@chromium.org
BUG= 653723 

Change-Id: I51f4ab7eb232496946350d95c277fe3415bb951b
Reviewed-on: https://chromium-review.googlesource.com/397624
Commit-Queue: Andrii Shyshkalov <tandrii@chromium.org>
Reviewed-by: Vadim Shtayura <vadimsh@chromium.org>
Reviewed-by: Marc-Antoine Ruel <maruel@chromium.org>

[modify] https://crrev.com/56fd742b79eb2d08be8b485d3f184e008e07784c/go/deps.lock
[modify] https://crrev.com/56fd742b79eb2d08be8b485d3f184e008e07784c/go/deps.yaml

Perf dashboard is now live under infra/luci/isolate: https://chromeperf.appspot.com/report?sid=7942548c5f500d4910fc9e0a54732aafea484344d5b49183b2abc4292e7c8752

Some data collected by running small benchmarks over revisions across last 1+ year: https://docs.google.com/a/google.com/spreadsheets/d/1jR7UOq8dxJezSIAacT0cpdtVm6iIjtn-nrcCjdhVusk/edit?usp=sharing

(Note: the last data points on the graph were noisy due to fewer parallel jobs being run; given how heavily binaries compiled with go1.6 interact with the kernel, they are much more sensitive to jobs running in parallel.)
Comment 30 by bugdroid1@chromium.org (Project Member), Oct 14 2016

The following revision refers to this bug:
  https://chromium.googlesource.com/external/github.com/luci/luci-go.git/+/14ac2e2b1844dada4337c300b9d68896bda9668a

commit 14ac2e2b1844dada4337c300b9d68896bda9668a
Author: tandrii <tandrii@chromium.org>
Date: Fri Oct 14 16:09:14 2016

isolate client: get back faster compression library.

On my workstation, isolate archive 1GB file with fake isolate server
is reduced from 3m 48s to 1m 9s (almost 3x).

Depends on https://chromium-review.googlesource.com/397624 to modify
Go deps in infra/infra.

BUG= 653723 
R=maruel@chromium.org

Review-Url: https://codereview.chromium.org/2422443002

[modify] https://crrev.com/14ac2e2b1844dada4337c300b9d68896bda9668a/common/isolated/algo.go

Comment 31 by d...@chromium.org, Oct 14 2016

Thoughts on using a compression algorithm that is designed to be fast? Something like: https://github.com/golang/snappy
The isolate server is currently written in Python, and it unpacks data to verify the hash => we are stuck with whatever runs fast on Python GAE, which is zlib (implemented as a C extension).

So any change to the compression scheme depends on a rewrite of the isolate server.

Comment 33 by maruel@google.com, Oct 16 2016

Dan, I wrote the initial go-snappy benchmark in 2013, so yes I did investigate this option thoroughly. :)

At the time it was ~10x slower than the C++ version; so it was 10x slower than the Python version, which, as Vadim noted, uses the canonical C implementation.

https://github.com/golang/snappy/commit/b1b7c046128eb94a1f3c63d798f7f0a76159077e

I disagree with Vadim that changing the algo requires a rewrite of the server, but it's not an option until:
- There's an implementation in Go that is faster *in practice* (see the sketch below)
- There's no dependency on the Python implementation anymore
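
(For "faster *in practice*": a micro-benchmark along these lines, in a _test.go file, would be one way to check on real hardware. A hedged sketch assuming the klauspost and golang/snappy packages discussed above, with an illustrative 16 MiB compressible buffer:)

package compressbench

import (
	stdzlib "compress/zlib"
	"io"
	"io/ioutil"
	"math/rand"
	"testing"

	"github.com/golang/snappy"
	kzlib "github.com/klauspost/compress/zlib"
)

// testData returns 16 MiB of moderately compressible data (small alphabet, fixed seed).
func testData() []byte {
	rng := rand.New(rand.NewSource(1))
	data := make([]byte, 16<<20)
	for i := range data {
		data[i] = byte('a' + rng.Intn(16))
	}
	return data
}

// benchWriter measures streaming compression throughput for any zlib-style writer.
func benchWriter(b *testing.B, newW func(io.Writer) (io.WriteCloser, error)) {
	data := testData()
	b.SetBytes(int64(len(data)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		w, err := newW(ioutil.Discard)
		if err != nil {
			b.Fatal(err)
		}
		if _, err := w.Write(data); err != nil {
			b.Fatal(err)
		}
		if err := w.Close(); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkStdZlib(b *testing.B) {
	benchWriter(b, func(w io.Writer) (io.WriteCloser, error) { return stdzlib.NewWriterLevel(w, 7) })
}

func BenchmarkKlauspostZlib(b *testing.B) {
	benchWriter(b, func(w io.Writer) (io.WriteCloser, error) { return kzlib.NewWriterLevel(w, 7) })
}

func BenchmarkSnappy(b *testing.B) {
	data := testData()
	b.SetBytes(int64(len(data)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		snappy.Encode(nil, data) // one-shot block format
	}
}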

Comment 34 by maruel@google.com, Oct 16 2016

Timely discussion
https://groups.google.com/d/topic/golang-dev/USD-tfa1Ljg/discussion but that's for 1.8.
(By server rewrite I meant a rewrite in Go, keeping existing protocol and algos).

Comment 36 by maruel@google.com, Oct 16 2016

Ah ok, we're in agreement.
Dan has rolled luci-go in infra, and the perf dashboard shows a consistent improvement, which isn't surprising of course. Time to roll it to chromium.
Attachment: Screenshot from 2016-10-20 17:16:58.png (72.9 KB)
I aborted the roll CL https://chromiumcodereview.appspot.com/2441663002/ because on the Mac builder I see a clear slowdown, from ~1m30s to >3m. For more, see https://chromiumcodereview.appspot.com/2441663002/#msg9

Comment 39 by djd@chromium.org, Oct 21 2016

I have not been able to get clearly reproducible results from the try servers so far (only targeting Linux, admittedly) because the variability in run times is so great that it swamps the signal.
Since just forcing a clobber build will not be sufficient, you can try modifying code in //base in a way that forces all binaries to be rebuilt. The executables do not have to actually work.
(FYI: HEAD of infra.git is now on go 1.7.3).
Perf dashboard doesn't show substantial changes, but my test is noisy and tests primarily single-threaded hashing + compression: https://chromeperf.appspot.com/report?sid=7942548c5f500d4910fc9e0a54732aafea484344d5b49183b2abc4292e7c8752
I found some low-hanging fruit; hashes aren't deduped before the lookup stage:
https://screenshot.googleplex.com/atSy4vvF7sA

oops!

I'd recommend doing the deduping in stage3LookupLoop with a local map. Here's some rough pseudo-code:

seen := map[string]*Item{}  // digest -> first item seen with that content
...
if prev, ok := seen[item.digestItem.Digest]; ok {
  prev.link(item)  // same content already queued: link it instead of looking it up again
  continue
}
seen[item.digestItem.Digest] = item


https://github.com/luci/luci-go/blob/master/client/archiver/archiver.go#L511
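
Stand-alone illustration of the same idea (hypothetical item type; the real *Item and link() live in the archiver file linked above):

package main

import "fmt"

// item stands in for the archiver's per-file work item.
type item struct {
	path   string
	digest string
	dups   []*item // duplicates linked to this item instead of being looked up again
}

func (i *item) link(dup *item) { i.dups = append(i.dups, dup) }

// dedupe keeps one item per digest and links the rest to it, so the lookup
// stage queries the server once per unique hash rather than once per file.
func dedupe(items []*item) []*item {
	seen := map[string]*item{}
	var unique []*item
	for _, it := range items {
		if prev, ok := seen[it.digest]; ok {
			prev.link(it)
			continue
		}
		seen[it.digest] = it
		unique = append(unique, it)
	}
	return unique
}

func main() {
	items := []*item{
		{path: "a.bin", digest: "8b0a"},
		{path: "copy_of_a.bin", digest: "8b0a"}, // identical content, identical hash
		{path: "b.bin", digest: "2a20"},
	}
	for _, it := range dedupe(items) {
		fmt.Printf("lookup %s (%d duplicate(s) linked)\n", it.path, len(it.dups))
	}
}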

Comment 44 by djd@chromium.org, Oct 27 2016

So, I finally managed to make a local, stable reproduction case which demonstrates the slowdown on both OS X and Linux. It involved uploading real artifacts (in this case, browser tests) to a fake server in a separate process.

Linux results (for the interesting section):
https://docs.google.com/spreadsheets/d/1rEKfU59vOGOJV1gxE-3M9h-ESK7h-YPsj6oO0KYPkiA/edit#gid=0&vpid=A2

The timings show a single step-function jump at 2016-03-29T01:49:44.

The build logs for that date are long gone, but it coincides roughly with the following commits (interesting ones marked *):

  8a52579 Roll infra/luci/ 5489e0e3f..285525e77 (33 commits).
  4a3f12c Fixed units in docstring
  950c59e Bump LUCI version.
  b95bd7a Add server-side tree listing for SoM Polymer 1.0 app.
  ed016fc [som] add application to app.yaml so it runs locally
  31d9673 Upload detect regression range module of findit for fracas.
  732aa73 chromium-cq-status: move tests to proper places.
  9b22ce3 Set category and other non cTree dependent CtFailureGroup properties on construction.
  a9bbd2b [Issue Wizard] Change all http: links to https:.
  3b07abf Update goconvey goop pin.
* e08a41d Bump luci/luci-go and luci/gae DEPS.
  f7011b1 [sheriff-o-matic] Show trooper queue on trooper page.
  d430fda CC pgervais@ on slave restart tickets
  9f215df ts_mon: don't fail on PubSub network failure.
* 2f1aa30 Switch to go1.6, roll all Go pins, unpin packages blocked on 1.6 migration.
  8b3b67f gae_ts_mon: instrument Cloud Endpoint methods

So it definitely looks like 1.6 could be the reason.

In other good news, when I ran the latest binaries built by the builder (last night), the timings were greatly improved. They are even better than the original "good" binary, and much better than the bad one.

Comment 46 by djd@chromium.org, Oct 31 2016

I'll watch this for a couple of days to check that this new roll does not cause a similar regression in latency.

In other news, I re-ran the set of local tests @8b3b67f, manually changing the Go version each time (to be sure that it was the only thing changing between runs; all the other deps stayed the same).

That clearly showed that the regression came with the Go 1.4 -> 1.6 change. Go 1.5 was actually even worse; 1.7 and ToT are much better.

Comment 47 by djd@chromium.org, Nov 1 2016

I've just checked timings for the most recent isolate tests for Linux/Win/Mac builders:
https://docs.google.com/spreadsheets/d/1AeP2JzfuKhVkGmBlHGjqVnDRmNLSYmAfcAaNv_dMLpw/edit?usp=sharing

The results so far look very promising. Average timings:

Linux:  80s -> 70s
Win: 215s -> 137s
Mac: 75s -> 90s   >:(

Not sure why Mac is worse in this case; that is worth looking into.
With the performance improvements on Linux and Windows, and the fact that we are about to do some major isolate refactoring, I don't think the Mac regression is important to investigate right now.

djd@ shall we leave this for now?
Isn't this bug "fixed" in the sense that the performance is now "good enough"?

Comment 50 by djd@chromium.org, Dec 7 2016

Status: Fixed (was: Started)
Yes, I agree. Closing.
