Some invocations of batcharchive with --exparchive are slow.
Issue description:

"So, I did some further debugging and I believe that I've found the cause of the slowdown: basically, isolates often reference the same file content multiple times (this is especially the case for batch_archive). The first time we encounter each file, we should see that it does not yet exist on the server, and upload it. Subsequent checks for the same file hash should show that the file exists, and the upload should be skipped. The bug was that we were eagerly checking multiple instances of the same file contents before we had finished uploading the first instance. These checks returned a "not on the server" result, so we queued extra uploads."
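The fix that followed (the Dec 14 commit, "Assume items exist after first check") boils down to remembering every digest that has already been checked. A minimal Go sketch of the idea — the names `Digest`, `Checker`, and `NeedsUpload` are hypothetical stand-ins, not the actual luci-go types:

```go
package main

import "fmt"

// Digest identifies file content by hash (a hypothetical stand-in for
// isolated.HexDigest in the real client).
type Digest string

// Checker decides whether an item needs uploading. Once a digest has been
// seen, later checks assume it exists: it is either on the server already
// or queued for upload, so the same content is never uploaded twice.
type Checker struct {
	seen   map[Digest]bool
	remote map[Digest]bool // simulates the server's Contains call
}

func NewChecker(remote map[Digest]bool) *Checker {
	return &Checker{seen: make(map[Digest]bool), remote: remote}
}

// NeedsUpload reports true only the first time an absent digest is checked.
func (c *Checker) NeedsUpload(d Digest) bool {
	if c.seen[d] {
		return false // already checked: assume present or in flight
	}
	c.seen[d] = true
	return !c.remote[d]
}

func main() {
	c := NewChecker(map[Digest]bool{"on-server": true})
	fmt.Println(c.NeedsUpload("dup"))       // true: first sighting, upload it
	fmt.Println(c.NeedsUpload("dup"))       // false: in flight, skip
	fmt.Println(c.NeedsUpload("on-server")) // false: already stored
}
```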
Dec 13 2017
IIRC, in the batcharchive version, we have a set of known hashes which de-dupes files **before** checking hash existence on the server.
Dec 13 2017
Also, mcgreevy@, are you going to own this?
Dec 14 2017
The following revision refers to this bug:
https://chromium.googlesource.com/infra/luci/luci-go.git/+/9fe3394061f70174c1fa9d19ad500dd4950b689c

commit 9fe3394061f70174c1fa9d19ad500dd4950b689c
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Thu Dec 14 05:44:43 2017

isolate: Assume items exist after first check, to avoid uploading them twice.

Bug: 794425
Change-Id: I87870a80bb8513f98d650bf939b04a26f88a025b
Reviewed-on: https://chromium-review.googlesource.com/822555
Reviewed-by: Tim 'mithro' Ansell <tansell@chromium.org>
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>

[modify] https://crrev.com/9fe3394061f70174c1fa9d19ad500dd4950b689c/client/cmd/isolate/checker.go
[modify] https://crrev.com/9fe3394061f70174c1fa9d19ad500dd4950b689c/client/cmd/isolate/upload_tracker.go
[modify] https://crrev.com/9fe3394061f70174c1fa9d19ad500dd4950b689c/client/cmd/isolate/upload_tracker_test.go
Dec 14 2017
Yes, I'm owning this.

Re: "we have a set of known hashes which de-dupes files **before** checking hash existence on the server" — do you mean:
(a) "known hashes" are compiled into the binary, so checks for some file hashes are avoided altogether, or
(b) the results of existence checks, once performed, are cached and used to avoid both later existence checks and uploads?

If (a), I am surprised, and am unaware of this. If (b), I have done that in the exparchive CL that I just checked in.
Dec 14 2017
When initially evaluating my fix for uploading files multiple times, I had disabled compression of the files, which inadvertently contributed to the performance improvement. On my workstation, replicating the upload for one of the observed slow builds, I got:
~1h for exparchive
~5m for non-exparchive
~5m for exparchive with the fix for duplicate uploads & compression disabled.

Once I re-enabled compression, the time for exparchive increased again:
~30m for exparchive with the fix for duplicate uploads.

So the fix for multiple uploads only gave a 2x improvement. However, I discovered another factor slowing down exparchive: we have the number of concurrent uploads set to 1, in an attempt to avoid disk thrashing, but no other value has ever been tested. Once I set this to 8 (the same value used by non-exparchive), performance returned to the same level as non-exparchive:
~5m for exparchive with the fix for duplicate uploads & max-concurrent-uploads=8.

I'm sending out https://crrev.com/c/826526 to make max-concurrent-uploads settable via a flag. I don't know for certain that these results will be replicated in prod, but I think they are pretty encouraging and we should test out max-concurrent-uploads=8 in prod ASAP.

Note: there is scope to increase performance further by decreasing the level of compression (currently set to zlib level 7; reducing it to 3 is much faster). However, this requires an analysis of the cost/benefit tradeoff of extra storage vs speed, so I'm not going to tackle that. Note also that it would be nice to cache the results of hashing files, but I tried this out and didn't actually see a performance improvement, so I'm not going to tackle that now either.
Dec 14 2017
- The value 7 was chosen a few years ago based on the C implementation used by the python client. For the Go implementation, maybe Andrii recalls.
- Compression of well-known precompressed extensions is disabled in the python code (the level is set to 0), as these take most of the CPU time with no benefit; this is not done in the Go code. https://cs.chromium.org/chromium/infra/luci/client/isolateserver.py?l=74

I've taken a deeper look at the code and have a few questions:
- mknyszek@ is implementing an isolated client (without isolate ingestion support), but because the exp_archive code is under client/cmd/isolate, it is not reusable.
- I failed to see where exp archive does input deduplication, e.g. if the same file is listed twice as an input, which always happens with batched archive, I couldn't see where these are cut off short upfront, e.g. https://cs.chromium.org/chromium/infra/go/src/go.chromium.org/luci/client/archiver/archiver.go?q=archiver.go&dr&l=288
- Looking at duplicate hashes would work the same way via a local variable map[isolated.HexDigest]bool, and it could be done inside BundlingChecker.
- BundlingChecker.check() states it's running only a single invocation at a time. This is highly inefficient, since this could scale nearly infinitely, as all the actual work is done on the server. Archiver uses a default limit of 64 concurrent Contains calls. https://cs.chromium.org/chromium/infra/go/src/go.chromium.org/luci/client/archiver/archiver.go?q=archiver.go&dr&l=133
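The bounded-concurrency pattern the last bullet describes (a fixed cap on in-flight Contains calls) can be sketched with a buffered-channel semaphore. This is an illustrative pattern under assumed names (`contains`, `checkAll`), not the actual archiver.go code:

```go
package main

import (
	"fmt"
	"sync"
)

// contains simulates a server Contains RPC: it reports whether a digest
// is already stored. (Hypothetical; the real client batches digests.)
func contains(stored map[string]bool, d string) bool {
	return stored[d]
}

// checkAll issues Contains checks with at most maxConcurrent in flight,
// mirroring the semaphore-style limit the archiver applies.
func checkAll(stored map[string]bool, digests []string, maxConcurrent int) map[string]bool {
	sem := make(chan struct{}, maxConcurrent)
	var mu sync.Mutex
	results := make(map[string]bool, len(digests))
	var wg sync.WaitGroup
	for _, d := range digests {
		wg.Add(1)
		go func(d string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			ok := contains(stored, d)
			mu.Lock()
			results[d] = ok
			mu.Unlock()
		}(d)
	}
	wg.Wait()
	return results
}

func main() {
	stored := map[string]bool{"a": true}
	res := checkAll(stored, []string{"a", "b", "c"}, 2)
	fmt.Println(res["a"], res["b"], res["c"]) // true false false
}
```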
Dec 14 2017
The following revision refers to this bug:
https://chromium.googlesource.com/infra/luci/luci-go.git/+/f209096fbc3f82371c730f819da08382e7973ef3

commit f209096fbc3f82371c730f819da08382e7973ef3
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Thu Dec 14 22:32:47 2017

[isolate] Make number of concurrent uploads configurable.

In exparchive mode, the number of concurrent uploads is currently capped to 1, in an attempt to avoid disk thrashing. However, this is not necessarily the optimal setting, due to e.g. overhead in setting up HTTP connections for uploads. Initial experiments indicate that a higher number of concurrent uploads is superior. This CL adds a flag so that different values can be tested in prod.

Bug: 794425
Change-Id: Id96a29ab4a7df7cd1ffe5f1d444ecce46bc5b150
Reviewed-on: https://chromium-review.googlesource.com/826526
Reviewed-by: Marc-Antoine Ruel <maruel@chromium.org>
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>

[modify] https://crrev.com/f209096fbc3f82371c730f819da08382e7973ef3/client/cmd/isolate/archive.go
[modify] https://crrev.com/f209096fbc3f82371c730f819da08382e7973ef3/client/cmd/isolate/batch_archive.go
[modify] https://crrev.com/f209096fbc3f82371c730f819da08382e7973ef3/client/cmd/isolate/exp_archive.go
Dec 15 2017
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/6ee4feec187b7025a28ffa00f93ecfbd61f1e8ae

commit 6ee4feec187b7025a28ffa00f93ecfbd61f1e8ae
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Fri Dec 15 01:32:27 2017

Roll isolate binaries generated at infra@9617f0f

Isolate sha1s were taken from the following builds:
https://luci-milo.appspot.com/buildbot/chromium.infra/infra-continuous-precise-64/10345
https://luci-milo.appspot.com/buildbot/chromium.infra/infra-continuous-mac-10.10-64/9761
https://luci-milo.appspot.com/buildbot/chromium.infra/infra-continuous-win-64/10621

The purpose of this roll is to pick up recent exparchive performance improvements.

Bug: 794425
Change-Id: I4b393a71a563681fc4e123e986356d5368040979
Reviewed-on: https://chromium-review.googlesource.com/828181
Reviewed-by: Tim 'mithro' Ansell <tansell@chromium.org>
Commit-Queue: Tim 'mithro' Ansell <tansell@chromium.org>
Cr-Commit-Position: refs/heads/master@{#524285}

[modify] https://crrev.com/6ee4feec187b7025a28ffa00f93ecfbd61f1e8ae/tools/luci-go/linux64/isolate.sha1
[modify] https://crrev.com/6ee4feec187b7025a28ffa00f93ecfbd61f1e8ae/tools/luci-go/mac64/isolate.sha1
[modify] https://crrev.com/6ee4feec187b7025a28ffa00f93ecfbd61f1e8ae/tools/luci-go/win64/isolate.exe.sha1
Dec 15 2017
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/tools/build/+/6e40c89e37e08fd6c72df0fa35f63be11f6e3694

commit 6e40c89e37e08fd6c72df0fa35f63be11f6e3694
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Fri Dec 15 01:42:13 2017

isolate recipe: Set --max-concurrent-uploads for batcharchive

"8" is the value used for the traditional (non-exparchive) implementation, and seems to work reasonably well on a development machine, so it's a good place to start for a prod experiment.

Bug: 794425
Change-Id: If7ba5beb50d6db3abab7d4461cf0d00261495057
Reviewed-on: https://chromium-review.googlesource.com/828340
Reviewed-by: Tim 'mithro' Ansell <tansell@chromium.org>
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>

[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/always-use-exparchive.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipes/chromium_trybot.expected/add_swarming_layout_tests_via_manual_diff_inspection_that_fails.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build1.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build10.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive-batch-emiss.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build8.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive-batch-bmiss.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipes/chromium_trybot.expected/add_swarming_layout_tests_via_manual_diff_inspection_win.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive-batch.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive-miss.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build7.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive-multi.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipes/chromium_trybot.expected/add_swarming_layout_tests_via_manual_diff_inspection_linux.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build6.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build3.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/README.recipes.md
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build4.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/exparchive-multi-miss.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build2.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build9.json
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/api.py
[modify] https://crrev.com/6e40c89e37e08fd6c72df0fa35f63be11f6e3694/scripts/slave/recipe_modules/isolate/examples/full.expected/use-exparchive-20percent-build5.json
Dec 15 2017
Re: comment 7:

> I've taken a deeper look at the code and have a few questions: mknyszek@ is implementing an isolated client (without isolate ingestion support), but because the exp_archive code is under client/cmd/isolate, it is not reusable.

I will move the exparchive implementation under luci/client, along with deleting the code paths in the archive and batcharchive commands which use luci/client/archiver/. I see that the isolated.Archive command is parameterized by an isolated.ArchiveOptions, whereas the exparchive code is parameterized with an isolate.ArchiveOptions (as the traditional archive and batcharchive commands were), which may add some friction for mknyszek.

> I failed to see where exp archive does input deduplication, e.g. if the same file is listed twice as an input, which always happens with batched archive, I couldn't see where these are cut off short upfront. Looking at duplicate hashes would work the same way via a local variable map[isolated.HexDigest]bool, and it could be done inside BundlingChecker.

We now filter items out using the existingDigests map before making the Contains call. Together with caching the results of hashing a given filename, this should have a similar effect to input deduplication: the second time we see a file, we can look up its digest rather than computing it, then find that digest in the existingDigests map and omit it from the Contains call, and also avoid uploading it again. Note: I did prototype the caching of file hashes and found a negligible performance difference.

> BundlingChecker.check() states it's running only a single invocation at a time. This is highly inefficient, since this could scale nearly infinitely, as all the actual work is done on the server. Archiver uses a default limit of 64 concurrent Contains calls. https://cs.chromium.org/chromium/infra/go/src/go.chromium.org/luci/client/archiver/archiver.go?q=archiver.go&dr&l=133

I don't think it's as bad as all that, since the output of the Contains calls is used to feed the Uploader, and I don't think there's any real benefit to the checker getting far ahead of the uploader. I tried it out with unlimited concurrency for the check step and found a negligible performance difference (4m37s for parallel checks vs 4m48s for sequential), and the behaviour of the uploader (which I set to 8 concurrent uploads) is very similar for both: the number of in-flight uploads for both parallel and sequential checks quickly ramped up to 8 and stayed there, except for a single brief stall in uploads in each case. Have you measured the performance of archive runs with fewer concurrent Contains calls?

Since the performance of the exparchive code now seems to be on par with the non-exparchive code, these will be my priorities:
0. Verify performance in prod (the code with performance improvements has been pushed to 5% of builds; I will verify that latency has dropped).
1. Move the exparchive code under luci/client (renaming it in the process; "exp" will no longer be appropriate).
2. Delete the old code paths in the archive and batcharchive commands.
3. Implement caching of path -> digest.

I will wrap up there. I don't intend to implement concurrent Contains calls, as I didn't see a significant performance improvement from prototyping it, and I think that #1--3 will take me up to the beginning of my vacation.
Dec 15 2017
> Note: I did prototype the caching of file hashes and found negligible performance difference.

Did you test with an SSD or HDD?

> Have you measured the performance of archive runs with fewer concurrent Contains calls?

Yes, I had run the numbers and found a significant difference, but I don't have them handy anymore. Even so, a 4% performance increase shouldn't be left out IMHO, since it has virtually no downsides.
Dec 15 2017
> Did you test with SSD or HDD?

SSD
Dec 18 2017
The following revision refers to this bug:
https://chromium.googlesource.com/infra/luci/luci-go.git/+/9aa9c70b3a4267e4cc20c946591a638a47b50448

commit 9aa9c70b3a4267e4cc20c946591a638a47b50448
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Mon Dec 18 21:14:42 2017

[isolate] Memoize file hashes in exparchive uploader.

Bug: 794425
Change-Id: Ia530653b3b4ff87318fc8c778db62a23ed1f4a24
Reviewed-on: https://chromium-review.googlesource.com/831314
Reviewed-by: Tim 'mithro' Ansell <tansell@chromium.org>
Commit-Queue: Tim 'mithro' Ansell <tansell@chromium.org>

[modify] https://crrev.com/9aa9c70b3a4267e4cc20c946591a638a47b50448/client/cmd/isolate/upload_tracker.go
Dec 19 2017
Update: After rolling out the upload deduplication and upload concurrency changes, the upload latency has improved dramatically. See https://docs.google.com/document/d/1z-Sr67J6jBqh4NRK6rr_WhKjXAkYSCzsO6nNWP9oT3M/edit?usp=sharing for a before/after comparison. There are still some outliers when using exparchive, but they are nowhere near as extreme as before. Hopefully rolling out the caching of file hashes will give us further improvements, but I think we're at a point where it's reasonable to push ahead with removing the non-exparchive code. See crbug.com/692940 for progress on that.
Dec 19 2017
The following revision refers to this bug:
https://chromium.googlesource.com/infra/infra/+/f3ce6077e23bf8a60370c14c92bec822ca3309db

commit f3ce6077e23bf8a60370c14c92bec822ca3309db
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Tue Dec 19 19:23:48 2017

Roll infra/go/src/go.chromium.org/luci (14 commits)

Created with:
infra/roll-deps.py infra/go/src/go.chromium.org/luci
INFO:root:Rolling [infra/go/src/go.chromium.org/luci]...
INFO:root:Rolling [infra/go/src/go.chromium.org/luci]: [94f3a61cb3095de40ee0f9124cda44fcd71cd174] => [9aa9c70b3a4267e4cc20c946591a638a47b50448]

infra/go/src/go.chromium.org/luci:
9aa9c70b [isolate] Memoize file hashes in exparchive uploader.
b786f3a8 [buildbucket] Make Action serializable.
feb44785 [milo] make recipes link useful
69ae1d27 [milo] do not cache >1MB of changes
... snip...
aa41a58d [luci-scheduler] Fix regex for configSet pattern
a0b76045 [milo] cap getPrevRev recursive call
76cce0c4 [milo] Update API emulation

Bug: 794425
Change-Id: I3813b5ad6f1713f3b4e83c9032396ec948a5d981
Reviewed-on: https://chromium-review.googlesource.com/832587
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>
Reviewed-by: Tim 'mithro' Ansell <tansell@chromium.org>
Reviewed-by: Phil Wright <philwright@chromium.org>
Reviewed-by: Vadim Shtayura <vadimsh@chromium.org>

[modify] https://crrev.com/f3ce6077e23bf8a60370c14c92bec822ca3309db/go/src/infra/appengine/luci-migration/discovery/discovery_test.go
[modify] https://crrev.com/f3ce6077e23bf8a60370c14c92bec822ca3309db/go/src/infra/appengine/sheriff-o-matic/som/client/mock/milo_mock.go
[modify] https://crrev.com/f3ce6077e23bf8a60370c14c92bec822ca3309db/DEPS
Dec 19 2017
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/4b953e279cfa134f95326fc717d7699695185290

commit 4b953e279cfa134f95326fc717d7699695185290
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Tue Dec 19 23:29:14 2017

Roll isolate binaries generated at infra@2aafb3b.

Isolate sha1s were taken from the following builds:
https://luci-milo.appspot.com/buildbot/chromium.infra/infra-continuous-precise-64/10401
https://luci-milo.appspot.com/buildbot/chromium.infra/infra-continuous-mac-10.10-64/9823
https://luci-milo.appspot.com/buildbot/chromium.infra/infra-continuous-win-64/10680

The purpose of this roll is to pick up 9aa9c70, which memoizes file hashes in the isolate exparchive uploader.

Bug: 794425
Change-Id: I7e72e6443b06961a683ca8cd689dc3729ead5916
Reviewed-on: https://chromium-review.googlesource.com/834928
Reviewed-by: Andrii Shyshkalov <tandrii@chromium.org>
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#525181}

[modify] https://crrev.com/4b953e279cfa134f95326fc717d7699695185290/tools/luci-go/linux64/isolate.sha1
[modify] https://crrev.com/4b953e279cfa134f95326fc717d7699695185290/tools/luci-go/mac64/isolate.sha1
[modify] https://crrev.com/4b953e279cfa134f95326fc717d7699695185290/tools/luci-go/win64/isolate.exe.sha1
Dec 20 2017
The following revision refers to this bug:
https://chromium.googlesource.com/infra/luci/luci-go.git/+/9375435d7aeacafdde6c0458fdb3652ea5f37f0b

commit 9375435d7aeacafdde6c0458fdb3652ea5f37f0b
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Wed Dec 20 04:30:59 2017

Set the default for max-concurrent-uploads to 8, which has performed well in prod.

Bug: 794425
Change-Id: I7cb320112acbebc51061e986974e4be228d91dc5
Reviewed-on: https://chromium-review.googlesource.com/835948
Reviewed-by: Dave Sansome <dsansome@chromium.org>
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>

[modify] https://crrev.com/9375435d7aeacafdde6c0458fdb3652ea5f37f0b/client/cmd/isolate/archive.go
[modify] https://crrev.com/9375435d7aeacafdde6c0458fdb3652ea5f37f0b/client/cmd/isolate/batch_archive.go
Dec 20 2017
Unfortunately, I have run out of time to move the exparchive implementation code under luci/client, which is a pity, because I wanted to take the opportunity to rethink a couple of the type names ("UploadTracker" was really a placeholder name, and it's unfortunate that there is both a tar_archiver.go and a tarring_archiver.go).
I had to reorder the priorities listed in comment #11, but I have done 0, 2, and 3.
I am going to close this bug, since I've addressed the outlier performance issues that it was intended to cover. If there are specific optimizations to be addressed (e.g. concurrent Contains calls), there should probably be individual bugs for those. Note specifically for the "concurrent Contains" issue, https://chromium-review.googlesource.com/c/infra/luci/luci-go/+/831004 is a starting point, but I haven't had any comments on that CL and now won't have time to respond to any comments.
Dec 20 2017
That's fine, thanks for the improvements. I'll get the CL committed since it was fine as-is.
Jan 2 2018
The following revision refers to this bug:
https://chromium.googlesource.com/infra/luci/luci-go.git/+/da9de9194a67aa4c68a867a10d61648b235c8e11

commit da9de9194a67aa4c68a867a10d61648b235c8e11
Author: Michael McGreevy <mcgreevy@chromium.org>
Date: Tue Jan 02 04:46:01 2018

[isolate] Allow checker to make concurrent requests to Contains

Bug: 794425
Change-Id: I9ff2bf70d9254749611f8ad3c0674fd452536416
Reviewed-on: https://chromium-review.googlesource.com/831004
Commit-Queue: Michael McGreevy <mcgreevy@chromium.org>
Reviewed-by: Marc-Antoine Ruel <maruel@chromium.org>

[modify] https://crrev.com/da9de9194a67aa4c68a867a10d61648b235c8e11/client/cmd/isolate/archive.go
[modify] https://crrev.com/da9de9194a67aa4c68a867a10d61648b235c8e11/client/cmd/isolate/batch_archive.go
[modify] https://crrev.com/da9de9194a67aa4c68a867a10d61648b235c8e11/client/cmd/isolate/checker.go
[modify] https://crrev.com/da9de9194a67aa4c68a867a10d61648b235c8e11/client/cmd/isolate/checker_test.go
[modify] https://crrev.com/da9de9194a67aa4c68a867a10d61648b235c8e11/client/cmd/isolate/uploader.go
Comment 1 by mcgreevy@chromium.org, Dec 13 2017