would be neat if bisect-builds.py had a way to run telemetry benchmarks instead of just starting chrome
Issue description: ...would make my life a bit easier. Using the bisect bots appears not to work if the benchmark wasn't present throughout the entire range. So currently I'm using git bisect to build Chrome locally, then checking out origin/master and running the benchmark. Takes kinda long...
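For concreteness, that manual loop could roughly be automated with `git bisect run`; the following is an untested sketch, where the build directory, benchmark name, and the pass/fail rule are assumptions, not part of any Chromium tool:

    #!/usr/bin/env python
    # Sketch of automating the loop above, usable as
    # `git bisect run ./bisect_step.py` from a Chromium checkout.
    import subprocess
    import sys

    OUT_DIR = 'out/Release'    # assumed local build directory
    BENCHMARK = 'v8.todomvc'   # benchmark mentioned in this bug

    def run(cmd):
        return subprocess.call(cmd, shell=True)

    # Build Chrome at the revision git bisect checked out. Exit code 125
    # makes `git bisect run` skip revisions that fail to build.
    if run('ninja -C %s chrome' % OUT_DIR) != 0:
        sys.exit(125)

    # Pull the ToT benchmark definition into the working tree, since the
    # benchmark doesn't exist at older revisions.
    run('git checkout origin/master -- tools/perf')

    # Run the ToT benchmark against the freshly built (old) binary.
    status = run('tools/perf/run_benchmark %s --browser=exact '
                 '--browser-executable=%s/chrome' % (BENCHMARK, OUT_DIR))

    # Restore tools/perf so the next bisect checkout starts clean.
    run('git checkout HEAD -- tools/perf')

    # 0 = good, 1 = bad for `git bisect run`. Turning the benchmark's
    # numbers into a verdict is left out; a failing run counts as bad here.
    sys.exit(0 if status == 0 else 1)

Invoked as `git bisect run ./bisect_step.py` after marking the good and bad revisions.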
Mar 17 2016
I was filing this bug while trying to figure out why v8.todomvc/v8_execution_time_total/AppLoad/Polymer is so much slower than .../Polymer_ref. The reference build is from M48, but the benchmark was only added two weeks ago, so I bisected from the M48 branch point to ToT.
Mar 17 2016
So it sounds like the ability to bisect Chrome on the bisect bots with ToT telemetry tests would have helped in this case?
Mar 17 2016
Yeah. At a high level, being able to tell why the reference build produces different results than ToT is nice. I guess that only really matters when you add a new benchmark... once you've got that under control, you can just look at historical data from the bots...
Mar 17 2016
This may not work effectively given that most of our benchmarks these days only support tracing-based metrics. In order to run the benchmarks, the Chrome binary has to be able to produce certain types of trace events. If those trace events are probes that were only added after revision XYZ, then ToT benchmarks will not work against Chrome binaries older than XYZ.
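One way a bisect script could cope with that, as a hedged sketch: check the collected trace for the probe the metric needs, and report the revision as untestable if it's missing. The trace path and event name below are illustrative assumptions:

    # Before trusting a tracing-based result, verify the probe the metric
    # needs actually appears in the trace this binary produced.
    import json
    import sys

    TRACE_FILE = 'trace.json'        # assumed trace output location
    REQUIRED_EVENT = 'V8.Execute'    # assumed probe added at revision XYZ

    with open(TRACE_FILE) as f:
        trace = json.load(f)

    # Chrome traces are either a bare event list or {"traceEvents": [...]}.
    events = trace['traceEvents'] if isinstance(trace, dict) else trace

    if not any(e.get('name') == REQUIRED_EVENT for e in events):
        # The binary predates the probe, so the metric is meaningless here;
        # exit 125 tells `git bisect run` to skip this revision.
        sys.exit(125)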
Mar 17 2016
I guess as long as the reference build supports the benchmark, it's safe to assume you can bisect back at least to the reference build's revision. Another thing we had in V8's internal benchmark runner: when you added a new benchmark, the system would run it on older revisions, going back in time as far as resources allowed (and as long as the benchmark worked). I guess with the perf infra, which gets hammered on 24/7, there aren't many resources left, but it would be neat to have graphs that go back to the reference build for newly added benchmarks :)
Mar 17 2016
BTW, just to clarify: this feature request is mainly for tools/bisect-builds.py. The bisect bots also only run benchmarks that are on the waterfall, while I often need to bisect regressions that can easily be translated into a telemetry benchmark but probably don't hit the bar for being on the waterfall. A hypothetical sketch of what this could look like follows.
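Purely as an illustration of the request, not anything that exists in bisect-builds.py today (the function and its wiring are invented; run_benchmark and its flags are telemetry's existing entry point):

    # Hypothetical hook for bisect-builds.py: after downloading and
    # unpacking a build, run a telemetry benchmark against it instead of
    # just starting chrome.
    import subprocess

    def RunTelemetryBenchmark(chrome_path, benchmark):
        """Runs a ToT telemetry benchmark against a downloaded binary and
        returns its exit status, which could drive the good/bad prompt."""
        return subprocess.call([
            'tools/perf/run_benchmark', benchmark,
            '--browser=exact',
            '--browser-executable=%s' % chrome_path,
        ])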
Feb 6 2017
This is an old-ish auto-bisect bug. It might be fixed with Pinpoint, or it's not going to get fixed. If you disagree with me, please reopen the bug and mark it "Untriaged".
Comment 1 by sullivan@chromium.org, Mar 17 2016. Components: Tests>AutoBisect