
Issue 595618


Issue metadata

Status: WontFix
Owner: ----
Closed: Feb 2017
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug




would be neat if bisect-builds.py had a way to run telemetry benchmarks instead of just starting chrome

Project Member Reported by jochen@chromium.org, Mar 17 2016

Issue description

... would make my life a bit easier.

Using the bisect bots appears not to work if the benchmark wasn't present throughout the entire range. So currently, I'm using git bisect to build Chrome locally, then checking out origin/master and running the benchmark. Takes kinda long...
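The manual workflow above could in principle be automated with `git bisect run`. Here is a hypothetical sketch of what a step script might look like: build Chrome at the revision git checked out, run a telemetry benchmark from a ToT harness against that binary, and translate the measured metric into the good/bad exit code git expects. The paths, benchmark name, results file, and threshold are all illustrative assumptions, not real tooling:

```python
# Hypothetical `git bisect run` step for the workflow described above.
# Everything environment-specific here (paths, benchmark name, the
# results file, the threshold) is an assumption for illustration.

import subprocess

THRESHOLD_MS = 1500.0  # assumed cutoff separating "good" from "regressed"


def classify(metric_ms, threshold_ms=THRESHOLD_MS):
    """Map a measurement to a `git bisect run` exit code:
    0 = good (at or under threshold), 1 = bad (regressed)."""
    return 0 if metric_ms <= threshold_ms else 1


def bisect_step():
    """One bisect iteration: build, benchmark, classify."""
    # Build Chrome at whatever revision `git bisect` checked out.
    subprocess.check_call(["autoninja", "-C", "out/Release", "chrome"])
    # Run the ToT benchmark against the just-built binary; assume it
    # writes the metric of interest (in ms) to results/metric.txt.
    subprocess.check_call([
        "tools/perf/run_benchmark", "v8.todomvc",
        "--browser-executable=out/Release/chrome",
    ])
    with open("results/metric.txt") as f:
        return classify(float(f.read()))


# The classification logic on its own:
print(classify(1200.0))  # under threshold -> 0 (good)
print(classify(1800.0))  # over threshold  -> 1 (bad)
```

A real script would end with `sys.exit(bisect_step())` and be driven by `git bisect run python3 bisect_step.py`, which interprets exit code 0 as "good" and 1-127 (except 125) as "bad".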
 
Cc: dtu@chromium.org
Components: Tests>AutoBisect
Alternatively, we had been thinking about making it possible to run bisect against ToT telemetry benchmarks. That would solve the general issue of not being able to bisect if the benchmark wasn't present throughout the range.

But I saw another bug in which you pinpointed the regression to a change in the benchmark. So running against ToT benchmarks would have given confusing results in that case.

Is the reason you can't just narrow the range to where the benchmark was present that you can't start the range inside a catapult roll?

Comment 2 by jochen@chromium.org, Mar 17 2016

I was filing this bug while trying to figure out why v8.todomvc/v8_execution_time_total/AppLoad/Polymer is so much slower than .../Polymer_ref

The reference build is from M48, but the benchmark was only added two weeks ago, so I bisected from the M48 branch point to ToT.

So it sounds like the ability to bisect Chrome on the bisect bots with ToT telemetry tests would have helped in this case?

Comment 4 by jochen@chromium.org, Mar 17 2016

yeah.

on a high level, being able to tell why the reference build produces different results than ToT is nice.

I guess that only really matters when you add a new benchmark... once you've got that under control, you can just look at historical data from the bots...

This may not work effectively given that most of our benchmarks these days only support tracing-based metrics. To run the benchmarks, the Chrome binary has to be able to produce certain types of trace events. If those trace events are probes that were only added after revision XYZ, then ToT benchmarks will not work against Chrome builds older than XYZ.

Comment 6 by jochen@chromium.org, Mar 17 2016

I guess as long as the reference build supports the benchmark, it's safe to assume that you can bisect back until the reference build's rev at least.

Another thing we had in V8's internal benchmark runner was that when you added a new benchmark, the system would run the benchmark on older revisions going back in time as much as resources allowed (and the benchmark worked).
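The backfill scheme described above could be sketched like this. The function names, the budget model, and the failure signal are all hypothetical illustrations, not V8's actual internal runner:

```python
# Hypothetical sketch of the backfill idea: when a new benchmark lands,
# run it on progressively older revisions, newest first, until either
# the resource budget is spent or the benchmark stops working at that
# revision. All names here are illustrative assumptions.

def backfill(older_revs, budget, run_benchmark):
    """older_revs: revisions older than the benchmark's landing point,
    newest first. budget: how many extra runs the perf infra can spare.
    run_benchmark: callable measuring one revision, raising RuntimeError
    once the benchmark no longer works (e.g. the trace events it needs
    don't exist yet)."""
    results = {}
    for rev in older_revs[:budget]:
        try:
            results[rev] = run_benchmark(rev)
        except RuntimeError:  # walked back past where the benchmark works
            break
    return results


# Demo with a fake runner that stops working before revision 97:
def fake_run(rev):
    if rev < 97:
        raise RuntimeError("trace events missing")
    return 100.0 + rev

print(sorted(backfill([99, 98, 97, 96, 95], 4, fake_run)))  # [97, 98, 99]
```

Stopping at the first failure matches the "as much as resources allowed (and the benchmark worked)" behavior: once a revision predates the probes the benchmark needs, everything older will fail too.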

I guess with perf infra which gets hammered on 24/7 there's not much resources left, but it would be neat to have graphs that go back to the reference build for newly added benchmarks :)

Comment 7 by jochen@chromium.org, Mar 17 2016

btw, just to clarify: this feature request is mainly for tools/bisect-builds.py - bisect bots also only run benchmarks that are on the waterfall, while I often need to bisect regressions that can easily be translated to a telemetry benchmark but probably don't hit the bar for being on the waterfall.
Status: Available (was: Untriaged)
Labels: triaged
Components: Speed>Bisection
Status: WontFix (was: Available)
This is an old-ish auto-bisect bug. It might be fixed by Pinpoint, or it's not going to get fixed. If you disagree with me, please reopen the bug and mark it "Untriaged".
