The current naming of each test run on the chromium.perf waterfall is "benchmark_name on <Gpu dimensions> on <platform> on <task_os>"; we want it to be just "benchmark_name". This is important for multiple reasons:
1) Linking the perf dashboard to the waterfall
2) Sheriff-o-Matic (SoM) integration
3) Making the state of the waterfall easier to consume at a glance while sheriffing
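To make the desired renaming concrete, here is a minimal hypothetical sketch in Python, assuming the step names really do use " on " as the separator described above; the helper name and the example values are mine, not what the CL actually does:

def short_step_name(full_step_name):
  # Step names currently look like:
  #   "<benchmark_name> on <Gpu dimensions> on <platform> on <task_os>"
  # Keep only the leading benchmark_name.
  return full_step_name.split(' on ')[0]

# e.g. short_step_name('smoothness.top_25_smooth on Intel GPU on Win on Windows-10')
# would return 'smoothness.top_25_smooth'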
martiniss@ authored a doc on this:
https://docs.google.com/document/d/1lk8Ia0yQyiFTw51TDIxG4cvyhmBIuqE5ERfDr03K4v4/edit
In the future we would also like to move perf to a model where there is not a one-to-one correspondence between build steps and benchmarks, but I think that is a little further out.
I think there is a short-term (although not very elegant) solution for perf. I threw together a brief CL to illustrate it: https://chromium-review.googlesource.com/c/427019/
martiniss@ thinks we could do it slightly differently and will hopefully take ownership of this task.
Comment 1 by eyaich@chromium.org, Jan 11 2017