The steps for adding a new press benchmark are currently quite involved:
1) Add a new Measurement class. Here you define the logic to:
i) Trigger the test run
ii) Wait for the test run to finish
iii) Parse the results from the press benchmark & translate them into the results format that Telemetry understands.
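The three pieces of Measurement logic above might look roughly like the sketch below. The tab and results objects are simplified stand-ins for Telemetry's real interfaces, and the JavaScript snippets (`startTest()`, `window.testDone`, `window.testResults`) are hypothetical; every press benchmark exposes its own start hook and results object.

```python
import json


class PressMeasurement(object):
  """Sketch of a Measurement class for a press benchmark."""

  def ValidateAndMeasurePage(self, page, tab, results):
    # i) Trigger the test run.
    tab.ExecuteJavaScript('startTest();')
    # ii) Wait for the test run to finish.
    tab.WaitForJavaScriptCondition('window.testDone', timeout=600)
    # iii) Parse the benchmark's own results & translate them into
    # measurements that Telemetry understands.
    raw = tab.EvaluateJavaScript('JSON.stringify(window.testResults)')
    for name, value in json.loads(raw).items():
      results.AddMeasurement(name, 'ms', value)


# Fake tab/results so the flow can be exercised without a browser.
class FakeTab(object):
  def ExecuteJavaScript(self, js):
    pass

  def WaitForJavaScriptCondition(self, js, timeout):
    pass

  def EvaluateJavaScript(self, js):
    return json.dumps({'Total': 123.4})


class FakeResults(object):
  def __init__(self):
    self.measurements = {}

  def AddMeasurement(self, name, unit, value):
    self.measurements[name] = (unit, value)


results = FakeResults()
PressMeasurement().ValidateAndMeasurePage(None, FakeTab(), results)
print(results.measurements)
```

Note that even in this stripped-down form, every new benchmark re-implements the same trigger/wait/parse skeleton with only the JavaScript strings changing.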
2) Create a benchmark that uses the Measurement class. In this benchmark you also:
i) Create a PageSet instance that defines all the tests to run in the benchmark.
ii) Create a Page instance for each sub test in the press benchmark & add them to the PageSet instance. Here, you may also need logic for navigating to each sub test (example: https://cs.chromium.org/chromium/src/tools/perf/benchmarks/dromaeo.py?rcl=049ab8b91cb873da45755e6c678d153b7fc3d856&l=122)
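Step 2 tends to look like the sketch below: one Page per sub test, all collected in a PageSet. `Page` and `PageSet` here are simplified stand-ins for Telemetry's real classes, and the sub-test names and query-string navigation scheme are illustrative (Dromaeo-style), not an actual benchmark definition.

```python
class Page(object):
  def __init__(self, url, name):
    self.url = url
    self.name = name


class PageSet(object):
  def __init__(self):
    self.pages = []

  def AddPage(self, page):
    self.pages.append(page)


# Hypothetical sub-test names for illustration.
SUB_TESTS = ['dom-attr', 'dom-modify', 'dom-query']


def CreatePageSet():
  ps = PageSet()
  for test in SUB_TESTS:
    # Navigation to each sub test is encoded in the URL query string,
    # as Dromaeo's benchmark file does.
    ps.AddPage(Page('http://localhost/dromaeo/index.html?%s' % test, test))
  return ps


page_set = CreatePageSet()
print([p.name for p in page_set.pages])
```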
That is too much boilerplate code & needless abstraction to go through, whereas the essential business logic is just four things: creating the test URL, triggering the test run, waiting for the test to finish, and parsing the test results & translating them to Telemetry results.
To simplify all of this, I propose that we create a new press benchmark harness that makes adding a new press benchmark easier, with simpler hooks for people to fill in.
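A rough sketch of what such a harness could look like: a base class owns the common trigger/wait/parse flow, and each new press benchmark fills in only a few small hooks. All class, method, and JavaScript names below are hypothetical, not an existing Telemetry API.

```python
class PressBenchmark(object):
  """Proposed base harness: subclasses override only the hooks below."""

  def StartTestJavaScript(self):
    raise NotImplementedError

  def IsTestDoneJavaScript(self):
    raise NotImplementedError

  def ParseResults(self, raw_results):
    """Translate the page's raw results into {name: value} pairs."""
    raise NotImplementedError

  def Run(self, tab):
    # Common flow shared by every press benchmark.
    tab.ExecuteJavaScript(self.StartTestJavaScript())
    tab.WaitForJavaScriptCondition(self.IsTestDoneJavaScript(), timeout=600)
    raw = tab.EvaluateJavaScript('window.testResults')
    return self.ParseResults(raw)


class ExamplePressBenchmark(PressBenchmark):
  # A new benchmark would only need these few lines.
  def StartTestJavaScript(self):
    return 'startTest();'

  def IsTestDoneJavaScript(self):
    return 'window.testDone'

  def ParseResults(self, raw_results):
    return {'Total': raw_results['total']}


# Fake tab so the hook flow can be exercised without a browser.
class FakeTab(object):
  def ExecuteJavaScript(self, js):
    pass

  def WaitForJavaScriptCondition(self, js, timeout):
    pass

  def EvaluateJavaScript(self, js):
    return {'total': 42.0}


scores = ExamplePressBenchmark().Run(FakeTab())
print(scores)
```

The point of the design is that the PageSet/Page plumbing and the wait loop live in the harness once, instead of being copied into every benchmark file.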
Another requirement is that this press benchmark harness should allow running all the press benchmarks in one step, i.e. "tools/perf/run_benchmarks press_benchmarks" (a custom suite can be run with --tag-filter=...). This is because of our initiative to reduce the number of benchmark steps on the waterfall (issue 713327).
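The tag-based selection behind a single "press_benchmarks" step could work along these lines. The registry contents and tag names below are hypothetical; only the filtering idea matters.

```python
# Each benchmark is registered with a set of tags; a suite step selects
# by tag, so one waterfall step can cover all press benchmarks.
ALL_BENCHMARKS = [
    ('speedometer', {'press', 'blink'}),
    ('octane', {'press', 'v8'}),
    ('smoothness.top_25', {'smoothness'}),
]


def FilterByTags(benchmarks, required_tags):
  """Keep benchmarks carrying every tag in required_tags."""
  return [name for name, tags in benchmarks if required_tags <= tags]


print(FilterByTags(ALL_BENCHMARKS, {'press'}))
```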
Comment 1 by nedngu...@google.com, Apr 21 2017