Re-enable rasterize_and_record_micro telemetry benchmark on all platforms
Issue description
% tools/perf/run_benchmark try android-nexus5 rasterize_and_record_micro.top_25
['AddCommandLineArgs', 'Benchmark', 'BenchmarkMetadata', 'Disabled', 'Enabled', 'InvalidOptionsError', 'Owner', 'ProcessCommandLineArgs', '__builtins__', '__doc__', '__file__', '__name__', '__package__', 'command_line', 'decorators', 'legacy_page_test', 'optparse', 'story_runner', 'timeline_based_measurement']
usage: Run telemetry benchmarks on trybot. You can add all the benchmark options available except the --browser option
[-h] [--repo_path <repo path>] [--deps_revision <deps revision>]
<trybot name> <benchmark name>
Run telemetry benchmarks on trybot. You can add all the benchmark options available except the --browser option: error: Benchmark rasterize_and_record_micro.top_25 is disabled on win, mac, android, and trybot's platform is android. To run the benchmark on trybot anyway, add --also-run-disabled-tests option.
The same result appears on chromeperf.appspot.com when you try to look at results.
This also affects "rasterize_and_record_micro.key_mobile_sites", which should be running on Android. What's going on here? This test suite is pretty critical to keeping us from regressing the graphics stack.
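Per the error message above, the trybot run can still be forced despite the platform `Disabled` annotation by adding the flag `run_benchmark` itself suggests. A sketch of the re-run, using the same trybot and benchmark names from the original report:

```shell
# Force the disabled benchmark to run on the android-nexus5 trybot anyway,
# as suggested by run_benchmark's own error message.
tools/perf/run_benchmark try android-nexus5 \
    rasterize_and_record_micro.top_25 --also-run-disabled-tests
```

This only overrides the trybot-side disable check; it does not change why the benchmark is disabled on win/mac/android in the first place.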
Mar 23 2017
Issue 610018 has been merged into this issue.
Brief notes from meeting discussion:
- Speed team is considering best next steps and will follow up with thoughts. Some discussion around potentially separating (1) support for the benchmark in its current form on Cluster Telemetry from (2) support for the perf waterfall.
- Consider whether we need a new type of officially Speed-team-supported harness that allows setting up a real-world web page, running focused paint and GPU benchmarks, and collecting results. Rasterize-and-record is somewhat unique among the existing set of tests in that it blends microbenchmark infrastructure with real-world page set input.
- This benchmark was once run with flags to enable it and tracing data to produce output, but that led to dispersed code with comments explaining which portions were part of the benchmark. So an explicit change was made a couple of years ago to move it into its own cc GPU benchmark calling into an opaque testing class. The important difference is that the data about which tests to run is now provided by the GPU benchmarking extension, and results are sent back via the existing GPU benchmarking JS results API.
- vmpstr@/wkorman@ can help with implementation once we decide how we'd like to evolve the benchmark implementation. In the meantime I plan to keep exploring what the exact failures/issues are on non-Linux platforms and will add notes shortly.
Apr 5 2017
Notes from talk with Speed team today:
- Q3 goal for the GPU team: update the test to use static input pages similar to the blink_perf tests. One key reason is to remove the dependency on web page replay input, plus JavaScript injection, as sources of potential maintenance headaches.
- Keep the existing test auto-running on the Linux perf benchmarks for now. The group did not see immediate value in spending time to get it up again on other platforms ahead of the work described above.
- The test measures software raster only, so it is less relevant for Android and most relevant for Linux (where we do the least GPU raster today).
Closing this bug per the current plan of action. Vlad will open one to track the work to move to blink_perf-like input, and expects to share notes/a one-pager of the plan detail soon.
Comment 1 by jbroman@chromium.org, Mar 15 2017