RuntimeStatsMetric computeInteractiveTime
Issue description
When I enable runtimeStatsMetric on the browse page sets from system health, it fails in computeInteractiveTime.
What steps will reproduce the problem?
(1) Add a new benchmark in tools/perf/benchmarks/v8.py:
@benchmark.Disabled('android', 'win', 'reference')
class V8BrowseRuntimeStats(_Top25RuntimeStats):

  @classmethod
  def Name(cls):
    return 'v8.runtime_stats.browse'

  def CreateStorySet(self, options):
    return page_sets.SystemHealthStorySet(platform='desktop', case='browse')
(2) Run the benchmark:
./tools/perf/run_benchmark run v8.runtime_stats.browse
What is the expected result?
The benchmark should run without generating errors.
What happens instead?
I get the following error:
MapFunctionError: getOnlyElement was passed an iterable with multiple elements.
    at Object.getOnlyElement (/tracing/base/iteration_helpers.html:44:13)
    at Object.tr.b.iterItems (/tracing/metrics/v8/runtime_stats_metric.html:53:30)
    at Object.iterItems (/tracing/base/iteration_helpers.html:173:10)
    at computeInteractiveTime_ (/tracing/metrics/v8/runtime_stats_metric.html:43:10)
    at new runtimeStatsMetric (/tracing/metrics/v8/runtime_stats_metric.html:119:27)
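
From the trace, computeInteractiveTime_ appears to assume exactly one interactive timestamp per renderer. A minimal sketch of the check that throws, assuming it matches getOnlyElement in tracing/base/iteration_helpers.html (the sample values are made up):

function getOnlyElement(iterable) {
  const iterator = iterable[Symbol.iterator]();
  const first = iterator.next();
  if (first.done)
    throw new Error('getOnlyElement was passed an empty iterable.');
  if (!iterator.next().done) {
    throw new Error(
        'getOnlyElement was passed an iterable with multiple elements.');
  }
  return first.value;
}

// A browse story navigates more than once, so a renderer can report
// several interactive timestamps; the second element triggers the throw.
getOnlyElement([1200.0, 3400.0]);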
Comment 1 by fmea...@chromium.org, Jan 20 2017
It doesn't make sense to create a histogram per navigation. Telemetry benchmarks are designed so that each test outputs an expected number of histograms, and that number should be fixed. I think we should squash all the data from the same metric in a single test into a single histogram. From my point of view, the most important thing is to track the overall performance of V8 runtime call stats across a whole user workflow. Details like which navigation produced which data can be recovered in detailed debugging by looking at the trace or using diagnostic debugging data.
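
A rough sketch of the squashing I have in mind, where navigations and runtimeStats are hypothetical stand-ins for whatever the metric actually iterates:

// Collect every navigation's samples under one key per runtime call
// stats counter, instead of emitting one histogram per navigation.
const samplesByCounter = new Map();  // e.g. 'IC' -> [durations in ms]
for (const navigation of navigations) {
  for (const [counter, duration] of navigation.runtimeStats) {
    if (!samplesByCounter.has(counter))
      samplesByCounter.set(counter, []);
    samplesByCounter.get(counter).push(duration);
  }
}
// The number of outputs now depends only on the set of counters, not on
// how many navigations the story performed.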
Jan 20 2017
I guess moving forward, we need to define the start and end time of each navigation, and then accumulate the runtime stats coming from each navigation. Ned, are there any prior examples in other metrics we can follow?
Jan 23 2017
I am not sure what you mean by "accumulate the runtime stats coming from each navigation." To me it doesn't make sense to have a benchmark that outputs metrics like "IC:facebook.com" or "Callback:youtube.com", so I am not sure how you would do that or why it's necessary. An example metric that accumulates data from multiple navigations per story is loadingMetric: https://github.com/catapult-project/catapult/blob/master/tracing/tracing/metrics/system_health/loading_metric.html#L408
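
The pattern there looks roughly like this (a sketch assuming the tr.v.Histogram API; the histogram name and the interactiveSamples variable are illustrative):

// One histogram for the whole story; each navigation contributes one
// sample, so the number of output histograms stays fixed.
const tti = new tr.v.Histogram(
    'timeToInteractive',
    tr.b.Unit.byName.timeDurationInMs_smallerIsBetter);
for (const sample of interactiveSamples)  // one entry per navigation
  tti.addSample(sample);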
Jan 27 2017
Mythrie, I believe this bug has been superseded by the bucketing on UE. If that is the case, please close this bug.
Jan 27 2017
Yes, we no longer need this feature. The startup and total RuntimeStats will be integrated into a single metric by bucketing the runtime stats based on UE (user expectations). That work is tracked in this bug: https://bugs.chromium.org/p/chromium/issues/detail?id=686250