crosperf: could not show results with 2 or more runs
Issue description
benchmark: speedometer {
suite:telemetry_Crosperf
iterations: 2
}
OUTPUT: DutWorker[dut="172.17.211.78", label="test_image_1"] finished.
Traceback (most recent call last):
File "./crosperf.py", line 128, in <module>
Main(sys.argv)
File "./crosperf.py", line 124, in Main
runner.Run()
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/crosperf/experiment_runner.py", line 266, in Run
self._PrintTable(self._experiment)
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/crosperf/experiment_runner.py", line 196, in _PrintTable
self.l.LogOutput(TextResultsReport(experiment).GetReport())
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/crosperf/results_report.py", line 303, in GetReport
summary_table = self.GetSummaryTables()
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/crosperf/results_report.py", line 125, in GetSummaryTables
'summary')
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/crosperf/results_report.py", line 178, in _GetTables
cell_table = tf.GetCellTable(table_type)
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/cros_utils/tabulator.py", line 956, in GetCellTable
self.GenerateCellTable(table_type)
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/cros_utils/tabulator.py", line 819, in GenerateCellTable
column.result.Compute(cell, values, baseline)
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/cros_utils/tabulator.py", line 269, in Compute
if _AllFloat(values):
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/cros_utils/tabulator.py", line 76, in _AllFloat
return all([misc.IsFloat(v) for v in values])
File "/usr/local/google/home/yunlian/clean/src/third_party/toolchain-utils/cros_utils/misc.py", line 279, in IsFloat
float(text)
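
The traceback bottoms out in misc.IsFloat. A minimal sketch of why it dies there (my own reconstruction, assuming IsFloat wraps float() in a try/except that only catches ValueError; the real implementation may differ):

def IsFloat(text):
  # Assumed shape of cros_utils/misc.py IsFloat: parse succeeds -> True,
  # ValueError (unparseable string) -> False.
  try:
    float(text)
    return True
  except ValueError:
    return False

print(IsFloat('18341.51'))  # True: a single scalar parses fine.

# With iterations >= 2 the per-benchmark result is a list, and float() on a
# list raises TypeError, which the except clause above does not catch, so
# the exception propagates up through _AllFloat as in the traceback.
try:
  IsFloat([18341.51, 17180.99])
except TypeError as e:
  print('TypeError escapes IsFloat: %s' % e)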
Comment, Aug 22 2016
Each run of speedometer executes the benchmark 10 times internally, so the result is stored as a list of values rather than a single value. I am not sure whether this is specific to speedometer or whether it also happens with other benchmarks.
Comment, Aug 22 2016
In the speedometer result, the "Total" metric looks like the excerpt below. Could we check the type of the result and, if it is a list, take the average instead? (A sketch of that idea follows the excerpt.)
"Total": {
"summary": {
"std": 795.32463138994694,
"name": "Total",
"important": true,
"values": [
18341.509999999998,
17180.990000000002,
18998.57,
18472.639999999999,
18262.095000000001,
17778.895,
18828.264999999999,
19390.48,
17189.509999999998,
19301.064999999999
],
"units": "ms",
"type": "list_of_scalar_values"
},
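
One way to realize that idea, as a rough sketch (FlattenResult is a hypothetical helper name for illustration, not the actual fix that landed):

def FlattenResult(value):
  """Collapse a list_of_scalar_values entry to one scalar via its mean."""
  if isinstance(value, list):
    return sum(float(v) for v in value) / len(value)
  return value

# Values from the "Total" excerpt above; FlattenResult yields one number
# per iteration, which the tabulator's float checks can then handle.
total = [18341.51, 17180.99, 18998.57, 18472.64, 18262.095,
         17778.895, 18828.265, 19390.48, 17189.51, 19301.065]
print('%.2f ms' % FlattenResult(total))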
Comment, Aug 24 2016
The following revision refers to this bug:
https://chrome-internal.googlesource.com/chromeos/toolchain-utils/+/1ced193baabbf6d6aa1aa3913bcff714e5ae376d

commit 1ced193baabbf6d6aa1aa3913bcff714e5ae376d
Author: Yunlian Jiang <yunlian@chromium.org>
Date: Mon Aug 22 23:56:24 2016