WebRTC apm_timing perf tests are randomly changing
Issue description

Several of the apm_timing tests, in particular the ones running on mobile platforms, are showing changes that have no correlation with code changes. The changes can be both positive and negative. This seems to be a general problem with these tests, or with how they are run on the bots: something other than the code under test changes from time to time. How can we make these tests more robust?
Mar 31 2017
Issue 705471 has been merged into this issue.
Apr 4 2017

Apr 18 2017
I suspect this is caused either by slightly different device characteristics (since these bots have more than one device connected) and/or by temperature changes on the devices due to high load. I wanted to look at the logs to see how many devices were used, but issue 712551 prevented me from doing so. The graphs have now recovered to their previous behavior, so at least the problem was temporary. In general we might get slightly more stable metrics if we limit ourselves to one device per bot, but that in turn means higher load on that device, queued builds, and/or longer blamelists when the device goes down. I think the current setup is better.
Comment 1 by hlundin@chromium.org, Mar 31 2017