Device availability affects results of some tests
Issue description:
Checking the buildbot status page shows that the result changes are connected to changes in the availability of the devices the tests run on. When devices are missing, the reported values differ from when all devices are available. This might have to do with the number of tests and the test order per device.
Jun 28 2017

You're right that the number of devices most likely affects the results, with variations in the scores, but running on one device also comes at a price when that device goes offline: these are consumer devices, and they always break sooner or later, especially if they run tests non-stop. One device would also take a lot longer to run all the perf tests, so blame lists will be longer for regressions, which is tough since we don't have auto-bisection.

It's a tough call to make: have one device and push it hard, with longer blame lists on regressions, or have several identical devices for better reliability, faster test execution, and more time to cool down in between. I think the latter is better, but I'm open to discussing other strategies. Either way, I don't have any actionable item in this bug, but feel free to start a thread at g/webrtc-perf-sheriffs to discuss this further.
Comment 1 by srte@chromium.org, Jun 27 2017