Benchmarks vs. # Cores report for Win and Mac
Issue description

I'd like to see whether any particular core count is meaningfully better at detecting regressions than any other on Mac and Windows. Could we get the same coverage by supporting only 4-core machines as we get today from a diversity of core counts?

If core diversity isn't buying us coverage, I'd like to propose that we choose one machine type to go with and replace anything non-standard with something we can get in bulk, standardizing the hardware (not the GPU). The idea is that the lab won't catch everything, but it will do the majority of the work in catching regressions; for all else, there's UMA.

If we can do that, we spend less money on infra, we can have more deterministic playbooks and care-and-feeding docs for bringing new devices into the lab when we need more machines, and we can have real conversations about scaling.
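One way to make the coverage question concrete, assuming we can export historical regression alerts tagged with the core count of the machine that fired them: group alerts by core count and look for regressions that only non-4-core machines caught. A minimal sketch; the `alerts.json` data source and its field names are hypothetical, purely for illustration:

```python
# Hypothetical coverage check: would a 4-core-only fleet have caught the
# same regressions the current mixed fleet did? Assumes "alerts.json" is
# an export of past regression alerts, each tagged with the core count of
# the machine that detected it. Field names are made up for illustration.
import json
from collections import defaultdict


def coverage_by_core_count(alerts):
    """Map each core count to the set of regression IDs it detected."""
    detected = defaultdict(set)
    for alert in alerts:
        detected[alert["core_count"]].add(alert["regression_id"])
    return detected


def missed_by_baseline(detected, baseline_cores=4):
    """Regressions detected only on machines with a non-baseline core count.

    If this set is (near) empty, a baseline-only fleet would have given us
    essentially the same coverage.
    """
    baseline = detected.get(baseline_cores, set())
    others = set()
    for cores, ids in detected.items():
        if cores != baseline_cores:
            others |= ids
    return others - baseline


if __name__ == "__main__":
    with open("alerts.json") as f:
        alerts = json.load(f)
    detected = coverage_by_core_count(alerts)
    missed = missed_by_baseline(detected)
    print(f"Regressions a 4-core-only fleet would have missed: {len(missed)}")
```

The caveat with any such analysis is that it only measures what the mixed fleet happened to catch, not what either fleet could catch.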
Comment 1 by jsc...@chromium.org, Mar 17 2017
It seems like a worthwhile idea. I'm not sure how we would measure the sensitivity of particular configurations to arbitrary regressions. Off the top of my head, I would expect that for I/O issues the number of cores doesn't matter, but having a spinning disk and a modest amount of memory does. For CPU issues I would expect the number of cores to sometimes matter, so having more cores than the consumer mode (two, I believe?) could hide regressions. On the other hand, as long as we are also monitoring total CPU usage and power usage, we should still detect regressions that increase CPU usage.
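A toy sketch of the "also monitor total CPU usage" point (this is not how the perf dashboard measures anything; it just illustrates the metric pair): track wall time and total process CPU time together, since extra parallel work added by a regression can leave wall time flat on a many-core machine while still inflating CPU time.

```python
# Toy illustration: record wall time and total CPU time together. A
# regression that adds parallelizable work to a multi-threaded program
# (like the browser) can leave wall time flat on a many-core machine
# while total CPU time still rises, so tracking both catches it.
import time


def measure(workload):
    """Return (wall_seconds, cpu_seconds) for a callable workload.

    time.process_time() sums the user and system CPU time across all
    threads of the current process.
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    workload()
    return (time.perf_counter() - wall_start,
            time.process_time() - cpu_start)


if __name__ == "__main__":
    def workload():
        sum(i * i for i in range(10**6))

    wall, cpu = measure(workload)
    print(f"wall: {wall:.3f}s, cpu: {cpu:.3f}s")
```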
Comment 2, Mar 17 2017
I'm not sure how applicable this is to the Mac, since I don't think there are many core-count options. And when there are, as with the 2-core and 4-core MacBook Pros, other hardware options become the deciding factors. For example, nowadays all MBPs are Retina machines, but if that weren't the case we would need a Retina MBP regardless of the number of cores.