Since Chrome OS is the source of our AFDO profiles, we'll be switching to field-based profiles along with them in Q3.
Field-based profiles are most effective when applied to release branches, since our most accurate profiles for Chrome version $N will come when Chrome version $N is on the beta/stable channel.
...So, ideally, we'd keep profiles on release branches up-to-date, rather than freezing them when the branch happens.
Before doing this, I need to verify:
- that we do thorough perf testing on release branches
(if not, or not continuously, that kicking off pinpoint bots on release branches is easy to automate)
- that profiles sourced from beta/stable Chromes are mostly consistent
(both to keep perf noise down, and to keep incremental updates across revisions small; rough sketch of a check below)
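For the second point, one cheap sanity check would be to compare the hot-function sets of two consecutive profiles and see how much they overlap. The sketch below is just an illustration, not anything that exists today: the dump format (one "symbol total_samples" line per function), the 90% coverage cutoff, and the file names are all placeholders.

    # Rough sketch: quantify how consistent two consecutive AFDO profiles are
    # by comparing their hot-function sets. Assumes each profile has been
    # dumped to a text file of "symbol_name total_samples" lines; that format,
    # the 0.9 coverage cutoff, and the file names are placeholders.
    from typing import Dict, Set


    def load_samples(path: str) -> Dict[str, int]:
        """Parse a 'symbol total_samples' dump into {symbol: samples}."""
        samples = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2 and parts[1].isdigit():
                    samples[parts[0]] = int(parts[1])
        return samples


    def hot_set(samples: Dict[str, int], coverage: float = 0.9) -> Set[str]:
        """Smallest set of symbols covering `coverage` of all samples."""
        total = sum(samples.values())
        hot, covered = set(), 0
        for sym, count in sorted(samples.items(), key=lambda kv: -kv[1]):
            hot.add(sym)
            covered += count
            if covered >= coverage * total:
                break
        return hot


    def overlap(old_path: str, new_path: str) -> float:
        """Jaccard overlap of the two hot-function sets (1.0 == identical)."""
        old_hot = hot_set(load_samples(old_path))
        new_hot = hot_set(load_samples(new_path))
        if not old_hot and not new_hot:
            return 1.0
        return len(old_hot & new_hot) / len(old_hot | new_hot)


    if __name__ == "__main__":
        # Placeholder file names for two consecutive daily profile dumps.
        print(overlap("chrome-beta-day1.txt", "chrome-beta-day2.txt"))

If the overlap stays high from day to day, both the perf noise and the size of incremental profile updates should stay manageable.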
For the first point, it's probably best to have presubmit pinpoint runs anyway, since "your roll broke the release build's perf" isn't an email I'd like to start my day with. :) The question is more "do I need to test all of the things, or is a smoke test sufficient?" It looks like CrOS rolls new profiles ~daily (for benchmark-based ones, at least), so it's likely best to keep perf testing time to O(a few hours), or to only pick the "best" of the profiles from the last $N rolls, etc.
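As an illustration of the "best of the last $N rolls" idea (not an implementation; the profile names and benchmark scores below are made up, and in practice the scores would come from pinpoint/perf-dashboard results), the selection itself could be as simple as taking the profile whose scores have the best geometric mean:

    # Rough sketch of "only pick the best of the profiles from the last $N
    # rolls": given per-profile benchmark scores (higher == better), keep the
    # profile with the best geometric mean. All data here is invented.
    import math
    from typing import Dict, List


    def geomean(scores: List[float]) -> float:
        return math.exp(sum(math.log(s) for s in scores) / len(scores))


    def pick_best(results: Dict[str, List[float]]) -> str:
        """Return the profile whose benchmark scores have the best geomean."""
        return max(results, key=lambda profile: geomean(results[profile]))


    if __name__ == "__main__":
        last_n_rolls = {
            "afdo-roll-1": [101.2, 98.7, 103.4],
            "afdo-roll-2": [102.0, 99.1, 102.8],
            "afdo-roll-3": [100.4, 97.9, 101.6],
        }
        print(pick_best(last_n_rolls))

Picking from the last $N rolls trades a bit of profile freshness for fewer, better-vetted updates, which also keeps the presubmit perf-testing budget bounded.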