Run (some?) tests on bots on chromium.chrome waterfall?
Issue description

We have a bunch of bots doing official, branded builds on the main waterfall; they are on the chromium.chrome sub-waterfall. These bots currently only build but don't run tests, and we don't have any bots running tests in official mode on the main waterfall.

So if a test breaks in official mode only (which happens maybe once every 1-2 weeks), we notice either when the internal QA folks see that the internal waterfalls are red, or when the clang ToT waterfall watchers notice (that waterfall also has a few bots doing official builds and then running tests). If we made the bots on chromium.chrome run at least some tests, then folks breaking them would notice much faster and wouldn't have to scramble as much to fix things.

(dpranke, please WontFix / cc people as you see fit. I think not running tests made sense at some point in the past, but it probably makes less sense nowadays.)
Aug 20
There are a couple of related issues here.

First, I actually want sheriffs to be monitoring the internal desktop (official) builds, though they don't do so today and we don't yet have an actual plan for changing that. I think that if we were doing that, we wouldn't need the chromium.chrome waterfall anymore; we'd just use official.desktop.continuous.

Second, I agree that we should be running tests on some config that *is* being monitored by sheriffs, to catch as many of the official build breakages as possible.

A complication to both of these points is that the results of the chromium.chrome builds (i.e., which builds are green) are currently used by the release scripts to pick revisions for the branches. We intentionally (at least at this point) do *not* want to gate the branching criteria on whether all of the tests passed, because often the test failures are not severe enough to delay the branch. So if we treated chromium.chrome like any other builder and just started adding tests, that might have side effects we didn't want.

I *think* that if we did a builder/tester split for these configs, then things would probably be better. Alternatively, we could look at modifying the release scripts to know about test failures, or we could decide that we really *did* want the branches to avoid revisions that had failing tests.

+mmoss, jbudorick because we've discussed this off and on as part of the LUCI migration.
Aug 21
Running the tests on a separate tester makes sense to me.
Jan 11
This issue has an owner, a component, and a priority, but is still listed as untriaged or unconfirmed. By definition, this bug is triaged. Changing status to "assigned". Please reach out to me if you disagree with how I've done this.
Comment 1 by thakis@chromium.org, Aug 18