MBErr: target "chromium_builder_tests" not found in //testing/buildbot/gn_isolate_map.pyl
Issue description: "analyze" is flaky. This issue was created automatically by the chromium-try-flakes app. Please find the right owner to fix the respective test/step and assign this issue to them. If the step/test is not infrastructure-related (e.g. a flaky test), please add the Sheriff-Chromium label and change the issue status to Untriaged. When done, please remove the issue from the Trooper Bug Queue by removing the Infra-Troopers label. We have detected 11 recent flakes. The list of all flakes can be found at https://chromium-try-flakes.appspot.com/all_flake_occurrences?key=ahVzfmNocm9taXVtLXRyeS1mbGFrZXNyEgsSBUZsYWtlIgdhbmFseXplDA. This flaky test/step was previously tracked in issue 645148.
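For readers unfamiliar with the error in the summary: gn_isolate_map.pyl is a Python-literal file that mb.py reads to map test target names to GN labels. The sketch below is a hypothetical reconstruction of that lookup, not the actual mb.py implementation; the file contents and function name are illustrative.

```python
import ast

# Illustrative sketch only: a minimal model of the failure in the issue
# title. gn_isolate_map.pyl is a Python-literal (.pyl) file mapping test
# target names to GN labels; a name missing from the map yields the MBErr.
GN_ISOLATE_MAP_PYL = """
{
  'base_unittests': {
    'label': '//base:base_unittests',
    'type': 'console_test_launcher',
  },
}
"""


class MBErr(Exception):
    """Error class named after the MBErr in the issue title."""


def lookup_target(target, pyl_text=GN_ISOLATE_MAP_PYL):
    # A .pyl file holds a single Python literal, so ast.literal_eval can
    # parse it safely without executing code.
    isolate_map = ast.literal_eval(pyl_text.strip())
    if target not in isolate_map:
        raise MBErr('target "%s" not found in '
                    '//testing/buildbot/gn_isolate_map.pyl' % target)
    return isolate_map[target]
```

Under these illustrative contents, `lookup_target('chromium_builder_tests')` would raise the exact MBErr quoted in the summary, since that target is absent from the map.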
Comment 2, Sep 20 2016
+cc people who may know about mb.py
Comment 3, Sep 20 2016
They were real failures and should be fixed now (I reverted the broken change on Friday and landed a newer version over the weekend). Basically "analyze" is not and should never be flaky :). Do we have some way of marking steps as not supposed to be flaky so that we call this a bug rather than a flake from the beginning?
Comment 4, Sep 20 2016
This is a bug for a flaky step :-). IMHO we should have no flakes in steps at all. Only test flakes should be allowed and even then they should be addressed.
Comment 5, Sep 20 2016
Re-adding sergiyb@, sorry ... This is perhaps a larger debate/question, but I don't consider a step that fails on a trybot because the tree was in fact broken at the same time (and the waterfall builders were failing in the same way) to be "flaky". I've asked separately whether we can detect that situation and not file bugs for it, and this is an example of that as well. As for whether we should have flaky steps at all, sure, I'd agree, but that's a different world than the one we live in. Today we clearly expect some steps to be flaky sometimes, but I also expect some other steps to *never* be flaky, so seeing one of those start to flake should probably be treated more seriously than flakiness in a step that is already known to be flaky.
Comment 6, Sep 20 2016
Reassigning to sergiyb@ to answer the flakiness-pipeline questions. The actual issue related to analyze has been fixed, so feel free to close this after you've answered the questions.
Comment 7, Sep 22 2016
(I've removed myself from CC because I occasionally use stars to track bugs; sorry if that created a wrong impression here - I do care about this discussion.) Also, it appears there has been a misunderstanding. I read comment #3 as suggesting that 'analyze' failures be reported differently from other flakes because they are so important and "should never be flaky". However, the added explanation in #5 has clarified your point. This is indeed a false positive from chromium-try-flakes. We have some mechanisms that should prevent this from happening, but they are clearly not enough. As for your proposal to add a non-flaky-steps list: what if a step on that list later becomes flaky? We would then fail to detect it as such.
Comment 8, Sep 22 2016
Please also see https://bugs.chromium.org/p/chromium/issues/detail?id=583446#c22.
Comment 9, Sep 22 2016
There were two different questions in #3 and #5. In #3, I was indeed asking for a different sort of threshold to be applied to steps that are more or less 100% stable (like analyze or webkit_lint) than to steps that are already known to be flaky (like, say, browser_tests). In #5, I was asking whether we can avoid false positives. I think the thing to do is for me to file separate bugs for both issues, to make things clearer and avoid confusing the discussion with the specifics of this particular bug.
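The per-step threshold idea discussed above could be sketched roughly as follows. The step names, threshold value, and function are all illustrative assumptions for this thread's proposal, not actual chromium-try-flakes code:

```python
# Hypothetical sketch of the proposal in this thread: keep a list of steps
# expected to be ~100% stable, and escalate any failure there as a real bug
# instead of waiting for the usual flake threshold. Names and the threshold
# value are illustrative, not chromium-try-flakes internals.
STABLE_STEPS = {'analyze', 'webkit_lint'}  # steps expected to never flake
FLAKE_THRESHOLD = 10  # flake occurrences before filing a flakiness bug


def classify_failure(step_name, recent_flake_count):
    if step_name in STABLE_STEPS:
        # A failure in a normally-stable step is suspicious: treat it as a
        # bug immediately rather than as routine flakiness.
        return 'bug'
    if recent_flake_count >= FLAKE_THRESHOLD:
        return 'flake'
    return 'ignore'
```

With these assumptions, a single 'analyze' failure would be escalated as a bug, while browser_tests would only be reported after crossing the flake threshold.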
Comment 1 by hinoka@chromium.org, Sep 20 2016
Summary: MBErr: target "chromium_builder_tests" not found in //testing/buildbot/gn_isolate_map.pyl (was: "analyze" is flaky)