"ensure_goma.install" is flaky |
Issue description"ensure_goma.install" is flaky. This issue was created automatically by the chromium-try-flakes app. Please find the right owner to fix the respective test/step and assign this issue to them. If the step/test is infrastructure-related, please add Infra-Troopers label and change issue status to Untriaged. When done, please remove the issue from Sheriff Bug Queue by removing the Sheriff-Chromium label. We have detected 5 recent flakes. List of all flakes can be found at https://chromium-try-flakes.appspot.com/all_flake_occurrences?key=ahVzfmNocm9taXVtLXRyeS1mbGFrZXNyHgsSBUZsYWtlIhNlbnN1cmVfZ29tYS5pbnN0YWxsDA.
Comment 1 by magjed@chromium.org, Sep 2 2016
Assigning to myself as the current trooper. It sounds like an infra failure indeed - will investigate. All these flakes happened at roughly the same time: 2016-08-31 21:38:19 UTC. I also heard of a brief Goma outage recently - might be related.
Comment 2, Sep 2 2016
The Goma outage I referred to was on Sep 2 at 5am UTC - which was *after* all the flakes. So that's not it...
Comment 3, Sep 2 2016
From https://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_chromeos_compile_dbg_ng/builds/256102/steps/compile%20%28with%20patch%29/logs/stdio :

    /usr/bin/python /b/rr/tmp5FPMlV/rw/checkout/goma/goma_ctl.py restart
    /usr/bin/python: can't open file '/b/rr/tmp5FPMlV/rw/checkout/goma/goma_ctl.py': [Errno 2] No such file or directory
    /usr/bin/python /b/rr/tmp5FPMlV/rw/checkout/goma/goma_ctl.py jsonstatus /tmp/tmpo1drs3.json
    /usr/bin/python: can't open file '/b/rr/tmp5FPMlV/rw/checkout/goma/goma_ctl.py': [Errno 2] No such file or directory
    error while sending ts mon json_file=/tmp/tmpo1drs3.json: No JSON object could be decoded
    /usr/bin/python /b/rr/tmp5FPMlV/rw/checkout/goma/goma_ctl.py stop
    /usr/bin/python: can't open file '/b/rr/tmp5FPMlV/rw/checkout/goma/goma_ctl.py': [Errno 2] No such file or directory
    No compiler_proxy-subproc.INFO to upload
    No compiler_proxy.INFO to upload
    [...]
    error: failed to start goma; fallback has been disabled
    Traceback (most recent call last):
      File "/b/rr/tmp5FPMlV/rw/checkout/scripts/slave/compile.py", line 546, in <module>
        sys.exit(real_main())
      File "/b/rr/tmp5FPMlV/rw/checkout/scripts/slave/compile.py", line 528, in real_main
        goma_ready, goma_cloudtail = goma_setup(options, env)
      File "/b/rr/tmp5FPMlV/rw/checkout/scripts/slave/compile.py", line 194, in goma_setup
        raise Exception('failed to start goma')
    Exception: failed to start goma
    step returned non-zero exit code: 1
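Judging from the traceback, goma_setup() in scripts/slave/compile.py shells out to goma_ctl.py and gives up when the restart fails. Below is a minimal sketch of that control flow, reconstructed from the log above, not the actual implementation; the goma_dir option name, the tempfile handling, and the return values are assumptions, and only the commands and the final exception are taken from the log:

    import os
    import subprocess
    import sys
    import tempfile

    def goma_setup(options, env):
        # Path as seen in the log; the real script derives it from the
        # build checkout ('goma_dir' here is an assumed option name).
        goma_ctl = os.path.join(options.goma_dir, 'goma_ctl.py')

        # Step 1: (re)start the compiler proxy. This is the first command
        # in the log, and it fails immediately when goma_ctl.py itself is
        # missing from the checkout ([Errno 2] No such file or directory).
        if subprocess.call([sys.executable, goma_ctl, 'restart'], env=env) == 0:
            return True, None  # goma_ready, goma_cloudtail (per the traceback)

        # Step 2: best-effort diagnostics and cleanup. Both invocations fail
        # for the same reason, producing the follow-on errors in the log
        # (including "No JSON object could be decoded" from the empty file).
        with tempfile.NamedTemporaryFile(suffix='.json') as f:
            subprocess.call([sys.executable, goma_ctl, 'jsonstatus', f.name], env=env)
        subprocess.call([sys.executable, goma_ctl, 'stop'], env=env)

        # Step 3: local-compile fallback is disabled on these builders, so
        # the only option left is to fail the compile step.
        raise Exception('failed to start goma')

This explains why a single missing file cascades into "failed to start goma": every goma_ctl.py invocation fails the same way, and with fallback disabled the step cannot proceed locally.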
Comment 4, Sep 2 2016
Same in:

https://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_rel_ng/builds/290162/steps/compile%20%28with%20patch%29/logs/stdio
https://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_chromeos_ozone_rel_ng/builds/228252/steps/compile%20%28with%20patch%29/logs/stdio
https://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_compile_dbg_ng/builds/150117/steps/compile%20%28with%20patch%29/logs/stdio

Slaves: slave335-c4, slave635-c4, slave234-c4, slave169-c4

So far all of that seems random, except for the timestamps: all 4 errors happened within 13 seconds of each other, between 2016-08-31 21:38:06 UTC and 2016-08-31 21:38:19 UTC. Maybe there actually was a brief Goma outage on the server side? Or a network outage on GCE?
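The 13-second figure follows directly from the quoted timestamps. A trivial check, using only the earliest and latest failure times (the middle two are not quoted here):

    from datetime import datetime

    fmt = '%Y-%m-%d %H:%M:%S'
    first = datetime.strptime('2016-08-31 21:38:06', fmt)
    last = datetime.strptime('2016-08-31 21:38:19', fmt)
    print((last - first).total_seconds())  # 13.0 seconds across all 4 builds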
Comment 5, Sep 2 2016
http://shortn/_RtWH2W1eCC doesn't show any network problems. But then, it can only see outages longer than 2 minutes, and this one lasted ~13 seconds... The "missing" file hasn't changed in 5 months: https://chrome-internal.googlesource.com/chrome/tools/goma/linux/+/master/i686/goma_ctl.py
Comment 6, Sep 6 2016
No new reports, so I'm assuming this was a very short network flake. Closing.