Pinpoint try jobs failing with "No module named mock"
Issue description

I've been kicking off a few jobs, e.g.:

https://pinpoint-dot-chromeperf.appspot.com/job/11dd34a7640000
https://pinpoint-dot-chromeperf.appspot.com/job/12952752e40000

The "without patch" runs finish (mostly) fine, but the "with patch" runs fail with the "No module named mock" error. How can this be? Both try jobs apply a patch that has nothing to do with the test infra, and both were pinned to older chromium/src revisions (from Sep 9 and Sep 16, before issue 888713 ever happened).

P1: because we want to get some try job data for scoring Q3 OKRs.
Oct 2
But .. why? Why are those two builds different?
Oct 3
In both jobs, it looks like the build without patch is from the perf waterfall, and the build with patch is from the Pinpoint builder. I looked at the output of bot_update for the Pinpoint build, and chromium/src is fixed at the correct revision (85c9b6a983ddd8fdde9e1b4d24e497c7d14105e5, dated Sep 16), but catapult is synced to 218f46686f2f7e110a17aaa486becce46a99d5d6, dated Sep 21, which is in the failing range from issue 888713.

https://ci.chromium.org/buildbot/tryserver.chromium.perf/Android%20Compile%20Perf/3222

Maybe this is also related to issue 891069?
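[Editor's note] If that theory is right, the mismatch should be visible by comparing the catapult revision pinned in the chromium/src DEPS file against what bot_update actually checked out. Below is a minimal sketch of such a check; the script, the DEPS path, and the 'catapult_revision' variable name are assumptions based on the usual chromium/src layout, not something taken from the Pinpoint builder's recipe.

#!/usr/bin/env python
# Hypothetical helper (not an existing tool): compare the catapult revision
# pinned in a chromium/src DEPS file against what is actually checked out in
# third_party/catapult. The DEPS layout and the 'catapult_revision' variable
# name are assumptions.
import re
import subprocess
import sys

def pinned_catapult_revision(deps_path):
    # Look for a line like  'catapult_revision': '<40-char sha1>',  in DEPS.
    with open(deps_path) as f:
        match = re.search(r"'catapult_revision':\s*'([0-9a-f]{40})'", f.read())
    return match.group(1) if match else None

def checked_out_revision(repo_path):
    return subprocess.check_output(
        ['git', '-C', repo_path, 'rev-parse', 'HEAD']).decode().strip()

if __name__ == '__main__':
    src = sys.argv[1] if len(sys.argv) > 1 else '.'
    pinned = pinned_catapult_revision(src + '/DEPS')
    actual = checked_out_revision(src + '/third_party/catapult')
    print('pinned in DEPS: %s' % pinned)
    print('checked out:    %s' % actual)
    if pinned and pinned != actual:
        print('MISMATCH: catapult is not synced to the DEPS-pinned revision')

Run against the builder's checkout root (the traceback below suggests something like /b/swarming/w/ir on the bot); a mismatch would confirm that catapult floated past the revision implied by the pinned chromium/src.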
Comment 1 by simonhatch@chromium.org, Oct 2

So full error is here:

Traceback (most recent call last):
  <module> at /b/swarming/w/ir/tools/perf/run_benchmark:27
    sys.exit(main())
  main at /b/swarming/w/ir/tools/perf/run_benchmark:23
    return benchmark_runner.main(config, [trybot_command.Trybot])
  main at /b/swarming/w/ir/third_party/catapult/telemetry/telemetry/benchmark_runner.py:418
    command.AddCommandLineArgs(parser, environment)
  AddCommandLineArgs at /b/swarming/w/ir/third_party/catapult/telemetry/telemetry/benchmark_runner.py:250
    matching_benchmarks += _MatchBenchmarkName(arg, environment)
  _MatchBenchmarkName at /b/swarming/w/ir/third_party/catapult/telemetry/telemetry/benchmark_runner.py:350
    for benchmark_class in _Benchmarks(environment):
  Cacher at /b/swarming/w/ir/third_party/catapult/telemetry/telemetry/decorators.py:35
    cacher.__cache[key] = obj(*args, **kwargs)
  _Benchmarks at /b/swarming/w/ir/third_party/catapult/telemetry/telemetry/benchmark_runner.py:328
    index_by_class_name=True).values()
  DiscoverClasses at /b/swarming/w/ir/third_party/catapult/common/py_utils/py_utils/discover.py:99
    modules = DiscoverModules(start_dir, top_level_dir, pattern)
  DiscoverModules at /b/swarming/w/ir/third_party/catapult/common/py_utils/py_utils/discover.py:58
    module = importlib.import_module(module_name)
  import_module at /usr/lib/python2.7/importlib/__init__.py:37
    __import__(name)
  <module> at /b/swarming/w/ir/tools/perf/contrib/oilpan/oilpan_gc_times_unittest.py:15
    import mock  # pylint: disable=import-error
ImportError: No module named mock

Looks like telemetry is throwing the error while trying to load the full list of benchmarks. I'm not really sure how this could happen unless we're building something slightly different from the main waterfall (i.e., including files we shouldn't be?).
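[Editor's note] For context on why one missing module takes down the whole run: per the traceback, DiscoverModules imports every module matching the benchmark pattern, so any module that imports an unavailable package (here, oilpan_gc_times_unittest.py importing mock) raises and aborts discovery. The sketch below illustrates that failure mode only; it is not telemetry's actual DiscoverModules, and discover_modules and the package name are illustrative.

import importlib
import pkgutil

def discover_modules(package_name):
    # Import every submodule of package_name, the way benchmark discovery
    # walks the benchmark directory; a single submodule whose imports cannot
    # be satisfied (e.g. "import mock" with mock not installed) raises
    # ImportError and aborts the whole discovery pass.
    package = importlib.import_module(package_name)
    modules = []
    for _, name, _ in pkgutil.walk_packages(package.__path__,
                                            prefix=package_name + '.'):
        modules.append(importlib.import_module(name))
    return modules

So the failure doesn't require the benchmark under test to use mock at all; it only requires discovery to import some module that does, which is why the error shows up for unrelated patches.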