
Issue 787081

Starred by 8 users

Issue metadata

Status: WontFix
Owner:
Closed: Oct 25
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 3
Type: Bug

Blocked on:
issue 800159




autotest preupload hook always fails: test_lucifer: test_monkeypatch: ImportError: No module named chromite

Project Member Reported by xiaochu@chromium.org, Nov 20 2017

Issue description

(cr) (system_utils_fix) xiaochu@xiaochu0 ~/trunk/src/third_party/autotest/files $ repo upload . --current-branch
Errors in PROJECT *chromiumos/third_party/autotest*!
    COMMIT 108643b7:
        Description:
            >platform_ImageLoader: use utils.system()
            >
            >Uses utils.system(...) instead of subprocess.call(...).
            >
            >This autotest is flaky (crashes at subprocess.call()) on certain boards.
            >
            >BUG= chromium:785509 
            >TEST=autotest, print return value of utils.system to verify its return
            >value is retcode (0 on success).
            >
            >Change-Id: I8e74bd5e352dbbdee3e765b0626e2f2948ea0d45
            >
            >
        Errors:
            * Hook script "./bin/test_lucifer" failed with code 1:
              ============================= test session starts ==============================
              platform linux2 -- Python 2.7.10, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
              rootdir: /mnt/host/source/src/third_party/autotest/files/venv, inifile: pytest.ini
              plugins: cov-2.5.1, catchlog-1.2.2
              collected 27 items
              
              lucifer/autotest_unittest.py F........
              lucifer/eventlib_unittest.py ....
              lucifer/leasing_unittest.py ............
              lucifer/loglib_unittest.py ..
              
              ---------- coverage: platform linux2, python 2.7.10-final-0 ----------
              Name                           Stmts   Miss Branch BrPart  Cover   Missing
              --------------------------------------------------------------------------
              lucifer/autotest.py               61     12      4      0    82%   63, 79-95, 124-129
              lucifer/autotest_unittest.py      32      1      0      0    97%   26
              lucifer/cmd/job_aborter.py        42     23     10      0    37%   52-54, 58-61, 67-72, 76-78, 82-84, 91, 96-101
              lucifer/cmd/job_reporter.py      140    103     28      0    22%   48-63, 71-77, 82-91, 109-113, 118, 122, 143-147, 150-157, 161, 166, 169-177, 180-185, 189, 192-195, 203-210, 219-231, 237-246, 250-262, 267-276, 281-285
              --------------------------------------------------------------------------
              TOTAL                            638    139     80      0    75%
              
              13 files skipped due to complete coverage.
              
              
              =========================== slowest 5 test durations ===========================
              0.12s call     lucifer/leasing_unittest.py::test_Job_abort
              0.12s call     lucifer/autotest_unittest.py::test_monkeypatch
              0.12s call     lucifer/leasing_unittest.py::test_obtain_lease_succesfully_removes_file
              0.12s call     lucifer/leasing_unittest.py::test_get_expired_leases
              0.11s call     lucifer/leasing_unittest.py::test_obtain_lease_with_error_leaves_files
              =================================== FAILURES ===================================
              _______________________________ test_monkeypatch _______________________________
              
                  @pytest.mark.slow
                  def test_monkeypatch():
                      """Test monkeypatch()."""
                      common_file = subprocess32.check_output(
                              [sys.executable, '-m',
              >                'lucifer.cmd.test.autotest_monkeypatcher'])
              
              lucifer/autotest_unittest.py:25: 
              _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
              
              popenargs = (['/home/xiaochu/.cache/cros_venv/venv-2.7.10-5e245727ed23dd962b0bb6d1c7a40f7d/bin/python', '-m', 'lucifer.cmd.test.autotest_monkeypatcher'],)
              kwargs = {}, timeout = None
              process = <subprocess32.Popen object at 0x7fd9fcd61090>, output = ''
              unused_err = None, retcode = 1
              
                  def check_output(*popenargs, **kwargs):
                      r"""Run command with arguments and return its output as a byte string.
                  
                      If the exit code was non-zero it raises a CalledProcessError.  The
                      CalledProcessError object will have the return code in the returncode
                      attribute and output in the output attribute.
                  
                      The arguments are the same as for the Popen constructor.  Example:
                  
                      >>> check_output(["ls", "-l", "/dev/null"])
                      'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'
                  
                      The stdout argument is not allowed as it is used internally.
                      To capture standard error in the result, use stderr=STDOUT.
                  
                      >>> check_output(["/bin/sh", "-c",
                      ...               "ls -l non_existent_file ; exit 0"],
                      ...              stderr=STDOUT)
                      'ls: non_existent_file: No such file or directory\n'
                      """
                      timeout = kwargs.pop('timeout', None)
                      if 'stdout' in kwargs:
                          raise ValueError('stdout argument not allowed, it will be overridden.')
                      process = Popen(stdout=PIPE, *popenargs, **kwargs)
                      try:
                          output, unused_err = process.communicate(timeout=timeout)
                      except TimeoutExpired:
                          process.kill()
                          output, unused_err = process.communicate()
                          raise TimeoutExpired(process.args, timeout, output=output)
                      retcode = process.poll()
                      if retcode:
              >           raise CalledProcessError(retcode, process.args, output=output)
              E           CalledProcessError: Command '['/home/xiaochu/.cache/cros_venv/venv-2.7.10-5e245727ed23dd962b0bb6d1c7a40f7d/bin/python', '-m', 'lucifer.cmd.test.autotest_monkeypatcher']' returned non-zero exit status 1
              
              /home/xiaochu/.cache/cros_venv/venv-2.7.10-5e245727ed23dd962b0bb6d1c7a40f7d/lib/python2.7/site-packages/subprocess32.py:638: CalledProcessError
              ----------------------------- Captured stderr call -----------------------------
              autotest_monkeypatcher: 2017-11-20 12:04:55,721:ERROR:autotest:_global_setup:60:Uncaught exception escaped Autotest setup
              Traceback (most recent call last):
                File "lucifer/autotest.py", line 56, in _global_setup
                  yield
                File "lucifer/autotest.py", line 47, in monkeypatch
                  _monkeypatch_body()
                File "lucifer/autotest.py", line 76, in _monkeypatch_body
                  importlib.import_module('chromite')
                File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
                  __import__(name)
              ImportError: No module named chromite
              ===================== 1 failed, 26 passed in 1.14 seconds ======================
              

Preupload failed due to errors in project(s). HINTS:
- To disable some source style checks, and for other hints, see <checkout_dir>/src/repohooks/README
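
For reference, the failing hook step boils down to whether chromite is importable from the hook's virtualenv. A minimal sketch of that check (illustrative only, not the actual lucifer/autotest.py code; run it with the same interpreter the hook uses):

    import importlib

    try:
        # Mirrors the import that _monkeypatch_body() performs per the
        # traceback above; it fails when chromite is not on sys.path in
        # the hook's virtualenv.
        importlib.import_module('chromite')
        print('chromite imported OK')
    except ImportError as err:
        print('chromite is not importable here: %s' % err)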

 
Cc: ayatane@chromium.org
Run utils/build_externals.py
Running utils/build_externals.py helped somewhat; now the error is different:
        Errors:
            * Hook script "./utils/run_pylint.py" failed with code 1:
              Traceback (most recent call last):
                File "./utils/run_pylint.py", line 57, in <module>
                  import pylint.lint
                File "/mnt/host/source/src/third_party/autotest/files/site-packages/pylint/lint.py", line 43, in <module>
                  import astroid
                File "/mnt/host/source/src/third_party/autotest/files/site-packages/astroid/__init__.py", line 57, in <module>
                  from astroid.nodes import *
                File "/mnt/host/source/src/third_party/autotest/files/site-packages/astroid/nodes.py", line 30, in <module>
                  from astroid.node_classes import (
                File "/mnt/host/source/src/third_party/autotest/files/site-packages/astroid/node_classes.py", line 26, in <module>
                  from astroid import decorators
                File "/mnt/host/source/src/third_party/autotest/files/site-packages/astroid/decorators.py", line 12, in <module>
                  import wrapt
              ImportError: No module named wrapt
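
A quick way to confirm whether wrapt actually landed in autotest's site-packages (a sketch, not part of the hook; the path is taken from the traceback above):

    import os

    # Path from the traceback above; adjust to your checkout if needed.
    site_packages = '/mnt/host/source/src/third_party/autotest/files/site-packages'
    print(os.path.isdir(os.path.join(site_packages, 'wrapt')))

If that prints False, wrapt most likely was never installed there.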
Btw, do we even need to run 'test_lucifer' as part of pre-submit checks for all changes to all test scripts?
Honest question; I don't know what this script actually tests.
#3 depends on what you mean by "need".  Those are unit tests for the Autotest rewrite (lucifer).  Yes, in the sense that all changes to Autotest code need to be tested; no, in the sense that currently most changes won't break the tests.

Note that #2 is completely unrelated to test_lucifer.

Try removing site-packages completely and rerunning build_externals?  build_externals should be installing wrapt, but it looks like astroid/pylint can't find it.
Forgot to add, make sure you're synced up fully to ToT first.
Re #4, #5. I did (repo sync + remove site-packages + build_externals), but it didn't help. Still the same issue.

The build_externals log is attached. It does seem to build wrapt. It also contains a warning about installing django on top of an existing installation, but I doubt that's related.

build_externals.log (2.1 MB)
I don't really have any particular ideas.  Could you try debugging it with pdb and/or an interactive Python session?

Does /mnt/host/source/src/third_party/autotest/files/site-packages/wrapt exist?

Does it work outside the chroot?  Make sure you run a fresh build_externals when switching between inside and outside the chroot.
As a quick note in reply to #7 (the rest TBD): /mnt/host/source/src/third_party/autotest/files/site-packages/wrapt doesn't exist.
Possibly related: https://bugs.chromium.org/p/chromium/issues/detail?id=800159

You are on Rodete?
Blockedon: 800159
I was on trusty, since then migrated to rodete.
1) Cleared files/site-packages and re-ran utils/build_externals.py from inside chroot on rodete. Still the same issue and no site-packages/wrapt.
2) Installed libmariadbclient-dev-compat and re-ran utils/build_externals.py from outside chroot. The problem went away.
As a side note (continuing comments #3 and #4): I still believe the pre-submit hooks are trying to do too much, specifically the following:

1) Running the (in this case unrelated) unit tests. We don't do this as part of pre-submit hooks for other packages. And those autotests run for a specific $BOARD only; if they fail for some other $BOARD we won't catch it anyway.

If we want to run the lucifer unit tests when we submit changes to lucifer itself, is it possible to skip them when the changelist contains only test scripts, i.e. only files in server/site_tests or client/site_tests? Changes in test scripts won't affect the lucifer unit tests anyway, right?

2) Doing some other checks. I work with several board types in parallel and have several chroots open.

When I add a new autotest and run 'repo upload' from a chroot session where I don't have the autotest packages in the cros_workon list, I get "No ebuild entry for platform_InitLoginPerfServer. To fix, please do the following: 1. Add your new test to one of the ebuilds referenced by autotest-all. 2. cros_workon --board=<board> start <your_ebuild>. 3. emerge-<board> <your_ebuild>".

And I prefer having my chroot for a specific board sitting in src/scripts (to simplify calling various scripts), while separate sessions, with BOARD unset or set to x86-generic by default, sit inside the repos I'm working on (for repo/git ops). It's not hard to switch back to the right chroot and cd to the right path there, but it feels like an unnecessary nuisance.

IIUC, running some of these checks is what makes 'repo upload' for autotests require virtualenv and fail outside the chroot with "cros_venv.venvlib.VirtualenvMissingError: virtualenv is not installed (caused by [Errno 2] No such file or directory)".

I'm sure it can be set up to work outside the chroot, but it will fail anyway due to the 2nd item in the list above, so I don't even bother.
1) I don't think there's a mechanism for excluding specific directories from pre-upload hooks. We could instead enable it for every affected directory, but beyond just being a hassle, I believe it doesn't work if a change spans multiple such directories.

2) The unittests shouldn't care what board type you're using nor whether you're in a chroot.  All you need is virtualenv, which the error message states.  I don't know anything about a pre-upload hook that requires ebuild entries for tests.
Cc: akes...@chromium.org vapier@chromium.org
 Issue 805053  has been merged into this issue.
 Issue 805745  has been merged into this issue.
Is there any chance of seeing a change here? I recently ran into this as well when trying to upload a change to autotests. There's definitely too much magic here.

For what it's worth, finding this bug involved
1) Searching chromium-os-dev for related issues ( https://groups.google.com/a/chromium.org/forum/#!searchin/chromium-os-dev/%22no$20module$20named$20chromite%22%7Csort:date )
2) Finding https://groups.google.com/a/chromium.org/forum/#!searchin/chromium-os-dev/%22no$20module$20named$20chromite%22%7Csort:date/chromium-os-dev/DEVJEZ4-laI/yTmK0qXWCwAJ , as the higher ranked thread - https://groups.google.com/a/chromium.org/forum/#!searchin/chromium-os-dev/%22no$20module$20named$20chromite%22%7Csort:date/chromium-os-dev/yE3C44Yy6lA/jRCDK_SoAgAJ - doesn't mention it
3) Reading through to get to comment #11 and realizing I need to 'repo upload' from the chroot (which doesn't seem to be required, according to https://chromium.googlesource.com/chromiumos/docs/+/master/developer_guide.md#create-a-branch-for-your-changes and https://chromium.googlesource.com/chromiumos/docs/+/master/developer_guide.md#upload-your-changes-and-get-a-code-review )

And then finding it still doesn't work, because of: ERROR: could not load /mnt/host/source/src/third_party/autotest/files/venv/lucifer/conftest.py


So instead, my solution is to bypass hooks when uploading.

Components: Infra>Client>ChromeOS>Test
Summary: autotest preupload hook always fails: test_lucifer: test_monkeypatch: ImportError: No module named chromite (was: autotest CL presubmit hook ERROR)
I'm just going to mark it as skipped until someone takes a look:
  https://chromium-review.googlesource.com/1194491
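
For context, skipping a test in pytest is typically done with a skip marker; a rough sketch of what such a change could look like (the reason string here is illustrative; see the review linked above for the actual CL):

    import pytest

    # Hypothetical skip marker; the actual CL may phrase the reason differently.
    @pytest.mark.skip(reason='fails for most developers; see crbug.com/787081')
    def test_monkeypatch():
        """Test monkeypatch()."""
        ...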
Project Member Comment 18 by bugdroid1@chromium.org, Aug 28

The following revision refers to this bug:
  https://chromium.googlesource.com/chromiumos/third_party/autotest/+/a32e816e2cd816badb37f86c58687741a5949398

commit a32e816e2cd816badb37f86c58687741a5949398
Author: Mike Frysinger <vapier@chromium.org>
Date: Tue Aug 28 21:39:51 2018

lucifer: skip failing test

This is failing for a lot of people, so skip it.

BUG= chromium:787081 
TEST=`repo upload` doesn't abort anymore

Change-Id: Ie0af35c063fc32d12a966786a710b99cb35c223d
Reviewed-on: https://chromium-review.googlesource.com/1194491
Commit-Ready: Mike Frysinger <vapier@chromium.org>
Tested-by: Mike Frysinger <vapier@chromium.org>
Reviewed-by: Allen Li <ayatane@chromium.org>
Reviewed-by: Xiaochu Liu <xiaochu@chromium.org>

[modify] https://crrev.com/a32e816e2cd816badb37f86c58687741a5949398/venv/lucifer/autotest_unittest.py

Owner: ayatane@chromium.org
Status: Assigned (was: Untriaged)
Passing the buck to ayatane@, on the theory that he can either close it,
or hold it until the test can be re-enabled.

Status: WontFix (was: Assigned)
Eh, don't care, we're going to delete Autotest
