
Issue 846895

Starred by 3 users

Issue metadata

Status: Available
Owner: ----
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 3
Type: Bug
Labels: okr




Make each autotest return only PASS or FAIL (with annotation), with a timeout enforced by the harness.

Reported by ihf@chromium.org (Project Member), May 25 2018

Issue description

Right now an autotest test passes when it does not raise an exception and fails when it raises one. Except sometimes: a special exception like WARN or TEST_NA may or may not count as a failure, depending on the caller and the dashboard, and there is no general agreement on the interpretation.

This is subtle and gets messier. One problem is the use of exceptions, which are serialized (for client tests) and then re-interpreted from the logs.
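For context, the status quo looks roughly like this. This is a minimal sketch: error.TestFail, error.TestWarn and error.TestNAError are the real exception classes from autotest_lib.client.common_lib.error, but the test body and its helper methods are hypothetical.

# Status quo (sketch): a test "passes" by falling off the end of
# run_once() without raising, and signals everything else by raising
# an exception that is later re-interpreted from the logs.
from autotest_lib.client.bin import test
from autotest_lib.client.common_lib import error

class dummy_ExceptionStyle(test.test):  # hypothetical test
    version = 1

    def run_once(self):
        if not self._dut_has_feature():    # hypothetical helper
            # Neither pass nor fail; meaning depends on the reader.
            raise error.TestNAError('feature not present on DUT')
        if self._result_is_marginal():     # hypothetical helper
            # Also ambiguous: some dashboards count WARN as a failure.
            raise error.TestWarn('marginal result, please inspect')
        if not self._result_is_correct():  # hypothetical helper
            raise error.TestFail('wrong output')
        # Implicit PASS: simply return without raising.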

Sometimes a test writer uses TEST_NA or WARN to punt the PASS/FAIL decision to the result reader, because the test is missing context (which is bad and should be fixed in the test). More often, though, the writer just wanted not to alert sheriffs to a failure immediately, but rather to examine unusual (yet passing) situations on a dashboard when there was spare time (non-urgent, minor issues).

Proposal:
0) Don't use exceptions.
1) It is better to force autotest writers to categorize each return path as PASS or FAIL.
2) It should be possible to annotate both PASS and FAIL with a string (not just FAIL as it is now).
3) Maybe it should be possible to annotate each PASS and FAIL with an individual color for dashboards like stainless/. (Or maybe stainless should hash different strings into slightly different shades of green/red.)
4) Any python exception escaping a test should be converted into a very simple failure (encouraging test writers to handle and annotate them, e.g. with "Please fix your test").
5) Each test should also be able to set a test timeout in the control file (I think this is not cleanly handled right now). This timeout should be enforced by the harness, not the test. A test can of course have its own internal timeouts, but should then return an annotated failure. (Items 1, 2, 4 and 5 are sketched below, after this list.)
6) All other python error/exceptions (like TEST_NA) should be left to autotest harness/scheduling internals, and not be available to tests themselves.
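Here is a hypothetical sketch of items 1), 2), 4) and 5). None of these names exist in autotest today; TestResult, PASS, FAIL and run_with_timeout are invented for illustration, and multiprocessing is just one possible enforcement mechanism.

# Hypothetical sketch of proposal items 1, 2, 4 and 5. All names
# below are invented for illustration.
import multiprocessing
from collections import namedtuple

# Items 1 and 2: every return path is an explicit PASS or FAIL, and
# both carry an annotation string (not just FAIL, as today).
TestResult = namedtuple('TestResult', ['passed', 'annotation'])

def PASS(annotation=''):
    return TestResult(True, annotation)

def FAIL(annotation=''):
    return TestResult(False, annotation)

def run_with_timeout(test_fn, timeout_s):
    """Items 4 and 5: the harness enforces the timeout and converts
    any escaping exception into a plain, annotated failure."""
    def _target(queue):
        try:
            result = test_fn()
        except Exception as e:
            # Item 4: an escaping exception becomes a simple failure.
            result = FAIL('unhandled %s: %s. Please fix your test.'
                          % (type(e).__name__, e))
        if not isinstance(result, TestResult):
            # Item 1: refuse implicit passes; every path must choose.
            result = FAIL('test returned no PASS/FAIL result')
        queue.put(result)

    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_target, args=(queue,))
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():
        # Item 5: the harness, not the test, kills a hung test.
        proc.terminate()
        proc.join()
        return FAIL('harness timeout after %ss' % timeout_s)
    return queue.get()

The point of the sketch is only the shape of the contract: the test returns an explicit, annotated result, while the harness owns the timeout and the conversion of stray exceptions.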

Benefits:
1) Less ambiguity: the test writer decides what is PASS or FAIL. The result reader does not need to interpret whether a WARN or a TEST_NA is a PASS or a FAIL.
2) Better mapping when interfacing with external dashboards (like sponge).
3) Better control over text annotation and (suggested) dashboard colors.
 
 

Comment 1 by ihf@chromium.org, May 25 2018

Note that I had a (bad) POC change using exceptions here at some point:
https://chromium-review.googlesource.com/#/c/chromiumos/third_party/autotest/+/404170/

I don't think this would be a huge amount of work inside autotest proper, if one can ask the test writers for help adjusting their tests.

Comment 2 by derat@chromium.org, May 26 2018

We've talked about this a bit before, but I still don't fully understand the reasons for wanting to annotate passing results. Is the main use of this to be able to annotate cases where the test "passed" by virtue of being skipped because it had dependencies that weren't satisfied by the DUT? Otherwise, it feels like the annotation is usually going to be "the thing that I was testing worked", which doesn't feel like it provides any information beyond what's already in the test name and description.

Comment 3

Cc: akes...@chromium.org
Labels: okr
Owner: ----
Status: Available (was: Untriaged)
Added this to roadmap, will revisit for ownership next quarter.

Comment 4 by ihf@chromium.org, Jun 5 2018

Re #2: it is not just a "skip" that I would like to annotate. I would like a separate string for each return path of a test. (Each return path must be annotated either "pass" or "fail" by the test writer.) And I would like to learn more about unusual "passes": they often contain information.
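For example, using the hypothetical PASS/FAIL helpers sketched in the description, each return path would carry its own string:

# Hypothetical: each return path of run_once() is an explicit PASS or
# FAIL with its own annotation. All helper methods are invented.
def run_once(self):
    if not self._dut_has_feature():      # hypothetical helper
        return PASS('skipped: feature not present on this DUT')
    if self._retry_count() > 0:          # hypothetical helper
        return PASS('passed after %d retries' % self._retry_count())
    if not self._output_is_correct():    # hypothetical helper
        return FAIL('wrong output: %s' % self._diff_summary())
    return PASS('')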
