
Issue 698427

Starred by 1 user

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 3
Type: Bug




content/test/gpu/run_unittests.py failures

Project Member Reported by danakj@chromium.org, Mar 3 2017

Issue description

[4/5] unittest_data.integration_tests.SimpleTest.unexpected_error failed unexpectedly 0.0006s:
  Running error.html
  
  Traceback (most recent call last):
    _RunGpuTest at content/test/gpu/gpu_tests/gpu_integration_test.py:73
      self.RunActualGpuTest(url, *args)
    RunActualGpuTest at content/test/gpu/unittest_data/integration_tests.py:100
      raise Exception('Expected exception')
  Exception: Expected exception
  
  Locals:
    args      : ()
    file_path : 'error.html'
  
  Restarting browser due to unexpected test failure
  Starting browser, attempt 1 of 3
  Started browser successfully.
  Traceback (most recent call last):
    File "/usr/local/google/home/danakj/s/c/src/third_party/catapult/telemetry/telemetry/testing/serially_executed_browser_test_case.py", line 194, in <lambda>
      return lambda self: based_method(self, *args)
    File "/usr/local/google/home/danakj/s/c/src/content/test/gpu/gpu_tests/gpu_integration_test.py", line 73, in _RunGpuTest
      self.RunActualGpuTest(url, *args)
    File "/usr/local/google/home/danakj/s/c/src/content/test/gpu/unittest_data/integration_tests.py", line 100, in RunActualGpuTest
      raise Exception('Expected exception')
  Exception: Expected exception
[5/5] unittest_data.integration_tests.SimpleTest.unexpected_failure failed unexpectedly 0.0006s:
  Running failure.html
  
  Traceback (most recent call last):
    _RunGpuTest at content/test/gpu/gpu_tests/gpu_integration_test.py:73
      self.RunActualGpuTest(url, *args)
    RunActualGpuTest at content/test/gpu/unittest_data/integration_tests.py:94
      self.fail('Expected failure')
    fail at /usr/lib/python2.7/unittest/case.py:412
      raise self.failureException(msg)
  AssertionError: Expected failure
  
  Locals:
    msg : 'Expected failure'
  
  Restarting browser due to unexpected test failure
  Starting browser, attempt 1 of 3
  Started browser successfully.
  Traceback (most recent call last):
    File "/usr/local/google/home/danakj/s/c/src/third_party/catapult/telemetry/telemetry/testing/serially_executed_browser_test_case.py", line 194, in <lambda>
      return lambda self: based_method(self, *args)
    File "/usr/local/google/home/danakj/s/c/src/content/test/gpu/gpu_tests/gpu_integration_test.py", line 73, in _RunGpuTest
      self.RunActualGpuTest(url, *args)
    File "/usr/local/google/home/danakj/s/c/src/content/test/gpu/unittest_data/integration_tests.py", line 94, in RunActualGpuTest
      self.fail('Expected failure')
  AssertionError: Expected failure
5 tests run in 0.1s, 2 failures.


It's hard to tell whether these expected failures mean the run is passing or not. Maybe so, since the very end says 33 tests run in 8.3s, 0 failures. But I'm not sure?
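
For context: the failures above come from tests that are written to fail on purpose. Reconstructed from the tracebacks, the relevant method in content/test/gpu/unittest_data/integration_tests.py looks roughly like this (only the raise/fail lines are confirmed by the log; the class scaffolding and the dispatch on file_path are assumptions):

  # Sketch only -- the two failing lines are taken from the tracebacks
  # above; everything else here is assumed.
  from gpu_tests import gpu_integration_test

  class SimpleTest(gpu_integration_test.GpuIntegrationTest):

    def RunActualGpuTest(self, file_path, *args):
      if file_path == 'failure.html':
        # integration_tests.py:94 in the log
        self.fail('Expected failure')
      elif file_path == 'error.html':
        # integration_tests.py:100 in the log
        raise Exception('Expected exception')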
 
Components: Internals>GPU>Testing
Status: Assigned (was: Untriaged)

Comment 2 by kbr@chromium.org, Mar 6 2017

Cc: dpranke@chromium.org nedngu...@google.com
Components: Tests>Telemetry
Labels: Build-Tools-TYP
Sorry about the confusion. Some of the sub-tests invoke a test runner that runs tests which are supposed to fail, and that inner run spews output.

Dirk, Ned, typ by default prints output if tests fail. Is there any way to override that behavior? I don't see any options in browser_test_runner that would be passed to typ.

Ken: I think this can probably be fixed by plumbing a command-line argument from browser_test_runner through to the typ runner that allows suppressing the stdout.

The related code is in https://github.com/catapult-project/catapult/blob/master/telemetry/telemetry/testing/run_browser_tests.py#L297
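
A sketch of what that plumbing might look like -- note that the --suppress-failure-output flag and the corresponding typ option are hypothetical, nothing like them exists today:

  import typ

  # Hypothetical sketch only: neither the flag below nor a matching typ
  # option exists yet; this is just the shape of the proposed plumbing.
  def ProcessCommandLineArgs(parser):
    parser.add_argument(
        '--suppress-failure-output',  # hypothetical flag name
        action='store_true',
        help='Do not print captured output for failing tests.')

  def RunTests(args):
    runner = typ.Runner()
    # run_browser_tests.py would forward the flag here; typ would need a
    # new option (and behavior) added for this assignment to do anything.
    runner.args.suppress_failure_output = args.suppress_failure_output
    return runner.run()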


No, there's no way to suppress output in typ from tests that are supposed to fail.

I'm happy to think about how to add that but it's not immediately obvious what the right interface would look like. Frankly, the idea is even a little weird :).
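
One workaround that wouldn't need a new typ interface: the meta-test could capture the inner runner's output itself while the intentionally-failing tests run. A minimal sketch, assuming the inner run happens in-process (Python 2, to match the stack above; run_inner_tests is a hypothetical callable):

  import StringIO
  import sys

  def RunQuietly(run_inner_tests):
    # Swallow stdout/stderr while the intentionally-failing inner tests
    # run, so their tracebacks don't end up in the outer runner's log.
    old_out, old_err = sys.stdout, sys.stderr
    sys.stdout = sys.stderr = StringIO.StringIO()
    try:
      return run_inner_tests()
    finally:
      sys.stdout, sys.stderr = old_out, old_err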
Making it clearer when you run the tests that things are working correctly would be nice. From reading the output, it looks like it ran multiple suites and some failed along the way, but the last one passed.
Components: Test>Telemetry
Components: -Tests>Telemetry
