
Issue 843107

Starred by 2 users

Issue metadata

Status: Assigned
Owner:
Cc:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug-Regression




107.3% regression in browser_tests at 555973:555982

Project Member Reported by henrika@chromium.org, May 15 2018

Issue description

See the link to graphs below.
 
Project Member Comment 1 by 42576172...@developer.gserviceaccount.com, May 15 2018

All graphs for this bug:
  https://chromeperf.appspot.com/group_report?bug_id=843107

(For debugging:) Original alerts at time of bug-filing:
  https://chromeperf.appspot.com/group_report?sid=bb08df6576e8d0afb8c30ba47a8bdad6c308fcf77fe9c7a26fafca786e4c254f


Bot(s) for this bug's original alert(s):

chromium-webrtc-rel-win8
Cc: flim@chromium.org minyue@chromium.org gustaf@chromium.org
As perf sheriff, I received a list of ~10 different tests that failed within a similar revision range.
I did not file a separate issue per platform, since all of the failures were related to Opus tests.
I assumed the regression must come from changes in Opus (but I could be wrong).

Comment 7 by gustaf@chromium.org, May 15 2018

The last commit to Opus was on 2018-04-04, when Fabrice de Gans-Riberi cleaned up the build files; this regression appeared a month later.
Also, take this one as an example:

https://chromeperf.appspot.com/group_report?sid=bb08df6576e8d0afb8c30ba47a8bdad6c308fcf77fe9c7a26fafca786e4c254f

The new "bad state" was a "good state" back in 2017-08 range. Hence, not sure if the "dip" in between was bad or not.
Cc: hlundin@chromium.org
Owner: dalecur...@chromium.org
I bisected this locally (Linux desktop), and I think the offending CL is https://chromium-review.googlesource.com/1036524 (dalecurtis@).

Repro-steps (in a Chrome checkout):
1. $ git checkout 6af47b8e242449b5e2f8b888f34d287807c9dfc4 # OFFENDING CL

2. $ gn args out/Default
  # These are my default settings; the issue might repro with other settings too.
  is_debug = false
  is_component_build = true
  dcheck_always_on = true
  use_goma = true
  is_chrome_branded = true

3. $ gclient sync && ninja -j1024 -C out/Default browser_tests && \
   out/Default/browser_tests \
     --gtest_filter=WebRtcInternalsPerfBrowserTest.MANUAL_RunsOneWayCall60SecsAndLogsInternalMetricsWithOpusDtx \
     --run-manual --ui-test-action-max-timeout=350000 \
     --test-launcher-jobs=1 --test-launcher-bot-mode \
     --test-launcher-print-test-stdio=always | \
   grep 'audio_tx_sendonly_with_opus_dtx: packets_sent_per_second'

RESULT audio_tx_sendonly_with_opus_dtx: packets_sent_per_second= [0,32.520325203252035,32.032032032032035,32,31.96803196803197,32,32,32,32,32.032032032032035] packets <--- Note that all numbers are above 30 (except the first one, which is always 0).

4. $ git checkout HEAD^ # ONE BEFORE THE OFFENDER

5. $ gclient sync && ninja -j1024 -C out/Default browser_tests && \
   out/Default/browser_tests \
     --gtest_filter=WebRtcInternalsPerfBrowserTest.MANUAL_RunsOneWayCall60SecsAndLogsInternalMetricsWithOpusDtx \
     --run-manual --ui-test-action-max-timeout=350000 \
     --test-launcher-jobs=1 --test-launcher-bot-mode \
     --test-launcher-print-test-stdio=always | \
   grep 'audio_tx_sendonly_with_opus_dtx: packets_sent_per_second'

RESULT audio_tx_sendonly_with_opus_dtx: packets_sent_per_second= [0,26.36916835699797,25.974025974025974,26.026026026026027,26,26,26,25.974025974025974,26.026026026026027,26] packets  <--- Note that all numbers are below 30.


I'm not sure that this is a change for the worse; it could very well be something positive. However, I would like to know why it happens, in particular since the candidate offender is titled "Remove dead bits_per_sample() code from AudioParameters", indicating that it should not affect any live code.

dalecurtis@, can you shed some light on this? Thanks!

Owner: hlundin@chromium.org
The interleaving method changed slightly in a way that might allow for smarter compiler optimizations. Previously it was a switch() conditional based on bits-per-sample; now it is templated on int16_t in many cases.

It's also possible that something was specifying an invalid or lower bits_per_sample value in the past, and it is now always hard-coded to 16-bit. I looked through the CL again and don't see anything that would explicitly have an impact.

If you identify the code path that generates these packets and it has a To/FromInterleaved call in it, you can switch it back to the old style to see if the perf returns to the previous values.
