
Issue 645569


Issue metadata

Status: Archived
Owner: ----
Closed: Jan 2017
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug-Regression




76.1% regression in cc_perftests at 417144:417187

Reported by rsch...@chromium.org (Project Member), Sep 9 2016

Issue description

See the link to graphs below.
 
All graphs for this bug:
  https://chromeperf.appspot.com/group_report?bug_id=645569

Original alerts at time of bug-filing:
  https://chromeperf.appspot.com/group_report?keys=agxzfmNocm9tZXBlcmZyFAsSB0Fub21hbHkYgICgyb7T8AsM


Bot(s) for this bug's original alert(s):

linux-release

Comment 3 by 42576172...@developer.gserviceaccount.com (Project Member), Sep 10 2016

Cc: vmp...@chromium.org
Owner: vmp...@chromium.org

=== Auto-CCing suspected CL author vmpstr@chromium.org ===

Hi vmpstr@chromium.org, the bisect results pointed to your CL below as possibly
causing a regression. Please take a look at this info and see whether
your CL could be related.


===== BISECT JOB RESULTS =====
Status: completed


===== SUSPECTED CL(s) =====
Subject : cc: Change preraster/predecode split to be half of max skewport extent.
Author  : vmpstr
Commit description:
  
This patch iterates on the raster/decode split to include skewport into
consideration. Specifically, in addition to limiting preraster to 1000
pixels, we also limit it by half of the skewport extent, where skewport
extent is the largest padding in any direction away from the viewport.

The tough scrolling cases seem to indicate that checkerboards
remain roughly the same with this patch.

R=enne
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_precise_blink_rel

Review-Url: https://codereview.chromium.org/2317913002
Cr-Commit-Position: refs/heads/master@{#417164}
Commit  : 1a4f3af8e0b96d1517939be11395132f9faa5e8c
Date    : Thu Sep 08 02:16:50 2016


===== TESTED REVISIONS =====
Revision         Mean     Std Dev  N  Good?
chromium@417143  21524.1  198.396  5  good
chromium@417154  21194.2  358.538  5  good
chromium@417160  21655.1  156.996  5  good
chromium@417163  21277.2  204.116  5  good
chromium@417164  5041.49  97.1545  5  bad    <--
chromium@417165  5126.62  52.2606  5  bad
chromium@417187  5145.02  19.6748  5  bad

Bisect job ran on: linux_perf_bisect
Bug ID: 645569

Test Command: ./src/out/Release/cc_perftests --test-launcher-print-test-stdio=always --verbose
Test Metric: prepare_tiles/2_100
Relative Change: 76.10%
Score: 99.9

Buildbot stdio: http://build.chromium.org/p/tryserver.chromium.perf/builders/linux_perf_bisect/builds/6702
Job details: https://chromeperf.appspot.com/buildbucket_job_status/9001971241224527888


Not what you expected? We'll investigate and get back to you!
  https://chromeperf.appspot.com/bad_bisect?try_job_id=5787146395320320

| O O | Visit http://www.chromium.org/developers/speed-infra/perf-bug-faq
|  X  | for more information addressing perf regression bugs. For feedback,
| / \ | file a bug with component Tests>AutoBisect.  Thank you!
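For reference, the "Relative Change: 76.10%" figure above is consistent with comparing the first tested good mean (21524.1 at chromium@417143) against the last tested bad mean (5145.02 at chromium@417187). A minimal sketch of that arithmetic (the exact endpoints the bisect tool compares are an assumption, not confirmed by this report):

```cpp
#include <cassert>
#include <cmath>

// Relative change expressed as a percentage drop from the good baseline.
// Endpoint choice (first good mean vs. last bad mean) is an assumption.
double RelativeChangePercent(double good_mean, double bad_mean) {
  return (good_mean - bad_mean) / good_mean * 100.0;
}
```

Plugging in the table's means, (21524.1 - 5145.02) / 21524.1 comes out to roughly 76.10%, matching the reported regression.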
Components: Internals>GPU
vmpstr, it seems your patch caused a pretty significant regression. Was this expected?

Comment 5 by vmp...@chromium.org, Sep 12 2016

Cc: enne@chromium.org
I don't think this is expected. I'll take a look.

Comment 6 by vmp...@chromium.org, Sep 13 2016

This is caused by the fact that we effectively reduce our prepaint distance. What happens is that before the patch, we'd schedule 32 tiles to rasterize and break out of the prepare tiles loop.

However, with the new approach, we get 30 tiles to rasterize and the rest of the tiles are deemed images-only. Since 30 is not enough to break out of the loop (it's less than the scheduled_raster_task_limit_), we proceed to grab 212 tiles to process for images.

I'm not sure whether we need to fix this or not, since the idea is to allow the predecode distance to increase as much as it needs to. Note that out of those 212 tiles, none have images (since the test doesn't have images), but we don't know that at this stage in the pipeline.

Moreover, we process all of those tiles to determine whether they are solid color, which we may be able to eliminate for these tiles.

I'm hesitant to just cap the number of tiles here at a fixed number, since that would effectively limit how far we predecode. To put it differently, since these tiles will remain unmodified (only processed for images), future calls to assign GPU memory will keep returning these same tiles, so we won't actually be progressing further in the image decodes.

enne@, what do you think?
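The loop behavior described in #6 can be sketched with a toy model (illustrative names only, not the real cc::TileManager code; scheduled_raster_task_limit_ is modeled here as a plain parameter):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: tiles inside the preraster distance count toward
// the raster task limit; tiles beyond it are "images only" and do not,
// so they never trigger the early break out of the loop.
struct Tile {
  bool within_preraster_distance;
};

// Returns how many tiles the loop visits before (if ever) hitting the limit.
size_t CountVisitedTiles(const Tile* tiles, size_t n, size_t raster_limit) {
  size_t raster_count = 0;
  size_t visited = 0;
  for (size_t i = 0; i < n; ++i) {
    ++visited;
    if (tiles[i].within_preraster_distance) {
      if (++raster_count >= raster_limit)
        break;  // Enough raster work scheduled; stop early.
    }
    // Image-only tiles fall through: they are still visited and processed.
  }
  return visited;
}
```

With 32 raster tiles and a limit of 32, the loop breaks after 32 tiles (the pre-patch behavior); with 30 raster tiles followed by 212 image-only tiles it walks all 242, which is where the extra per-tile work comes from.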

Comment 7 by vmp...@chromium.org, Sep 13 2016

Components: -Internals>GPU Internals>Compositing>Images Internals>Compositing>Rasterization

Comment 8 by enne@chromium.org, Sep 14 2016

This seems like something that would happen in practice, though, in terms of continually re-scanning the same tiles for images?

Do you think it's worth considering the end of the preraster distance as a breakpoint for scheduling?

Comment 9 by bugdroid1@chromium.org (Project Member), Sep 14 2016

The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/src.git/+/3e9481d0d0a80567a1f16c10f59058377048bd03

commit 3e9481d0d0a80567a1f16c10f59058377048bd03
Author: vmpstr <vmpstr@chromium.org>
Date: Wed Sep 14 20:40:59 2016

cc: Mark tiles as processed for solid color analysis.

This patch ensures we don't process tiles for solid color analysis
many times. This wasn't as much of an issue before we separated raster
and decode images, since we'd only process a tile once. Now, we can
visit tiles more than once.

This shows pretty significant gains on perftests.
Before:
*RESULT prepare_tiles: 2_100= 2236.82763671875 runs/s
*RESULT prepare_tiles: 2_500= 8168.21533203125 runs/s
*RESULT prepare_tiles: 2_1000= 10103.5654296875 runs/s
*RESULT prepare_tiles: 10_100= 8856.08984375 runs/s
*RESULT prepare_tiles: 10_500= 8581.669921875 runs/s
*RESULT prepare_tiles: 10_1000= 8932.677734375 runs/s
*RESULT prepare_tiles: 50_100= 3708.318359375 runs/s
*RESULT prepare_tiles: 50_500= 3758.5810546875 runs/s
*RESULT prepare_tiles: 50_1000= 3686.30078125 runs/s

After:
*RESULT prepare_tiles: 2_100= 9835.1611328125 runs/s
*RESULT prepare_tiles: 2_500= 20325.25390625 runs/s
*RESULT prepare_tiles: 2_1000= 21301.900390625 runs/s
*RESULT prepare_tiles: 10_100= 16529.826171875 runs/s
*RESULT prepare_tiles: 10_500= 16585.15625 runs/s
*RESULT prepare_tiles: 10_1000= 16813.125 runs/s
*RESULT prepare_tiles: 50_100= 4764.2998046875 runs/s
*RESULT prepare_tiles: 50_500= 4804.51220703125 runs/s
*RESULT prepare_tiles: 50_1000= 4789.98486328125 runs/s

However, I'll also be changing the tests to actually reset this state
when I change the perftests to test specific scenarios.

R=enne
BUG=645569
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_precise_blink_rel

Review-Url: https://codereview.chromium.org/2334003007
Cr-Commit-Position: refs/heads/master@{#418662}

[modify] https://crrev.com/3e9481d0d0a80567a1f16c10f59058377048bd03/cc/tiles/tile.cc
[modify] https://crrev.com/3e9481d0d0a80567a1f16c10f59058377048bd03/cc/tiles/tile.h
[modify] https://crrev.com/3e9481d0d0a80567a1f16c10f59058377048bd03/cc/tiles/tile_manager.cc
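The fix described in the commit message can be sketched as a per-tile flag (hypothetical names, not the real cc::Tile API; the actual change lives in cc/tiles/tile.h and tile_manager.cc):

```cpp
#include <cassert>

// Hypothetical sketch of the fix in #9: remember per tile that solid-color
// analysis already ran, so repeated PrepareTiles passes skip the work.
class Tile {
 public:
  bool NeedsSolidColorAnalysis() const { return !analysis_done_; }
  void MarkSolidColorAnalysisDone() { analysis_done_ = true; }

 private:
  bool analysis_done_ = false;
};

// Returns the cumulative number of analysis runs; the analysis itself is
// stood in for by the counter increment.
int AnalyzeIfNeeded(Tile& tile, int& analysis_runs) {
  if (tile.NeedsSolidColorAnalysis()) {
    ++analysis_runs;  // Expensive analysis happens at most once per tile.
    tile.MarkSolidColorAnalysisDone();
  }
  return analysis_runs;
}
```

Since tiles can now be visited more than once (raster pass plus image-decode passes), gating the analysis on this flag is what recovers most of the regression.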

I think the cc_perftest regression has recovered enough for us to close this bug. However, memory.top_10_mobile is also attached to this bug and I'm unsure whether it's the same issue:

rschoen@, is it possible to bisect one of these regressions?

Comment 13 by 42576172...@developer.gserviceaccount.com (Project Member), Sep 24 2016


===== BISECT JOB RESULTS =====
Status: failed


===== TESTED REVISIONS =====
Revision         Mean     Std Dev  N  Good?
chromium@417087  8116469  53233.4  5  good
chromium@417140  8141209  37027.2  5  good
chromium@417153  8134492  31522.7  5  good
chromium@417160  8134328  31202.5  5  good
chromium@417163  8124662  15697.8  5  good
chromium@417165  8960901  28786.1  5  bad
chromium@417166  8967782  42139.0  5  bad
chromium@417192  8964997  28786.1  5  bad
chromium@417296  8964997  28786.1  5  bad

Bisect job ran on: android_nexus5_perf_bisect
Bug ID: 645569

Test Command: src/tools/perf/run_benchmark -v --browser=android-chromium --output-format=chartjson --upload-results --also-run-disabled-tests memory.top_10_mobile
Test Metric: memory:chrome:all_processes:reported_by_os:system_memory:ashmem:proportional_resident_size_avg/foreground/https_mobile_twitter_com_justinbieber_skip_interstitial_true
Relative Change: 10.45%
Score: 0

Buildbot stdio: http://build.chromium.org/p/tryserver.chromium.perf/builders/android_nexus5_perf_bisect/builds/4144
Job details: https://chromeperf.appspot.com/buildbucket_job_status/9000698202069617008


Not what you expected? We'll investigate and get back to you!
  https://chromeperf.appspot.com/bad_bisect?try_job_id=6351792516890624


Comment 14 by 42576172...@developer.gserviceaccount.com (Project Member), Sep 24 2016


===== BISECT JOB RESULTS =====
Status: failed


===== TESTED REVISIONS =====
Revision         Mean     Std Dev  N  Good?
chromium@417087  2870067  1044.32  5  good
chromium@417140  2837708  289.56   5  good
chromium@417153  2837954  3297.41  5  good
chromium@417160  2838036  534.23   5  good
chromium@417163  2841559  849.398  5  good
chromium@417166  3549183  868.916  5  bad
chromium@417192  3552952  888.075  5  bad
chromium@417296  3556392  849.326  5  bad

Bisect job ran on: android_one_perf_bisect
Bug ID: 645569

Test Command: src/tools/perf/run_benchmark -v --browser=android-chromium --output-format=chartjson --upload-results --also-run-disabled-tests memory.top_10_mobile
Test Metric: memory:chrome:all_processes:reported_by_os:system_memory:ashmem:proportional_resident_size_avg/foreground/http_m_youtube_com_results_q_science
Relative Change: 23.91%
Score: 0

Buildbot stdio: http://build.chromium.org/p/tryserver.chromium.perf/builders/android_one_perf_bisect/builds/1637
Job details: https://chromeperf.appspot.com/buildbucket_job_status/9000698221278489904


Not what you expected? We'll investigate and get back to you!
  https://chromeperf.appspot.com/bad_bisect?try_job_id=6423439953362944

Owner: rsch...@chromium.org
Both of those bisects point at different patches. I think the patch in #9 fixes the cc_perftests problems. Assigning back to rschoen@ to triage the remaining issues.
Owner: ----
Perf sheriff fixit: removing owners from bugs owned by the sheriffs who filed them to clarify that the rotation triages the bugs.
Status: Untriaged (was: Assigned)
Cc: majidvp@chromium.org
I agree that the android memory issue seems unrelated (I created issue 672601 to track that).

Thanks for addressing the cc_perftest issue. The graph has indeed recovered, although not fully; there is still some regression even after the patch in #9. vmpstr@, enne@, are you happy with this result?
Status: Archived (was: Untriaged)
Archiving as bisection is difficult and there's been no movement.
