
Issue 672601

Starred by 2 users

Issue metadata

Status: WontFix
Owner:
Closed: Aug 2017
Cc:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug-Regression




3%-48.5% regression in memory.top_10_mobile at 417088:417296

Project Member Reported by majidvp@google.com, Dec 8 2016

Issue description

Splitting these regressions from the original bug (issue 645569) as they seem to be unrelated. The other cc_perftest regression has been addressed.
 

===== BISECT JOB RESULTS =====
Status: completed


===== SUSPECTED CL(s) =====
Subject : cc: Change preraster/predecode split to be half of max skewport extent.
Author  : vmpstr
Commit description:
  
This patch iterates on the raster/decode split to include skewport into
consideration. Specifically, in addition to limiting preraster to 1000
pixels, we also limit it by half of the skewport extent, where skewport
extent is the largest padding in any direction away from the viewport.

The tough scrolling cases seem to indicate that checkerboards
remain roughly the same with this patch.

R=enne
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_precise_blink_rel

Review-Url: https://codereview.chromium.org/2317913002
Cr-Commit-Position: refs/heads/master@{#417164}
Commit  : 1a4f3af8e0b96d1517939be11395132f9faa5e8c
Date    : Thu Sep 08 02:16:50 2016
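The split described in the suspected CL can be sketched as follows (a minimal illustration; `preraster_distance` and the constant name are hypothetical, not the actual cc/ identifiers):

```python
# Preraster is limited to 1000 pixels, and additionally to half of the
# skewport extent (the largest padding in any direction away from the
# viewport), per the commit description above.
PRERASTER_CAP_PX = 1000

def preraster_distance(skewport_extent_px):
    # Illustrative sketch only; the real cc/ code is more involved.
    return min(PRERASTER_CAP_PX, skewport_extent_px // 2)

print(preraster_distance(3000))  # -> 1000 (the fixed cap wins)
print(preraster_distance(1200))  # -> 600 (half the skewport extent wins)
```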


===== TESTED REVISIONS =====
Revision         Mean     Std Dev  N  Good?
chromium@417087  1740288  46589.2  8  good
chromium@417140  1721088  44676.2  8  good
chromium@417153  1736704  0.0      5  good
chromium@417160  1713766  25645.0  5  good
chromium@417163  1725184  38634.8  8  good
chromium@417164  2591130  25645.0  5  bad    <--
chromium@417165  2585395  31408.6  5  bad
chromium@417166  2585395  31408.6  5  bad
chromium@417192  2583757  31408.6  5  bad
chromium@417296  2603418  25645.0  5  bad

Bisect job ran on: android_one_perf_bisect
Bug ID: 672601

Test Command: src/tools/perf/run_benchmark -v --browser=android-chromium --output-format=chartjson --upload-results --pageset-repeat=1 --also-run-disabled-tests --story-filter=http.m.intl.taobao.com.group.purchase.html memory.top_10_mobile
Test Metric: memory:chrome:all_processes:reported_by_os:system_memory:ashmem:proportional_resident_size_avg/foreground/http_m_intl_taobao_com_group_purchase_html
Relative Change: 49.60%
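The reported 49.60% is consistent with the endpoint means in the TESTED REVISIONS table above (a quick sanity check, assuming the relative change is computed between the first good and last bad revisions):

```python
# Means taken from the TESTED REVISIONS table (units as reported).
good = 1740288  # chromium@417087
bad = 2603418   # chromium@417296
relative_change = 100 * (bad - good) / good
print(round(relative_change, 2))  # -> 49.6, matching "Relative Change: 49.60%"
```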

Buildbot stdio: http://build.chromium.org/p/tryserver.chromium.perf/builders/android_one_perf_bisect/builds/1866
Job details: https://chromeperf.appspot.com/buildbucket_job_status/8993811277786744928


Not what you expected? We'll investigate and get back to you!
  https://chromeperf.appspot.com/bad_bisect?try_job_id=5853695399428096

| O O | Visit http://www.chromium.org/developers/speed-infra/perf-bug-faq
|  X  | for more information addressing perf regression bugs. For feedback,
| / \ | file a bug with component Tests>AutoBisect.  Thank you!
Cc: perezju@chromium.org
Owner: vmp...@chromium.org
Status: Assigned (was: Untriaged)
This patch seems to have regressed some memory benchmarks pretty significantly*. Is there any way to avoid that?


*perezju@ are these actually significant?
Cc: primiano@chromium.org picksi@chromium.org
Yes, I think the regression is significant: around 800KiB on some of the pages.

If the regression comes from a trade-off, e.g. for better performance, then we should be fine. But I would like to see that we understand why the memory regression occurred in the first place, and that this is indeed an intended trade-off.

Comment 7 by picksi@chromium.org, Dec 13 2016

Cc: benhenry@chromium.org
A number of these regressions are reported on Android One. If this is flashed as a Svelte device, then memory-for-speed trade-offs are not desirable. For Svelte devices (i.e. 512MB, soon to be 1GB devices), memory reduction is always preferable to speed improvements. Is it possible to disable these performance improvements on Svelte devices?

Comment 8 by vmp...@chromium.org, Dec 13 2016

Cc: enne@chromium.org danakj@chromium.org vmi...@chromium.org
I feel like this is a bit misleading since the regression is limited to the ashmem discardable memory. The patch in question trades off the memory we use for gpu tiles (preraster) for memory used exclusively for images (discardable). 

If you take a look at these graphs:
https://chromeperf.appspot.com/report?sid=15a067a28aadb54cabfec1bdeee0a6dab29ee7614ff7f058fd379c1c1b5d5e70&start_rev=416487&end_rev=417860

Then you can see that we regress discardable by about 467,809.14, or 3.53%.
However, at the same time, gpu memory as reported by chrome improves by about 489,792.79, or 7.99% (it's a noisy graph, so it kind of depends where you measure). Effective avg size as reported by chrome also improves by about 2,386,522.80, or 3.66%.
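Netting the first two numbers above gives a rough back-of-the-envelope check (keeping in mind the graphs are noisy, so these averages are only approximate):

```python
# Figures quoted from the graphs linked above (same units in both graphs).
discardable_regression = 467_809.14  # ashmem/discardable grows
gpu_improvement = 489_792.79         # gpu memory as reported by chrome shrinks
net = gpu_improvement - discardable_regression
print(round(net, 2))  # -> 21983.65, a small net improvement on these averages
```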

Basically, it was the intent of this patch to lock images in memory (discardable) and as a result to allow us not to use gpu memory to rasterize the tiles (gpu memory), so I'm not surprised by the results. 

I don't really have a sense of which memory is more important to conserve. Is there some overall ranking? I'm a bit hesitant to simply revert the patch, since then other graphs will show a regression as well.

What do you think is the best course of action here?

Comment 9 by vmp...@chromium.org, Dec 13 2016

Cc: ericrk@chromium.org
ericrk@ also pointed out that the gpu memory isn't accounting for all improvements, so here's another graph for cc memory exclusively:

https://chromeperf.appspot.com/report?sid=53ddfb618f4180024bd994f61bfa3fc026b2a8bdcde7a6fb42d008441193b4a0&start_rev=416487&end_rev=417860

It shows an improvement of about 1,649,738.65 or 9.20%


If we are trading off gpu memory for ashmem that's definitely good.
ericrk/vmpstr@: let me see if I get this right: are you saying that we are moving memory from the gpu driver to ashmem? What happened on the N5, where we have memtrack?

Context for picksi/benhenry:
Unfortunately the data here is a bit misleading on svelte (the fault is on our side).
The situation there is the following:
 - We used to be able to track gpu memory on svelte when we were using N4 devices.
 - Then Android peeps deprecated N4 svelte so we switched to Android One @ Android L.
 - memtrack (the system API that gives us graphics memory) is busted on Android One @ L (it's an android bug).
 - We tried to upgrade all the Android Ones to M (where we verified memtrack is fixed) and it was a disaster (wanna laugh? because of sdcards... Issue 661474), so we had to roll back.
 - Now we are in the process of re-flashing to M, but not there yet

In summary, these folks here might have caused a good movement (gpu -> ashmem) but we are failing to show it because of the aforementioned reasons.
If we can get confidence from the N5 device that this is the case, I think we can make a case to the Android sys health council; that would be quite easy to explain.
Hmm, I don't see any movement in the reported_by_os:gpu_memory on n5. Is there another category that would capture this? I do see similar behavior there as in comment #8-#9:

https://chromeperf.appspot.com/report?sid=8b8f0b2516ae358a6e64972a3f27fcf1dff76cb869560d3b66904717d2ebe5fe&start_rev=416487&end_rev=417860

That is, the ashmem goes up, but both cc memory as well as the total reported by chrome memory goes down.

To answer your question, basically yes this patch reduces the amount of raster we do (saving gpu memory), but it trades that off by locking images in discardable memory (regressing ashmem). In general, I would expect that the gpu savings would be higher than the regression in ashmem, as is the case in these examples.
re #10: Is Ashmem counted as part of PSS? Do we/should we flush Ashmem before measuring memory use?
> Hmm, I don't see any movement in the reported_by_os:gpu_memory on n5. Is there another category that would capture this?
Hm, nope, you are looking at the right category.
Essentially any reported_by_chrome:* entry should ultimately be backed either by reported_by_os:system_memory:proportional_resident_size or reported_by_os:gpu_memory:proportional_resident_size

The fact that gpu memory doesn't go down (even if cc says we are freeing it on the chrome side) is a bit suspicious.
Maybe yet another case where the GPU driver keeps things around? Is there anything we can do on our side to give a kick to the gpu driver?
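The invariant described above (every `reported_by_chrome:*` entry should ultimately be backed by one of the two `reported_by_os:*:proportional_resident_size` totals) could be sanity-checked roughly like this; the dump and its values are made up for the sketch, not taken from this bug:

```python
# Illustrative memory dump; category names follow the convention above,
# but the byte values are invented.
dump = {
    "reported_by_chrome:cc": 40_000_000,
    "reported_by_chrome:malloc": 60_000_000,
    "reported_by_os:system_memory:proportional_resident_size": 90_000_000,
    "reported_by_os:gpu_memory:proportional_resident_size": 30_000_000,
}
chrome_total = sum(v for k, v in dump.items()
                   if k.startswith("reported_by_chrome:"))
os_total = sum(v for k, v in dump.items()
               if k.startswith("reported_by_os:"))
# Chrome-side accounting should stay within what the OS sees.
print(chrome_total <= os_total)  # -> True
```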

benhenry: would you be able to flash an Android One to sprout_svelte-userdebug @ MRA59G and test #417163 and #417164 locally?

From the chromium.perf builder log the two build artifacts should be:
https://storage.cloud.google.com/chrome-perf/Android Builder/full-build-linux_124dce0551b8f8e248ba80b0fe99ca963793fbb6.zip
https://storage.cloud.google.com/chrome-perf/Android Builder/full-build-linux_1a4f3af8e0b96d1517939be11395132f9faa5e8c.zip

The repro command should be:
src/tools/perf/run_benchmark -v --browser=android-chromium --output-format=html --output-format=json --pageset-repeat=1 --also-run-disabled-tests --story-filter=http.m.intl.taobao.com.group.purchase.html memory.top_10_mobile

That should give us both the html report and the trace files in a few minutes.

Full instructions with more context (mind that the test name is different there)
https://sites.google.com/a/google.com/chrome-speed-infra/home/memory-infra/memory-health-plan
> Is Ashmem counted as part of PSS? 
Yes, but it is memory that the kernel can drop (once unlocked) in case of pressure.

> Do we/should we flush Ashmem before measuring memory use?
IIRC, if unlocked, the discardable system should purge it under pressure (which we simulate). Not sure if memory-coordinator changed this, though.
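A toy model of the discardable semantics being discussed (lock/unlock plus purge-under-pressure; the class and method names are illustrative, not the actual Android or Chrome API):

```python
class Discardable:
    """Toy discardable-memory chunk: the kernel may drop it only while unlocked."""
    def __init__(self, data):
        self.data = data
        self.locked = False

    def lock(self):
        if self.data is None:
            return False  # already purged; caller must recreate the contents
        self.locked = True
        return True

    def unlock(self):
        self.locked = False

    def purge_under_pressure(self):
        # Simulated memory pressure: only unlocked chunks are dropped.
        if not self.locked:
            self.data = None

img = Discardable(b"decoded image")
img.lock()                   # pinned: survives pressure
img.purge_under_pressure()
img.unlock()
img.purge_under_pressure()   # now droppable
print(img.lock())  # -> False: purged, contents must be re-decoded
```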


How can I get my hands on an Android One?

Actually - I probably need a handful of different devices to test on, so I'll start looking into that.
Note that for this specific patch, we do *lock* the discardable memory used for these images. We unlock it in certain conditions, but as long as the user is scrolling, some amount of the images remain locked to aid in faster raster if the content gets close enough to the viewport.
Re #16: right, but if we can prove to the release council that gpu memory went down, this still wouldn't be any worse than the previous situation. The only bad thing would be if we lock this discardable memory *and* keep the gpu memory around for some mysterious reason.
P.S. IIRC the benchmarks also track locked discardable memory.
Looks like there hasn't been an update in 8 months. Should this be WontFix-ed?

Comment 19 by enne@chromium.org, Aug 16 2017

Status: WontFix (was: Assigned)
Yes, I think this should be wontfixed.
