
Issue 762291

Starred by 1 user

Issue metadata

Status: WontFix
Owner:
Closed: Oct 2017
Cc:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug-Regression




2.8% regression in system_health.memory_mobile at 499231:499384

Project Member Reported by briander...@chromium.org, Sep 5 2017

Issue description

See the link to graphs below.
 
All graphs for this bug:
  https://chromeperf.appspot.com/group_report?bug_id=762291

(For debugging:) Original alerts at time of bug-filing:
  https://chromeperf.appspot.com/group_report?sid=286a0df65a838715e4f6ac5026aebc52bdb3e9dfe5abb50333bd1f82c621cc14


Bot(s) for this bug's original alert(s):

android-webview-nexus5X
Cc: bmeu...@chromium.org
Owner: bmeu...@chromium.org
Status: Assigned (was: Untriaged)

=== Auto-CCing suspected CL author bmeurer@chromium.org ===

Hi bmeurer@chromium.org, the bisect results pointed to your CL; please take a look at the results.


=== BISECT JOB RESULTS ===
Perf regression found with culprit

Suspected Commit
  Author : Benedikt Meurer
  Commit : f1ec44e2f525434b7af60450aa7678d12b519aee
  Date   : Fri Sep 01 11:27:37 2017
  Subject: [turbofan] Optimize fast enum cache driven for..in.

Bisect Details
  Configuration: android_webview_arm64_aosp_perf_bisect
  Benchmark    : system_health.memory_mobile
  Metric       : memory:webview:all_processes:reported_by_chrome:v8:effective_size_avg/load_news/load_news_nytimes
  Change       : 2.89% | 18781838.6667 -> 19325061.3333

Revision                           Result                   N
chromium@499230                    18781839 +- 183578       6      good
chromium@499307                    18743255 +- 118357       6      good
chromium@499346                    18765879 +- 173177       6      good
chromium@499365                    18789447 +- 147755       6      good
chromium@499368                    18759008 +- 167432       6      good
chromium@499368,v8@d667bf4afc      18737113 +- 72364.4      6      good
chromium@499368,v8@c77bb611e3      18783596 +- 286449       6      good
chromium@499368,v8@6d249930b3      18818571 +- 144447       6      good
chromium@499368,v8@f1ec44e2f5      19338759 +- 182985       6      bad       <--
chromium@499369                    19315979 +- 162018       6      bad
chromium@499370                    19325533 +- 212340       6      bad
chromium@499375                    19321387 +- 291744       6      bad
chromium@499384                    19325061 +- 231433       6      bad

Please refer to the following doc on diagnosing memory regressions:
  https://chromium.googlesource.com/chromium/src/+/master/docs/memory-infra/memory_benchmarks.md

To Run This Test
  src/tools/perf/run_benchmark -v --browser=android-webview --output-format=chartjson --upload-results --pageset-repeat=1 --also-run-disabled-tests --story-filter=load.news.nytimes system_health.memory_mobile

More information on addressing performance regressions:
  http://g.co/ChromePerformanceRegressions

Debug information about this bisect:
  https://chromeperf.appspot.com/buildbucket_job_status/8969252475540379472


For feedback, file a bug with component Speed>Bisection
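
As a quick sanity check, the reported 2.89% follows directly from the two run averages in the bisect summary. A minimal sketch, with the values copied from the report above (the code itself is illustrative only):

  // Sanity check of the reported change, using the averages from the summary.
  #include <cstdio>

  int main() {
    const double before = 18781838.6667;  // chromium@499230 (good)
    const double after  = 19325061.3333;  // chromium@499384 (bad)
    std::printf("delta = %.1f bytes (%.2f%%)\n",
                after - before, 100.0 * (after - before) / before);
    // Prints roughly: delta = 543222.7 bytes (2.89%)
  }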
Cc: petermarshall@chromium.org jgruber@chromium.org
Memory sheriffs jgruber@, petermarshall@: Can you please explain to me what 'effective_size' includes here? Is this memory malloc'ed by V8 or just overall memory (including binary size)?
You can drill down into the spaces: https://chromeperf.appspot.com/report?sid=f627bffefe8419e8721b31d5fb2df5d9b41af93b6981961d91184e34fdb673cc&start_rev=489456&end_rev=499785

In this case, it looks like one additional large object space page.
Thanks Jakob. That doesn't really make sense; I've been staring at the diff for a while now, and the only thing I can think of that would be related to LOS is a change in GC timing somewhere.
I haven't looked in detail, but we've run into LOS issues before related to immovable code objects. As far as I remember: when we allocate an immovable code object but the first code-space page is already full, we allocate it in LOS instead.

Depending on how we measure, that could show up as 512K per such object. 

It can even happen if the newly added code objects (by the current CL) aren't immovable by themselves, since they push other immovables out of the first page.

I can't seem to find the bug from back then at the moment. Anyway, just a guess.
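
A minimal sketch of the fallback described above, assuming a simplified allocator with made-up names and a 512K page size (this is not the actual V8 code):

  // Illustrative only: if an immovable code object does not fit on the first
  // (immovable) code-space page, it falls back to large object space, which
  // reserves a whole page per object. Names and sizes are assumptions.
  #include <cstddef>
  #include <cstdio>

  constexpr std::size_t kPageSize = 512 * 1024;  // assume 512K pages

  struct Heap {
    std::size_t first_code_page_used = 0;
    std::size_t large_object_space_bytes = 0;

    void AllocateImmovableCode(std::size_t size) {
      if (first_code_page_used + size <= kPageSize) {
        first_code_page_used += size;  // fits on the first (immovable) page
      } else {
        // Overflow: one whole LOS page per object, i.e. ~512K each, even if
        // the object itself is much smaller.
        large_object_space_bytes += kPageSize;
      }
    }
  };

  int main() {
    Heap heap;
    // 20 immovable code objects of 40K each: 12 fit on the first page, the
    // remaining 8 each cost a full 512K LOS page.
    for (int i = 0; i < 20; ++i) heap.AllocateImmovableCode(40 * 1024);
    std::printf("LOS bytes: %zu\n", heap.large_object_space_bytes);
  }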
Oh and that should be fixed for snapshot builds.
Interesting. What do you mean by "should be fixed for snapshot builds"? Isn't Telemetry using snapshot builds?
See https://cs.chromium.org/chromium/src/v8/src/heap/heap.cc?l=3604&rcl=8cd4009c5b7072ad224f19a9e668ec0ed7430599.

When creating the snapshot, we simply set the chunk as immovable (pages will be trimmed and marked as immovable on deserialization anyway).

And yes, this might be irrelevant to this particular case. I didn't look at it in detail. The additional LOS page just looks suspicious. 
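
For illustration, a minimal sketch of the snapshot-time behaviour described above, with made-up names (this is not the code behind the heap.cc link):

  // Illustrative only: sketch of the snapshot-time shortcut described above.
  #include <cstdio>

  struct Chunk { bool immovable = false; };

  struct Heap {
    bool serialization_enabled = false;  // true while building the snapshot

    Chunk AllocateCodeChunk() {
      Chunk chunk;
      if (serialization_enabled) {
        // When creating the snapshot, simply mark the chunk immovable; pages
        // are trimmed and re-marked immovable on deserialization anyway, so
        // the first-page-full -> LOS fallback is not needed here.
        chunk.immovable = true;
      }
      return chunk;
    }
  };

  int main() {
    Heap heap;
    heap.serialization_enabled = true;
    std::printf("immovable: %d\n", heap.AllocateCodeChunk().immovable);
  }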
At least it doesn't look related to my CL in particular. Maybe it's just that my CL pushes it over the limit or something like that.
Cc: u...@chromium.org
+ulan: any ideas what we should do with this bug?
Status: WontFix (was: Assigned)
I think it's fine to close this. The graphs have since recovered (Point ID 507401) and bmeurer@ stated above that his CL is unrelated to LOS. Feel free to reopen if something comes up.
