1.8%-4.3% regression in memory.top_10_mobile at 511425:511580
Issue description: See the link to graphs below.
Oct 30 2017
Started bisect job https://chromeperf.appspot.com/buildbucket_job_status/8964269520757834512
Oct 31 2017
=== BISECT JOB RESULTS ===
NO Perf regression found

Bisect Details
  Configuration: android_webview_arm64_aosp_perf_bisect
  Benchmark    : memory.top_10_mobile
  Metric       : memory:webview:all_processes:reported_by_chrome:v8:effective_size_avg/background/after_http_yandex_ru_touchsearch_text_science

Revision           Result                N
chromium@511528    7994061 +- 907561     21   good
chromium@511580    8026449 +- 784422     21   bad

Please refer to the following doc on diagnosing memory regressions:
https://chromium.googlesource.com/chromium/src/+/master/docs/memory-infra/memory_benchmarks.md

To Run This Test
  src/tools/perf/run_benchmark -v --browser=android-webview --output-format=chartjson --upload-results --pageset-repeat=1 --also-run-disabled-tests --story-filter=http.yandex.ru.touchsearch.text.science memory.top_10_mobile

More information on addressing performance regressions:
http://g.co/ChromePerformanceRegressions

Debug information about this bisect:
https://chromeperf.appspot.com/buildbucket_job_status/8964269520757834512

For feedback, file a bug with component Speed>Bisection
Oct 31 2017
Started bisect job https://chromeperf.appspot.com/buildbucket_job_status/8964188722559564368
Nov 1 2017
=== Auto-CCing suspected CL author jkummerow@chromium.org ===
Hi jkummerow@chromium.org, the bisect results pointed to your CL, please take a look at the results.

=== BISECT JOB RESULTS ===
Perf regression found with culprit

Suspected Commit
  Author : Jakob Kummerow
  Commit : 3424c28b134839071cbc747ca69634a4973ccf8c
  Date   : Wed Oct 25 07:06:57 2017
  Subject: KeyedStoreIC must immediately make prototypes fast

Bisect Details
  Configuration: android_nexus5X_perf_bisect
  Benchmark    : system_health.memory_mobile
  Metric       : memory:chrome:all_processes:reported_by_chrome:v8:effective_size_avg/load_search/load_search_taobao
  Change       : 0.41% | 3789229.33333 -> 3804726.66667

Revision                       Result                 N
chromium@511424                3789229 +- 27995.6     9    good
chromium@511439                3782266 +- 19132.7     6    good
chromium@511446                3787788 +- 29163.6     9    good
chromium@511448                3785200 +- 29979.1     9    good
chromium@511448,v8@3424c28b13  3810092 +- 7941.86     6    bad   <--
chromium@511448,v8@d8fbe426fe  3804787 +- 24449.0     9    bad
chromium@511448,v8@dd0a37f202  3807859 +- 16219.3     6    bad
chromium@511448,v8@7b2f48204d  3805204 +- 40066.7     14   bad
chromium@511449                3809642 +- 10914.4     9    bad
chromium@511450                3807466 +- 19942.3     9    bad
chromium@511453                3807202 +- 11287.0     6    bad
chromium@511482                3809937 +- 9587.3      6    bad
chromium@511540                3804727 +- 17459.5     6    bad

Please refer to the following doc on diagnosing memory regressions:
https://chromium.googlesource.com/chromium/src/+/master/docs/memory-infra/memory_benchmarks.md

To Run This Test
  src/tools/perf/run_benchmark -v --browser=android-chromium --output-format=chartjson --upload-results --pageset-repeat=1 --also-run-disabled-tests --story-filter=load.search.taobao system_health.memory_mobile

More information on addressing performance regressions:
http://g.co/ChromePerformanceRegressions

Debug information about this bisect:
https://chromeperf.appspot.com/buildbucket_job_status/8964188722559564368

For feedback, file a bug with component Speed>Bisection
Nov 1 2017
It's hard to believe that this CL would cause memory regressions. Either way, it's a one-line fix, and it is not going to get reverted.
Nov 2 2017
Issue 780247 has been merged into this issue.
Nov 3 2017
Issue 781203 has been merged into this issue.
Nov 9 2017
It would be nice if we could get some proof that it isn't this CL. Can these benchmarks be run locally, or on tryjobs, with and without your patch? If bisect is seeing these results, the regression is not insignificant, and I'd really like not to ignore it.
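A local A/B comparison could look roughly like the sketch below. The run_benchmark invocation is taken from the bisect output above (with --output-format and --upload-results dropped for a local run); the checkout and build steps around it assume a typical Chromium Android checkout with an attached device, and the build target is an example rather than a verified recipe:

  # Baseline: V8 at the parent of the suspect commit (chrome_public_apk is an
  # example target; use whatever you normally build for android-chromium).
  (cd src/v8 && git checkout 3424c28b134839071cbc747ca69634a4973ccf8c^)
  autoninja -C src/out/Release chrome_public_apk
  src/tools/perf/run_benchmark -v --browser=android-chromium --pageset-repeat=1 \
      --also-run-disabled-tests --story-filter=load.search.taobao \
      system_health.memory_mobile

  # With the suspect commit applied.
  (cd src/v8 && git checkout 3424c28b134839071cbc747ca69634a4973ccf8c)
  autoninja -C src/out/Release chrome_public_apk
  src/tools/perf/run_benchmark -v --browser=android-chromium --pageset-repeat=1 \
      --also-run-disabled-tests --story-filter=load.search.taobao \
      system_health.memory_mobile

Comparing memory:chrome:all_processes:reported_by_chrome:v8:effective_size_avg between the two runs should show whether the ~0.4% delta reproduces locally.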
Nov 9 2017
I've attached three snapshots of the v8 MDP, taken from traces on the timeline graph at revisions 511157, 511540, and 511833. According to this bug, we expect to see a 30k regression in effective_size at 511448. Between screenshots #1 and #2, we see a 50k regression in effective_size, a 15k regression in allocated_objects_size, and a 500k regression in virtual_size. So we're sitting at the boundary of allocating another 500k block. jkummerow: Would this CL cause a 15k regression in allocated_objects_size? Is this a GC timing issue? An issue with creating another 500k block?
Jan 11 2018
jkummerow: Ping on #10?
Mar 27 2018
Looks like the biggest delta (+18KB) is in code space. These days we only have optimized code in code space; so if the CL in #5 does increase the value there, that means it was successful in allowing more optimization. As I wrote in #6, it was a bug fix (for a performance issue).
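(Tangentially: for anyone who wants to see how V8 accounts for code space outside of Chrome, Node's v8 module exposes per-space statistics. This reports the Node process's own heap, so it's only a rough illustration of the space breakdown, not a substitute for the memory-infra numbers above.)

  # Print V8 heap space statistics for a Node process, including code_space.
  node -e "console.log(require('v8').getHeapSpaceStatistics())"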