Issue 800014

Issue metadata

Status: WontFix
Owner: ----
Closed: Jan 2018
Cc:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug-Regression




24%-179.8% regression in loading.desktop at 527042:527187

Project Member Reported by benjhayden@chromium.org, Jan 8 2018

Issue description

See the link to graphs below.
 
All graphs for this bug:
  https://chromeperf.appspot.com/group_report?bug_id=800014

(For debugging:) Original alerts at time of bug-filing:
  https://chromeperf.appspot.com/group_report?sid=7dc36a947da5aa5af4c6673562253a11880ec721efad0face4ef36874af25ccc


Bot(s) for this bug's original alert(s):

chromium-rel-win7-gpu-ati
chromium-rel-win7-x64-dual
📍 Couldn't reproduce a difference.
https://pinpoint-dot-chromeperf.appspot.com/job/12e091db040000
📍 Couldn't reproduce a difference.
https://pinpoint-dot-chromeperf.appspot.com/job/14fbc467040000
Cc: tdres...@chromium.org kouhei@chromium.org
tdresser, kouhei: Any interest in digging into this regression?
The dashboard charts look pretty clear to me, but it seems that cpuTimeToFirstMeaningfulPaint and timeToFirstContentfulPaint might be too noisy for Pinpoint to bisect. If you open the pinpoint link in #5 and click the dots in the graph, you'll see the histograms look bimodal.

Why don't we see the histograms for the pinpoint run in #3?

Just making sure I understand what's going on here: Pinpoint is reproducing the regression, but believes it isn't statistically significant due to the variance in the metric?
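A minimal sketch of how that can happen (not Pinpoint's actual code, and the numbers are made up): when each sample of the metric is bimodal, the within-sample spread can dwarf a real shift in the mean, so a crude significance check fails to flag it.

```python
# Hypothetical illustration: bimodal samples with modes near 250ms and 330ms.
# A small real shift in the mean is swamped by the spread between the modes.
import statistics

before = [250, 252, 251, 330, 249, 331, 250, 329]
after  = [254, 335, 253, 333, 255, 252, 334, 254]

def looks_significant(a, b, threshold=1.0):
    """Crude effect-size check: mean shift vs. average standard deviation."""
    shift = abs(statistics.mean(a) - statistics.mean(b))
    spread = (statistics.stdev(a) + statistics.stdev(b)) / 2
    return shift / spread > threshold

print(looks_significant(before, after))  # → False: spread swamps the shift
```

Pinpoint's real comparison is a rank-based statistical test, but the failure mode is the same: bimodality inflates the variance estimate.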
Cc: dtu@chromium.org
I see bimodal histograms when I click the points in the charts in both #2-3 and #4-5.

I'm not sure that pinpoint is reproducing the regression. The dashboard timeseries for warm/Kakaku jumped from 250ms to 286ms, but pinpoint reports averages from 251 to 254. The dashboard timeseries for cold/Taobao jumped from 20 to 51, but pinpoint reports a decline from 22.5 to 20ms.
The dashboard charts look clear enough to me that I don't expect broadening the revision range to help pinpoint find the regression.

+dtu: Do you suspect another test environment change here?

Cc: dproy@chromium.org
In the ~20 case, we're reporting FMP for a single url, but in the ~52 case, we're reporting for 2 urls.

We then proceed to take the average.

The page loads look completely different in the two cases. Is the page load just highly non-deterministic?
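A hypothetical sketch of the averaging problem described above: if one run records FMP for a single URL and another run records it for two URLs (e.g. because of an extra navigation), the per-run average jumps even though nothing regressed. The sample values below are illustrative, not taken from the traces.

```python
# Illustrative only: one run has one FMP entry, the other picks up a second,
# slower entry from an extra navigation. Averaging hides the count change.
import statistics

run_a_fmps = [20]        # one navigation recorded -> one FMP entry (ms)
run_b_fmps = [20, 84]    # an extra navigation adds a second, slower entry

avg_a = statistics.mean(run_a_fmps)   # 20.0
avg_b = statistics.mean(run_b_fmps)   # 52.0 -- looks like a large regression

print(avg_a, avg_b)
```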
From the traces, it looks like we actually navigated one more time in one trace than in the other.

https://chromeperf.appspot.com/report?sid=873977ae27369ae4efd5785721aef15a16987b24e8b539b0aa208c4fb9a3d92b


Is there any way the tooling can yell if the number of histogram entries changes?
Enable the count statistic in the metric and alert on *_count?
That sounds good to me, assuming we can communicate what's going on clearly enough to sheriffs.

We would probably want to apply this to all loading metrics.

It isn't clear to me if it's worth the time investment - have others seen examples of this happening before?

Otherwise we should maybe just keep this in mind as something to do if this becomes more common.
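A rough sketch of the "*_count" idea from this thread: alongside each averaged metric, emit the number of histogram entries the average was taken over, and flag runs where that count changes. The helper names below (`summarize`, `count_changed`) are hypothetical, not from the dashboard code.

```python
# Hypothetical sketch: pair every averaged statistic with a *_count statistic
# so a change in the number of samples is visible and alertable.

def summarize(metric_name, samples):
    """Return both the average and a companion *_count statistic."""
    return {
        metric_name + "_avg": sum(samples) / len(samples),
        metric_name + "_count": len(samples),
    }

def count_changed(baseline, current, metric_name):
    """True when the number of samples behind the average differs."""
    key = metric_name + "_count"
    return baseline[key] != current[key]

baseline = summarize("timeToFirstMeaningfulPaint", [20])
current  = summarize("timeToFirstMeaningfulPaint", [20, 84])
print(count_changed(baseline, current, "timeToFirstMeaningfulPaint"))  # → True
```

Alerting on `*_count` would separate "the metric got slower" from "the metric measured something different", which is the distinction this bug came down to.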
Status: WontFix (was: Untriaged)
I'm wontfix-ing this bug since it's not a real performance regression; it looks similar to https://github.com/catapult-project/catapult/issues/4197
