
Pixel 2015 glacially slow with pinned tab throb animation in M50

Project Member Reported by creis@chromium.org, May 11 2016

Issue description

Version: 50.0.2661.91
OS: Chrome_ChromeOS

What steps will reproduce the problem?
(1) Restart Chrome with dozens of tabs across several windows.

In M49, my Pixel 2 handled this session just fine, and it was quite responsive and usable a few minutes after restart.

Once I restarted into M50, the device stays at >100% CPU in the browser process apparently indefinitely, making it entirely unusable.  (It's difficult to type this because of the lag between typing and letters appearing.)

I'm guessing that M49 didn't attempt to restore all of my tabs on startup, and waited until I switched to them.  Now they all appear to get restored, judging by the task manager.

Is this a known regression in session restore?
 
I think so. I'm not actually familiar with these flags, but I found "non-material" in ui_base_switches.cc. That sounds like what I'm hoping for.

Comment 18 by creis@chromium.org, May 12 2016

Just tried "Non-material" out.  Doesn't seem to have reduced CPU usage yet, a few minutes after restore.  I'll let it idle over lunch, and if the CPU usage is still high, I'll kill all the renderer processes and see if it continues to be high (as in comment 1).
Another trace would be nice too if you can.
Cc: josa...@chromium.org
Can someone confirm which devices other than the Pixel 2 this is affecting? Is this happening all the time? After reboots?

Comment 22 by creis@chromium.org, May 12 2016

Same problem after waiting (after restarting into non-material mode), and after killing all renderer processes.  Browser process CPU is still >100%.  For what it's worth, there are probably more than 100 tabs.

I've attached another trace.

trace_m50-performance-trace-nomaterial.json.gz
5.8 MB Download
Owner: loyso@chromium.org
That trace has "only" about 1800 layers heh. So I guess it's something not-material-design related, but 1800 layers is really a lot.

I think I misread times earlier, but we're spending like 82ms wall clock time to push those 1800 layers, and something is animating them causing them to require a push and make us draw a new frame from the ui compositor constantly.

It's possible this is related to the animation system; it's really unclear. I guess there are two things to look into:

1) Why are there so many layers
2) Why are they changing/pushing

loyso@ can you at least have a look at 2 if not 1 also?
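A rough back-of-envelope on that 82 ms figure, using only the numbers above (the 60 Hz target is an assumption about the ui compositor's intended frame rate, not something stated in the trace):

```python
# If pushing ~1800 layers costs ~82 ms of wall time, a frame that pays
# that push can never fit in a 60 Hz frame budget.
push_ms = 82.0                     # measured in the trace above
frame_budget_ms = 1000.0 / 60.0    # assumed 60 Hz target, ~16.7 ms

# Frame-rate ceiling if every frame repaid the full push cost:
max_fps = 1000.0 / push_ms

print(round(frame_budget_ms, 1))   # 16.7
print(round(max_fps, 1))           # 12.2
```

Since something is animating those layers constantly, the compositor keeps trying to produce frames and the push cost is paid over and over, which matches the pegged-CPU symptom.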
Status: Assigned (was: Untriaged)
Labels: -Pri-1 -ReleaseBlock-Stable Pri-0
After discussion with abodenha@ and josafat@ bumping up the priority and making it non-RBS. This bug is already out on the current stable. Let us get to the bottom of this as soon as possible.
If it helps, I believe the flag is "--top-chrome-md=non-material", not "--non-material".

Comment 28 by piman@chromium.org, May 16 2016

1800 layers seems unreasonable. Are we leaking them somewhere? This would explain a lot.

Comment 29 by loyso@chromium.org, May 17 2016

Just changing ui layer properties may cause an implicit ui animation (for example, https://code.google.com/p/chromium/codesearch#chromium/src/ui/compositor/layer.cc&sq=package:chromium&q=Layer::SetOpacity&l=272). So it's very easy to abuse with 1800 layers.
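For readers unfamiliar with the implicit-animation pattern being described, here is a toy Python model (not the Chromium API; `Animator` and its `animate_to` method are simplified stand-ins) showing why a plain property setter can silently schedule animation work on every layer it touches:

```python
# Toy model of an implicit ui animation: setting a property on a layer
# that has an animator attached schedules an animation rather than just
# storing the value. A sweep over ~1800 layers quietly enqueues ~1800
# animations even though no caller asked to animate anything.
class Animator:
    def __init__(self):
        self.scheduled = 0

    def animate_to(self, layer, value):
        self.scheduled += 1          # stands in for starting a cc animation
        layer._opacity = value

class Layer:
    def __init__(self, animator=None):
        self._opacity = 1.0
        self.animator = animator

    def set_opacity(self, value):
        if self.animator:            # implicit: the caller never opted in
            self.animator.animate_to(self, value)
        else:
            self._opacity = value

animator = Animator()
layers = [Layer(animator) for _ in range(1800)]
for layer in layers:
    layer.set_opacity(0.5)           # looks like a cheap setter...

print(animator.scheduled)            # 1800 animations scheduled
```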

It's very hard to leak ui::Layers with cc/ui animations - ui::Layer isn't even a refcounted object.

And I'm totally unfamiliar with the wm/ash level, which uses ui/compositor layers. Can we cc someone from wm/ash on this bug?
Cc: sadrul@chromium.org
Owner: sky@chromium.org

Comment 31 by plakal@google.com, May 25 2016

Are there any updates?

My Pixel 2 just got M50 on the stable channel and it's stuttering with the Browser process pegged at 100+%, even with the flag #top-chrome-md set to "Non-material".  There's another material flag I found, and I wonder whether changing it from Default to Fast or Slow will help?
"Material Design Ink Drop Animation Speed Chrome OS
Sets the speed of the experimental visual feedback animations for material design. #material-design-ink-drop-animation-speed"

I'm also wondering if switching to Beta or Dev channels will fix this, or do they also have this problem?
The #material-design-ink-drop-animation-speed flag controls the animation duration of the Ink Drop, i.e. Material Design's visual feedback on buttons for hover and pressed states.  This flag merely increases the animation duration by a factor of 3 and should not be bringing the CPU to a grinding halt, especially if no Ink Drop is active.

Also for the record the Default value is the same as the Fast value in M50.

Manoj, can you break down the steps of how you arrived at the bad state?
e.g.
- Did you have many windows open?
- Did you have many tabs open within those windows?
- Did you just (re)boot or login?

Any definitive/concrete info you can provide would be helpful, thanks :)

The traces provided had thousands of ui layers in the browser, which should not happen, and sounds like a leak somehow.

Comment 34 by plakal@google.com, May 26 2016

So after my Pixel 2 updated to M50 on stable, it received another update on the stable channel late last night, and after I restarted, the problem seems to have gone away and the browser process is no longer pegging the CPU (even hours after restart).  My current version info:
Version 50.0.2661.104 (64-bit)
Platform 7978.76.0 (Official Build) stable-channel samus
Firmware Google_Samus.6300.174.0

Re the problem I was seeing earlier
- I do have a lot of windows and tabs, but I also use a Chrome extension called The Great Suspender, which automatically suspends inactive tabs (replaces them with an image of the content; the page reloads when I click).  So almost all of my tabs are suspended and only the ones I'm using (maybe 10 at any given time) are actually rendered and use machine resources.  I've got around 15-20 windows with more than 100 tabs across these windows, but my memory usage (according to chrome://memory) stays comfortably around 4-5 GB, and the system is usually very usable and responsive (given I have 16 GB of RAM).
- When I saw the problem, the CPU-pegging happened right after restart and login, and remained until I applied the second update and restarted again.

Based on your version number, could it be: https://bugs.chromium.org/p/chromium/issues/detail?id=606207

We saw this causing problems in M51 for people w/ certain extensions. The fix has landed, however. Just thought I'd toss it out in case it's helpful.

Comment 36 by sheriffbot@chromium.org, May 29 2016

Pri-0 bugs are critical regressions or serious emergencies, and this bug has not been updated in three days. Could you please provide an update, or adjust the priority to a more appropriate level if applicable?

If a fix is in active development, please set the status to Started.

Thanks for your time! To disable nags, add the Disable-Nags label.

For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot

Comment 37 by plakal@google.com, Jun 5 2016

I spoke too soon, the problem has returned on my Pixel 2.  My version info:
Version 50.0.2661.104 (64-bit)
Platform 7978.76.0 (Official Build) stable-channel samus
Firmware Google_Samus.6300.174.0

The Browser process is pegging the CPU (consistently around 100 in the CPU column in the Task Manager) immediately after a restart, and this persists for days.

I have lots of tabs and windows (~20 windows and maybe 100-120 tabs) but only 7 tabs have actual content (show up as "Tab: <title>" in Task Manager), the others are all suspended via The Great Suspender.

Comment 38 by plakal@google.com, Jun 5 2016

Do you need me to capture a trace? I've seen chrome://tracing, what settings should I be using in the Record dialog to capture a useful trace for this particular bug?
Maybe a different chrome ui owner could take a look? Sky appears away.
Owner: pkasting@chromium.org

Comment 41 by sky@chromium.org, Jun 6 2016

Sorry, been busy.
Dana, you mention traces and lots of layers. Do you have repro steps?
If not, creis: does the behavior you are seeing happen regardless of which sites you restore? To trigger it, do I have to restore a specific number of tabs/windows? Any info to help isolate this would be helpful.

Comment 42 by sky@chromium.org, Jun 6 2016

On tip of tree I tried creating 2 windows with ~6 tabs each on a chromeos build. On restore I ended up with ~200 layers, nowhere near the 2000 reported.
Comment 42: That may be because you need an order of magnitude more tabs.  See comments 22 and 37 about having >100 tabs.  Renderer processes for them aren't necessary in either case (whether you kill them or skip them with an extension like the Great Suspender, per comment 37).

Comment 44 by plakal@google.com, Jun 7 2016

I tried capturing a trace using chrome://tracing and manual selection of all categories.

This is with the same configuration as in my comment#37. 7 active tabs that show up as "Tab: <title>" in the Task Manager, many more tabs and windows all suspended (show up as "Extension: <title>", ~300MB memory across all 100+ such suspended tabs).
trace_cros-m50stable-pixel2-7activetabs-cpupegged.json.gz
6.3 MB Download

Comment 45 by sky@chromium.org, Jun 7 2016

Owner: osh...@chromium.org
I tried creating ~90 tabs and ended up with ~550 layers. This is on my desktop linux box with a chromeos build. There may be a weird interaction going on with the OOM killer, but it's hard to trigger on the desktop, as the OOM killer isn't used there.

Oshima, any chance you could try a dev build on an actual device to help isolate what might be going on?

Comment 46 by plakal@google.com, Jun 7 2016

I added a trace from my machine yesterday, is it useful to tell you more about what I'm seeing?

If you're going to try and reproduce on a Pixel 2, perhaps it's best to try with the current stable channel version. You mentioned 'dev build' and earlier 'tip of the tree', which leads me to believe that you aren't trying with the stable channel?

Comment 47 by plakal@google.com, Jun 7 2016

And for what it's worth, re OOM issues: I am not running out of memory. My Pixel 2 has 16 GB of RAM of which 11-12 GB seems free (when I checked last night by running top in the cros shell, and looking at chrome://memory).
From Chrome's standpoint, an OOM kill is indistinguishable from a SIGKILL.  If you can identify the renderer's pid, just do a kill -KILL <pid>.

Comment 49 by plakal@google.com, Jun 7 2016

Another data point: I use an Asus ChromeBox at work on which I run the same version of ChromeOS as my Pixel 2 (as reported in #37), with identical flags (I set material design to "non-material"), and a similar number of tabs (12 active tabs, 107 suspended tabs).

On my ChromeBox, the Browser process is not constantly pegging the CPU at 100+; it varies in the 30-70 range, and I don't see the occasional stuttering that I see on the Pixel 2.  I seem to have ~4 GB free out of 8 GB RAM.


See reproduction steps below.

In my testing tonight I have clearly observed:

1. The shimmer animation for a tab notification (e.g. when GMail or Hangouts has a new message) causes Browser's CPU use on my machine to go up by ~25% (of a single CPU core) even with few tabs.

2. The more tabs and windows I have open, the more CPU is used by the tab shimmer animation, up to about 100% of one CPU core, even if only a single tab has the shimmer animation.

3. With lots of tabs open but no shimmer activation, the CPU usage is at a low baseline (~5% of one core used by Browser).

I believe what I'm recounting here has the same root cause as what others have reported here, because the structure of my tracing results matches the structure of the tracing dumps in comment 1 and comment 22 (if you focus on the way in which the Browser process is using lots of CPU).


For those trying to reproduce this: if you have lots of tabs but no shimmer, you might not see any problem (unless there's another trigger besides the tab notification shimmer animation). I suspect this is why repro has been so hard, and why e.g. plakal saw the problem go away and then come back. Also, if you have shimmer but few tabs, or a moderate number of tabs but a powerful machine (e.g. Pixel), you may not notice the performance issue even though it's lurking and wasting power (battery).

I'm running latest Stable channel (50.0.2661.103, Platform 7978.74.0). This is on a Toshiba Chromebook 2, with a 2-core Celeron and 4GB RAM. Performance issues are much more noticeable on this platform than I imagine they are on a Pixel. ;) And I almost always push this machine hard with lots of tabs.


To simplify, the testing described next was done after I closed most of my many tabs, shut down all extensions (except Secure Shell, which for some reason was needed in order for crosh to work), then rebooted my Chromebook.

I observed CPU usage by watching 'top' running in crosh, to avoid the high overhead of Task Manager. I used about:tracing with only the 'cc' category (which alone shows the problem clearly).


Reproduction steps and observations:

0. Baseline: I have open 6 tabs (GMail, Hangouts, Calendar, Contacts, Drive, and Trello), crosh, and about:tracing, and nearly no extensions. At first with no shimmer animation, I get the Browser process using ~5% of one of my two CPU cores.

1. Baseline with shimmer: ~29% of one CPU core by Browser process. 49 calls to PictureLayer::PushPropertiesTo each cycle -- callees of this method are the major CPU users seen in tracing. Altogether, LayerTreeHost::PushProperties and callees use 2.88ms wallclock and 2.86ms CPU per cycle (averaged over about 90 cycles).

2. After adding 5 new windows, each with 10 blank tabs, and waiting for the startup CPU usage to settle down, I see Browser CPU at about 61%, 110 layers pushed each cycle, and each call to LTH::PP and callees taking 7.40ms wallclock and 7.13ms CPU on average.

It appears that each new empty tab causes one new layer push, and each new window causes two. This guess is supported by the next step as well.

3. After adding another 5 windows each with 10 blank tabs, and waiting to settle, I see Browser CPU at about 93%, 170 layers pushed each cycle, and each call to LTH::PP and callees taking 13.68ms wallclock and 13.59ms CPU on average.

Notice that going from steps 1 to 2 to 3, as I added load in step 2 and then doubled it in step 3, I saw nearly linear increases in CPU, layers pushed, and wallclock/CPU used in LTH::PP and callees.

Notice that I'm pushing to near 100% of one core in step 3, and each cycle of LTH::PP is approaching the 16.7 ms (60 Hz) frame budget.
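The near-linear scaling claimed above can be sanity-checked against the reported numbers (a sketch only; the "1 push per tab, 2 per window" model is the guess from step 2, not a confirmed fact):

```python
# Sanity check of the step 1-3 numbers against the guessed model:
# each added tab contributes ~1 layer push per cycle, each window ~2.
baseline_pushes = 49                 # step 1: 6 content tabs + crosh + tracing

def predicted(extra_windows, extra_tabs):
    return baseline_pushes + 2 * extra_windows + 1 * extra_tabs

step2 = predicted(5, 50)    # +5 windows x 10 blank tabs
step3 = predicted(10, 100)  # another +5 windows x 10 blank tabs

print(step2, step3)         # 109 169 -- vs. observed 110 and 170

# Per-push wallclock cost (microseconds). It creeps up with scale, so
# total time grows slightly faster than the push count alone.
for pushes, ms in [(49, 2.88), (110, 7.40), (170, 13.68)]:
    print(round(ms / pushes * 1000))  # 59, 67, 80
```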

4. Leaving all ~106 tabs and windows as-is, but killing the shimmer (open the shimmering tab, THEN switch to another tab; only after switching away does the CPU use drop), Browser CPU goes back down to near baseline, and the tracing pattern observed (heavy calls to LTH::PP and descendants) goes away.


Looking at the tracing dumps attached in comment 1 and comment 22, I see the same structure as I see in my tracing runs, which reinforces my guess that I'm seeing the same root problem that they're seeing.

In all our tracing dumps, the major CPU and wallclock user is PictureLayer::PushPropertiesTo; I have no idea if there is a problem in there, or whether the problem really lies in its getting called too much. Or whatever.

Notice that in step 3, I'm approaching 100% use of one CPU core even though I have only ~1/10 of the 1200, 1800, or 2000 layers that others have seen pushed each cycle in this issue's comments and attachments. So I don't think the absolute number of layers being pushed is really the problem, per se. I think the problem is that regardless of how many layers are being pushed each cycle, way too much CPU is getting used every cycle by LTH::PP and its callees.


I hope this helps the devs reproduce and track down why 1) the tab shimmer animation triggers so much CPU, and 2) CPU usage triggered by shimmer scales with # of empty tabs opened.

I am by no means a Chromium dev, so if I've taken my strong-seeming test results and made some incorrect guesses about what's going on, please excuse me.

Comment 51 by plakal@google.com, Jun 8 2016

I think I'm on board with David's theory about tab shimmer being the culprit.

I tried leaving Gmail open on the Inbox view (on my Pixel 2 with same config as in my earlier messages) and switched to another tab and waited for emails to show up. When that happened and the Gmail tab started shimmering, CPU jumped from ~25-35 to 90-100 and stayed there.  When I switched to Gmail and then switched away, the shimmer went away and so did the CPU usage. While experimenting with this, I also discovered a couple of other tabs that had been shimmering for a while and which might have been pegging my CPU all along.

Is there some setting or flag to disable tab shimmer?
Owner: danakj@chromium.org
It sounds like this is similar to the issue with tab spinner that danakj@ worked on before. Dana, can you look into this?

Comment 53 by sky@chromium.org, Jun 8 2016

Summary: Pixel 2015 glacially slow with pinned tab throb animation in M50 (was: Pixel 2015 glacially slow after restarting large session into M50)
By shimmer you mean the pinned tab throb animation. There is a bug about changing to a badge rather than animating: 473898 . I'm going to propose we disable the animation on chromeos until we get new assets.

Peter has been changing the tab painting code. I'm not sure if he touched the throb animation and that somehow made things worse. I would imagine if the tab painting code is doing something bad, the traces would show that.

I'm also updating the summary to reflect the cause.

Comment 54 by sky@chromium.org, Jun 8 2016

Blockedon: 473898
I'm marking this blocked on 473898, but it's also possible that a bug in cc and/or tab painting code is the cause of the regression as well.

Comment 55 by plakal@google.com, Jun 8 2016

Is the animation specific to pinned tabs? So unpinning could be an easy workaround?
I haven't changed the throbber animation.
Looks like the throb animation is indeed specific to pinned tabs. Just ran the experiment, and when an unpinned tab has a notification, its label switches between two text states rather than doing a throb, and the CPU use is at baseline. So unpinning the affected tabs looks like a workaround until the underlying issues are addressed.

Comment 58 by sky@chromium.org, Jun 8 2016

Yes, this animation only happens for pinned tabs whose title changes and are not selected.
Do we not get 2000 layers when tabs are not pinned? Is the animation somehow leaking the layers? The traces showed the time was being spent in processing thousands of layers in the ui tree.
@#52: I don't think this is similar to the tab spinner at all; that was a case of raster taking time. The traces showed we had thousands of layers and we're spending the time pushing properties on them all. It looked to me like something is creating but not deleting layers in the ui layer tree. See https://bugs.chromium.org/p/chromium/issues/detail?id=611127#c23

Comment 61 by sky@chromium.org, Jun 9 2016

AFAIK the tab code only uses a layer when loading. The pinned tab throb animation is separate from the loading spinner.
I am concerned we are not addressing the original bug. Maybe the shimmer animation also uses a lot of CPU. The original bug had traces showing us taking many many milliseconds processing thousands of ui layers.

sky, did you get an idea where all the many layers were coming from on restart? Are there a lot more after session restore than originally?
In response to danakj's comment #59:

In my testing, as I added new, no-URL tabs, traces showed a proportional increase in the number of calls per cycle to PictureLayer::PushPropertiesTo (only while the throb animation was active). When I deleted those new empty tabs, the number of calls per cycle to PL::PPT went back to baseline.

So naively, it doesn't seem like the animation is leaking layers. Instead it seems like the animation is somehow triggering calls to PL::PPT for layers associated with the new tabs (again, very naively). But these continuing CPU-burning calls to PL::PPT don't happen when there's no active throb animation.

For details see https://bugs.chromium.org/p/chromium/issues/detail?id=611127#c50 and let me know if it'd be useful for me to attach traces from a specific test case.

Also note that when my test reached 93% of one CPU core being used by Browser, I only saw 170 calls to PL::PPT per cycle, not the thousands that others saw. So the issue can show up even with fewer calls per cycle to PL::PPT.
See my comment 50; sorry it's long, but the details are in there.

The CPU use only happens in my testing with throb active. But the degree of CPU burning (and the number of layers being updated per cycle) increases proportionally as I add new, empty tabs.

So the issue I see, which appears to match traces others have attached here, involves both the throb animation and the number of tabs. Perhaps I should attach a trace to show what I'm seeing; let me work on that now.

I too initially thought I saw a correlation with restart, but then I noticed it was instead correlated with the pinned tab throb animation. I too noticed a distinct uptick in CPU use going from M49 to M50, but I now only see that when the throb animation is active.
Oh ok thanks! Sorry I skimmed that comment, didn't realize it was adding more data. *^^*.

Comment 66 by sky@chromium.org, Jun 9 2016

Caveat: I was trying a tip-of-tree Chrome for a chromeos build, not on a device, and was testing restore. In that environment I saw a direct correlation between the number of tabs and the number of layers, but nowhere near the number reported earlier: I tried creating ~90 tabs and ended up with ~550 layers, vs. the ~2000 reported.
It is interesting that the animation is causing push properties time to dominate instead of raster. Is the animation based on opacity/transform/filters or something instead of on invalidation/painting?

I guess there are a few choices here, and it's up to someone (sky?) to decide:

1. The animations going forever means the CPU time never decreases, so make the animations end and call this fixed. I think if that's the outcome we dupe this into 473898.

2. The animations are consuming too much power even while they are active. It looked like that time is in pushing property changes. Can the animation be squashed into fewer layers? Are there properties being set on more layers than need to be? Animating a few tabs wouldn't need to set properties on 4000 layers like I'm seeing in the trace from https://bugs.chromium.org/p/chromium/issues/detail?id=611127#c1. Then someone needs to look at why the animation is changing so many views/layers. Who's an appropriate owner for that?
#67: it seems expensive that having *one* tab animate causes *all* layers on *all* tabs to be updated. Is the scope of the property push too broad?

Comment 69 by sky@chromium.org, Jun 9 2016

Latest from Sebastien is to go with just a badge, which will take care of it. That said, the animation hasn't really changed in a while, so I'm not sure why it's all of a sudden much worse. Nor can I explain the 4000 layers. In other words it seems like something else changed to make the animation that much worse.

Comment 70 by sky@chromium.org, Jun 9 2016

I created an m50 build that logs when layers are created/destroyed, created a window with a couple of tabs, pinned one on a site whose title constantly changes, and did not notice the layer count increase.
Peanut gallery: Please do keep digging on the thousands of layers issue even after we change the animation to a badge.  There are lots of other UI animations, including tabstrip animations, and many got worse in MD.  I am concerned that the root cause here will bite us elsewhere.
I've done further tracing experiments tonight while running a tab that updates its title using this bookmarklet:

javascript: var i = 0; setInterval(function () { document.title = (i++).toString(); }, 300)

I then can easily pin and unpin this title-changing tab and see what happens in tracing. When pinned, it throbs and drives the high CPU we've seen; when unpinned, it changes its title every 300 ms (at first, then appears to get throttled eventually?) and I can see its activity on each title change in tracing. With this simple bookmarklet I can keep everything else minimal to greatly reduce noise in tracing; I don't have to run GMail, Hangouts, or whatever to get the title changes and throb.


I see a structure of calls in tracing I'll call a "drawing cycle", though I don't know what the proper terminology is. When the title-changing tab is pinned, this drawing cycle occurs about 24 times a second, corresponding to the throb animation, I suppose. When the tab is unpinned, each time the title changes there are two drawing cycles in quick succession, along with some other activity in Browser.

So part of the reason we have high CPU use, which will be solved by the change to a badge, is that we're doing a "drawing cycle" possibly dozens of times more often with a tab throb than without.

But there is more, as pointed out (I think) most recently by pkasting: the more tabs (or other objects?) you have, the more work is being done on each "drawing cycle", even though those other tabs have nothing at all to do with e.g. the throb animation or tab title change in the single title-changing tab.


Looking closely at a "drawing cycle", I see the following calls increase in count per drawing cycle as I add about:blank tabs (which are doing nothing and are uninvolved in the throb or title change in the single other tab):

Phase 1 (under LayerTreeHost::DoUpdateLayers):
   A) PictureLayer::Update
Phase 2 (under LayerTreeHostImpl::BeginCommit):
   B) ResourceProvider::DeleteResourceInternal
Phase 3 (under LayerTreeHost::PushProperties):
   C) PictureLayer::PushPropertiesTo
   C) Layer::PushPropertiesTo
   C) Layer::PushPropertiesTo::CopyOutputRequests

All three (C) methods have the same number of calls in all cases I've observed.

With a minimal tab etc. setup, I see the following numbers of calls per "drawing cycle", regardless of whether there is a throb by a pinned tab or an occasional title update for an unpinned tab:

A) 334 calls per cycle
B) 160
C) 49

When I add a new window with 10 about:blank tabs, the numbers change to the following (but are again the same per drawing cycle for either throb or occasional unpinned title update):

A) 415
B) 178
C) 62

I deleted the 10-tab window and the numbers went back exactly to the first set above. Then I recreated that 10-tab window, and the numbers went back up to the second set.

So these "layer counts", if that's what these are, are:

1) stable whether the title-changing tab is pinned (throbbing) or unpinned
2) stable when you add tabs and remove them
3) increase as you add tabs
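The per-tab increments implied by the two measurements above can be worked out directly (illustrative arithmetic only; "per added tab" lumps the new window's own fixed contribution in with its 10 tabs, so it slightly overstates the per-tab cost):

```python
# Increments implied by adding one window with 10 about:blank tabs,
# using the two sets of per-cycle call counts reported above.
before = {"PictureLayer::Update": 334,
          "DeleteResourceInternal": 160,
          "PushPropertiesTo": 49}
after = {"PictureLayer::Update": 415,
         "DeleteResourceInternal": 178,
         "PushPropertiesTo": 62}

for name in before:
    delta = after[name] - before[name]
    # delta per drawing cycle, and a rough per-added-tab figure
    print(name, delta, round(delta / 10, 1))
```

The PushPropertiesTo delta of 13 for 10 tabs plus one window is close to the "1 per tab, 2 per window" guess from comment 50, while PictureLayer::Update grows much faster (~8 per added tab).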


So the story I'm seeing is: the throb animation is a trigger for high CPU use (because it drives 24 "drawing cycles" per second, if it can, and each cycle uses quite a bit of CPU), but it's not the only problem. The other part of the problem is that the more tabs (or other objects?) you have, the more work is done per "drawing cycle", and as pkasting noted, this seems likely to bite in other areas even if the tab throb is turned into a badge.
This issue is not only triggered by the pinned-tab throb. Testing now shows me that the original issue description by creis (reinforced by plakal) is essentially correct. Upon session restore, there is persistent CPU use by Browser, and the more windows and tabs that are restored, the higher the CPU use.

I've confirmed this with about:blank tabs, so no content is required, just windows and tabs.

I've also seen that in this high-Browser-CPU state, if you simply alt-tab through all the windows (without stopping on any of them, just transiently viewing them as you alt-tab), the CPU use goes back down to a low baseline (at least in the case of nearly-silent about:blank tabs).

Tracing shows that the Browser CPU use pattern for the session-restart case is broadly identical to the pinned-tab-throb case. There are many CC commit cycles per second, with each cycle taking longer the more tabs you have (because of more layers to process and push).

In either case, I've seen that a commit cycle is associated with a single one of the many calls to PictureLayer::Update doing a paint. In the case of the pinned-tab throb, the paint is of the throbbing tab's window tab-bar.

In the case of a restart, the paint is of a LocationBarView for (it appears) a single one of the not-yet-viewed windows (e.g. those not yet alt-tabbed to). There is one commit cycle per half-second per not-yet-viewed window, with a different PictureLayer painting its LocationBarView in each cycle. If you have one window, you have a single commit cycle per half-second. If you have five windows, you have five commits per half-second. As you add more and more windows to restore, eventually the commits take up all available CrBrowserMain time and make Browser use near 100% of a CPU core just doing commits.
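The restore-case scaling described above can be sketched as a simple model (the per-commit cost here is an assumed figure in the same range as the per-cycle costs reported in comment 50, not a measurement from this comment):

```python
# Sketch of the restore-case scaling: each not-yet-viewed window drives
# one commit per half-second, and every commit pays the full per-commit
# cost, which itself grows with the total tab count.
def browser_cpu_fraction(unviewed_windows, commit_cost_ms):
    commits_per_sec = 2 * unviewed_windows   # one per 0.5 s per window
    return commits_per_sec * commit_cost_ms / 1000.0

# commit_cost_ms=25 is an assumption; with ~20 restored windows that
# alone saturates one core, matching the near-100% Browser CPU reports.
print(round(browser_cpu_fraction(20, 25.0), 2))  # 1.0 -> ~100% of one core
```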


Issue resolution might involve keeping a lookout for other cases that trigger continual rapid-fire commits, plus seeing if there's some way to lighten the CC commit generally.


I have never yet seen more than ~200 layers getting pushed (although I've seen roughly 1000 calls per commit to PictureLayer::Update as the commit prepares for the push), so I cannot comment on the 1200-2000 layers being pushed as seen by creis and plakal.


Trace dumps or a detailed repro upon request; let me know what would be most useful.
I think the way to debug this is to look at the stacktrace of calls to Layer::SetNeedsPushProperties, to understand where they are coming from.
I'd love to help by capturing stacktraces, but at this point I don't think I will. If someone else can do it, that'd probably be best. Let me know if I can help.

My starting point is my everyday-use Stable-channel Chromebook in non-developer mode. From a bunch of research today, it looks like it'd take a bunch of uncertain steps and trial-and-error for me to get to the point of capturing stacktraces, and I don't expect to have time for that kind of effort. But if someone can provide good pointers for how to get to stacktraces from my starting point, I'm willing to give it a shot.

I considered trying to repro the issue on Windows and getting stacktraces there, but my old Windows machine is not cooperating. Wish I had a Linux machine I could try a repro on...

Comment 76 by sheriffbot@chromium.org, Jun 19 2016

Pri-0 bugs are critical regressions or serious emergencies, and this bug has not been updated in three days. Could you please provide an update, or adjust the priority to a more appropriate level if applicable?

If a fix is in active development, please set the status to Started.

Thanks for your time! To disable nags, add the Disable-Nags label.

For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot

Comment 77 by plakal@google.com, Jun 20 2016


For what it's worth, not having pinned title-changing tabs seems to have fixed all of my high-CPU-usage issues with stable M50 on my Pixel 2.

Comment 78 by sky@chromium.org, Jun 23 2016

I can confirm seeing ~1000 Layer::SetNeedsPushProperties per second with three pinned tabs throbbing. I'm trying to narrow down where they are coming from.
Today after adding the Tab Suspender extension, I see ~1900 calls to
PictureLayer::PushPropertiesTo per commit, very roughly 5-10x more than
I've ever seen before. At the time of observation, I had about 15 tabs
active and 30 tabs suspended (they're all enumerated in Task Manager within
the Tab Suspender extension's process). Tab Suspender has similar
functionality to the Great Suspender extension mentioned by plakal in
comment 34 https://bugs.chromium.org/p/chromium/issues/detail?id=611127#c34
and other comments.

In this state, keyboard input to write this comment is, yes, glacially
slow. Browser is using ~110% of a CPU core (and that's viewed in 'top'
rather than Task Manager, so negligible measurement overhead).

I wonder if creis was also using a tab-suspending extension when there were
~2000 layers being pushed?

It appears that tab suspenders are doing something to cause lots of layer
pushes; I wonder what? The Great Suspender is on GitHub, so it could be
examined.

I think there may be three high-Browser-CPU issues being tracked in this
one crbug issue:
1) pinned tab animation
2) suspended tabs
3) session restore

As I've commented, I've seen high CPU in each of these situations,
apparently independent of one another.

Should more than one issue be used to track them?

Comment 80 by creis@chromium.org, Jun 23 2016

Comment 79: I'm not using a tab suspending extension.  I do have 3 pinned tabs that frequently animate (Gmail, Calendar, and a docs tab), and I was seeing this on session restore.

Comment 81 by plakal@google.com, Jun 23 2016

And I've been using The Great Suspender for more than a couple of years with lots of tabs, and this is the first time I've noticed high CPU usage. I haven't had problems after working around pinned throbbing tabs.

Comment 82 by sky@chromium.org, Jun 24 2016

Cc: jbau...@chromium.org
I don't know for sure if this is the cause of the slow down, but I see a ton of layers with damaged rects calling SetNeedsPushProperties on every tick of the throbber with m50. I think there was a bug in how ui::Layer clears its damage regions in m50 (and maybe earlier). Here's the output I see on m50 for every tick of the throb animation:

[13072:13072:0623/164703:WARNING:layer.cc(684)] Layer::SchedulePaint 0x290ac64e16a0 w=55 h=29
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651f920 cc_layer=0x290ac69e1c20 iter=0 1366 768
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac69e1c20 size=1125,768
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651c7a0 cc_layer=0x290ac69e2420 iter=0 1366 768
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac69e2420 size=1125,768
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651d1a0 cc_layer=0x290ac69e2820 iter=0 1366 768
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac69e2820 size=1125,768
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac64e11a0 cc_layer=0x290ac6a6f820 iter=0 1366 768
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a6f820 size=1125,768
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac8b96f20 cc_layer=0x290ac9523420 iter=0 884 472
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac9523420 size=884,472
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac64e0a20 cc_layer=0x290ac6a6fc20 iter=0 1366 768
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a6fc20 size=1125,768
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac64e16a0 cc_layer=0x290ac836dc20 iter=0 55 29
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac836dc20 size=884,624
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac68301a0 cc_layer=0x290ac837f820 iter=0 884 551
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac837f820 size=884,512
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac64e07a0 cc_layer=0x290ac9da2020 iter=0 884 551
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac9da2020 size=884,512
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac68247a0 cc_layer=0x290ac8869c20 iter=0 20 26
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8869c20 size=20,26
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac86d52a0 cc_layer=0x290ac8917420 iter=0 8 8
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8917420 size=8,8
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac6824020 cc_layer=0x290ac8917c20 iter=0 28 26
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8917c20 size=28,26
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac8731520 cc_layer=0x290ac8918420 iter=0 16 16
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8918420 size=16,16
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac874c7a0 cc_layer=0x290ac8918c20 iter=0 28 26
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8918c20 size=28,26
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac682cca0 cc_layer=0x290ac8919420 iter=0 16 16
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8919420 size=16,16
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac68242a0 cc_layer=0x290ac8919c20 iter=0 28 26
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8919c20 size=28,26
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac874c520 cc_layer=0x290ac891a420 iter=0 16 16
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac891a420 size=16,16
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac91b6520 cc_layer=0x290ac9525c20 iter=0 16 16
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac9525c20 size=16,16
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac98f6ba0 cc_layer=0x290ac8ebc020 iter=0 16 16
[13072:13072:0623/164703:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac8ebc020 size=0,0
[13072:13072:0623/164703:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac91b6f20 cc_layer=0x290ac93ad020 iter=0 294 22
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac93ad020 size=294,22
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac64e0020 cc_layer=0x290ac6a70020 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a70020 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac62faa20 cc_layer=0x290ac6a70420 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a70420 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac62fb920 cc_layer=0x290ac6a70820 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a70820 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac6830420 cc_layer=0x290ac836d820 iter=0 44 47
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac836d820 size=44,47
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac62992a0 cc_layer=0x290ac6a70c20 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a70c20 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac5f8a020 cc_layer=0x290ac668b020 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668b020 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac6833420 cc_layer=0x290ac668b420 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668b420 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651c020 cc_layer=0x290ac668b820 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668b820 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651dba0 cc_layer=0x290ac69e2c20 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac69e2c20 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651eca0 cc_layer=0x290ac6a6f020 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a6f020 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac6420420 cc_layer=0x290ac668bc20 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668bc20 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac641fa20 cc_layer=0x290ac668c020 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668c020 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651fba0 cc_layer=0x290ac6a6f420 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a6f420 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac641f020 cc_layer=0x290ac668c420 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668c420 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac65232a0 cc_layer=0x290ac6b67420 iter=0 2 32
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6b67420 size=1,32
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac62fb420 cc_layer=0x290ac668c820 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668c820 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac641dca0 cc_layer=0x290ac668cc20 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac668cc20 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac641d2a0 cc_layer=0x290ac6a44020 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a44020 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac651ef20 cc_layer=0x290ac6a44420 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a44420 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac64d3f20 cc_layer=0x290ac6a44820 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a44820 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac6354920 cc_layer=0x290ac6a44c20 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a44c20 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(705)] Layer::SendDamagedRects 0x290ac6353f20 cc_layer=0x290ac6a45020 iter=0 1366 768
[13072:13072:0623/164704:WARNING:layer.cc(234)] Layer::SetNeedsPushProperties=0x290ac6a45020 size=1125,768
[13072:13072:0623/164704:WARNING:layer.cc(716)] Layer::ClearDamagedRection 0x290ac64e16a0

That SchedulePaint is to update the pinned tab.

As you can see, a ton of layers have a non-empty damaged_region_ and continually try to update. ClearDamagedRection() is *only* called for one layer, leaving the rest with non-empty damaged_region_ that is pushed on the next tick.

On tip of tree I get:
[12526:12526:0623/164144:WARNING:layer.cc(674)] Layer::SchedulePaint 0x10ef925776a0 w=55 h=29
[12526:12526:0623/164144:WARNING:layer.cc(695)] Layer::SendDamagedRects 0x10ef925776a0 cc_layer=0x10ef92781020 iter=0 55 29
[12526:12526:0623/164144:WARNING:layer.cc(200)] pushing, layer=0x10ef92781020 size=884,624
[12526:12526:0623/164144:WARNING:layer.cc(705)] Layer::ClearDamagedRection 0x10ef925776a0
[12526:12526:0623/164144:WARNING:layer.cc(200)] pushing, layer=0x10ef92781020 size=884,624

I think John fixed this here: https://codereview.chromium.org/2018223002 . As I said, I'm not sure if this is the cause, but this is why we see a ton of calls to cc::Layer::SetNeedsPushProperties on every tick of the pinned tab throb animation. Also, I'm not sure what would have changed in M50.

M51 is out now, are you still seeing bad performance there Charlie?
In response to comments 79, 80, and 81: Thanks for your responses, creis & plakal. I now agree that tab suspension is not the issue. For me today, the Great Suspender does not exhibit the problem, so it seems specific to the Tab Suspender extension, which I tried first.

In comment 73, I did see session-restore cause high Browser CPU, which was worked around by alt-tabbing among all windows. But I'm not sure anyone else is noticing this. And I wouldn't be surprised if sky's findings in comment 82 in the pinned-throb case apply to this case too, given that the traces in the two cases looked very similar to my non-expert eyes.

Comment 84 by creis@chromium.org, Jun 24 2016

Comment 82: I just restored a session into 51.0.2704.103, and I agree it feels more responsive.  The CPU usage was around 70% (still not great, but input is much less affected), and it dropped to around 40-50% when I stopped the 3 pinned tabs from throbbing (by clicking on each of them).

Comment 85 by sky@chromium.org, Jun 25 2016

creis: 40-50% seems way too high. Do you happen to remember what it was like before 50?

Comment 86 by creis@chromium.org, Jun 27 2016

Comment 85: Agreed that 40-50 still seems high, though it has less visible impact on performance.  I don't remember what it was in previous versions, unfortunately.  I started actively paying attention when the performance degraded.  

Comment 87 by plakal@google.com, Jun 27 2016


I also see steady-state CPU usage of 30-40% in Task Manager for the Browser process, while the system as a whole is very responsive even with a large number of tabs on my Pixel 2.

But please note that Task Manager itself seems to be a very expensive way of measuring usage (perhaps due to frequent rendering of its window?). If I open up a ChromeOS shell (Ctrl+Alt+T) while otherwise idle, and run 'top', I see much lower numbers: overall CPU usage of the whole (quad-core) machine is 1-2% and the highest CPU usage of any single Chrome process is ~5%.  

Comment 88 by creis@chromium.org, Jun 27 2016

Comment 86: FWIW, I see 45-50% CPU in top as well as the task manager, with not much difference between the two.  CC'ing afakhry@ for task manager CPU usage on ChromeOS, though, since I know a lot of work went into making it less expensive.
Right now for me, top shows Browser CPU at 2-5% (of a single core). If I
start up TM and leave top running, both TM and top display Browser usage
around 15%.

This is with 6 windows, 14 live tabs, 6 suspended tabs (Great Suspender).
If I then open five new windows each with ten about:blank tabs, Browser CPU
use with TM running goes up from 15% to 23%. When TM is running, top shows
the same low Browser CPU as before.

My conclusion: TM is heavy and it scales with number of windows/tabs. I'm a
little surprised its impact is smaller for me than for the preceding three
comments, since I have a low-powered Chromebook. Quite possibly they have a
lot more windows/tabs than I do?

I think TM heaviness is essentially orthogonal from this crbug issue,
though it does interfere when trying to take eyeball performance measures
for this issue, so I'd recommend using top or something else light.

FWIW, to discern which process is Browser, I use the 'c' keyboard shortcut
in top to show the command arguments.
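For anyone else trying to reproduce these numbers, a light-touch sampling approach might look like the following sketch. The grep patterns and the --type= convention are assumptions about how Chrome names its processes; adjust for your system.

```shell
#!/bin/sh
# Sample browser-process CPU without opening Chrome's Task Manager,
# which itself adds load that scales with the number of windows/tabs.

# One batch-mode sample of top with full command lines (-c is the batch
# equivalent of the interactive 'c' toggle). The browser process is
# assumed to be the chrome instance WITHOUT a --type= switch; renderers,
# the GPU process, etc. carry --type=. "|| true" keeps the script going
# if no chrome process is running.
top -b -n 1 -c | grep '[c]hrome' | grep -v -- '--type=' || true

# Or sample a single process once its PID is known:
sample_cpu() { ps -o %cpu= -p "$1"; }
sample_cpu "$$"   # demo: CPU usage of the current shell
```

On ChromeOS this would run in a crosh shell (Ctrl+Alt+T, then `shell`); the `[c]hrome` bracket trick keeps grep from matching its own command line.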
Labels: Merge-Request-52 Merge-Request-51
It would probably make sense to merge  https://codereview.chromium.org/2018223002 back to M51 and M52.
Correction to comment 89, second paragraph: With 5 windows of 10 about:blank tabs, but *without* TM running, top shows the same low Browser CPU as before I opened those 50 tabs.
Has bad performance been seen in M53 dev and M52 beta? If not, merging the CL in comment 90 into M52 might not be needed.

Comment 93 by plakal@google.com, Jun 28 2016

FYI: I upgraded M51 stable today on my Pixel 2, and the CPU pegging due to tab throbbing seems to have decreased: now I only see 50-60% CPU usage for Browser in Task Manager, compared to 90-100+ in M50. The system feels more responsive and usable than with M50 in the same situation.

Comment 94 by dimu@google.com, Jun 28 2016

Labels: -Merge-Request-51 Merge-Review-51 Hotlist-Merge-Review
[Automated comment] Request affecting a post-stable build (M51), manual review required.

Comment 95 by dimu@google.com, Jun 28 2016

Labels: -Merge-Request-52 Merge-Approved-52 Hotlist-Merge-Approved
Your change meets the bar and is auto-approved for M52 (branch: 2743)

Comment 96 by bugdroid1@chromium.org, Jun 29 2016

Labels: -merge-approved-52 merge-merged-2743
The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/src.git/+/ff110ea90bc1a199c903853d72a6641519e414f2

commit ff110ea90bc1a199c903853d72a6641519e414f2
Author: John Bauman <jbauman@chromium.org>
Date: Wed Jun 29 01:12:30 2016

Clear ui::Layer damaged_region_ after commit.

This was only being cleared for ui::Layers with PictureLayers and not
those with TextureLayers, so all frames afterwards would have
unnecessary damage.

BUG= 610086 , 611127 

Committed: https://crrev.com/5ab42edc1361fee3b5255dd36bc8663dfb44cb5e
Cr-Commit-Position: refs/heads/master@{#396936}

Review-Url: https://codereview.chromium.org/2018223002
Cr-Commit-Position: refs/heads/master@{#397529}
(cherry picked from commit ac619ac038346cc2ea96c469ab0bcd23bd13d51c)

Review URL: https://codereview.chromium.org/2108043002 .

Cr-Commit-Position: refs/branch-heads/2743@{#515}
Cr-Branched-From: 2b3ae3b8090361f8af5a611712fc1a5ab2de53cb-refs/heads/master@{#394939}

[modify] https://crrev.com/ff110ea90bc1a199c903853d72a6641519e414f2/ui/compositor/layer.cc
[modify] https://crrev.com/ff110ea90bc1a199c903853d72a6641519e414f2/ui/compositor/layer.h
[modify] https://crrev.com/ff110ea90bc1a199c903853d72a6641519e414f2/ui/compositor/layer_unittest.cc

Status: Fixed (was: Assigned)
So I think this is fixed: the root cause has been determined. I guess something in the UI is now using a different layer type (probably solid color). That's not a problem in itself; it just exposed the bug.
Also, thank you sky@ for figuring this out and finding the fix.

Comment 99 by plakal@google.com, Jul 22 2016

Any estimate of the stable channel version in which this fix will appear?  
It went to branch 2743 in beta everywhere, and in stable on some platforms according to https://omahaproxy.appspot.com/ (maybe you have to force the update still since it looks new).

Comment 101 by plakal@google.com, Jul 22 2016

Thanks.

Looking at all changes in branch 2743 up to the latest beta channel update on ChromeOS 52.0.2743.85:
https://chromium.googlesource.com/chromium/src/+log/52.0.2743.0..52.0.2743.85?pretty=fuller&n=10000 
looks like commit ff110ea90bc1a199c903853d72a6641519e414f2 first became public in version 52.0.2743.59.

It'll probably be a few weeks before M52 hits ChromeOS stable.

Comment 102 by plakal@google.com, Jul 27 2016

That went much faster than I thought, M52 just hit ChromeOS stable.

On my Chromebox at least, after updating to M52 stable, the problem looks fixed: throbbing tabs no longer hog CPU.  Thanks!

The excessive layer pushes per commit are fixed in M52 on my Chromebook --
now only one layer gets pushed during the pinned-tab throb animation.

But a second layer of this issue is still present -- commits are still
heavy enough to use significant CPU when commits are done rapidly, as in
the pinned-tab throb.

In M52, pinned-tab throb for me uses ~20% of one CPU core per window that
has a throbbing tab, so two windows use ~40%. This is due to rapid commits
associated with the throb animation, even though each commit only pushes
one layer. Pinned-tab throb, specifically, will presumably be fixed when in
future when it's no longer an animation.

The tab loading animation (spinning circle) also causes a lot of CPU, also
because it causes rapid commits. Is there a plan to deal with this?

Generally, is there a plan to alleviate high CPU use with rapid commits?
Drawing the animation and uploading it does take CPU time. Reducing that can be done by changing to animations that don't require us to raster anything (animate transform only). I think what you're seeing is https://bugs.chromium.org/p/chromium/issues/detail?id=391646
Cc: -mshe...@chromium.org
Labels: VerifyIn-54
Labels: VerifyIn-55

Comment 108 by dchan@google.com, Nov 19 2016

Labels: VerifyIn-56
Labels: -Hotlist-Merge-Review -Merge-Review-51
No longer relevant for merging to M51.

Comment 110 by dchan@google.com, Jan 21 2017

Labels: VerifyIn-57
Labels: -Performance-Sheriff-Regressions Performance-Sheriff

Comment 112 by dchan@google.com, Mar 4 2017

Labels: VerifyIn-58

Comment 113 by dchan@google.com, Apr 17 2017

Labels: VerifyIn-59

Comment 114 by dchan@google.com, May 30 2017

Labels: VerifyIn-60
Labels: VerifyIn-61
Status: Archived (was: Fixed)