
Issue 713470

Starred by 4 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Android
Pri: 2
Type: Bug




Twitter light fling animation isn't smooth

Project Member Reported by rbyers@chromium.org, Apr 20 2017

Issue description

Chrome Version: 58.0.3029.70
OS: Android on Nexus 6P

What steps will reproduce the problem?
(1) Load mobile.twitter.com
(2) Scroll up and down with some short / slow flings

What is the expected result?
Expect the fling animations to be perfectly smooth (as they are, for example, on a long Wikipedia page). We may need to checkerboard, but we should always prioritize driving the fling animation, right?

What happens instead?
There are lots of short stalls during the fling that result in the animation not feeling smooth.

My sense is that we're dropping at least one frame out of every ~10 and occasionally dropping several frames.

From the attached trace I see the following potentially interesting things:
- Frequent GPU main thread tasks >30 ms (NativeViewGLSurfaceEGL:RealSwapBuffers)
- The attached histogram of DisplayScheduler:pending_swaps times (is this the right way to visualize frame times?) shows about 5% at ~26 ms; a sketch of one way to pull this kind of breakdown out of the trace follows the attachment below
- Occasional ~10 ms FrameHostMsg_UpdateState tasks on the browser main thread (probably an independent issue)


 
Attachment: trace_twitter.json.zip (4.9 MB)
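
For anyone who wants to poke at the attached trace themselves, the sketch below shows one way to pull out the RealSwapBuffers slices and eyeball swap cadence and likely dropped frames. It assumes the standard Chrome JSON trace format ({"traceEvents": [...]}, microsecond timestamps, complete "X" slices) and uses intervals between RealSwapBuffers slices as a stand-in for the DisplayScheduler:pending_swaps histogram mentioned above; the attached histogram was not necessarily produced this way, and the 1.5x-vsync threshold for counting a dropped frame is just a heuristic.

# Rough sketch for eyeballing swap cadence in the attached Chrome JSON
# trace. Assumes the standard trace format ({"traceEvents": [...]},
# timestamps/durations in microseconds) and only looks at complete ("X")
# slices; the attached histogram was not necessarily produced this way.
import collections
import json
import zipfile

VSYNC_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 Hz


def load_events(path):
    # The attachment is a zip containing a single JSON trace.
    if path.endswith(".zip"):
        with zipfile.ZipFile(path) as zf:
            data = json.loads(zf.read(zf.namelist()[0]))
    else:
        with open(path) as f:
            data = json.load(f)
    return data["traceEvents"] if isinstance(data, dict) else data


def swap_stats(events, name="NativeViewGLSurfaceEGL:RealSwapBuffers"):
    slices = sorted((e for e in events
                     if e.get("name") == name and e.get("ph") == "X"),
                    key=lambda e: e["ts"])
    durs_ms = [e.get("dur", 0) / 1000.0 for e in slices]
    gaps_ms = [(b["ts"] - a["ts"]) / 1000.0 for a, b in zip(slices, slices[1:])]
    dropped = sum(1 for g in gaps_ms if g > 1.5 * VSYNC_MS)
    print("swap slices:", len(slices),
          "| >30 ms:", sum(1 for d in durs_ms if d > 30))
    print("intervals > 1.5 vsync (likely dropped frames):",
          dropped, "of", len(gaps_ms))
    # Coarse histogram of swap-to-swap intervals, bucketed by vsync period.
    hist = collections.Counter(int(g // VSYNC_MS) for g in gaps_ms)
    for bucket in sorted(hist):
        lo, hi = bucket * VSYNC_MS, (bucket + 1) * VSYNC_MS
        print(f"{lo:6.1f}-{hi:6.1f} ms: {hist[bucket]}")


if __name__ == "__main__":
    swap_stats(load_events("trace_twitter.json.zip"))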

Comment 1 by rbyers@chromium.org, Apr 20 2017

Attachment: Screen Shot 2017-04-20 at 10.42.50 AM.png (88.1 KB)

Comment 2 by rbyers@chromium.org, Apr 21 2017

Cc: slightlyoff@chromium.org flackr@chromium.org vmi...@chromium.org
Labels: Hotlist-ThreadedRendering
Given how close Twitter light is to the native experience already and the press around it, I think it's strategically important that we try to fix this.

Comment 3 by vmi...@chromium.org, Apr 22 2017

Cc: sunn...@chromium.org ericrk@chromium.org
Components: -Internals>GPU Internals>GPU>Rasterization Internals>GPU>Scheduling
Status: Available (was: Untriaged)
On the GPU side, there are some long bursts of GPU decoder work (such as 5,206.231 ms) that Sunny's GPU scheduler work should help with.

I'm also seeing some long calls to GLES2DecoderImpl::ClearLevel.  Normally those should be entirely avoided, so we should look at what Ganesh may be doing here.
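
As a companion to the swap-cadence sketch above, the snippet below aggregates long GPU-side slices per event name so bursts of decoder work and long GLES2DecoderImpl::ClearLevel calls stand out. It reuses load_events() from the earlier sketch and again only counts complete ("X") events from the standard Chrome JSON trace format, so the totals are approximate and may not match what the trace viewer shows.

# Aggregate long slices per event name, so outliers like
# GLES2DecoderImpl::ClearLevel and decoder bursts are easy to spot.
from collections import defaultdict


def long_slices_by_name(events, threshold_ms=5.0, top=20):
    stats = defaultdict(lambda: [0, 0.0, 0.0])  # name -> [count, total ms, worst ms]
    for e in events:
        if e.get("ph") != "X":
            continue
        dur_ms = e.get("dur", 0) / 1000.0
        if dur_ms < threshold_ms:
            continue
        entry = stats[e.get("name", "?")]
        entry[0] += 1
        entry[1] += dur_ms
        entry[2] = max(entry[2], dur_ms)
    ranked = sorted(stats.items(), key=lambda kv: kv[1][1], reverse=True)
    for name, (count, total, worst) in ranked[:top]:
        print(f"{name}: {count} slices >= {threshold_ms} ms, "
              f"total {total:.1f} ms, worst {worst:.1f} ms")


# Example: long_slices_by_name(load_events("trace_twitter.json.zip"))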

Comment 4 by klo...@chromium.org, Apr 26 2017

Cc: klo...@chromium.org
Indeed, there is a big difference between Ganesh on and off. When it is off, I do see checkerboarding/white space, but it is much smoother.

Comment 5 by aelias@chromium.org, Apr 28 2017

Cc: aelias@chromium.org

Comment 6 by aelias@chromium.org, Apr 28 2017

I locally applied Sunny's GPU scheduler patch http://crbug.com/2814843002 and it does clear up almost all of the jank on my Galaxy S7. I would even say that there are fewer janks than in the Android native app.

But this comes at the cost of quite visible checkerboarding if you fling too fast. The Twitter native app never has checkerboard regions per se (although it shows solid colors in place of not-yet-decoded images).

Comment 7 by rbyers@chromium.org, Apr 29 2017

Sounds great, thanks Alex! Given the excessive main-thread work, lots of checkerboarding perhaps isn't surprising. I know others are working with Twitter to reduce the total main-thread work they're doing.
aelias@: I made a change to the CL to reduce checkerboarding and main-thread latency. There's still quite a bit of checkerboarding, but maybe less than before?

Should we deprioritize image uploads and let them happen only once we're out of smoothness-takes-priority mode? That way we can keep rasterizing text/paths while scrolling, checkerboard only the images, and maybe that's a better experience? (A toy sketch of this idea follows after this exchange.)
Completely suppressing image uploads during scrolling sounds like too heavy a hammer, if that's what you're suggesting. I'm not sure we fully understand the cause of the checkerboarding yet, either.

I think we should probably get the scheduler infrastructure landed first and try out various policy changes after that.
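
To make the deprioritization idea floated above concrete, here is a toy model of the policy: while the compositor is in smoothness-takes-priority mode (e.g. during an active fling), image decode/upload work is deferred and text/path raster work keeps draining, then the deferred image work is flushed once the scroll settles. This is purely an illustration of the idea; the class and method names are made up and this is not Chromium's actual TileManager/scheduler code.

# Toy model: defer image decode/upload work during a fling, keep
# rasterizing text/paths, and flush the backlog when the scroll settles.
from collections import deque


class RasterWorkQueue:
    def __init__(self):
        self.smoothness_takes_priority = False
        self._content_tasks = deque()    # text/path raster work
        self._image_tasks = deque()      # image decode + upload work
        self._deferred_images = deque()  # work we checkerboarded past

    def post(self, task, is_image_work=False):
        queue = self._image_tasks if is_image_work else self._content_tasks
        queue.append(task)

    def set_smoothness_takes_priority(self, enabled):
        self.smoothness_takes_priority = enabled
        if not enabled:
            # Scroll settled: re-queue everything deferred mid-fling.
            self._image_tasks.extend(self._deferred_images)
            self._deferred_images.clear()

    def run_one(self):
        # Content (text/path) raster always goes first.
        if self._content_tasks:
            self._content_tasks.popleft()()
            return True
        if self._image_tasks:
            task = self._image_tasks.popleft()
            if self.smoothness_takes_priority:
                # Checkerboard the image for now rather than paying the
                # decode/upload cost in the middle of the fling.
                self._deferred_images.append(task)
                return False
            task()
            return True
        return False

In this model, a fling flips set_smoothness_takes_priority(True): text/path raster keeps running, any posted image work piles up in the deferred queue (those tiles checkerboard), and the backlog is paid off as soon as the mode is flipped back off.
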
Owner: sunn...@chromium.org
Status: Assigned (was: Available)
It looks like Sunny is the correct person to own this then, until such time as the scheduler infrastructure changes land.
Cc: vmp...@chromium.org khushals...@chromium.org
Image checkering could help this kind of case, playing the role that the native app's solid-color placeholders play for not-yet-decoded images.

Looking at the traces, though, I would say the long main-thread script execution (100–500 ms) is the bigger fish to fry.
There are also some CommandExecutor::PutChanged tasks that seem to take a lot of CPU self time (including one in the above trace that's 91 ms). Is that worth digging into separately, maybe adding some sub-traces for?
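
One quick way to quantify that observation: the sketch below computes self time for complete ("X") slices as the slice duration minus the durations of directly nested slices on the same thread, then prints the worst CommandExecutor::PutChanged offenders. It reuses load_events() from the first sketch, assumes slices nest properly, and measures wall-clock self time (swap "dur" for "tdur" if the trace was recorded with thread times to get CPU self time).

# Self time per slice = duration minus durations of direct children on the
# same thread; print the worst offenders for a given event name.
from collections import defaultdict


def worst_self_times(events, name_filter="CommandExecutor::PutChanged", top=10):
    by_thread = defaultdict(list)
    for e in events:
        if e.get("ph") == "X":
            by_thread[(e.get("pid"), e.get("tid"))].append(e)
    matches = []
    for slices in by_thread.values():
        # Parents sort before their children (earlier start, or longer at the
        # same start), so a stack of still-open slices yields direct parents.
        slices.sort(key=lambda e: (e["ts"], -e.get("dur", 0)))
        stack = []  # (end_ts, mutable [self_time_us])
        for e in slices:
            dur = e.get("dur", 0)
            while stack and e["ts"] >= stack[-1][0]:
                stack.pop()
            if stack:
                stack[-1][1][0] -= dur  # child time is not parent self time
            self_time = [dur]
            stack.append((e["ts"] + dur, self_time))
            if e.get("name") == name_filter:
                matches.append(self_time)
    for self_us in sorted((m[0] for m in matches), reverse=True)[:top]:
        print(f"{name_filter} self time: {self_us / 1000.0:.1f} ms")


# Example: worst_self_times(load_events("trace_twitter.json.zip"))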
