Starred by 41 users
Status: Fixed
Owner:
Closed: Apr 2016
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Bug-Regression

Blocked on:
issue 521364
issue 597758



Inconsistent image raster at small scale changes
Reported by a...@andyfoulds.co.uk, Mar 21 2016
Chrome Version       : 49+
URLs (if applicable) :
Other browsers tested:
  Add OK or FAIL, along with the version, after other browsers where you
have tested this issue:
     Safari:OK
    Firefox:OK
         IE:OK

What steps will reproduce the problem?
http://codepen.io/GreenSock/pen/rexpMG?editors=1000

What is the expected result?
Smooth animation, as in v48


What happens instead?
See above

Please provide any additional information below. Attach a screenshot if
possible.

 
Components: Blink>Animation
Labels: Needs-Feedback
Unable to reproduce the issue on Windows 7, Mac 10.10.5, Ubuntu 14.04 using latest stable 49.0.2623.87, canary 51.0.2687.0 with the steps below:

1. Opened URL: http://codepen.io/GreenSock/pen/rexpMG?editors=1000
2. Observed smooth animation, same as in M-48 and Firefox.

andy@: Could you please check the issue on a clean profile and update the thread if the issue still persists.
This issue only happens on computers that use Nvidia graphics cards, and some Intel graphics cards, on both Windows and Mac, in Chrome 49.

In my testing, AMD Radeon graphics cards rendered OK in Chrome 49, on both Windows and Mac computers.

To see this issue you must use a computer with an Nvidia or Intel graphics card:

1) Make sure you have a computer with either an Nvidia or Intel graphics card
2) Open Chrome 49 and open URL: http://codepen.io/GreenSock/pen/rexpMG?editors=1000
3) You will see the bottom animation flicker and stutter as it scales up, before the animation completes.

Here is also a video of the behavior using that same code example from above:

https://www.youtube.com/watch?v=2Lj37CwCdwg

Any element that uses CSS 3D transforms, basically matrix3d() or translate3d(), shows this jittery, flickering jank in Chrome 49. But in Chrome 48 the same animation played smoothly.

It is almost like any sub-pixel values used in the CSS transform functions for 3D transforms using matrix3d() or translate3d() show this flicker towards the end of the animation.

I tested on Windows 7, Windows 10, and on Mac OSX 10.11.3. All the operating systems had either Nvidia or Intel graphics cards, in Chrome 49 and had this same flicker with the above example.

This behavior does not happen when the transform function uses a 2D transform, like matrix() or translate() on Nvidia and Intel graphics cards. 

So it looks like a sub-pixel rendering issue in CSS 3D transforms using matrix3d() and translate3d() in Chrome 49, on any Mac or Windows computer that has an Nvidia or Intel graphics card.
I can verify the same behavior as Jonathan on:

Mac OSX 10.9.5
NVIDIA GeForce GT 650M 1024 MB
Chrome 49.0.2623.87 (64-bit)

Everything was fine in Chrome 48.


Here is a demo that does not use GSAP or any JS library.
Just a simple RAF animation with transform:matrix3d()

http://codepen.io/anon/pen/LNyYrX

Notice the <img> and <div> render very differently. The <div> is very shaky.
Here is an unlisted YouTube video showing the jitter with Chrome 49 and Nvidia card from the codepen demo above:
https://www.youtube.com/watch?v=u7j49Kl4KLY&feature=youtu.be
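
For reference, the demo amounts to something like the following (a minimal reconstruction, not the actual codepen source; the selectors and timing are assumptions):

// Sketch (assumed, not the actual codepen): animate a matrix3d() scale
// on an <img> and a <div> with a background-image via requestAnimationFrame.
var img = document.querySelector('img');
var div = document.querySelector('div.bg');  // div with a background-image
var start = null;

function step(now) {
  if (start === null) start = now;
  // Oscillate the scale between 0.4 and 1 over a 2s period.
  var t = ((now - start) % 2000) / 2000;
  var s = 0.4 + 0.6 * Math.abs(Math.sin(Math.PI * t));
  var m = 'matrix3d(' + s + ',0,0,0, 0,' + s + ',0,0, 0,0,1,0, 0,0,0,1)';
  img.style.transform = m;
  div.style.transform = m;
  requestAnimationFrame(step);
}
requestAnimationFrame(step);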
Cc: ajuma@chromium.org
This looks like it may be related to raster-scale.

Ali, could you confirm or deny and update accordingly?

It also looks more like a low-DPI device issue than something specifically video-card related.
Comment 6 by flackr@chromium.org, Mar 23 2016
Cc: rnimmagadda@chromium.org chrishtr@chromium.org loyso@chromium.org danakj@chromium.org
Issue 594258 has been merged into this issue.
Comment 7 by ajuma@chromium.org, Mar 23 2016
Looks like this is indeed related to raster scale. I bisected it to:
https://chromium.googlesource.com/chromium/src/+log/b4f0e9f9ca28367c8b952894b5ef34fa70e956d4..9db7df5bc282f4e3d9fb06a0f9ea73e0fbf2d3af

So I think this is likely https://chromium.googlesource.com/chromium/src/+/66978ff65abbb473f8662444c2dc0823e0f45795 ("cc: Stop locking the raster scale factor at 1 after any change.").
Status: Untriaged
Comment 9 by danakj@chromium.org, Mar 23 2016
If it's from that: Is there some reason why the page can't use an accelerated animation to change scale instead of using javascript to do it each step then?
@danakj@chromium.org, by "accelerated animation", do you mean CSS and/or Web Animations API? If so, there are many reasons why those aren't adequate. Physics, advanced easing, scroll-centric, and compatibility among them. JavaScript animation (requestAnimationFrame-driven) is essential these days. 

In fact, it'd be super duper awesome if browsers granted more direct access to altering each component of the transform matrix rather than having to construct the entire string (recasting numbers into strings...16 of them in 3D) and then having the browser parse them back out again. Seems incredibly wasteful, but alas, I digress. I'm sure we need to keep this thread focused on just the issue at hand. Sorry. :)
Comment 11 by loyso@chromium.org, Mar 24 2016
greensockjd@, feel free to describe your critical use cases on composited animations in this group:
https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev
Thanks!
> and then having the browser parse them back out again. Seems incredibly wasteful

Providing that information to the browser gives it valuable data so it can make decisions on how to raster, and at what scales. Otherwise the choices are, more or less, to raster at the correct scale either never or always. And never was bad for a lot of sites, since you just end up with blurry content.
@danakj@chromium.org I probably did a poor job of describing what I meant. I know this thread is about something completely different, so I'll make this short and we can pick it up elsewhere if you prefer.

Let's say an element has a transform of "matrix3d(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1)" and I only want to change one or two of those numbers (actually, the quantity doesn't matter much). Let's say I've got all those values in JS as numbers. To update during an animation, I currently need to construct a lengthy string (eating up memory and hogging CPU resources) 60 times per second and set the style.transform to that new string which contains mostly the same numbers anyway. Since the matrix values probably just get shuttled to the GPU anyway and they'd never affect layout, it sure seems like it'd be far more efficient to let me do something like element.matrix3D[6] = 1.2; (to change the 7th number) and the browser could skip a lot of steps (parsing the string, determining which changed, etc.). I'd love to make GSAP tap into that kind of capability under the hood. I bet it'd make animations even more performant.
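
For illustration, the status quo described above looks roughly like this (a sketch, not GSAP's actual internals; the proposed matrix3D accessor is hypothetical and does not exist):

// Sketch of the status quo: to change one component of a 3D transform,
// all 16 numbers must be re-serialized into a string every frame, which
// the browser then re-parses.
var m = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];

function update(el) {
  m[6] = 1.2;  // the one value that actually changed (the 7th number)
  el.style.transform = 'matrix3d(' + m.join(',') + ')';  // rebuild + re-parse all 16
}

// The proposed API (hypothetical, does not exist) would skip both steps:
// el.matrix3D[6] = 1.2;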

Anyway, sorry about the thread hijacking. You can totally ignore this if you'd like. I just wanted to throw the idea out there. Feel free to reach out to me directly if you'd like to discuss more. 

To be clear, this has nothing to do with the original reported issue of the jumpy display while changing transforms in the latest Chrome. 
Cc: vollick@chromium.org
Components: -Blink>Animation Blink>Compositing
Labels: -Type-Bug -Pri-3 -Needs-Feedback Pri-2 Type-Bug-Regression
Status: Available
Relabelling according to #7.

#13: This sounds like a desire for vollick's compositor worker idea: allowing JavaScript to animate layers on the page without triggering style + layout recalculation.
Hi Chromium team,

Thanks so much for looking into the issue.

@danakj@chromium.org, yes the animation provided could be achieved other ways but the demo was provided as a reduced example that clearly shows the issue: 

Chrome 49 handles the scaling of background-image differently than <img>, and the results for background-image are noticeably worse. This discrepancy did not exist in previous versions of Chrome. Safari and Firefox have no such issue.

Many professional animators rely on standards-based RAF animations for many reasons, but as greensockjd mentioned that's probably a longer conversation.


@#15, prior to that change, scale changes would just leave the result blurry forever, so it's hard for me to say that's a better result. Using accelerated animation will give you back the performance without making it blurry.

We considered doing something lazy-like about raster, but there have been very few cases reported that were not doing accelerated animation of scale, and none that couldn't just be switched to use that instead so far (assuming this is one also).
Can you clarify what you mean by "accelerated animation"? Can you take the demo provided and make the necessary changes to get "accelerated animation" to solve it? 

Also, are you saying that you don't consider this new behavior to be a regression/bug? We've heard from several users about their existing animations suddenly looking much worse in Chrome (and only Chrome), and it appears to be rendering differently between an <img> and a background-image; is that desired new behavior that's somehow considered "better"? Sorry if I'm being a bit thick-skulled here - I feel like I must be missing something.

Even if switching to "accelerated animation" (still not sure what that means) fixes it, are you saying that all sites that suddenly have jerky animation due to this Chrome 49 bug...er...change...should re-code their animations to work around it?
> Also, are you saying that you don't consider this new behavior to be a regression/bug?

Correct, you can take a look at bugs 556533, 413636, 368201, and 542166 for examples of what that CL fixed.

Here's a site about accelerated animation I got from a Google search for the term: http://www.sitepoint.com/introduction-to-hardware-acceleration-css-animations/

Sites that want to animate should indeed animate with accelerated animations. This will also straight up improve their framerates on machines where the main thread is unable to keep up, and lower their power requirements.
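
For concreteness, here is a minimal sketch of one form of accelerated animation, using the Web Animations API; the element lookup and timing values are placeholders:

// Sketch of a compositor-driven ("accelerated") animation: the keyframes
// are declared up front, so Chrome can run the animation off the main
// thread and plan raster scales in advance.
var el = document.querySelector('.box');
el.animate(
  [{ transform: 'scale(0.4)' }, { transform: 'scale(1)' }],
  { duration: 2000, easing: 'ease-out', fill: 'forwards' }
);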
Oh! You just mean animating "cheap" properties like transforms instead of layout-affecting properties like top/left/width/height? Yes, that's exactly what this issue is about. It's animating the transform of the <div> with a background-image. 

I didn't intend for this thread to become about which properties perform better than others or how developers should code their animations (although that's a topic I'm intensely passionate about and interested in) - this is about a regression in Chrome 49 that makes animating transforms on an element with a background-image render in a jerky/blurry manner. And again, as far as I can tell this example is already using "accelerated animations" and it looks quite bad (though before Chrome 49 it looked great). 
@danakj, if you consider what we see as jerky rendering with background-image an improvement can you also elaborate on why you don't apply the same "optimization" to the <img>.

I really want to understand why 2 elements that are being animated the exact same way with the exact same code appear differently. 

Thanks!


I am a designer/developer that builds banner ads as well as web apps. 
We crank out 100's of banners a month and they are built with much care like an application would be.

We rely on TexturePacker CLI to automate CSS sprite sheet creation which utilizes the background-image exclusively. 
After speaking with the developer there are no plans to have an option for img tags for CSS sprite sheets. We would have to create these by hand via object-position etc...lame. 

Anyways, this fix seems doable. If theres a vote I vote YES. 
I hope it's resolved. I'm not excited about coming up with a new way to do CSS sprite sheets.

Thanks everyone



Experiencing the same issue. I work in banner animation and this rendering issue is really impacting our workflow. 

OS MAC
OS Version 10.11.1
Processor: 3.4 GHz Intel Core i7
Graphics: AMD Radeon HD 6970M 1024 MB
Browser: Chrome Version 49.0.2623.87 (64-bit)

Would appreciate a fix for this!
Thanks
Comment 23 by qpoyn...@gmail.com, Mar 24 2016
I too have just noticed this issue. I too use sprite sheets to animate raster elements. Please help get this fixed!
OSX 10.10.4 (14E46)
Chrome Version 49.0.2623.87 (64-bit)
Intel Iris Pro 1536 MB
Thanks for the clarifications. IIUC, the bug being reported is that the image kinda jumps around while it's animating? It looks in the video like there is some rounding happening instead of a smooth transition between integer values or something.

I don't see this reproducing on TOT. Do people see this happen in Canary?
Oh, I take that back, I was trying the wrong version, I can definitely repro.
Cc: enne@chromium.org
Components: -Blink>Compositing Internals>Skia
Labels: -Pri-2 M-49 Pri-1
Owner: fmalita@chromium.org
Status: Assigned
Summary: Inconsistent image raster at small scale changes (was: Jerky pixel-snapping in Chrome 49)
The raster of the image at different scales is pretty inconsistent. Sometimes you make the matrix scale smaller and the image gets bigger or moves around, for some values.

I loaded the attached skp in skia debugger and can reproduce if I change the zoom factor to 1.01.

#define ZOOM_FACTOR (1.01f)

It's most obvious when you have your mouse centered over the image while zooming, but it still does look bad in the above html demo when you put "transform-origin: 0 0;" on the background-image div.

Here's some matrix scales values where it demonstrates problems:

Initial value:
1.2447 0 52.724
0 1.2447 210.13
0 0 1

One step of zooming out, which should make the image smaller; the top of the image appears to move vertically up (which should be impossible while shrinking):
1.2324 0 54.677
0 1.2324 210.47
0 0 1

One more step of zooming out in the same position, now the top of the image goes back down:
1.2202 0 56.611
0 1.2202 210.8
0 0 1

So these 3 frames of a shrinking animation will jump around visually.

fmalita@ can you have a look or re-assign to someone who can make this better?
image.skp
4.1 KB
Labels: -M-49 ReleaseBlock-Stable M-50
I filed https://bugs.chromium.org/p/chromium/issues/detail?id=597758 about using the <img> tag backend optimizations for background-image, which would have the side effect of making this look better without fixing the rasterization problems. But it is maybe (probably?) a harder thing to merge to M50.
Cc: vmp...@chromium.org reed@google.com
Some preliminary notes:

1) only occurs with software rasterization, which may explain why the issue is reported against specific GPUs (when Ganesh is disabled/blacklisted?)

2) @danakj how did you capture the SKP and whack the matrix?  I wouldn't put too much faith in the Skia Qt debugger being accurate, or are you talking about the paint debugger built into FrameViewer/Timeline?  Depending on how you've captured the SKP, it may not reflect what is really going on in the browser (IIRC printToSkp bypasses vmpstr's image_decode_controller and relies on native/lazy decoding).

3) as an experiment I disabled the image decode controller (hack pasted below, I'm sure there's a better way), and the jankiness went away; the transition is still not as smooth as the layer/accelerated version, but most of the geometric instability seems to be gone.

--- a/cc/playback/display_item_list.cc
+++ b/cc/playback/display_item_list.cc
@@ -309,6 +309,7 @@ void DisplayItemList::GetDiscardableImagesInRect(
 }
 
 bool DisplayItemList::MayHaveDiscardableImages() const {
+    return false;
   return !image_map_.empty();
 }


Still looking, but based on #3 I'm suspecting a problem with CC's image decode controller at this point.
Cc: fmalita@chromium.org
Owner: vmp...@chromium.org
> 2) @danakj how did you capture the SKP and whack the matrix? 

I captured it from a trace, by dumping from the DisplayList dot in the frame viewer.

I played with the matrix by using the zoom feature in skia debugger. The tracing frame viewer was not showing anything transformed, and uses a different path entirely anyways, so I'd trust it less if it was working I guess.

I thought about image decoding, but I'm not sure what decode timing would have to do with the pixel output correctness though, maybe it does have something?

> 3) as an experiment I disabled the image decode controller (hack pasted below, I'm sure there's a better way), and the jankiness went away;

Uh.. huh ok! maybe it's when we choose to decode at different scales? vmpstr@ please have a look and confirm/disprove :)
Oh, FWIW this problem happened since M49, but image decode controller is new right? But.. Idk how this is all interacting.
Disabling IDC certainly seemed to improve the transition for me (maybe someone else can confirm I'm not hallucinating), but you're right that IDC is not present in M49.  We did land some Skia scaling fixes in the meantime; maybe that could explain why ToT without IDC looks better than M49?

Here's my bedtime theory (too late to check now, documenting mostly for my fresh Friday self):

a) Skia uses Mitchell filtering for scales > 1 and mipmaps + lerp for downscaling

b) this means we're switching scaling algorithms around scale == 1.

c) which is OK for normal transitions (crossing scale == 1 once/rarely)

d) but IDC decodes to the closest int size, and then post-applies the residual scale

e) which can be > 1 or < 1 depending on CTM/rounding

f) this means we're potentially switching between Mitchell/mipmap scaling *every single frame*

If this turns out to be the cause, one possible fix might be to lower the filter quality when applying the residual scale (kLow_SkFilterQuality => force bilerp for both upscaling/downscaling).


Most of the above holds, but IDC is already using kLow_SkFilterQuality when drawing prescaled images - so f) is incorrect.

After more experimentation, I'm seeing at least 3 factors affecting the animation smoothness:

1) IDC
2) whether or not the scaled image gets its own composited layer
3) whether or not we're upscaling

I mentioned #1 before.

For #2, the presence of a matrix3d pushes the image into a separate layer and I suspect we're hitting something like http://crbug.com/406529 (the layer subpixel phase is not communicated during rasterization and we end up post-filtering the scaled image as the layer is composited).

#3 comes into play after removing the other two variables: when upscaling using kHigh_SkFilterQuality, Skia does something similar to IDC: selects the closest int-size for dest, scales to that size *ignoring subpixel phase*, then applies the residual transform using a bilerp.  So we're essentially filtering twice: once with a HQ bicubic filter producing a cacheable int-sized bitmap, and a second time with bilerp to map this rounded scale to the actual dest matrix.  It's unclear to me whether this approach could ever yield smooth transitions (scaled size snaps from one int to another and bilerp may not compensate for discontinuities, particularly when applied with a different subpixel phase?).  I'm also starting to think this is the same fundamental problem as IDC: we're scaling to int sizes ignoring subpixel positioning (for caching purposes), and then bilerp as a second pass to apply the residual transform.

(#3 is likely what danakj was experiencing, using scale > 1 and HQ)



I've updated the two tests to also include a SVG scaled version of the image, as a way to isolate #2 (SVG scaling doesn't force a composited layer):

a) http://codepen.io/anon/pen/LNLpPj

(note: the scale for this test is always < 1, so we're not hitting #3)

With IDC disabled the SVG animation looks perfectly smooth to me, while the background-image is significantly blurrier and with some jank.  I blame the difference on #2.

b) http://codepen.io/anon/pen/EKXjRP

With IDC disabled, the SVG animation is smooth until it gets to scale > 1, when it starts hitting #3

So AFAICT there's a lot going on, and I don't think this is fixable in time for M50 (I'm not even sure what the fix would look like short of forcing kLow_SkFilterQuality and scaling directly into dest - but that conflicts with non-animated quality expectations and the IDC architecture).


@fmalita: how about using kLow_SkFilterQuality during the animation as a stop-gap for M50? We already do this in software mode via ImageQualityController (though just for the performance reason).

Or would scaling directly into dest also be required? Not sure what that would entail.
In terms of getting it into M50, is there any way to just revert to the way things were handled before Chrome 49? This regression seems like a pretty big deal for a lot of people (sprite sheets are very common). I'm sure it's more complicated than just rolling back some code, but anything you can do to get this resolved in the next release would be really appreciated. 
@chrishtr I can try to see if that helps, but yes, for optimal results I think we need to a) avoid double filtering (meaning we scale directly into dest with full CTM visibility) and b) use kLow (or kMedium - that should also produce continuous transitions)
@#36 that would regress a lot of other sites :/
Re 37: ok can you try it and let me know?
Owner: fmalita@chromium.org
I agree with comment 34, #3. I also see the same behavior with or without IDC enabled. One thing that helps: instead of snapping to the nearest int, we can always round up.

Currently we have this:
We scale to size X, and bilerp up to X + 0.5
The next moment, we scale to size X + 1, and bilerp down to (X + 1) - 0.5

If we do this instead, then it seems to produce smoother results:
Scale to X + 1, and always bilerp down to (X + 1) - fraction.

So instead of the smoothness jump happening at the 0.5 boundary with bilerp switching from upscaling to downscaling, it would happen at the integer boundary, and we'd always be bilerping the scaled result down to the correct subpixel level.
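
As a numeric illustration of the two snapping strategies (a sketch of the idea only, not the actual patch):

// Sketch of the two strategies described above. targetSize is the
// fractional destination size; the return value is the residual bilerp scale.
function nearestIntStrategy(targetSize) {
  var decodeSize = Math.round(targetSize);  // e.g. 100.4 -> 100, 100.6 -> 101
  return targetSize / decodeSize;           // flips between >1 and <1 at the .5 boundary
}

function roundUpStrategy(targetSize) {
  var decodeSize = Math.ceil(targetSize);   // e.g. 100.4 -> 101, 100.6 -> 101
  return targetSize / decodeSize;           // always <= 1: always bilerping down
}

// nearestIntStrategy(100.4) ~= 1.004 (bilerp up), nearestIntStrategy(100.6) ~= 0.996 (down)
// roundUpStrategy(100.4)   ~= 0.994 (down),       roundUpStrategy(100.6)   ~= 0.996 (down)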

Anyway, I have this patch, so maybe someone can confirm that it looks better? https://codereview.chromium.org/1838553003

Switching animated images to use low filter quality would work as well (that's what seems to be happening with the img tag, since it's going through a different layer). 
Re #40: it's better but still not perfect.
Yeah I think to get perfect we either have to scale to the precise subpixel size or to use medium/low filter quality for these cases.
Blockedon: 521364
Owner: chrishtr@chromium.org
@vmpstr yes, we do get int-snapping + double-filtering for HQ with/without IDC, but for mutually exclusive reasons:

  - with IDC, IDC itself does it while Skia gets out of the way (it's not caching the scale result)
  - without IDC, Skia attempts to cache the scale result and does pretty much the same thing

Damned if you do, damned if you don't :)

One difference is that Skia only does this for HQ upscaling, while IDC also does it for HQ downscaling IINM.



@chrishtr: lowering the filter quality does help - but only with issues #1 & #3 in c#34 (neither IDC nor Skia tries to cache/prescale in LQ mode).  We're still left with the composited layer subpixel artifacts (#2) though.

The easiest way to test is using image-rendering: -webkit-optimize-contrast; which forces kLow_SkFilterQuality in Blink.


* Original test, with a LQ column: http://codepen.io/anon/pen/VaWbKK

Notice that the SVG <image> is now as smooth as the <img> version, but background-image has some blurriness issues.  The difference is that background-image gets its own composited layer => so we're hitting issue #2 (if you look closely, when the animation settles for a sec and we drop the layer, the image gets noticeably sharper as it snaps back into the base layer).


* JS-only scale test, with a LQ column: http://codepen.io/anon/pen/mPwmXB

Again, the SVG version is as smooth as <img> (for both scales < 1 and > 1), while background-image suffers from #2.


* JS-only scale test with a LQ column, using matrix instead of matrix3d: http://codepen.io/anon/pen/GZEmEM

Finally, all three versions look smooth with LQ - because (unlike matrix3d) matrix doesn't force a composited layer for background-image.


So we have a potential author workaround:

  - use transform: matrix instead of transform: matrix3d
  - use image-rendering: -webkit-optimize-contrast


(the first part is a bit fragile: in my testing, background-image layerization appears to also depend on other factors/elements, not just on matrix-vs-matrix3d)
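
Put together, the suggested workaround looks something like this (a sketch; the class name and asset are placeholders, and the layerization caveat above applies):

/* Sketch of the author workaround above (class name and asset are placeholders).
   Caveat: matrix-vs-matrix3d alone may not determine whether the
   background-image gets its own composited layer. */
.sprite {
  background-image: url(sheet.png);
  image-rendering: -webkit-optimize-contrast;  /* forces kLow_SkFilterQuality in Blink */
  transform: matrix(1.2, 0, 0, 1.2, 0, 0);     /* 2D matrix instead of matrix3d */
}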

As far as the Chrome fix goes, I think this reinforces our initial thought: use low or medium filter quality for images with animated transforms.  And deal with issue 521364.

The tricky part in my mind is detecting non-declarative/JS-driven animations.  Maybe some of CC's logic removed in 
https://chromium.googlesource.com/chromium/src/+/66978ff65abbb473f8662444c2dc0823e0f45795 could be reused for this purpose?  Or maybe IDC could somehow detect/help with this?


Comment 44 by amin...@google.com, Mar 28 2016
Labels: OS-All
A friendly reminder that M50 Stable is launching soon! Your bug is labelled as Stable ReleaseBlock, pls make sure to land the fix and get it merged into the release branch by Apr-5. All changes MUST be merged into the release branch by 5pm on Apr-8 to make into the desktop Stable final build cut. Thanks!
> The tricky part in my mind is detecting non-declarative/JS-driven animations.  Maybe some of CC's logic removed in 
> https://chromium.googlesource.com/chromium/src/+/66978ff65abbb473f8662444c2dc0823e0f45795 could be reused for this 
> purpose?

The logic there was: if it ever animates (changes scale) once, consider it animating forever.
Yeah, that's a pretty big hammer.  I have an experimental hack based on that idea, clamping the filter quality to kMedium during scale "animations": https://codereview.chromium.org/1839143004/

It helps (not with problem #2 though), but it feels wrong on several levels.  Someone familiar with CC layer transforms/animations can probably figure a better place to implement the heuristic.  Ideally it should also involve an expiration timer (similar to Blink's ImageQualityController) to inval & re-raster with HQ when the scale stops animating.
> Someone familiar with CC layer transforms/animations can probably figure a better place to implement the heuristic.

I think only Blink can make any kind of guess (if it can even? based on RAF maybe?). CC just gets a transform set on it each frame.
I really appreciate all the effort that has gone into researching this issue. 
I have to admit that most of the tech talk is way over my head.

However, the author workarounds seem less than ideal:

- use transform: matrix instead of transform: matrix3d

It's very likely that 3D and scale will be animated at the same time.

- use image-rendering: -webkit-optimize-contrast

I saw improvement with this setting, but it doesn't seem right that authors should need a vendor-specific setting just to get one browser to behave properly for one type of animation (Safari, Firefox, and Edge have no problem).


I'm not sure if it helps, but performance is greatly impacted on larger background-images, severely reducing frame rate.

Here is a video: https://youtu.be/8w12f0CYyEY

demos: 
<img> looks great: http://codepen.io/GreenSock/pen/JXyLXb
background-image bad: http://codepen.io/GreenSock/pen/BKdrLq/

Thanks again for all the effort. Very impressed with the attention this is getting.
Re comment 49:

1. Using image-rendering: -webkit-optimize-contrast is equivalent to using image-rendering: crisp-edges, which is not prefixed. Blink has not yet implemented the unprefixed version.

2. The workaround suggesting non-3D matrices is just until issue 521364 is fixed. We're not claiming this is the end of the bug.

3. image-rendering: -webkit-optimize-contrast will make http://codepen.io/GreenSock/pen/BKdrLq/ much faster because it uses a low-quality filter which is much faster. In that example the quality of the image rendering
looks pretty good to me.
Blockedon: 597758
Update: even if issue 521364 is fixed according to my plan outlined in that bug, animating with 3D will have subpixel blurring issues. And animating images will have
jitter issues as indicated in this bug. The latter can be addressed by issue 597758, which
I should fix also.
May it be related to https://bugs.chromium.org/p/chromium/issues/detail?id=531290 somehow?
M50 Stable is launching very soon! Your bug is labelled as Stable ReleaseBlock, pls make sure to land the fix and get it merged ASAP. All changes MUST be merged into the release branch by 5pm on Apr-8 to make into the desktop Stable final build cut. Thanks!
Labels: -M-50 M-51
Moving to M51, this will not be done in time for M50.
Labels: -Pri-1 Pri-2
Status: Fixed
We ended up reverting the change that caused this bug. The change was merged into
Chrome 50. The animations you are testing with should now have their original
behavior.

I'm going to close this bug, but please check out issues 521364 and 600867 for
future changes.
I just tested the demos I had provided in 50 and I'm very pleased with the results. 
Big thanks to all that devoted so much effort. I know a lot of people will be happy!
Great, glad to hear it.
The bug is back in 53.0.2785.116 Win7.

http://codepen.io/GreenSock/pen/rexpMG
We re-launched the change, but this time added an easy way to opt out of it by adding
will-change: transform to your elements. See:

https://docs.google.com/document/d/1f8WS99F9GORWP_m74l_JfsTHgCrHkbEorHYu72D4Xag/edit

I reached out to a number of developers about it in advance. My apologies if I
missed reaching out to you.
So once again we have to trawl through older sites to update everything.
I don't believe I'm alone in thinking this wasn't broke and so why the hell 'fix' it?
Please - 'do no harm...'
No matter which way this goes, some sites are broken. At least now sites aren't randomly blurry, and have the ability to get the behaviour they want. So, sorry, this isn't a case where nothing was broken.
The flicker is the issue. I have found will-change to have unusable effects if the scale is too extreme for it.
Everything was fine before v51 I believe so these are issues that were created by subsequent changes.
In your use case, I'm sure everything was fine before Chrome 51. We decided
to make this change because it fixes a large number of sites, and this time has
an easy to implement opt-out for sites that want the old behavior.

I sympathize that you need to update a number of existing sites to retain good rendering
for your use case. Sorry about that. We spent a lot of time going through alternate approaches,
and this was the best one.
Possibly I missed something earlier up on this lengthy thread, but what was the issue that needed attention in the first place?
It looks like the new behavior samples the image at its initial transform state instead of at its native (100% scale) size, leading to much blurrier content if there's an initial reduction in scale. For example, if there's a scale(0.4) initially, and then one scales to "1", it's quite blurry even though it ends up at the native size. I'd expect that it'd sample at either the native size or the initial transform, whichever is BIGGER. Am I misunderstanding something? 

I've gotta admit, this new behavior strikes me as very undesirable. 
The core issue we fixed is not recorded in this bug. Your bug was a
side-effect of the change. The issue we fixed can be demonstrated via this
example:

http://jsbin.com/vutotadido/1/edit?html,output

Before Chrome 53, the "No longer blurry after proposed change" text was also
blurry, and developers would have to do a lot of work to force it to stop being
blurry. In some cases it was nearly impossible to force it to be crisp. This
affected a number of sites, including many that accidentally triggered it. We
fixed it by forcing re-raster by default when you change the scale of a composited
layer, unless will-change: transform is specified.
Re comment 65: are you talking about an example with will-change: transform?
Please post the example so we can have the same frame of reference.
I think I was referring to the same effect comment #65 mentions.
An element that started the animation with .1 scale retained that blurred appearance at the end of the animation.
Yes, if you add  will-change: transform to the animation referenced above, http://codepen.io/GreenSock/pen/rexpMG you should see what I mean. Image is very blurry at the end even though it's scale(0.7). I'd expect that anything at or below scale(1) would be razor-sharp. 
Please, fix this problem on the next version if possible. :)

The use of "will-change" pre-caches the pixel asset, so scaling it from < 1 to 1 will make it pixelated.

use -webkit-perspective: 1000; to avoid the glitching of slowly scaled assets.

Re comment 69: you're running into the problem that will-change: transform
never rasters on scale change. We used to have a heuristic to re-raster at
scale 1 (i.e. scale(1)) if scale ever changed. Your options are:

a. start the animation at scale 1
b. More generally: remove (if needed) and re-add will-change transform after a requestAnimationFrame whenever you want it to raster at the current scale.
You can remove will-change from the element when you want it to reraster (maybe every 2x scale change or so, or maybe when it goes from <1 to >=1, whatever heuristic you want).
(Sorry, remove it for a single frame then put it back to continue not rastering every frame.) This is described in https://docs.google.com/document/d/1f8WS99F9GORWP_m74l_JfsTHgCrHkbEorHYu72D4Xag/edit#heading=h.x44xbnpoz4qs also.
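
A minimal sketch of that remove-for-one-frame pattern (the helper name is ours, not an official API; it assumes the call happens inside an animation tick, so the intermediate 'auto' value is committed for one frame before the hint is restored):

// Sketch of the remove-for-one-frame pattern described above.
function rerasterAtCurrentScale(el) {
  el.style.willChange = 'auto';          // let this frame raster at the current scale
  requestAnimationFrame(function() {
    el.style.willChange = 'transform';   // then re-promote to a composited layer
  });
}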
WOW, this is very difficult to accept as an "improvement" in any way. Do we really have to mess with removing will-change, letting a tick go by, then adding it back during an animation to make things look acceptable? And this is what the Chrome team thought "enhanced" the most sites in the wild (or caused the fewest problems)? 

So would it be accurate to say that the heuristic that Chrome previously implemented natively to make things look good in the most cases has now been removed and delegated to developers for them to do manually via JavaScript? I'm kinda hoping that I misunderstood.


You are correct. We replaced a heuristic that works some of the time and has
unrecoverable quality problems in many cases with a setup that looks good by default and works all of the time with a small amount of additional effort by
developers of animation libraries (such as you).
Comment 76 Deleted
Wonder what changes we'll need to make to accommodate the next version?
How exciting...
I'm not sure "a small amount of additional effort" accurately describes what this offloads to us. 

This feels VERY hacky to me, almost like an abuse of "will-change". Functionally, it's making it act more like "immediately-rasterize-at-current-state". Is that what the spec dictates? (Doesn't look like it to me). Have all the implementation details been worked out and agreed upon by the major browsers? It's a little scary to work with this because it feels so unpredictable. The sands could shift beneath us (as today's resurfacing of this issue proves).

Today, it happens to rasterize at current state in Chrome because that's what the Chrome team decided made sense to them; Maybe Firefox thinks "will-change" should have a completely different behavior to "optimize" performance/display. Maybe Edge's implementation directly conflicts with the others. Yikes! Then library authors are left with yet another whack-a-mole scenario, adding various conditional logic, user agent sniffing, etc. in order to coerce the desired behavior consistently across browsers. Extra code that slows things down, bloats things unnecessarily, etc. "a small amount of additional effort"? Due to implementations like this, it sounds like will-change could actually be the source of a lot of headaches for animators (the very people it was meant to help). 

At the very *least*, I'd suggest ensuring that the rasterization is done at or above native size. So in the example above, if the CSS is initially scale(0.4), it would capture the raster image at scale(1) anyway, thus when things animate up to 1 they're nice and crisp. After all, the whole point of will-change is to make animation look **better**, so shoving a natively-sized raster image to the GPU/compositor would probably make the most sense in most scenarios to drive decent animation performance and aesthetics. Agreed? The current behavior assumes that things will either always be scaled DOWN (not up) or that the developer will implement logic to keep toggling the will-change value to trigger re-rasterization (which would hurt performance!)

Comment 79 Deleted
Sometimes it feels as if both Apple and Google are deliberately making animation for the web more difficult, as if strangling the obvious appetite for animation and bringing back the static web of yore, will make all their devices look better.
It's hard enough explaining to clients why we can no longer achieve animation effects that we could easily do 10 years ago, without having to explain why their ads suddenly break from one week to the next!
Surely improving one metric at the expense of another is not improvement at all.
Comment 81 Deleted
Does this also mean that the way we've gotten GPU-acceleration in the past (by using a matrix3d() or translate3d()) no longer works? The only way to ensure that something gets its own compositor layer is by using will-change? Again, I really hope that's not the case - that would certainly negatively affect a LOT of sites. 
Here's a failed attempt to toggle will-change at the start of the tween right after temporarily setting transform to native size to get clean rasterization: 

http://codepen.io/GreenSock/pen/RGoWzP?editors=0010

Expected behavior: rasterization would happen at native size, getting rid of the blurriness

Actual behavior: rasterization happens at scale(0.4) and is quite blurry.

Please advise. If you say that we must allow a tick to go by before proper rasterization at the just-set value, that will be a deal-breaker for a bunch of reasons (I can elaborate if you need me to). 
This video shows what we're seeing (and seems to make a pretty strong visual case that this "improvement" is likely a step backwards for Chrome): https://www.youtube.com/watch?v=iSvUlSpIbNk
Hi,

Thanks for the example and your patience. Please be sure that we are in no way trying to
break any content. Good quality rasterization and performance with compositing is a very
hard problem, especially considering the declarative nature of the web, where explicit control
of rendering is (on purpose) not present.

Here is a tweak of your example that I think looks good. I think you will find that it is a tiny
bit blurrier than the "regular <img>" version. If desired, you can also force a frame of raster
at the end of the animation when it stops moving. While it's animating size, it's often desirable
to have some blur because it can look nice.

http://codepen.io/anon/pen/qaqqqA

This demo re-rasters the content whenever the scale changes more than a threshold (I chose 0.1).
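
The codepen isn't reproduced here, but the approach is roughly the following (a sketch under stated assumptions: the 0.1 threshold comes from the comment above, the names are ours, and it assumes the call happens inside the animation tick):

// Sketch of the threshold idea described above (not the actual codepen source).
var lastRasterScale = 0.4;

function onScaleChange(el, scale) {
  el.style.transform = 'scale(' + scale + ')';
  if (Math.abs(scale - lastRasterScale) > 0.1) {
    lastRasterScale = scale;
    el.style.willChange = 'auto';          // re-raster at the current scale
    requestAnimationFrame(function() {
      el.style.willChange = 'transform';   // then restore the compositing hint
    });
  }
}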

Here are some more answers to your questions:

1. The triggers for compositing have not changed. 3D matrices and will-change: transform are still
both triggers for compositing.

2. It's required to wait for a tick in order for any changes to CSS to take effect. This is why
your demo in comment 83 doesn't quite work. This is not something special to this feature; it's the
way that the web works. In each frame, javascript can make one or more changes to DOM and CSS. Once
javascript releases control to the event loop, the browser draws a new frame with the new values.
Therefore, making a change to a CSS property and then undoing it synchronously before releasing
control will have no effect.

3. Previously, animating a composited layer's scale towards anything other than scale 1 would end
up blurry. In other words, crbug.com/596382 was present for all scales except 1, for previous 
versions of Chrome. Now, in Chrome 53, you can do this to any size, either (a) by Chrome's new
default behavior to raster at every intermediate scale, or (b) adjusting the raster at desired
intermediate points according to what you want in order to balance quality with speed. I hope and
think you will find that this unlocks more power for your library.
I appreciate the detailed response, chrishtr. I have no doubt your team is aiming to improve the way sites render, not degrade them. We've got the same goal. 

The solution you proposed just isn't workable. We place a **HUGE** priority on performance (because animation is the most performance-sensitive aspect of UX) and when I used your exact code, here's a demo that shows how it performs under pressure: 

http://codepen.io/GreenSock/pen/29b28df686420e26dc5c19272156877e?editors=0010

If the goal of this change was to improve the way sites look/render/perform, this is definitely a step backward for animation. Do you have any other recommended solutions that wouldn't kill performance and require manual analysis on every update (60 times per second)? 

1) Are you sure that 3d values trigger compositing? I just checked and it sure looks like that's not working the way it used to - when a translate3d() is set on an element, it's not being properly layerized. This is a big concern performance-wise.

2) You mentioned that it's necessary to wait for a tick to elapse for any CSS to take effect, and I see what you mean in terms of visually being rendered to the screen itself, but I was talking about lower-level. For example, it's easy to change style.width and then run getBoundingClientRect() immediately and it's accurate using the new value. There are many other properties that work exactly like this - it's not like we need to wait for a tick to elapse before the values are reflected. But I also see your point in that what we're talking about here is purely a visual change, thus it'd be hard for you to isolate just that property and have the browser force an internal render and shuttle that new texture to the GPU/compositor.

Instead, could you maybe implement some other way to tell the browser at what size/scale to perform its rasterization? For example, element.cacheTransform = "scale(1)" would indicate to the browser that when it does its rasterization, it should do so as if the transform was "scale(1)". It could take the same value as any "transform". This gives developers a lot of control and takes the burden off the browser's heuristics. 

I realize that's probably a much more difficult thing for you to push through since it's introducing an entirely new API, so at the very least I'd reiterate the suggestion I made earlier - rasterize at the native size or the current transform, whichever is BIGGER. I think this will solve a lot of the problems the new behavior introduces. Actually, I think it'd be better to revert to the previous behavior until a really solid solution emerges that doesn't cause so many major issues for animators.

I understand that your team thinks this new behavior is a "fix" (overall), but please talk to animators out there and see how they feel about this. I've heard from quite a few people who see this as a significant problem that needs to be fixed ASAP.  
Re performance: yes, re-rastering slows things down. The optimal thing to do performance-wise for your
particular case is raster at the final scale and then animate frames without rastering. I know that this
can be tricky to use without showing an initial frame at the wrong scale to the user. (*)

Re your performance test: in practice one should implement several more optimizations on top of what
I showed in the demo in comment 85. For example, in your case there is no reason to raster again once
an element has been rastered at a scale suitably close to 1. Keeping track of the maximum scale it's
been rastered at should be straightforward. Also, the pure JS overhead of the 60fps callback function in this case
should not be a big deal since you have JS running on every frame anyway. Before the current version of Chrome,
we would raster this example twice (assuming you don't delete and re-create the composited layer on every animation
repeat): once at 0.4 and once at 1.0. And if there are 500 such elements, each would get rastered twice.
So this is only one more raster. Is that really too expensive?

Re rastering at the larger of native size or current: the problem is that rastering at
native size and then scaling that down with bilinear filters in the compositor to something less than 0.5 or so
will start to look noticeably bad. Therefore, while scale 1 is the most common desired scale, it is still a heuristic.

Re #1: yes I'm sure. You can check with the "layer borders" feature of the rendering section of DevTools.
You can provide an example if you like and I will double-check it for you.

Re #2: yes we considered ways to specify raster scale. But it may not be so easy to get it right in terms of
expressiveness, and requires new APIs, as you mentioned, and also to define what raster scale means,
which seems quite tricky in general, especially while not over-fitting to current implementation strategies.
That might happen later on (and you're welcome to participate in standards discussions about it).

Also, we can't easily do the same thing as the getBoundingClientRect() example for the reason you
mentioned, that raster pipelines are not synchronous, and that it's unclear to me what would be
the appropriate question to ask for readback to force a frame, if it's not the rastered buffer itself
(suggestions welcome if you have one).

(*) This indicates a use case for offscreen rastering, which has also been proposed from time to time and will
probably eventually happen for content. An implementation is currently in progress for Canvas elements.

Again, I really appreciate the discussion and thoughtful responses...

You mentioned that "rastering at native size and then scaling that down with bilinear filters in the compositor to something less than 0.5 or so
will start to look noticeably bad" but how is this any different than if you load a jpg image or something? Clearly Chrome has figured out how to scale things down without terrible artifacts/chunking, so why would it be so hard to apply the same technique to the raster image captured here? Why does every other browser handle this situation beautifully (and Chrome used to as well) if it's so technically difficult? 

Regarding the performance demo, there could certainly be some further optimizations but my point is that none of that should be necessary at all. I also feel like the amount of work is being unrealistically minimized, like it's no big deal, but it's extra logic being run for every single tweening element on every single tick, and it has to store extra values too (increasing memory usage). Sure, for a simple animation where one thing is being scaled, it's not going to be noticeable, but we build our platform to handle crazy amounts of stress because quite frankly sometimes that's what the project requires. Our users aren't satisfied with an answer like "well, yeah, it's 15% slower but hey, it's not too bad...maybe just animate less." I literally hunt for even the smallest optimizations inside the code because they all add up. It's quite painful to think about adding in all this extra logic that [in my opinion] should be handled natively by the browser, or at least it should be possible to optimize it in a much more intuitive/simple way. Flipping on and off "will-change" at various intervals strikes me as exceedingly unintuitive, hacky, and cumbersome. (I don't mean any of that as an insult to you or your team at all...I know you've got a difficult job and you've thought long and hard about this decision).

As for the readback thing, it feels like a place where there should be a Bitmap or BitmapData API so that you could do element.willChange = "auto", and then var bitmap = element.getBitmapData() or something like that. It'd force the browser to do the rasterization at that time (if any changes were made, of course). I imagine that chunk of data could be super useful to feed into <canvas> stuff. Pixel-level access to how the browser renders DOM. Yummy. 
Re scaling down: yes, Chrome has code to make that look good. But doing so requires rastering to that scale. The
situation we're discussing is what happens when you put will-change: transform on a composited layer (or,
what Chrome did for composited layers prior to version 53 no matter what), which means that raster no longer
happens when it changes scale, but instead just a bilinear filter when drawing to the screen. Allowing the
developer to request raster at a small scale during or after animation is a new feature in this release, for
such use cases.

I don't want to minimize the work needed for you. I know it will not be totally trivial, but hope it won't be
too bad. And even if it isn't extremely hard to implement, I realize we are creating work for you. Unfortunately
that is the price we have to pay sometimes for advancing the overall platform's capabilities and performance.
If it makes you feel better, I have spent a whole lot of time on the concerns and code surrounding this issue. :)

The reason we consider will-change: transform as not a hack is that the intention of will-change is to give
a hint to the browser that the referenced property is going to be animated, and for the browser to take steps
to optimize performance for that use case. This is why will-change: transform creates a composited layer:
because animating transform afterward will therefore not later have the startup cost and per-frame of creating
the composited layer and rasterizing it. Following this logic, further extending the meaning of will-change:
transform to not re-raster on scale change is similar, because it will make it faster.

Thought about another way, consider animating a 2D vs 3D transform, *without* will-change. From the point of view
of a consistent platform, the raster quality on every frame should be identical if possible for equivalent matrices,
(*), since it's just a 4x4 vs 3x3 matrix. But prior to Chrome 53, it was not, because 3D transforms stopped rastering 
(except at the magic scale 1) on scale change. With Chrome 53, "transform" just means transform, and
"will-change: transform" means "please optimize the transform for speed". This also allows us to stop compositing 
on 3D transform in the future, for cases when we know that it won't be faster. (**)

The above considerations are all reasons we ended up with the current system.

(*) There are actually some edge case bugs in that at present that we are trying to fix, such as crbug.com/521364

(**) Examples we anticipate include if 3D transform is on an element which doesn't have a very large texture backing, and
we are using GPU-accelerated rasterization, which is often much faster than "software" raster, and often overall faster
than the bookkeeping necessary to handle multiple textures.
The extra work for me isn't a big deal - it's the least of my worries. My primary concerns are:

- performance

- aesthetics (things should look sharp...without developers suddenly having to update their sites because things look terrible now in Chrome)

- Browser inconsistencies and quirks (Chrome is the *only* one who handles things this way, and I have yet to hear from anyone outside the Chrome team who thinks this new behavior is an improvement)

The more testing I do around this new behavior, the more baffled I have become. For example, try applying a simple transform: scale(0.48) and will-change:transform to both an <img> and a <div> that uses a background-image. They look VERY different (see attached) (codepen link: http://codepen.io/GreenSock/pen/ff728786eceffd51a6048a1fe6d09ee5). Is this expected? It sure seems undesirable and awfully frustrating as a developer/designer. The <img> looks great, and clearly Chrome is using an algorithm that produces the desired result, so why use a totally different (fuzzy) algorithm on the <div>? Again, all other browsers render the two consistently and beautifully. Chrome seems to botch the background-image. Am I missing something? 

I see what you mean about extending the intent of will-change to not re-rasterize in order to improve performance; I just think it was taken too far. Yes, the intent is to improve performance. I'd totally understand will-change:transform triggering a composite layer (great!). BUT, reasonable measures should be taken to preserve clarity. For example, by the same reasoning, why not turn off antialiasing altogether? It's faster. Why not downsample to a fraction of the resolution and push those pixels around? That'd save memory and make it faster. Clearly there are things you're choosing to do in order to reasonably maintain visual fidelity - why not extend that here? Instead of having developers manually flip on/off/on/off will-change for every 0.1 scale delta, why not do something like that natively in the browser? At least some minimal levels of effort could be expended to keep clarity when scale is below 1. If scale starts out at 0.1 and then I scale to 1, wouldn't it just make sense that the pixels shouldn't be blown up to 10x normal size? 


rasterization-issues.gif
8.0 KB
Uggh... this is not a good option.

Performance-wise, anyone who built sites using sprite sheets... including some Google sites... will be impacted by this "feature". This forces developers not to optimize their sites using sprite sheets. You may inadvertently cause more network requests.

And in terms of aesthetics... It just plain looks awful. User test this. Show it to people and ask them if it looks degraded.

Responding to comment 90:

The issue reported in http://codepen.io/GreenSock/pen/ff728786eceffd51a6048a1fe6d09ee5 is actually unrelated
to this bug. I filed crbug.com/649046 to investigate.

Re why doesn't the browser itself re-raster every so many scale changes: we considered that as well, and went so
far as to prototype some of it. However, we ended up abandoning that approach because we concluded it would be
very difficult to do it in a performant way, because it's impossible for the browser to be sure of the intent of the
web developer in terms of the quality/performance tradeoff. It also introduced a lot more complexity into the
composited rendering subsystem.

Re comment 91: please provide an example.
The new bug you opened (crbug.com/649046) seems to indicate the problem is with <img> but it's the opposite - the <img> looked good whereas the <div> with background-image looked quite bad (see the attachment in comment 90)

In fact, this is making it almost impossible for us to implement a workaround in GSAP, because no matter how often we're forcing a re-rasterization (by toggling will-change back to "auto") on an element with any 3d transform, when it reaches its resting state at scale(1) and does the final toggle to will-change: auto and we remove the 3d transform, there's a **drastic** pixel-shift. Look at my screen-shot that was attached in #90 - it's like it's suddenly changing from the bottom image (blurry) to the top one (crisp). Jarring. Animators will not be happy. 

So once again, it seems there's a Chrome bug in the way this rasterizing is being implemented, and we can't seem to work around it elegantly (...or at all, really). I've already got a patched version of GSAP, but due to the funky rasterization stuff in Chrome, it feels pretty unusable.

--

You said you tried to implement the re-rasterizing natively and then abandoned the approach because it was too complex and it was impossible to do in a performant way, nor could you accurately determine the developer's quality/performance intent. But then you say that it's a "small amount of additional effort" for us to do this same type of thing in our JS library? And that performance-wise, it's pretty much a non-issue to do at the JS level? Please help me understand the contrast. I'd certainly think native code could run much faster than JS...and what information do we have access to that you don't? 

You whipped together an algorithm for me that, whenever a scale change is made, just checks it against when it was last re-rasterized and, if it exceeds a certain threshold, triggers the re-rasterization. Why not do that natively? Wouldn't it be much faster than layering it on at the JS level and flipping back-and-forth between will-change values (which, again, seems very odd to do during an animation where CLEARLY things are definitely going to change, yet we're telling the browser "just kidding...no really, I will change...no, I won't...yes I will...nope...I can't make up my mind!")? It certainly doesn't feel like it's at all in line with the intent of the will-change spec to turn it OFF in the middle of changing (animating)!
I think the attachment in comment 90 is actually the same thing as crbug.com/649046.
The core rastering system is getting things wrong there. I think you will find that Chrome
53 did not regress this (though I am still investigating).
The reason it looks good in an <img> tag in that case is that we have special code just for
<img> compositing.

You can work around crbug.com/649046 by making sure that the element does not lose its
composited backing at the end of the animation (by leaving will-change: transform for
example).
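
In code, that workaround is simply a matter of never clearing the hint when the animation finishes (a minimal sketch, assuming an element reference el):

// Keep the element on its own composited layer after the animation
// ends, so its final raster is not discarded and re-snapped.
el.style.willChange = 'transform';  // set before animating; never reset to 'auto'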

Re why was it hard for us but easy for you: because you can know exactly the intent of
the system and the desired tradeoff. Though since you are writing a general animation
library, you do indeed have a harder time than if you were writing a specific site, and
I expect you will face some of the same challenges as we do in providing a framework for
animation.

Native code is not really so much faster than JS that it's always better to bake things in.
If there is already JS running on each frame, then it's not a problem to run a little bit extra.
It's not at all obvious to Chrome as a rendering engine when JS is animating transform - we'd
have to implement complex heuristics which will sometimes be wrong, and end up with a very
complex system that is still not ideal. We think it's better instead to give the JS developer
control.

Note also that the web platform and Chrome do in fact support declarative animations that run
on the compositor thread (CSS transitions and Web Animations), in order to support use
cases similar to yours. (I know you've chosen not to use this system, which I am aware is limited
in some ways.) In such cases, Chrome chooses performance-quality tradeoffs because it
knows what the animation is doing and can plan in advance. For example, if you set up
a CSS transition of the transform property to animate between a scale of 0.4 and 1, Chrome will
raster at scale 1 and then animate up to it. Chrome can do that because the animation in question
was explained in advance to the rasterization subsystem.
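
For example (a minimal JS sketch, assuming an element reference el; forcing a layout flush is just one way to commit the start value):

// Declarative transition: because the destination transform is known
// up front, Chrome can raster at the final scale (1) before animating.
el.style.transform = 'scale(0.4)';
el.style.transition = 'transform 1s';
void el.offsetWidth;              // force a style/layout flush so the start value is committed
el.style.transform = 'scale(1)';  // the transition now runs on the compositor thread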

This is analogous to your library, it's just that your library animates from JS. And
I know it's somewhat harder for you, because you can't control raster precisely except via
will-change: transform.
I can't make the argument as intelligently as Greensock has, but Chrome team, PLEASE PLEASE PLEASE revert what you've done in v53 that degrades the rendering of background images during scale animations. After reading all the back and forth from the past few days, I still don't understand what the upside actually was. And weighed against the very noticeable negative effect, it doesn't seem like it was a worthwhile change.
Re comment 95: please provide an example, and why the recommended change we
are asking for to your site does not work.
This isn't really adding anything different than has been demonstrated above, but here's my own codepen for testing background-image animation rendering. Looks good in all my other browsers...just not so hot in Chrome v53 (but it was working very nicely in v52).

http://codepen.io/creativeocean/pen/GjAGxQ
"we have special code just for <img> compositing." - please implement that consistently  everywhere. (Not sure why you'd use a different algorithm for <img>)

--

"You can work around crbug.com/649046 by making sure that the element does not lose its
composited backing at the end of the animation (by leaving will-change: transform for
example)." - This is unacceptable for the following reasons: 

1) It's fuzzy/blurry

2) It can unnecessarily hog memory/resources. My understanding is that it's not good practice to just layerize everything - the GPU has limited memory. The "solution" recommended here means that as soon as an element gets animated once, it'll stay on its own compositor layer forever (or else risk the jarring pixel-shift, blurry-to-sharp jump).

--

"we'd have to implement complex heuristics which will sometimes be wrong, and end up with a very complex system" - is it more complex than simply "if the scale changes more than X delta, re-rasterize"? It really doesn't seem that hard. What am I missing? 

--

You mention CSS transitions and Web Animations being uniquely able to communicate intent to the compositor, and that in the scale(0.4) > scale(1) case Chrome rasterizes at 1... why is this hidden from JS developers? Why can't I say (via JS) "rasterize at 1" without hacks that require waiting for a frame to elapse and toggling "will-change" (which, again, seems to directly conflict with what that property signals, and I have yet to hear a response to my questions/comments about that)? I'd love to be able to tap into native WAAPI or CSS, but both have far too many limitations that make that impossible.

Again, I'd really appreciate a response to my previous comment/question: "[toggling will-change back and forth] seems very odd to do during an animation where CLEARLY things are definitely going to change, yet we're telling the browser "just kidding...no really, I will change...no, I won't...yes I will...nope...I can't make up my mind!"(?) It certainly doesn't feel like it's at all in line with the intent of the will-change spec to turn it OFF in the middle of changing (animating)!" 

Thanks.



I agree that it would be helpful, as a matter of discovery, to know what bugs were fixed in order to bring this bug into existence. Everyone on the Greensock side is well aware of the capabilities and importance of the library in pushing web animations forward - a mission that Greensock shares with Chrome. 

As a front-ender, the particulars of compositors and rastering aren't really in my wheelhouse, but if previous Chrome versions (51, 52?) were rendering thousands of website elements/ads unacceptably glitchy (or otherwise disabled/fractured), it may help us animators to understand the kinds of tradeoffs the Chrome team is weighing in its decision to go one way or the other. Perhaps there's an army of previously irate developers on the other side of this adjustment who are now completely satisfied with the changes.

If that isn't the case, though, I wonder if we can explore a solution that makes everybody happy and keeps animation moving smoothly forward. I understand that the Chrome team is not obligated to cater their development to the squeakiest wheels - but we've rewound this tape before without too much fallout, right? 

Thanks to everyone on both sides for your hard work exploring the best outcomes here.   
Re comment 97: here is how to change it to make it work - a one-liner in your
case: http://codepen.io/anon/pen/ozByOZ
Comment 101 Deleted
I notice that if I add the following CSS properties to the parent of the child element with the background-image and transform: scale(), the blur is gone - except maybe a slight, faint blur at the beginning of the animation - without using will-change:

.parent_of_child_with_transform_scale {
     /* Both properties promote the element to its own composited layer,
        which changes how its contents are rastered. */
     -webkit-backface-visibility: hidden;
     -webkit-perspective: 1000;
}

http://codepen.io/jonathan/pen/PGWaLd

Is it expected behavior that the above CSS properties (mostly) fix this, without will-change?
@chrishtr, thanks! Going forward, this seems like a simple enough extra step. My only worry now is that past animations didn't include this CSS property. That's not a trivial concern. Perhaps the default state will-change: auto should behave like will-change: transform?
Re comment 100: Here's what happens when the suggested fix is applied to the same animation, but going from 0.1 to 1 instead of 1 to 0.5: 
http://codepen.io/GreenSock/pen/d454214493bbc4d4dc1d6fb9b734d1e8/

Does the Chrome team plan to tell animators "please go back and edit your sites in cases where there are any scale tweens that are going **up** instead of down; now you'll need to add will-change:transform and the toggle back and forth to will-change:auto during the animation to make it look decent...oh, and at the end make sure you don't remove 3d transforms or you'll get the funky pixel-shift...and sorry about it being blurry...unless it's an <img> in which case it should be okay." (?)

Again, I really don't mean for any of this to sound disrespectful of your team or the hard work you put in. I have no doubt you're diligently trying to find a way to solve a bunch of problems in the most efficient/logical way. I hope you don't mind some dialog about it and a little push-back. Obviously I feel pretty strongly that this is a big step backward, but I'm trying to be open to reconsidering. Thus far I'm still scratching my head.
@chrishtr, I've run into my first real-world test of will-change and it's not pretty. Here are links demonstrating the ill effects of will-change on a background image. 

Link 1 (will-change left undefined): http://creativeocean.com/dev/ace/300x250_v1/

Link 2 (will-change:transform): http://creativeocean.com/dev/ace/300x250_v2/

Because of the degraded image quality produced by will-change (seen in the animated text in Link 2), I think I'm better off leaving that out and just accepting the shakier motion that comes from leaving will-change undefined.

Really, this all brings me back to my original plea; please restore the old background image behavior.
Side note, everything still looks just as good as it used to in v53 of the iOS browser.
Please let me know if anything is inaccurate in this blog post that summarizes this issue: http://greensock.com/will-change/
Labels: -ReleaseBlock-Stable
Hi,

Thank you for continuing to try to find a good solution. I have one update, one clarifying point,
and one question, all related to your blog post:

1. Update: issue 649046 was just fixed (see also comment 92). This is the issue of a down-scaled background
image with compositing looking blurry relative to the same image without compositing, and I think this is
also the case you are demonstrating in your blog post in the "gotcha: never de-layerize" section.
You will be able to verify whether it fixes it for you in tomorrow's Chrome Canary. I'd also like to note
that my suggestion to never de-layerize was a workaround while we fix this bug, not a long-term solution.

2. Clarifying point: you say in the "before Chrome 53" section that no hoops needed to be jumped through.
That is only true for your use case, where you were happy with the second raster happening at scale 1.
Chrome never did a good job at other scales, and we have received a number of bugs about that restriction
over time from other developers.

3. Question: were you able to make things work in the end, albeit with more effort? I know we have a
disagreement about whether the Chrome 53 implementation choice is a good one, but we are listening
and want to fix any and all bugs that prevent good animations. I hope us fixing issue 649046 demonstrates
that, and helps your use case.
Thanks for the clarifications. I've updated the article accordingly. 

As for question #3, no, I wasn't able to make things work in an acceptable way. The two big reasons were mentioned in the article. I was able to do the toggling back-and-forth of will-change to get better clarity, but still ran into that pixel-shifting at the end of the tween when we returned to a 2D matrix on background-image stuff, so it didn't feel like a solid solution. Now that you're planning to fix that background-image stuff, it certainly helps but doesn't solve everything. 

The performance tradeoff really bothers me and I worry about how these Chrome-specific tweaks will eventually affect other browsers. Obviously I feel pretty strongly that the will-change spec needs consensus on implementation details, otherwise this is dangerous for developers. It'd be a huge loss if we have to start doing user agent sniffing again. 

I've offered some solutions that I really hope your team re-considers moving forward. Happy to chat more about any of those if you guys are interested. 

Thanks again for engaging in the discussion and working toward positive solutions. Very happy that you guys are listening and eager to fix bugs. 
I think the fix to issue 649046 will resolve the pixel-shifting problem you observed.
If you don't mind, could you try on Chrome Canary tomorrow and verify? It sounds like that
is the only definite blocker.
I miss you, Flash...we had many great times together and your replacement just keeps regressing. Consistency is our holy grail and now, years into the web browser wars, we still don't have parity, and in cases like this we are going backwards.
I have filed issue 652448 to track applying raster behavior similar to what
we have for <img> to background-image.
Per your request, we tested the latest Chrome Canary. Here's how things compare: https://www.youtube.com/watch?v=PNTvGhXdLvU

Better than Chrome 53, but certainly not "good". Still jittery and there's a noticeable shift when it goes from layerized 3d transform to 2d. 

Unfortunately, with this jittery and blurry-to-sharp delayerization inconsistency, as well as the danger of will-change behaving differently in various browsers (due to lack of implementation detail in the spec), we're kinda stuck and unable to apply a solid patch to GSAP that'd work around all these things. 

Again, even if you fix the background-image rendering bugs, I'd implore you to reconsider the will-change behavior (and arduous workarounds) in favor of stepping back and getting implementation consensus from all the browser vendors and spec authors before re-approaching it later. As it stands now, we're headed toward another age of user agent sniffing and browser-specific hacks that really shouldn't be necessary. 
Thanks. Issue 649046 was just about things being unreasonably blurry in some
cases. It seems like that was fixed. The snapping of pixels when exiting
compositing is yet another issue to do with differences in non-composited and
composited rendering, which is known to be suboptimal in various cases involving
fractional translate or scale. (Issue 521364 tracks improving this, though it may
not improve your use case, which involves scale). I should also note that issue
521364 is a different issue than the will-change raster behavior we changed in
Chrome 53, and has been around forever.

So I'm just a guy trying to make a website with animations that scale smoothly and don't jitter. All this stuff about toggling "will-change: transform;" and "will-change: auto;" - I just don't know how to do it.

Look... all I want is for my animations to look smooth. I spent a day just trying to figure out what the problem was. Every JavaScript library I've tried yields jittery animations when scaling, but CSS is buttery smooth. Are you telling me that the only way I can fix this is to literally go into the source code and make a change that will make my animations perform worse?
Re comment 115: if you have an example where the animation should be smooth
but is not, please file a new bug with a reduced testcase (on jsbin or
codepen or similar) that demonstrates the issue. You're welcome to post the
id of the bug here so I can make sure to see it.
Chrome team (chrishtr?), can you summarize the recent changes to the rendering algorithms and how will-change affects them? It seems like there's a pretty radical difference in the latest release (59.0.x) that resolves some of the old funkiness but also seems pretty CPU-intensive. Did you guys decide to make sure that the baseline raster is sampled at a minimum of scale:1 or something like that? (to avoid all the fuzziness when an element started scaled way down and then scaled up)
We made a number of changes with Chrome 59, but modifications to how
will-change behaves is not one of them. It sounds like you have
examples. Could you file a new bug with known examples so I can
investigate?
Re comment 118: Please see https://greensock.com/forums/topic/16641-scaling-and-will-change-in-chrome/ (I'm not saying it's necessarily a "bug" - I'm trying to gain clarity about what changes were made, why, and what we can expect down the road since Chrome's behavior seems to be changing back and forth quite a bit with rasterization). Thanks!
Re comment 115: As far as this thread seems to reveal, it doesn't appear that this is a bug, but rather intended behaviour which I just don't understand. See https://codepen.io/aaronbeaudoin/pen/YQPjbL for a CodePen showing how different methods of animating a scale appear on my computer. The effect is more easily seen on a low-resolution monitor. Using "will-change: transform;" on any of the JavaScript library animations causes them to be blurry when they are scaled up.

All I'm trying to do is make the JavaScript library powered scale animations nice and smooth without blur or jitter, just like the CSS one. Do I really have to go and play with source code I don't understand to accomplish this? It seems like such a simple expectation.
Notice the vibrating text in this animation: https://codepen.io/GreenSock/pen/39f1aa3a003e729d7dd4122beeee7c4c/

Looks terrible in Chrome. If I enable will-change, it solves the vibration problem but anything that starts at a very small scale turns into a blurry mess. Help? This seems related to this bug, but perhaps it needs to be filed separately? 

So there are two issues: 
1) the text shouldn't vibrate while scaling
2) The elements shouldn't turn into a blurry mess when will-change: transform is applied. 
Re comment 119: the example given there does not use will-change: transform, so it's re-rastering on every
frame of the scale change. I don't think anything changed with Chrome 59, except perhaps that we improved
subpixel jitter during scale.

Re comments 120 and 121: I discussed this at some length with two other engineers this morning.
We agree that it's hard to implement these examples well with a JS-driven animation right now.
I think that the workaround in comment 73 should work, but also agree that the ergonomics of that
solution are not so great. However, we really want to avoid footguns, code complication, and heuristics
in the Chrome compositor.

We came up with another approach that seems to work well, and would like your feedback on: add
a declarative CSS animation with keyframes, to give the browser the ability to reason about the
sizes of the animation and choose a good raster scale.

For the example in https://codepen.io/GreenSock/pen/39f1aa3a003e729d7dd4122beeee7c4c/, add the following
CSS:

@keyframes scale {
    from {
        transform: scale(0.5); /* or whatever minimum scale exists in the animation */
    }
    to {
        transform: scale(1); /* or whatever maximum scale exists in the animation */
    }
}

#quote div {
    animation-name: scale;
    animation-duration: 0.1s;
    /* start so far in the future that the animation never actually runs */
    animation-delay: 100000s;
}

@greensockjs: What do you think? I think it should work well, but would like your feedback.

The above CSS will have the effect that Chrome's compositor can determine the optimal scale for rastering.
Currently, it chooses the maximum scale over all keyframes.

This is the same reason that, in https://codepen.io/aaronbeaudoin/pen/YQPjbL, example #1 ("css animation") has
perfect rendering but the other examples do not.

If this approach works well, we could change the CSS animations spec to allow declaring an animation without having
to specify animation-duration, and say that such a declaration is a hint to the browser that there will be an
animation between the declared keyframes at some point in the future, which may be driven by JS.

Finally, we tested the above approach (with keyframes) on Edge and Firefox, and it doesn't help the quality
of the animation in either. It appears that they have the same issue as Chrome, but don't yet optimize
CSS animations well enough for the keyframes hint to help.
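
If the scale range is only known at runtime, the same hint could be installed from JS (a sketch only; the rule name and scale values are illustrative):

// Sketch: inject the never-running keyframes hint dynamically once the
// animation's min/max scale is known.
const style = document.createElement('style');
document.head.appendChild(style);
style.sheet.insertRule(`@keyframes raster-hint {
  from { transform: scale(0.5); }  /* minimum scale in the animation */
  to   { transform: scale(1); }    /* maximum scale in the animation */
}`);
// 0.1s duration with a 100000s delay: declared, but never actually runs.
el.style.animation = 'raster-hint 0.1s 100000s';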
Cc: pdr@chromium.org
I really appreciate the prompt response and the fact that you took the time to discuss it with your team. 

That said, I see some significant problems with the proposed "solution":

  1) In order to adequately render JavaScript animations, one must create CSS keyframe animations...which don't even run? Quite counter-intuitive. JavaScript animations shouldn't depend on CSS animations. 

  2) JS animations are typically very dynamic, but this solution seems to require a declarative approach that assumes it's possible to predict the maximum/minimum scale values. What if the values are randomized or change based on user input? Should the keyframes constantly be updated via JavaScript somehow? Yuck.

  3) This is totally non-standard and browser-specific (unreliable). What happens if everyone starts implementing this "hack" (just to make things work acceptably in Chrome) and then at a later date Chrome decides to interpret things differently and changes the algorithms?

  4) How might this affect WAAPI (which from what I understand taps into the exact same engine under the hood)? For example, if we start dynamically shoving 100000000s-delayed animations onto elements, will the root timeline become absurdly long and confuse people? Like if they create a 1-second long WAAPI animation somewhere on a page that has this hack implemented, and then they query the root timeline's duration, might it break stuff (or at least be confusing) if it's insanely long instead of just 1 second? 

I think it'd be much, much better to implement 2 changes: 

  a) Rasterize at the native size or the current transform, whichever is **BIGGER**. This will solve a lot of the problems since they almost always arise when an element is rasterized *below* scale(1). 

  b) Introduce an official API for telling the browser what the minimum scale is for the rasterization of an element. For example, element.cacheTransform = "scale(1)" would indicate to the browser that when it does its rasterization, it should do so as if the transform was "scale(1)". It could take the same value as any "transform". This gives developers a lot of control and takes the burden off the browser's heuristics. 

Please consider at least "a)" above. 

Thanks again for all your efforts. 
(a) is a good idea; we're trying that now.

Re many of the other comments, I think you missed my point that specifying keyframes
is just an idea, one which would end up standardized if it works out. I am as aware as you are
that the approach is not currently interoperable; interoperability is weak across
this whole area at present.
Excellent, glad to hear you're at least trying (a). 

Regarding standardization, I really hope that this idea of requiring keyframe animations won't become standardized as the way developers are supposed to control how things are rasterized (for all the reasons I mentioned above). The main problem is *not* that other browsers haven't adopted it - even if it was universally adopted tomorrow, I think it'd cause many other problems, especially for JavaScript-driven animation libraries that handle things dynamically.

Again, thanks for trying (a). Please let us know how that goes and when it might land in a release. 
Project Member Comment 127 by bugdroid1@chromium.org, Jun 27
The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/src.git/+/e17b1bcf1010c97436fadf6e58e9235b977c74ea

commit e17b1bcf1010c97436fadf6e58e9235b977c74ea
Author: Vladimir Levin <vmpstr@chromium.org>
Date: Tue Jun 27 20:46:10 2017

cc: Make sure to use at least the device scale when will-change: transf.

When we have a will-change: transform hint we keep whatever the current
scale is. Make sure that we use at least device scale to ensure that
content looks crisp when going from low scale to high.

R=enne@chromium.org, chrishtr@chromium.org

Bug: 596382
Cq-Include-Trybots: master.tryserver.blink:linux_trusty_blink_rel
Change-Id: I49780cac9f95eef4cbc324f85a7efddccfb67729
Reviewed-on: https://chromium-review.googlesource.com/546580
Reviewed-by: enne <enne@chromium.org>
Reviewed-by: Chris Harrelson <chrishtr@chromium.org>
Commit-Queue: Vladimir Levin <vmpstr@chromium.org>
Cr-Commit-Position: refs/heads/master@{#482739}
[modify] https://crrev.com/e17b1bcf1010c97436fadf6e58e9235b977c74ea/cc/layers/picture_layer_impl.cc
[modify] https://crrev.com/e17b1bcf1010c97436fadf6e58e9235b977c74ea/cc/layers/picture_layer_impl_unittest.cc

The latest Chrome Canary has the commit in comment 127 in it. The demo
at https://codepen.io/GreenSock/pen/39f1aa3a003e729d7dd4122beeee7c4c/,
modified to use will-change: transform, looks much better.
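
For anyone re-running that verification, the modification is just adding the hint to the animated elements before the animation starts (sketch):

el.style.willChange = 'transform';  // or will-change: transform in the pen's CSS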
