Context: In the layerization algorithm, the property tree state of a new pending layer needs to be determined.
Currently we do the most convenient thing and take the property tree state of the first paint chunk that initiated the layer. This is not ideal because that chunk may be in a "deep" state that prevents later chunks from squashing into the layer. For example:
<div style="overflow: scroll">
  <div id=A></div>
  <div id=B style="position: absolute"></div>
</div>
On a low-DPI screen, SPv1 will not composite anything. The current SPv2 algorithm will composite B because its property tree state disagrees with A's. We can avoid this by starting the layer for A with a property tree state that is as shallow as possible (in particular, outside of the scroll transform).
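To make the example concrete, here is a minimal, self-contained sketch of the idea. It is not Blink's actual API; PropertyNode, PropertyTreeState, and ShallowestNonScrolledTransform are hypothetical names. If the layer started for A uses a transform outside of the scroll translation, B's chunk ends up in the same property tree state and can squash into the same layer.

#include <cassert>

// Hypothetical stand-in for a node in one of the paint property trees
// (transform, clip, or effect); real Blink types differ.
struct PropertyNode {
  const PropertyNode* parent = nullptr;  // nullptr at the root.
  bool is_scroll_translation = false;    // Only meaningful for transforms.
};

// Hypothetical stand-in for the (transform, clip, effect) triple of a paint
// chunk or a pending layer.
struct PropertyTreeState {
  const PropertyNode* transform;
  const PropertyNode* clip;
  const PropertyNode* effect;
};

// Hoist a transform node out of any enclosing scroll translations, giving a
// "shallow" transform for the pending layer.
const PropertyNode* ShallowestNonScrolledTransform(const PropertyNode* t) {
  while (t->is_scroll_translation && t->parent)
    t = t->parent;
  return t;
}

int main() {
  // Property trees for the example: a scroller containing in-flow A and
  // absolutely positioned B.
  PropertyNode root_transform;
  PropertyNode scroll_translation;
  scroll_translation.parent = &root_transform;
  scroll_translation.is_scroll_translation = true;
  PropertyNode root_clip;
  PropertyNode root_effect;

  // A paints under the scroll translation; B (position: absolute) escapes it.
  PropertyTreeState chunk_a{&scroll_translation, &root_clip, &root_effect};
  PropertyTreeState chunk_b{&root_transform, &root_clip, &root_effect};

  // Naive choice: start the layer in A's (scrolled) state. B's state then
  // disagrees with the layer's, so the current algorithm composites B.
  PropertyTreeState naive_layer_state = chunk_a;
  (void)naive_layer_state;

  // Shallow choice: hoist the transform out of the scroll, so B's chunk is
  // in the same state as the layer and can squash into it.
  PropertyTreeState shallow_layer_state = chunk_a;
  shallow_layer_state.transform =
      ShallowestNonScrolledTransform(chunk_a.transform);
  assert(shallow_layer_state.transform == chunk_b.transform);
  return 0;
}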
----
Related discussion:
For clips and effects we definitely want the shallowest state, because chunks with a shallower clip or effect can never squash into a layer that has a deeper clip or effect, i.e. you can't un-clip or un-effect (see the sketch at the end of this discussion). Transforms are not subject to this restriction, and sometimes it may be beneficial to use a deeper transform space. For example:
<div style="overflow:scroll">
  <div style="will-change:opacity">
    <div style="position:absolute; left:100px; top:100px;"></div>
  </div>
</div>
SPv1 will squash the abs-pos element into the opacity element's compositing layer, and that layer uses the scrolled space instead of the absolute space. In other words, when the scroller scrolls, the opacity element doesn't need raster invalidation, but the abs-pos element does, because it moves relative to its layer's space.
If we use the shallowest space for the layer, it will use the absolute space instead. That means that when the scroller scrolls, the opacity element will need raster invalidation but the abs-pos element will not.
IMO it is generally better to prefer the scrolled space because there are likely more in-flow descendants than out-of-flow descendants. Also, for spatial effects such as reflection, using the local space of the effect is strongly preferred.
I had an offline discussion with chrishtr@; our conclusion was to just use the shallowest transform node too, as it is simpler to implement. We can switch to a more sophisticated algorithm if this turns out to be a serious performance problem.
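As a rough, self-contained sketch of the above (hypothetical names again, not Blink's real helpers): clips and effects impose a hard squashing restriction, while the transform, per the conclusion above, simply gets the same shallowest-node treatment without being part of the correctness check.

// Hypothetical property tree node and state; real Blink types differ.
struct PropertyNode {
  const PropertyNode* parent = nullptr;
};

struct PropertyTreeState {
  const PropertyNode* transform;
  const PropertyNode* clip;
  const PropertyNode* effect;
};

static bool IsAncestorOrSelf(const PropertyNode* ancestor,
                             const PropertyNode* node) {
  for (; node; node = node->parent) {
    if (node == ancestor)
      return true;
  }
  return false;
}

// Clips and effects constrain squashing: anything clipped or filtered at the
// layer level applies to every chunk rastered into the layer, so the layer's
// clip/effect must also apply to the chunk, i.e. be an ancestor of (or equal
// to) the chunk's clip/effect. Deeper chunk-only clips/effects are fine;
// they can be applied while rastering the chunk into the layer.
bool ClipAndEffectAllowSquashing(const PropertyTreeState& layer,
                                 const PropertyTreeState& chunk) {
  return IsAncestorOrSelf(layer.clip, chunk.clip) &&
         IsAncestorOrSelf(layer.effect, chunk.effect);
  // The transform is deliberately not checked here: a transform mismatch only
  // affects raster invalidation when scrolling, so per the discussion above
  // the layer just uses the shallowest transform node as well.
}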
Comment 1 by trchen@chromium.org, Sep 13