See the discussion on loading-dev@:
https://groups.google.com/a/chromium.org/forum/#!topic/loading-dev/pVuMhUZg-qo
The key here is that CSS rules have a very poor utilization rate, so we could speed things up significantly.
Keeping memory use in check while keeping complexity low will be tricky, but IMO pretty tractable.
Quick update from the first day of the experiment:
We are seeing a 30-35% improvement in parsing author style sheets (at basically all %iles), and update_style is not showing much regression (too much noise to tell). Still, the error bars are big.
I expect that when [1] lands, update_style will have even less overhead. timloh@, what is the status of the streaming parsing work? I expect it to make the lazy parsing code simpler and more efficient.
[1] https://codereview.chromium.org/2474483002/
I'm actively working on the streaming parser (issue 661854) and hope it'll be done in 2-3 weeks (i.e. after the 56 branch). That might be a bit optimistic, though; redoing the inspector integration might end up being harder than I expect.
Sounds great, thanks for the update. Let me know if there's any way I can help out. My understanding is that once the streaming parser is finished, the lazy work will have a much better win:overhead ratio (no expensive copying of tokens, etc.).
Quick update for metrics for rule usage %, from 1 day of canary data:
Rule usage %   PDF   CDF
[0%, 10%]       36    36
(10%, 25%]      43    79
(25%, 50%]      12    91
(50%, 75%]       4    95
(75%, 90%]       1    97
(90%, 100%)      0    97
100%             3   100
This is very good news: ~80% of the CSS files we parse use <= 25% of the rules we lazily defer.
Do you mean 5MB? This is being tracked in issue 692932 and I have a WIP CL in flight which is starting to address this. Let me block this bug on 692932 and move discussion to that bug.
Calling it fixed. Overall this yielded a 1% improvement on TTFCP on Android, with a 30% improvement in overall CSS parsing. We didn't see any significant memory regression.
Comment 1 by suzyh@chromium.org, Aug 31 2016