37.3% regression in browser_tests at 601373:601383
Issue description: See the link to graphs below.
Oct 22
https://chromium.googlesource.com/chromium/src/+log/61ee8f21fe8b46464e50f7afd0beadf21f3d8259..5c2e137dc5294b30492a05a96a33bf0c7ff12908

sadrul@: your CL seems the most relevant in the range. Can you check whether the display latency difference is expected?
Oct 22
/cc+ miu@ as a WebRTC owner. The metric is 'Total Latency Max', which makes me think a lower number is an improvement rather than a regression. Would that be a correct assessment? If not, what exactly is this measuring? These are not regular telemetry tests, so it's not obvious how to go through the numbers and understand the metrics.
Oct 23
BTW, I'm not a WebRTC owner; I own the screen capture stuff for the Cast use cases. :)

Looking at the graph, I can't tell whether this is a false alarm: the graph doesn't specify whether "lower is better" or "higher is better." The test owner should fix that as a precondition to resolving this bug. Speculatively, if this is measuring some kind of latency, and latency went down 37%, isn't that a *good* thing? On the other hand, if "total latency" means "the sum of the per-frame latency over all frames," then this is a regression, since it implies the frame rate was reduced.
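To make the distinction between the two readings concrete, here is a toy sketch. The numbers are invented for illustration and are not from this bug's data:

```python
# Hypothetical illustration of two readings of a "total latency" metric.

def max_latency(per_frame_ms):
    """'Total Latency Max' as the worst single-frame latency: lower is better."""
    return max(per_frame_ms)

def summed_latency(per_frame_ms):
    """'Total latency' as a sum over all frames: a lower value can simply
    mean fewer frames were produced, which would be a regression."""
    return sum(per_frame_ms)

before = [12.0] * 60   # 60 frames at 12 ms each
after  = [12.0] * 40   # same per-frame latency, but 20 fewer frames

print(max_latency(before), max_latency(after))        # 12.0 12.0 (unchanged)
print(summed_latency(before), summed_latency(after))  # 720.0 480.0 (lower, but worse)
```

Under the "sum" reading, the 37% drop could be explained entirely by dropped frames rather than by a faster pipeline.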
Oct 23
[+qiangchen@chromium.org, mbonadei@chromium.org, phoglund@chromium.org] cc'ing webrtc perf owners, per https://docs.google.com/spreadsheets/d/1xaAo0_SU3iDfGdqDJZX_jRV0QtkufwHUKH3kQKF3YQs/view
Oct 23
I can explain the tests.

> The metric is 'Total Latency Max', which makes me think a lower number is an improvement, rather than a regression. Would that be a correct assessment?

These tests measure various stages of display latency, from decoded frame to screen. 'Total Latency Max' here is the time from RemoteVideoSourceDelegate::RenderFrame (where WebRTC hands off a decoded frame for rendering) to Display::DrawAndSwap. Lowering it is definitely an improvement.

The most significant drop seems to come in the time between the VideoResourceUpdater::ObtainFrameResources and Display::DrawAndSwap calls: avg ~10ms down to ~1ms. I wanted to check that this is expected with your change, and not triggered by some other change in these tests.

https://chromeperf.appspot.com/group_report?sid=e1fe895d0b51912c419aa9bffc1aab5b55019034098bbdfe87910ba7f2956795
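As a rough illustration of how a stage metric like this could be derived, here is a sketch that pairs start/end trace events by frame id. The event record format and the pairing logic are assumptions for illustration, not the actual telemetry code; only the event names come from the thread:

```python
# Hedged sketch: compute a per-frame latency stage from paired trace events.
# Each event is (timestamp_ms, event_name, frame_id); this format is assumed.

def stage_latencies(events, start_name, end_name):
    """Return per-frame latencies (ms) between two named events,
    matched by frame id."""
    starts, latencies = {}, []
    for ts_ms, name, frame_id in events:
        if name == start_name:
            starts[frame_id] = ts_ms
        elif name == end_name and frame_id in starts:
            latencies.append(ts_ms - starts.pop(frame_id))
    return latencies

trace = [
    (0.0,  "RemoteVideoSourceDelegate::RenderFrame", 1),
    (3.0,  "VideoResourceUpdater::ObtainFrameResources", 1),
    (13.0, "Display::DrawAndSwap", 1),
]
total = stage_latencies(trace, "RemoteVideoSourceDelegate::RenderFrame",
                        "Display::DrawAndSwap")
print(max(total))  # 'Total Latency Max' for this one-frame trace: 13.0
```

The same helper applied to the ObtainFrameResources/DrawAndSwap pair would isolate the sub-stage where the ~10ms-to-~1ms drop was observed.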
Oct 24
What the change does is potentially reduce the number of begin-frame messages sent to the clients. So if the GPU is busy processing each frame, fewer frames will be queued up. I can see how that could reduce the latency, but it may also reduce the number of frames produced. So I am curious whether there are other relevant metrics that could be affected. Perhaps 'Skipped Frames' measures this [1]?

[1] https://chromeperf.appspot.com/report?sid=3944efa2b9d32335ec7fd2174e883966fe91a5aaf1e0a3cd9f8586f5c80b15c4&start_rev=597460&end_rev=602260
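The tradeoff described above can be sketched with a toy queue simulation. This is a simplified model with invented timings, not the actual scheduler: frames arrive on a fixed interval, the GPU takes a fixed time per frame, and "throttling" stands in for suppressing a begin-frame while a frame is still pending:

```python
# Hedged sketch: fewer queued frames -> lower max latency, but some source
# frames are never drawn (more 'Skipped Frames'). Timings are invented.

def simulate(n_frames, interval_ms, gpu_ms, throttle):
    """Return (max_latency_ms, skipped_frames) for a toy one-stage pipeline."""
    gpu_free_at = 0.0
    pending = []                      # arrival times of frames awaiting the GPU
    latencies, skipped = [], 0
    for i in range(n_frames):
        now = i * interval_ms
        # Draw any pending frames the GPU could start before 'now'.
        while pending and gpu_free_at <= now:
            arrival = pending.pop(0)
            gpu_free_at = max(gpu_free_at, arrival) + gpu_ms
            latencies.append(gpu_free_at - arrival)
        if throttle and pending:
            skipped += 1              # begin-frame suppressed: frame dropped
        else:
            pending.append(now)
    while pending:                    # drain whatever is still queued
        arrival = pending.pop(0)
        gpu_free_at = max(gpu_free_at, arrival) + gpu_ms
        latencies.append(gpu_free_at - arrival)
    return max(latencies), skipped

# GPU slower than the frame interval: the unthrottled queue grows without bound.
m0, s0 = simulate(10, 16, 40, throttle=False)
m1, s1 = simulate(10, 16, 40, throttle=True)
print(m0, s0)  # high max latency, no skips
print(m1, s1)  # bounded max latency, several skips
```

In this toy model, throttling cuts the worst-case latency sharply while the skipped-frame count rises, which matches the pattern of a latency improvement paired with a small 'Skipped Frames' increase.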
Oct 24
> Perhaps 'Skipped Frames' measures this [1]?

Yes, it does. There is a slight increase, but it somehow didn't trigger an alert. If everything is working as expected and this tradeoff makes sense, feel free to mark this as fixed.
Nov 20
Closing as per comment #8.
Comment 1 by 42576172...@developer.gserviceaccount.com, Oct 22