
Issue 616103

Starred by 1 user

Issue metadata

Status: Fixed
Owner:
Last visit > 30 days ago
Closed: Jun 2016
Cc:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug




Quantify perf test variance with adaptive encode behavior

Reported by isheriff@chromium.org (Project Member), May 31 2016

Issue description

https://codereview.chromium.org/1908203002/ added the following changes:

- Use target bitrate feedback from webrtc
- Expand QP range to do top-off on VP8
- Frame rate adaptation

Quantify the impact of all these on latency tests.
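For context, a minimal sketch (not the actual CL; AdaptiveEncodeController and its fields are made up for illustration) of what reacting to a webrtc target bitrate update and widening the QP range for top-off might look like:

// Hypothetical sketch, not the Chromium remoting code: a sender reacting to
// WebRTC target-bitrate feedback by reconfiguring the encoder, and widening
// the QP range so spare bandwidth can "top off" static screen content with
// higher quality. All names are illustrative.
#include <iostream>

struct EncoderConfig {
  int target_bitrate_kbps = 2000;
  int min_quantizer = 10;  // lower QP = higher quality
  int max_quantizer = 40;
};

class AdaptiveEncodeController {
 public:
  // Called whenever the bandwidth estimator reports a new target.
  void OnTargetBitrateUpdate(int target_kbps, bool bandwidth_headroom) {
    config_.target_bitrate_kbps = target_kbps;
    // With spare bandwidth, allow a wider QP range so the encoder can spend
    // idle bits refining (topping off) already-sent static content.
    if (bandwidth_headroom) {
      config_.min_quantizer = 4;
      config_.max_quantizer = 56;
    } else {
      config_.min_quantizer = 10;
      config_.max_quantizer = 40;
    }
  }

  const EncoderConfig& config() const { return config_; }

 private:
  EncoderConfig config_;
};

int main() {
  AdaptiveEncodeController controller;
  controller.OnTargetBitrateUpdate(/*target_kbps=*/2500,
                                   /*bandwidth_headroom=*/true);
  const EncoderConfig& cfg = controller.config();
  std::cout << "target=" << cfg.target_bitrate_kbps << " kbps, QP ["
            << cfg.min_quantizer << ", " << cfg.max_quantizer << "]\n";
}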

It appears that at least the following things are happening:

- Jitter delay increases significantly (1 second) with adaptive capture on remoting.
- It is also unclear how well the perf tests measure latency for the lower QP range.
- The older behavior of just blasting encoded frames may also have given false numbers on perf tests in the simulated network scenario.
 
Multiple things going on here:

- The old code did not do any target bitrate estimation, so it was actually sending at a higher rate (and the pacer inside webrtc paces at 5 times the BW), and since this is a simulation, it was actually going faster in spite of the BW report being lower.

- The adaptive capture leads to a higher jitter prediction at the receiver, so jitter removal at the receiver adds delay; the sender-controlled receiver behavior helps with that.

- An area for improvement here is the frame rate adaptation at capture. We do a naive per-frame timing adjustment, which causes the frame rate to be all over the place. It should be based on a window of data and needs smoothing (see the sketch below).
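A minimal sketch of the kind of windowed smoothing meant above, assuming a sliding window over the recently requested frame intervals; the class and method names are hypothetical, not from the remoting code:

// Illustrative only: derive the next capture time from the average interval
// over the last N frames instead of adjusting per-frame, so the frame rate
// does not jump around on every single measurement.
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <numeric>

using Clock = std::chrono::steady_clock;
using Ms = std::chrono::milliseconds;

class SmoothedFrameScheduler {
 public:
  explicit SmoothedFrameScheduler(Ms default_interval,
                                  std::size_t window_size = 30)
      : default_interval_(default_interval), window_size_(window_size) {}

  // Record the interval the rate controller asked for on this frame.
  void AddRequestedInterval(Ms interval) {
    intervals_.push_back(interval);
    if (intervals_.size() > window_size_)
      intervals_.pop_front();
  }

  // Average over the window; falls back to the default when empty.
  Ms SmoothedInterval() const {
    if (intervals_.empty())
      return default_interval_;
    Ms total = std::accumulate(intervals_.begin(), intervals_.end(), Ms(0));
    return total / static_cast<int64_t>(intervals_.size());
  }

  Clock::time_point NextCaptureTime(Clock::time_point last_capture) const {
    return last_capture + SmoothedInterval();
  }

 private:
  const Ms default_interval_;
  const std::size_t window_size_;
  std::deque<Ms> intervals_;
};

The window size trades responsiveness for stability; 30 frames (roughly 1 second at 30 FPS) is only a placeholder value here.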


More understanding of the delays: the jitter behavior is so different between the original approach of blasting frames without looking at the target bitrate and the current one, where we restrict sending at the target bitrate on the encoder and do top-off, that it is worth ignoring jitter in the calculations (given that we will turn it off from the sender shortly).

Currently, the following are the delays I see:

1. The naive frame scheduling adds some delay on average from the time of the last sent frame (5-8 ms). It is worth fixing this with a mechanism that smooths out frame rate switches and handles bursts of traffic better. Preparing a CL for this.

2. The delay from encode to receive is about 7-8 ms, even with a pipe that is not supposed to add any delay.

3. Even with the render timestamp set to the time of receive, I see that the actual reception of frames at chromoting is about 10-15 ms after they are received at webrtc. This needs further investigation.

4. A big contributor to frame delay is that the capturer polls for changes on screen. This adds a delay anywhere from 0 to 33 ms (when polling at 30 FPS).

5. Right now, a cursor frame or a full frame is generated at 250 ms intervals. Once a full frame is generated, it currently takes > 300 ms to receive, with BWE operating at around 2.5 Mbps in an unlimited-link scenario. This adds delay to the first cursor frame right after a full frame and drives up the average small-frame latency.

Overall, it is at about 46 ms for small frames once we fix the frame rate adaptation, and about 350 ms for large frames, in the case of an unlimited-capacity, zero-latency link.



Status: Fixed (was: Assigned)
Planning a change to use a leaky bucket mechanism for (1), but the bug itself was for understanding the latency numbers.
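For reference, a rough sketch of a leaky-bucket style pacer along the lines of (1), assuming frames drain at the target bitrate with a small burst allowance to absorb back-to-back captures; this is illustrative, not the planned CL:

// Hypothetical leaky-bucket pacer: frames fill the bucket, which drains at
// the target rate; a frame within the burst allowance goes out immediately,
// otherwise the caller waits until enough has drained.
#include <algorithm>
#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;

class LeakyBucketPacer {
 public:
  LeakyBucketPacer(int64_t drain_rate_bytes_per_sec, int64_t burst_bytes)
      : drain_rate_(drain_rate_bytes_per_sec),
        burst_bytes_(burst_bytes),
        last_drain_(Clock::now()) {}

  // How long the caller should wait before sending a frame of |frame_bytes|,
  // given what is already queued in the bucket.
  Clock::duration TimeUntilSend(int64_t frame_bytes) {
    Drain();
    const int64_t level_after = level_bytes_ + frame_bytes;
    if (level_after <= burst_bytes_)
      return Clock::duration::zero();  // within the burst allowance
    const int64_t excess = level_after - burst_bytes_;
    return std::chrono::microseconds((excess * 1000000) / drain_rate_);
  }

  void OnFrameSent(int64_t frame_bytes) {
    Drain();
    level_bytes_ += frame_bytes;
  }

 private:
  // Remove bytes from the bucket at the configured drain rate.
  void Drain() {
    const Clock::time_point now = Clock::now();
    const int64_t elapsed_us =
        std::chrono::duration_cast<std::chrono::microseconds>(now - last_drain_)
            .count();
    level_bytes_ = std::max<int64_t>(
        0, level_bytes_ - (elapsed_us * drain_rate_) / 1000000);
    last_drain_ = now;
  }

  const int64_t drain_rate_;
  const int64_t burst_bytes_;
  int64_t level_bytes_ = 0;
  Clock::time_point last_drain_;
};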
