Bandwidth utilization around FEC enable/disable is inaccurate
Project Member Reported by email@example.com, Mar 13 2014
See the attached screenshot of a WebRTC client producing a 1 Mbps stream; every time it enables FEC, the send bitrate goes as high as 1.7 Mbps.
Nov 4 2015,
This bug hasn't been modified for more than a year. Is this still a valid open issue?
Nov 5 2015,
It hasn't been fixed, but I think it's by design, although it can probably be improved. We may want to improve the allocation of bitrates when we work on FEC improvements.
Dec 10 2015,
Peter, Since you recently looked into FEC bitrates, are there any next actions on this?
Dec 17 2015,
I think we landed on b=AS being global maximum (which is correctly enforced through Call now), and that FEC is allowed to add to the total bitrate. I don't think there's anything left actionable (unless the FEC rate is too aggressive, but if that's the case that sounds like a separate bug).
Dec 17 2015,
OK, I think I understand the bug: the encoder isn't responding quickly enough to the rate change here. Not sure it's feasible, but the pacer should help prevent overshooting rates by smearing out the spike.
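As a toy illustration of the smearing idea (a hypothetical sketch, not WebRTC's actual PacedSender; all names are invented), a pacer queues packets and releases them at no more than a configured rate, so a burst from the encoder is spread over several ticks instead of hitting the network at once:

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

// Toy pacer: enqueue packets as the encoder produces them, then call
// Process() once per millisecond. Each tick sends at most budget_per_ms_
// bytes, smearing a burst over multiple ticks.
class ToyPacer {
 public:
  explicit ToyPacer(int64_t pacing_bytes_per_ms)
      : budget_per_ms_(pacing_bytes_per_ms) {}

  void Enqueue(int64_t packet_bytes) { queue_.push_back(packet_bytes); }

  // Returns the number of bytes put on the wire this tick.
  int64_t Process() {
    int64_t budget = budget_per_ms_;
    int64_t sent = 0;
    while (!queue_.empty() && budget >= queue_.front()) {
      budget -= queue_.front();
      sent += queue_.front();
      queue_.pop_front();
    }
    return sent;
  }

  size_t QueuedPackets() const { return queue_.size(); }

 private:
  const int64_t budget_per_ms_;
  std::deque<int64_t> queue_;
};
```

The trade-off is that packets wait in the pacer queue, so the burst is avoided at the cost of added send-side delay.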
Dec 17 2015,
Yes, this isn't related to what you wrote in #6. The problem is that we compensate for FEC bitrate by reducing the target rate of the encoder, but this happens after the fact, and it takes us 1 second to measure what the actual FEC bitrate ended up being. NACK compensation works the same way. We can improve this by instead allocating bitrate for FEC in the bitrate allocator and not compensate after the fact. The pacer will help in reducing the number of packets in the network, but we will pay with added send-side delay.
Dec 18 2015,
Spending additional bits on FEC also makes sense since it's a more effective padding scheme than retransmitting packets (or dummy padding). Maybe we shouldn't do it only when there's packet loss, but whenever the encoded rate is below the target transmit rate.
Dec 21 2015,
Agreed, but this bug is about avoiding the overshoot during the first second when we are bandwidth-limited and encounter packet loss, right? The question, as Stefan wrote, is whether this should be handled by the bitrate allocator or per stream.
Dec 28 2015,
I think filling with FEC after the fact makes sense regardless, since the underlying VideoEncoder implementation might not respond as fast as we want it to. I think that's what's happening now unless the underlying code was very different when this was reported. We do set new encoder parameters on the next encoded video frame through ::SetChannelParameters in VideoSender when these rates change in MediaOptimization. Would moving this setting to the bitrate allocator change anything? We're not waiting for a timer to pick this up, so I think we can't rely on that setting half the target rate to instantly-ish cut the bitrate in half. Am I missing something?
Jan 11 2016,
I think you're missing the point. The way our FEC works is that it produces X% redundancy based on the protection factor computed by our media optimization; it will always try to create this amount of FEC. The RTP module then measures the FEC and NACK bitrates sent over the last second and feeds those rates back to the media optimization, which subtracts them when computing the encoder target bitrate:

video_target_bitrate_ = target_bitrate * (1.0 - protection_overhead_rate);

The problem with this is that protection_overhead_rate lags by one second, which causes us to overshoot for a second, for instance when FEC is enabled due to packet loss. This could be improved either by allocating a bitrate for FEC instead of a fraction, or by subtracting the FEC bitrate that we expect to generate (based on the fraction).
Mar 1 2016,
Over to Stefan to re-triage and decide next step.
May 18 2016,
Issue 5897 has been merged into this issue.
Aug 4 2017,
[Bulk edit] This issue was created more than a year ago and hasn't been modified in the last six months -> archiving. If this is still a valid issue that should remain open, please reopen it.