
Issue 3390

Starred by 10 users

Issue metadata

Status: Started
Owner:
Cc:
Components:
NextAction: ----
OS: ----
Pri: 2
Type: Bug



Hotlists containing this issue:
Hotlist-1



Add RTP timestamp and capture NTP time support for multiple audio channels case

Reported by wu@webrtc.org, May 22 2014

Issue description

This is currently not possible because Chrome only gets the mixed audio.

The first implementation (issue 3111) only supports the single-channel case.

This issue tracks the work to add support for multiple audio channels once mixing has moved to Chrome.



Comment 1 by wu@webrtc.org, May 30 2014

Summary: Add RTP timestamp and capture NTP time support for multiple audio channels case (was: Add capture NTP time support for multiple audio channels case)

Comment 2 by juberti@webrtc.org, Jun 7 2014

Labels: Timestamps

Comment 3 by wu@webrtc.org, Jul 25 2014

Owner: xians@webrtc.org
Assigning to Shijing for now, since he will be working on moving the mixer to Chromium.

Comment 4 by tnakamura@webrtc.org, Nov 4 2015

Cc: -xians@webrtc.org henrik.lundin@webrtc.org tina.legrand@webrtc.org
Owner: ----
This bug hasn't been modified for more than a year. Is this still a valid open issue?

Comment 5 by henrik.lundin@webrtc.org, Nov 5 2015

Cc: minyue@webrtc.org
I don't know enough about this. Is it related to the NTP and audio work that +minyue did? Probably not...

Comment 6 by juberti@webrtc.org, Nov 5 2015

Cc: tommi@webrtc.org
Labels: -Area-Internals Area-Audio
Yes, this is making sure we can get e2e delay working for individual audio channels. Requires the mixer rework to finish.

Comment 7 by minyue@webrtc.org, Nov 5 2015

Owner: minyue@webrtc.org
I see. It would be good to assign this to someone on our team. I can take it.

Comment 8 by tina.legrand@webrtc.org, Nov 10 2017

Labels: -Pri-2 Pri-3

Comment 9 by ale...@webrtc.org, Dec 22 2017

I was going through my TODOs and found a reference to this in the Audio Mixer. I don't understand how this would be implemented. The mixer receives a bunch of streams, each of them may have the RTP or NTP timestamp set. After it's done mixing, only one timestamp comes out. What should that be? Should it be one of the incoming timestamps or should the mixer have its own counter?

Comment 10 by ossu@webrtc.org, Apr 12 2018

Cc: ossu@webrtc.org
I stumbled upon this yesterday while bug hunting in Chrome. The current behavior is very strange, if not broken:
- Whoever calls AudioMixer::Mix passes in a frame to mix into.
- With zero streams, the elapsed_time_ms field is set to -1, timestamp isn't updated.
- With one stream, both fields are copied from that stream.
- With more than one stream, none of the fields are updated. So if you go from 1 to 2 (or more) streams, the time will just stop.
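The cases in the list above can be sketched roughly as follows (hypothetical, simplified types whose field names mirror the discussion; this is not the actual AudioMixer API):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the frame that gets mixed into; the field
// names mirror the discussion above, not the real WebRTC types.
struct Frame {
  int64_t elapsed_time_ms = 0;
  uint32_t timestamp = 0;
};

// Sketch of the current timestamp handling during mixing.
void MixTimestamps(const std::vector<Frame>& streams, Frame* mixed) {
  if (streams.empty()) {
    // Zero streams: elapsed_time_ms set to -1, timestamp not updated.
    mixed->elapsed_time_ms = -1;
  } else if (streams.size() == 1) {
    // One stream: both fields copied from that stream.
    mixed->elapsed_time_ms = streams[0].elapsed_time_ms;
    mixed->timestamp = streams[0].timestamp;
  }
  // More than one stream: neither field is updated, so the reported
  // time "stops" when going from 1 to 2 (or more) streams.
}
```

Going from one stream to two leaves `mixed` untouched, which is the "time just stops" effect.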

Chrome uses elapsed_time_ms as the position of the stream. I was investigating whether that could become something very wrong and stop playback from working. I don't think that's the case, but I think this ought to be addressed.
The only options I can see working in all cases are:
- Have elapsed_time_ms always be -1 coming from the mixer.
- Have the mixer use its own running count and return that.

Any thoughts?


Comment 11 by henrik.lundin@webrtc.org, Apr 12 2018

Labels: -Pri-3 Pri-2
Minyue, would you be able to dig into this a bit and see what's going on? Do we have a problem? What are possible solutions?

Comment 12 by aleloi@google.com, Apr 12 2018

I think I copied the old behavior when I wrote the timestamp handling. I remember that we have some unit tests that expect the timestamps to be such-and-such.

Comment 13 by minyue@webrtc.org, Apr 13 2018

Status: Started (was: Assigned)

Comment 14 by minyue@webrtc.org, Apr 24 2018

Although quite a lot of refactoring has been done recently, the status seems the same as it was when I addressed a bug in this area. Basically, I corrected the way the WebRTC stat "capture_start_ntp_time" (the join time of a remote participant, measured in local wall time) is calculated for all audio receiver channels.

The end-to-end delay between the local and a remote participant should then be
e2e_delay = now - (capture_start_ntp_time + elapsed_time)

While capture_start_ntp_time was fixed, the elapsed time is the difficult part. The equation requires a different elapsed time for each receiver channel, but after mixing, only one elapsed time is available.
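As a worked illustration of the equation (all values are made-up NTP-based milliseconds, purely to show the arithmetic):

```cpp
#include <cstdint>

// e2e_delay = now - (capture_start_ntp_time + elapsed_time), all in
// milliseconds of NTP-based wall time. Names here are illustrative,
// not the actual WebRTC API.
int64_t E2eDelayMs(int64_t now_ms, int64_t capture_start_ntp_time_ms,
                   int64_t elapsed_time_ms) {
  return now_ms - (capture_start_ntp_time_ms + elapsed_time_ms);
}
```

For example, with a join time of 990000, a playout position of 10250, and a current time of 1000500, the end-to-end delay comes out to 250 ms.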

Some structural changes may be needed, but no decision has been made since then.

Comment 15 by minyue@webrtc.org, Apr 24 2018

I think a solution would be to let WebRTC provide e2e_delay (instead of capture_start_ntp_time) on each receiver channel. This should be easy to implement, but it will not account for any extra delay introduced by the mixer.
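A rough sketch of that per-channel idea (all names hypothetical, not the actual WebRTC API): each receiver channel computes its own e2e delay before mixing, so no single post-mix elapsed time is needed.

```cpp
#include <cstdint>

// Hypothetical per-channel stats holder; each receiver channel reports
// its own end-to-end delay before its audio is mixed.
struct ReceiverChannel {
  int64_t capture_start_ntp_time_ms;  // remote participant's join time
  int64_t elapsed_time_ms;            // playout position of this channel

  int64_t E2eDelayMs(int64_t now_ntp_ms) const {
    return now_ntp_ms - (capture_start_ntp_time_ms + elapsed_time_ms);
  }
};
```

As noted above, any additional delay added inside the mixer itself would not show up in this per-channel figure.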

Comment 16 by minyue@webrtc.org, May 11 2018

This can become even more complicated in multiparty calls, since a media relay may filter out the streams of some low-volume participants.

I would suggest marking this issue as "Won't Fix" for now. If e2e delay in multiparty calls becomes needed, we can start a new thread.

WDYT?
