Starred by 182 users

Status: Fixed
Closed: Dec 2015
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Feature

Blocked on:
issue 264611
issue 567960
issue 568819

Hook up Web Audio API with WebRTC for audio processing
Project Member Reported by, Apr 3 2012
Goal is to come up with a design that allows us to hook up the WebAudio API with WebRTC so that rendered audio in WebRTC can be modified using all the cool stuff in WebAudio.

Initial input from henrika:

In WebRTC, we have an API called VoEExternalMedia (see usage example in Chrome in the link below): 

It allows a user to get callbacks with the decoded PCM-stream taken from NetEQ before it is delivered to the audio layer (which is the AudioDevice owned by the WebRtcAudioDeviceImpl in Chrome).

Today, all WebRTC (VoiceEngine) related parts are implemented in third_party and it might be too "far away" for us. Moving more into Chrome, we have this hook: . All WebRTC audio to be played out goes through this callback.

More details can be found in the corresponding header file (see webrtc_audio_device_impl.h). 

I am sure there are several different abstractions one could use here, but these are examples of direct hooks that exist today.
Attached is a call-flow diagram for WebRTC rendering that we can use as a base for our future discussions. Not all details are included, and it assumes Windows 7.
Excellent, thanks! I'll set up a meeting to discuss the design and who does what. 
Related CLs:

Chromium - glue with WebKit and implementation details hooking into AudioDevice/HTMLMediaElement:

Simplify AudioRendererImpl by using AudioDevice

Integrate HTMLMediaElement with Web Audio API's MediaElementAudioSourceNode
Created "Hooks up Web Audio API with WebRTC for audio processing". 

Ensures that the WebRTC AudioDeviceModule now uses a media::AudioRendererSink implementation which can be accessed by WebAudio. The goal of this patch is to allow the WebAudio API to modify the WebRTC audio.
Have LGTM on patch above; will land today.

Proposed next step by Chris: 

"...this seems great as a first step, but we'll ultimately need a way for this RendererAudioSourceProvider to be known/made-available to WebKit (through a MediaStream).   I'll need some help from you guys to figure out how MediaStream and MediaStreamTrack are implemented in WebKit and how they hook into MediaStreamDependencyFactory::CreatePeerConnectionFactory(), etc.
In other words, we need to find what is the equivalent to WebMediaPlayerImpl in the world of MediaStream and MediaStreamTrack, and hook into WebKit in a similar manner."

Labels: -Mstone-21 Mstone-22
punting to 22
Labels: -Pri-1 -Mstone-22 Pri-2 Mstone-23
Blocking: chromium:145092
Labels: Feature-WebRTC
Labels: -Mstone-23 Mstone-24
Labels: -Mstone-24 Mstone-25
Comment 13 by, Jan 7 2013
Labels: -Mstone-25 Mstone-26 MovedFrom-25
Punting non-releaseblocking bugs in M25.  You can find the list via MovedFrom-25
Labels: -Mstone-26 Mstone-27
Punting to M27 - we now have the opposite kind of processing (WebAudio -> WebRTC), but don't yet have the right hooks for (WebRTC/PeerConnection -> WebAudio), although we *can* process a stream from getUserMedia()
Project Member Comment 15 by, Mar 10 2013
Labels: -Area-Internals -Feature-Media-Audio -Feature-WebRTC -Mstone-27 Cr-Internals-Media-Audio Cr-Internals-WebRTC Cr-Internals M-27
Comment 16 by, Apr 15 2013
Labels: -M-27 MovedFrom-27 M-28
Blocking: -chromium:145092
Project Member Comment 18 by, May 8 2013
Labels: -M-28 MovedFrom-28
This issue has already been moved once and is lower than Priority 1, therefore removing mstone.
 Issue 241543  has been merged into this issue.
Project Member Comment 20 by, May 24 2013
Labels: -Cr-Internals-WebRTC Cr-Blink-WebRTC
Reassigning to xians@ as crogers@ no longer works on this.
Comment 22 by, Jan 16 2014
Any progress on this, guys? Pretty important for most uses of WebRTC ...
Can you share more about what you are trying to do?
Comment 24 by, Jan 17 2014
Justin, the last time I tried to use this was for creating a VU meter for remote stream audio via the Web Audio API; see for details
Comment 25 by, Jan 17 2014
Justin, right - the main use-case is to build an indicator to show activity of remote audio streams. 

This can be done in a simplistic fashion (volume only) using the getStats API, but this involves polling the method, is asynchronously delayed, and anecdotally uses more CPU...
Comment 26 by, Jan 17 2014
Blockedon: chromium:264611

We are working really hard towards the feature. The reason this is taking a long time is that we need to move the APM to Chrome first and implement a render mixer to get the unmixed data from WebRTC; then we can hook up the remote audio stream to WebAudio.

 Issue 328034  has been merged into this issue.
Any progress on that?
The first part of what was mentioned in comment #26, moving the APM to Chrome, is finished. The rest is work in progress.
Can you estimate when this will be done?
Comment 31 by, Sep 24 2014
Labels: Cr-Blink-WebAudio
Labels: -MovedFrom-25 -Cr-Internals-Media-Audio -MovedFrom-27 -MovedFrom-28
Status: Fixed
WebAudio has been able to hook up local media streams for audio processing since M37; marking the issue as fixed.
@xians - it's hooking up remote streams that's the problem - we need to be able to analyze the volume of remote audio streams for conference calls etc. 

This isn't closeable imo.
Comment 35 by, Nov 6 2014
FWIW, I'm trying to hook up remote streams to AudioPannerNodes so that they can be rendered using 3D positional audio, so local-only support wouldn't be helpful for my application.
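For readers with the same use case, a sketch of what the commenter is after, once remote streams can feed Web Audio: each remote track goes through a PannerNode placed around the listener. The helper and function names here are illustrative (not from any shipped API), and `setPosition` is the legacy setter; newer browsers also expose `positionX/Y/Z` AudioParams.

```javascript
// Place a "speaker" on a circle around the listener: angle 0 is straight
// ahead (negative z in Web Audio's right-handed coordinate system).
function speakerPosition(angleDeg, radius) {
  var rad = (angleDeg * Math.PI) / 180;
  return { x: radius * Math.sin(rad), y: 0, z: -radius * Math.cos(rad) };
}

// Route a remote MediaStream through a PannerNode for 3D positional audio.
// Assumes a browser where createMediaStreamSource works on remote streams.
function spatializeRemoteStream(audioCtx, remoteStream, angleDeg) {
  var source = audioCtx.createMediaStreamSource(remoteStream);
  var panner = audioCtx.createPanner();
  var pos = speakerPosition(angleDeg, 1.0);
  panner.setPosition(pos.x, pos.y, pos.z);
  source.connect(panner);
  panner.connect(audioCtx.destination);
  return panner;
}
```

One panner per remote participant, all connected to the same destination, gives the positional voice chat described above.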

(My current workaround is to transfer raw audio packets via peer-to-peer data, and that's obviously only suitable as a stopgap for demo purposes.)
I have a task of recording audio in videochat between agent and client so I need to be able to hook up local AND remote stream so I can record both audio in one file. 
Comment 37 by Deleted ...@, Nov 6 2014
Same here: it's the remote stream we need to visualize. What's the timeline?
Comment 38 by, Nov 6 2014
Same here!  We need the remote audio stream!

Please let us know a timeline??
Comment 39 by, Nov 6 2014

Carter Rabasa
Developer Evangelist for Twilio <>
Instigator of CascadiaJS <>, SeattleJS
<> & Seattle Hacks <>
twitter <> // github
<> // +1 (206) 745-5000
Comment 40 by, Nov 7 2014
#33 misunderstood the meaning of this issue; the problem is only partially solved. So far, the remote stream cannot be processed by WebAudio.
I wish someone had been aware of this issue; it has existed for years.
@xians - would be wise to reopen this and rename title to "...for REMOTE audio processing"
There is one already in the issue tracker ... but it's marked as a duplicate. I don't know why, because I haven't found the "original" one yet.
Status: Available
OK, I was not aware that this bug was mainly for remote stream.

Assigning it to Tommi; he might be able to provide some info.
Is there any timeline decided for supporting remote streams in the Web Audio API?
Please look at resolving it. There are many applications of this which do not have a cleaner workaround at the moment.
Comment 46 by, Dec 5 2014
At the moment we are swamped with other high priority items to work on. There isn't a decided timeline and I don't expect progress on this anytime soon.
That is not good info for my clients. The biggest problem is that there is no workaround whatsoever (is there?), so the only solution is to use Firefox, which has had this working for quite some time now, from recording audio from a remote stream to recording video/audio (both local and remote streams) in a single webm container (not separately).

@tommi: Any news on this? Like a VERY rough guesstimated timeline? It's really a rather big limitation for WebAudio not being able to process remote streams. We've been hoping for this feature since ... uhhh ... April 2012. :-)
Waiting for this feature 3 years...
Comment 50 by, Jan 26 2015
ditto.  i bet there's lots of us out there and just not many speaking up.  i've also been waiting years for this.  this is a very important feature that is missing.
This is such a fundamental functionality, I can't believe nothing is happening here :(
Firefox has had MediaRecorder and Web Audio API support for its WebRTC implementation for quite some time... how much longer will it take for Chrome to do the same? As I understand it, Firefox and Chrome are partners in promoting and pushing WebRTC technology, which has just started to get some attention from clients that want videochat support in contact (communication) centers...
I really want to mute some guys in a conference who have loud noise, but I don't know which ones, because Chrome can't tell me.
Comment 54 by, Jan 26 2015
Controlling individual volume and mute state of remote tracks is possible in Chrome. You can get the audio energy levels per track via PeerConnection.getStats() and use them to detect which one is the loudest/weakest, etc.
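A minimal sketch of the per-track control described above, using only standard APIs that existed at the time: a remote MediaStreamTrack is muted by disabling it, and coarse volume control goes through the media element playing the stream. The function names and the `audioElement` wiring are illustrative, not from this issue.

```javascript
// Mute/unmute a remote stream's audio by disabling its tracks.
// Disabled tracks render silence, which effectively mutes that participant.
function setRemoteTrackMuted(stream, muted) {
  stream.getAudioTracks().forEach(function (t) {
    t.enabled = !muted;
  });
}

// Coarse volume control via the media element that plays the stream.
// The element's volume property is clamped to the valid [0, 1] range.
function setRemoteVolume(audioElement, volume) {
  audioElement.volume = Math.max(0, Math.min(1, volume));
}
```

Combined with per-track stats from getStats(), this is enough to find and silence the loudest participant even without Web Audio access to remote streams.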
getStats() isn't really suitable for a proper VU meter, but more importantly, without this there is no recording of the remote stream, or of the full RTC session.

But there are quite a few use cases for WebAudio here. E.g. without the ability to route an incoming stream, things like webradio or podcast applications are difficult - really anything where an incoming stream must be sent anywhere other than the speakers. Panning would be useful for conferences. Pretty sure there's more. :-)

This has become a critical blocker for many realtime features. If nothing else, a rough estimated timeline would help us plan things instead of awaiting a response every few weeks.

We are reviewing the situation here and will provide more details next week.
Based on what I can see, getStats does not provide the audio level for the remote stream. Unless I am missing something, Chrome 40 only provides the audioInputLevel. Is there any way to implement a volume meter for the played audio?
You can use audioOutputLevel for the remote stream.
You can use this, but you have to poll the remote PeerConnection several times a second, and getStats is a real resource hog.
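For reference, the polling workaround being discussed looks roughly like this. It assumes Chrome's legacy callback-based getStats() API and the goog-prefixed-era stat name `audioOutputLevel` mentioned in the comments above; the report shape and names here are assumptions about that legacy API, not the later spec-compliant promise-based getStats(). The `levelToPercent` helper is purely illustrative.

```javascript
// audioOutputLevel is reported as a 16-bit amplitude, roughly 0..32767.
// Map it to a 0..100 percentage for a simple VU meter display.
function levelToPercent(level) {
  return Math.min(100, Math.round((level / 32767) * 100));
}

// Poll the legacy callback-based getStats() several times a second --
// exactly the CPU-hungry pattern complained about in this thread.
function pollRemoteLevel(pc, onLevel) {
  return setInterval(function () {
    pc.getStats(function (response) {
      response.result().forEach(function (report) {
        var level = report.stat && report.stat('audioOutputLevel');
        if (level) onLevel(levelToPercent(parseInt(level, 10)));
      });
    });
  }, 200);
}
```

Each poll walks every stats report, so with many remote tracks this adds up fast, which is why a real Web Audio tap on the remote stream was the preferred solution.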
We reviewed this today and this is probably about a quarter worth of work; it requires a significant amount of refactoring in Chrome. The team responsible for this area is currently focused on several important improvements to the stack on mobile, but we will take another look at this at the end of Q1.
Comment 62 Deleted
Just wanted to offset some of the whining jerks in this bug by cheering you on with some encouragement. I hit a use case today that led me to this bug -- getStats() helps, but I'm sure you're all very aware of the potential for WebRTC + Web Audio working together in full.

Modern web browsers are extraordinary achievements of architecture and integration of technology from so many different domains. "Sufficiently advanced technology." I can't fault you for having higher priority magic to work on. :-)
Comment 65 by Deleted ...@, Apr 3 2015
It's now the end of Q1 — any positive momentum on this issue? :)
Comment 66 by Deleted ...@, Apr 3 2015
Yeah, that would be awesome!
The situation is still as explained in comment #61.

A bit of good news though: in reply to comment #60, getStats has been improved significantly in version 43, so it's both faster and takes less CPU. Hope that will help a little bit, at least.
Nice! Does it provide the 'standard' result object? Last time I checked, I had to implement separate solutions for each type of browser.
One quarter of refactoring, and this issue not being given much consideration for now; I would say this won't be fixed until 2016, right? A rough timeline would help a lot so we developers could set our priorities straight.

There's one use case that hasn't been discussed here. On mobile you just can't play things; you've got to use the Web Audio API. How the heck do you establish an audio-only WebRTC call there if you can't play the received stream using the Web Audio API?
Comment 71 by, May 20 2015
Playing the received audio stream via an audio tag has been supported for a long time.
Any update on the current priority for this? When might it be escalated to Pri-1?

Just got hit by this issue too... Any status update? Very hard to work around the problem...
oh my, please give us any status update or a simple work around?
Comment 75 by Deleted ...@, Sep 15 2015
I am shocked Google (which normally leads the charge) is not able to allocate the resources necessary to attack this and other WebAudio issues. I personally can wait; I just hope that this capability is *ever* brought to Chrome (which is my application's only target run-time). Being able to control local and remote streams within all facets of the WebAudio API's capabilities is absolutely critical for me.

Pony up the cash Google or should I say "G" - hire more Audio developers. And give the existing ones a raise. Please forward this sentiment to the appropriate concerned parties (aka @ *FO's)... :)~ XOXO
Something is cooking and I hope this will address our issues:
re  issue 262211  - the MediaRecorder API is something entirely different (recording MediaStreams into encoded/compressed blobs).
re #75: Even Google has finite resources. We are currently focusing on MediaStreamRecorder, and will reconsider this issue once the MSR work is further along.
Comment 79 by, Sep 17 2015
it must be the recession x)
This is not their priority
Like I said in #78; we have finite resources, so we have prioritized other issues higher at this time. This is still something we plan to work on.
Comment 82 by, Oct 16 2015
At the moment preparation work in the VoiceEngine is underway and needs to be done before we can make further progress on this feature.
Do you already have a time horizon?
Are there any plans to move forward on this in the near future?
Re #84: check out the last few comments and the YouTube video mentioned in #80; for your convenience, here is a link to where it's presented:
Labels: rtc-fixit
Labels: size-large
Project Member Comment 89 by, Dec 9 2015
The following revision refers to this bug:

commit 434aca8d862a46d0c3b71698a264d0c71d898170
Author: tommi <>
Date: Wed Dec 09 17:41:54 2015

Add empty placeholder files for remote audio tracks.
This is needed for Chromium so that we can roll, update libjingle.gyp and then continue.

BUG= chromium:121673 

Review URL:

Cr-Commit-Position: refs/heads/master@{#10955}


Comment 90 by, Dec 10 2015
Status: Started
Comment 91 by, Dec 10 2015
Blocking: chromium:567960
Project Member Comment 93 by, Dec 10 2015
The following revision refers to this bug:

commit 75287dc569fa28f23788f6324cd120d370932777
Author: tommi <>
Date: Thu Dec 10 14:04:08 2015

Handle the case when a mediastream track doesn't have extra data.
This can happen if a track was cloned.

BUG= 121673 

Review URL:

Cr-Commit-Position: refs/heads/master@{#364347}


Blocking: chromium:568819
Project Member Comment 95 by, Dec 12 2015
The following revision refers to this bug:

commit f888bb58da04c5095759b5ec7ce2e1fa2cd414fd
Author: Tommi <>
Date: Sat Dec 12 00:37:01 2015

Support for unmixed remote audio into tracks.

BUG= chromium:121673

Review URL: .

Cr-Commit-Position: refs/heads/master@{#10995}


Project Member Comment 96 by, Dec 12 2015
The following revision refers to this bug:

commit 30b55651cf9500b1ed8917e4d040513559ad3e83
Author: tommi <>
Date: Sat Dec 12 03:43:50 2015

Add support for unmixed audio from remote WebRTC remote tracks.
To begin with, WebAudio will have support for this, with more areas to come (e.g. MediaStreamRecording)

BUG= 121673 

Review URL:

Cr-Commit-Position: refs/heads/master@{#364885}


Comment 97 by, Dec 12 2015
Status: Fixed
WebRTC changes rolled into Chromium at rev 364916.
OMG!... Is this really fixed? Nice. Bravo!!! Is it already available on Chrome (canary)?
Wow, amazing guys... thanks!
Comment 100 by, Dec 12 2015
A Christmas present!!!
Thanks :)

There are still a few rough edges that are good to be aware of:

* Audio is still mixed for regular mediaplayers (<audio>/<video>), I couldn't break that functionality at this point since so many apps depend on it.

* Cloning of remote audio tracks needs to be fixed (next thing for this feature).

* There needs to be some non-WebAudio rendering target active for at least one remote audio track.  E.g. echo cancelling enabled on a local getUserMedia audio track (the echo canceller needs the remote audio to know what to cancel) or an <audio>/<video> tag. The reason for this is that the architecture in webrtc only supports a single target for pulling audio and support for WebAudio is implemented basically as a tap in a particular place of that flow.

WebAudio works now though and things are wired up for other APIs to work as well.  We're moving towards not mixing inside of WebRTC but rather inside of Chrome, so this is a step in that direction.

I expect the first Canary build with the new functionality to come out tomorrow or even later today.
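The third rough edge in comment #101 translates into a concrete wiring pattern: keep a (muted) media element attached to the remote stream so WebRTC keeps pulling audio, then tap the same stream into Web Audio. This is a sketch under that assumption, not official sample code; the function and variable names are illustrative.

```javascript
// Drive a remote WebRTC stream into Web Audio, working around the
// single-pull-target limitation described in this issue.
function connectRemoteToWebAudio(audioCtx, remoteStream) {
  // 1. A non-WebAudio rendering target so the audio pipeline keeps running.
  //    Muted, because we only need it to drive the flow, not to be heard.
  var el = document.createElement('audio');
  el.srcObject = remoteStream; // older Chrome used el.src = URL.createObjectURL(stream)
  el.muted = true;
  el.play();

  // 2. The actual Web Audio tap on the same stream.
  var source = audioCtx.createMediaStreamSource(remoteStream);
  source.connect(audioCtx.destination);
  return source;
}
```

Without step 1, the MediaStreamAudioSourceNode stays silent on the Chrome builds discussed here, which matches the symptoms reported in the later comments.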
Filed  issue 569369  for supporting cloning of remote tracks.
Available in today's Canary: 49.0.2592.0
Labels: M-49
Nice to see webRTC / webAudio integration is moving forward!
Is there any example or test code available somewhere to show what can now be achieved and how? (This would also help clarify the guidelines given in #101.)
I am trying to use WebAudio to mix several remote audio streams with positional information (AudioPannerNode), like the poster in #35. I am not sure if this bug fix is supposed to bring this capability or not (but for me, simple code like calling createMediaStreamSource on a remote stream to play it in WebAudio still does not work in Chrome Beta/Canary).
Here's a simple example that uses WebAudio for visualisation of a remote audio track:

What you want to achieve should be possible.  The cloning issue has been fixed and cloning of remote tracks is possible in 49.  A media player (audio/video tag) is still required to drive the audio from WebRTC into WebAudio.
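For readers looking for the shape of such a visualisation: a sketch (not the linked demo itself) of a remote-track VU meter built on an AnalyserNode. The `rmsFromBytes` helper is illustrative; the Web Audio calls are standard API.

```javascript
// getByteTimeDomainData() yields unsigned bytes centered at 128 (silence).
// Convert to -1..1 samples and compute the RMS level for a meter.
function rmsFromBytes(bytes) {
  var sum = 0;
  for (var i = 0; i < bytes.length; i++) {
    var v = (bytes[i] - 128) / 128;
    sum += v * v;
  }
  return Math.sqrt(sum / bytes.length);
}

// Tap a remote stream into an AnalyserNode and report a level each frame.
// Analysis only: the analyser is never connected to the destination.
function visualizeRemoteTrack(audioCtx, remoteStream, onLevel) {
  var source = audioCtx.createMediaStreamSource(remoteStream);
  var analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  var buf = new Uint8Array(analyser.fftSize);
  (function tick() {
    analyser.getByteTimeDomainData(buf);
    onLevel(rmsFromBytes(buf));
    requestAnimationFrame(tick);
  })();
}
```

Per comment #107, on the Chrome builds of that era a media element still needed to be attached to the stream for any audio to flow into the analyser.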
Thanks a lot for your feedback.
(adding a muted audio tag has fixed my code)
Comment 110 by, Apr 4 2016
When will this be available in mainstream chrome? Really excited this is fixed and it'd be great to know when I can tell my users they can use positional voice chat in chrome. :)
#110 - the functionality described in the original bug report should be available in the current stable channel build. If you see any problems, think the implementation lacks some functionality, etc, please file a new bug. Thanks!
Labels: -Size-large size-large
Now it works! Thanks.
It seems that it still does not work with remote streams.
I use the sample, pass the stream through the Web Audio API instead of the audio tag, and it works fine on Firefox 51, but not on Chrome 56.
Hmm... I don't think that there has been a regression.  Can you share your modified example so that I can try it out?
Comment 115 by, Mar 3 2017
This is the demo that shows a remote track being forwarded into WebAudio.

It seems to work on Chrome 56.

On Chrome Version 63.0.3239.84 (Official Build) (64-Bit) on OS X

This demo:

shows a nice visualisation, but no audio can be heard.
I had the same problem with a little project I tried to implement, with the same result.

No error, everything seems to work perfectly, but no audio can be heard. If I switch from AudioContext to an HTML <audio> element, everything works fine.

The same code on Safari 11 (iOS and OS X) works without any problems.

It seems to me that this issue is not completely fixed!
Hi Michael,

This issue doesn't change how rendering in Web Audio happens; it makes sure that audio gets delivered from WebRTC to Web Audio. I don't remember if audio is supposed to be rendered, but if Safari and Chrome don't behave the same way, then it sounds like there's a bug in one of the browsers. Can you tell from the code if audio is supposed to be rendered out?
From a brief code inspection:
This demo is supposed to visualize the sound from your microphone instead of playing it out.
If you hear a sound from the PC while you're running the demo, something's wrong.

Hope that helps.

Hello hta@chromium,

Good to know, but anyhow: I have my own example, but it has a browser switch for Chrome that falls back to an HTML audio element, so it is hard to show you an example. But the fact is:

If I attach a MediaStream to an AudioContext:

this.audioCtx = new AudioContext();
var source = this.audioCtx.createMediaStreamSource(mediaStream);

After signaling and ICE/SDP negotiations, I hear no sound on Chrome.
In Safari (iOS, macOS) and Firefox it works without any problems.

I also tried playing a local audio file before connecting the MediaStream, to test whether the AudioContext was perhaps routed incorrectly, but that file played without any problems.
The next test was to use a local stream from getUserMedia(), also without any problems.

As soon as I start from a remote stream, there is no sound to hear!

Sign in to add a comment