Starred by 2697 users

Status: Verified
Closed: Nov 2012
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Feature

getUserMedia() doesn't work for microphone input
Project Member Reported by, Feb 1 2012
getUserMedia('video, audio', ...) and getUserMedia('audio', ...) do not work for audio access.

navigator.webkitGetUserMedia('video, audio', function(s) {
  audioElement.src = window.webkitURL.createObjectURL(s);
  video.src = audioElement.src;
}, function(e) {
  console.error(e);
});

Does not play the mic's input through the audio tag.

Try attached test case. The video is passed through to the video tag, but the audio is not.
[Attachment: 502 bytes]
Labels: -Feature-Media
Issue 113161 has been merged into this issue.
Comment 3 by, Feb 10 2012
using just 'audio' doesn't work either: webkitGetUserMedia('audio',...)
Comment 5 by, Feb 17 2012
Status: Assigned
Eric do you have the WebRTC flag enabled? 

Niklas, I recall also a similar request in chromium-dev about this, can you triage after we get feedback?

Assigning to grunell, who can comment on this. I don't think local audio loopback through an audio tag is supported (yet); only connecting to a PeerConnection for remote streaming is. Getting a local video preview implemented is a much higher priority than an audio loopback. Suggesting P3 for this.
FWIW, I'm getting a lot of developer requests to get mic input and pipe it through the web audio api.

Not sure of the internals, but theoretically the rest of the pieces are already there.

Can we keep this at a p2? It's already got 16 stars :\
Correct, this is not supported yet.
Is there an ETA for piping the mic input through the web audio api? Is this feature in development or is planned for future releases?
I'd love to play with this feature, please guys :)
I imagine live microphone capture being re-injected into an audio context. This could be a fully interactive experience. (working draft)
Yep, can't wait for the feature.
So I can stream it to the server and than stream to the hundreds of connected clients,
live radio in a browser. :)
Eagerly waiting for this!
Comment 15 by Deleted ...@, Apr 23 2012
+1 can't wait for this to be implemented
Is there any news about it?
no news yet. I'll update the bug once we get close.
Another vote for seeing this sooner than later. It's unfortunate to have an API that advertises "audio" not provide audio after this much time. Looking forward to a milestone update.
Would love to hear the use case of using audio without peerconnection. Since no recording API is yet defined, what is the use case you are thinking of for audio?
Comment 21 by, May 13 2012
A few off the top of my head-
- live performance effects (e.g. guitar amp, vocoder)
- live sound analysis (guitar tuner, graphic equalizer)

Additional use cases I can think of:
- games using audio input (think SingStar, etc.)
- audio recording / note-taking application
OK, so it's more of a pipe-the-local-audio-into-the-Web-Audio-API kind of use case you are thinking about. Not simple mic access.

We are working on this, but  as I have said in the past, our main focus is people talking to each other via browsers.

That being said, local device access mashed up with web technologies open up amazing possibilities... I'll update the bug once there is something to try out, but I do not expect much to happen in the next few weeks. More likely late summer.
At Spreaker ( we would like to replace our Flash-based online deejay console with an HTML5 implementation. We've developed a prototype of an HTML5 deejay console and we've been able to compile an MP3 encoder into native client so that we can broadcast mixed audio with WebSocket.

PeerConnection will be a great feature but our encoder+WebSocket currently works quite well (at least for the scope of the prototype). The real missing feature is the ability to "inject" the microphone into our audio pipeline/graph (we've tried with a Flash-based bridge but there are big performance issues that seem to be caused by ExternalInterface).

Just my cent,
... Karaoke, speech to text, voice commands... and a lot of things... I don't understand the difficulty you have exposing sound input in a browser??
As with most things, it is not a question of difficulty, but of priority and resources for our small team.

We'll get there soon and it will rock (pun intended) when we get there, promise :)

Would a (small) cash bounty on this bug have any effect on prioritization or alleviate resource pains?
:-) A ton of testing and feedback when things start landing will be even better than cash... thanks for the offer though, made me smile even though I am at work on a sunday :)
Well, consider me an alpha tester then, and thanks so much for your work!
We offer our testing support too. We could also offer a donation, but I've heard you prefer testers ;)
If I want to learn how this works and try a little implementation, what documentation do you recommend?
Use cases: client-side FFT analysis, speech recognition
Chromium is open source -- if someone wants to jump into WebKit development and follow the stream recording API, I'd imagine the WebKit upstream will let it through.
For more use-cases, there's a bunch of questions on stackoverflow about it:

A few years ago the Adobe Flash community got together and pressured Adobe to add mic recording capability to Flash with the getmicrophone project:!/getMicrophone , which led Adobe to add the mic recording capability in FP 10.1.

Also, Silverlight supports recording from the mic:

Clearly there's a demand for this functionality.  If it's not added to the browser, developers will continue to be dependent on plugins.

I think it's clear that recording is important for a lot of these audio use cases. Unlike video, where a feedback loop to the <video> element is useful for preview, audio is arguably much less likely to be fed directly back to an <audio> element (except in the cases of things like Web Audio API visualizations/real-time effects). In many cases, we want to do something programmatically with saved audio input.

That said, it would be nice to see this worked on in parallel with  Issue 113676  ( - Implement MediaStreamRecorder. 
Comment 36 by, May 14 2012
Although I agree that recording/encoding audio would be a nice addition, I think that's far secondary to audio input access - I would not want to gate audio input support on media stream recording.  If I really needed to, I could hack around the lack of recording via a JavaScriptAudioNode; I can't make up for the lack of audio input access, without a plugin.  And a number of these cases really don't need recording, anyway.
Our team at SoundCloud would love to port our recording feature from Flash to native browser APIs:
The existing issues prevent us even from having basic microphone recording tests. 
I totally agree with the soundcloud developer: the existing issues prevent even from having basic microphone recording tests.

Marco Pracucci (Spreaker)

Inviato da iPhone

Il giorno 16/mag/2012, alle ore 11:34, ha scritto:
I am working on application for content authoring with voice notes to be recorded to the slides.
Can you please explain the workaround using JavaScriptAudioNode.

INPUT: You could use a Flash bridge, something like We tried something similar, but we had performance issues mainly caused by the Flash-JavaScript communication via ExternalInterface. But, again, you're getting microphone audio from Flash, so common sense suggests doing everything in Flash. Obviously we're here because we would like to do it from JavaScript, but it seems it's not a priority for browser developers ;)

OUTPUT: In theory, you can attach a JavaScriptAudioNode as the last node of your graph and get all "output" samples. In practice, it's up to you what you do with those samples. You could, for example, encode them and send them to a server via WebSocket. Obviously the hard part is encoding, since you don't have anything native yet. We did some experiments with Google Chrome Native Client: it works well, but unfortunately it's disabled unless your application has been installed from the Chrome Store (all other applications require enabling the "native client" flag in about:flags).

Has anyone tried other approaches or tricks?
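For what it's worth, the OUTPUT workaround described above can be sketched roughly like this. This is a hedged illustration, not code from this thread: JavaScriptAudioNode was the draft-era name (later renamed ScriptProcessorNode), and floatTo16BitPCM is our own helper, not a library call.

```javascript
// Sketch of capturing the tail of a Web Audio graph with a
// JavaScriptAudioNode (later renamed ScriptProcessorNode).
// floatTo16BitPCM is an illustrative helper: it converts the
// [-1, 1] float samples Web Audio produces into 16-bit PCM,
// which is what you would encode or send over a WebSocket.
function floatTo16BitPCM(floatSamples) {
  var pcm = new Int16Array(floatSamples.length);
  for (var i = 0; i < floatSamples.length; i++) {
    // Clamp, then scale to the signed 16-bit range.
    var s = Math.max(-1, Math.min(1, floatSamples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return pcm;
}

// Browser-only wiring; guarded so the helper above stays testable.
if (typeof window !== 'undefined' && window.webkitAudioContext) {
  var context = new webkitAudioContext();
  var capture = context.createJavaScriptNode(4096, 1, 1);
  capture.onaudioprocess = function (e) {
    var samples = e.inputBuffer.getChannelData(0);
    var pcm = floatTo16BitPCM(samples);
    // e.g. send pcm.buffer to a server via WebSocket here.
  };
  // Attach as the last node of your graph:
  // source -> ... -> capture -> destination.
  capture.connect(context.destination);
}
```

The hard part (a codec) is still unsolved here; this only gets raw PCM out of the graph.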


The only trick that I know of on the input side is to keep this thread hot :).

On the output side, I did some experiments using generic compression algorithms like LZJB as a substitute for a codec. But they work so poorly I wouldn't recommend the approach. I think a better idea is to breathe new life into the MediaStreamRecorder interface as a means of getting the browser to perform the encoding for us.
I would also be interested in a solution to this issue, particularly an integration with the Web Audio API. I think that could solve most of the issues mentioned here. As far as I understand it already exists in the standard (, but has not yet been integrated into Chromium. At least I could not get it working together with an AnalyserNode.
Comment 43 by Deleted ...@, Jun 4 2012
Do we have a status update yet on this audio issue?


Comment 44 by Deleted ...@, Jun 13 2012
Still not available?
Comment 45 by, Jun 25 2012
I am very keen to see the ability to grab audio input and manipulate it in the browser through the Web Audio API. Recording the stream would be nice too, but I understand that that is not really what this thread is about.
Comment 46 by Deleted ...@, Jun 25 2012
I would just like to add that Marco presents a significant use case. getUserMedia through PeerConnection covers the P2P case, but what about the broadcast use case, where you need more extensive server support to disseminate the audio feed? One extreme hack is to incorporate the WebRTC implementation on a custom server to emulate/spoof browser PeerConnection behaviour... :(

As far as I can tell, there is no effective way to stream mic-captured audio to a server. If captured audio is exposed to the Web Audio API as discussed in earlier comments, we're still left with the encoding problem. iSAC/iLBC et al. are not exposed, forcing non-native encoding implementations.

There have been some interesting JS decoder implementations at, but the performance is not great, and the same would be the case for non-native encoders when someone gets around to writing one.  This might be a spec issue, but perhaps Chris Rogers may know if this is being addressed or is out of scope?
Comment 47 by, Jun 26 2012
Why is the WebRTC server a hack? If you want low-latency distribution, that seems like the ideal way to handle this case.
Comment 48 by Deleted ...@, Jun 26 2012
I looked at the libjingle webrtc sample and I agree, implementing the WebRTC server is reasonable.

After WebRTC and Web Audio API are integrated, will a native encoding API be in scope?  Will efficient transport of audio be exclusive to PeerConnection?  A native encoder node in Web Audio API would open more transport options (WebSockets as in Marco's case, HTTP, local storage, etc).

I agree that a WebRTC server is reasonable. That wasn't the point of my use case. What I would like to do is get the microphone input and put it into the graph, so that I can manipulate it before sending it to the WebRTC server.
One downside of WebRTC is that it assumes an RTP transport.  So while it may be a good fit for realtime applications, a significant number of applications would prefer more reliable and conventional web transports.  Fixing this bug would be a good step in that direction.
Indeed! There are many applications, including ones that need to operate offline, where having mic input and compression for storage to the file system/transfer to worker threads/etc is important and necessary.
Serge, any updates for us? It's been over 5 months since I first filed this bug and 152 other people also think it would be the bee's knees to have this working :)
Waiting for this... for about 5 months...
The most frustrating thing for me is that we still have to develop on Flash (with its thousands open bugs and limitations), when with few changes in Google Chrome we could deliver an amazing experience on Spreaker to our users.

Just my cent,
Marco Pracucci
Twitter: @pracucci

The reason this is taking so long is that we are refactoring a large part of the underlying audio code. Our goal is to improve audio across the board for all platforms.

I'll update the bug in 3-4 weeks with progress on both this feature and the audio overhaul. 

Labels: -Type-Bug Type-Feature
Now that most people are back from vacation, the work here is moving ahead again.
Great! I'm ready to test.
when it's ready... please let us know...

Thanks for your work!

Comment 59 by, Aug 1 2012
If <audio> can't play a MediaStream, then is there any way to gather mic sample data? Is there any way to print sample data continuously? What I want to do is shown in the sample code.

<audio id="voice" controls autoplay></audio>

var audioElement = document.querySelector('#voice');
var video = document.querySelector('#video');

navigator.webkitGetUserMedia('video, audio', function(stream) {
  audioElement.src = window.webkitURL.createObjectURL(stream);

  /* Sample code */
  console.log('audio stream is: ' + stream);
  /* Sample code */

}, function(e) {
  console.error(e);
});

Comment 60 by Deleted ...@, Aug 1 2012
That's great to hear, sergel.
I wonder if and how you'll implement the core of the above-mentioned feature requests. I am really looking forward to your next status update.

Thank you all for your great work!
Our HTML5 RDP also needs the raw data from the microphone. It would be perfect if the browser could provide some native audio encoders, like A-law, µ-law, ADPCM, GSM etc.
Right now, Flash supports the Speex codec, but no HTML5 browser does. Native codecs and bitrates are not supported or mentioned in the W3C draft....

So I am very curious about getUserMedia(): when I get the LocalMedia... what is the codec? PCM? What is the bitrate?

Here is a StreamProcessing draft...

Right now... I am studying the Audio Recording API.
MediaStreamRecorder is probably a better place for this purpose.
Comment 64 by, Aug 6 2012
Any update on this? I have been unable to get microphone access working.
bump, any news on when we can expect a fix for this issue?
Comment 67 by, Aug 29 2012
any update on this?
Comment 68 by, Aug 29 2012
It is being worked on. However, it is a significant amount of work.
Comment 70 by Deleted ...@, Sep 18 2012
Just keeping this issue hot...

FYI: Canary now has an about:flag for:
Web Audio Input Mac, Windows, Linux, Chrome OS
Enables live audio input using getUserMedia() and the Web Audio API.

Still work to be done, but you can start playing around!
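For anyone wanting to try the flag, the wiring looks roughly like the sketch below. This is an illustration only: the prefixed webkitGetUserMedia/webkitAudioContext names are the ones current at the time, and wireMicToAnalyser is our own hypothetical helper, not an API.

```javascript
// Sketch: once the "Web Audio Input" flag is enabled, a captured
// microphone stream can be fed into a Web Audio graph via
// createMediaStreamSource. The wiring is factored into a small
// function so the graph shape is easy to see (and to stub out).
function wireMicToAnalyser(context, stream) {
  var source = context.createMediaStreamSource(stream);
  var analyser = context.createAnalyser();
  source.connect(analyser);              // mic -> analyser (FFT data)
  analyser.connect(context.destination); // optional local loopback
  return analyser;
}

// Browser-only entry point (prefixed APIs as they were in Canary).
if (typeof navigator !== 'undefined' && navigator.webkitGetUserMedia) {
  navigator.webkitGetUserMedia({audio: true}, function (stream) {
    wireMicToAnalyser(new webkitAudioContext(), stream);
  }, function (e) {
    console.error('getUserMedia failed', e);
  });
}
```

From the analyser you can then poll frequency or waveform data for tuners, visualizers and the like.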
Comment 72 by, Sep 18 2012
Yes, the following have landed and are in Canary:

For now, this is Mac-only and requires enabling "Web Audio Input" in "about:flags"

Just wondering, why is the flag shown on all platforms, since it is currently only implemented on Mac?
Woot - it works! I built a chromatic guitar tuner ->
Runs in Canary obviously, going straight into the standard microphone of my MBP, and it is more accurate than the tuner built into my acoustic. Pretty happy!
Comment 76 by Deleted ...@, Sep 24 2012
Nice work Craig.  I'm trying your sample on Ubuntu Chrome Dev (23.0.1271.1) - I realise that it usually takes a week for Canary to get into Dev channel so I'm almost certainly a bit early.  

The status for me (currently) is that it's grabbing the user media and the MediaStreamSource, and your onaudioprocess method is receiving audio data, but I'm getting all zeros in the input[2048] array and therefore 'no peaks' and subsequently no frequency detection.

I'm hoping that in the next week or so your sample will start to work as Canary changes are pushed into dev.  

A couple of console.log() calls from JavaScript would be useful to help debugging. I'm concerned that it might try to grab the wrong microphone (I have loads of microphone devices but only one that actually works).
Comment 78 by, Sep 24 2012
Live input in Web Audio is only supported on OSX right now (in Canary).  We're working on other platforms, but they're not live yet (ha).
Comment 80 by, Sep 24 2012
I'm running Mountain Lion, and I haven't been able to get the tuner to work, either.  I'm trying to do it via the Rocksmith cable, which is pretty much a USB mic that plugs into an electric guitar output.
Comment 81 by, Sep 24 2012
Did you set the Rocksmith as your default audio input device?
Comment 82 by, Sep 24 2012
Yeah.  And the device registers when I hit the strings, just no action on the page.  I've also tried just using my onboard laptop mic, and pointing my guitar at it, no go there, either.
I don't have a Rocksmith device (not released in NZ yet) so I have no way of testing with it - all I know is that it works from the microphone. I tried to get it working through an MBox, but without any luck. This is a small proof of concept for a university project - obviously there are lots of things that still need to be considered! I do hope someone else can get it working though! One thing worth noting is that microphone input doesn't work from a local file, which is why I ended up putting it on GitHub.
Comment 84 by Deleted ...@, Sep 24 2012
This issue isn't the right place to have this discussion. Everyone who wants to be notified when this feature is in place has to read through these emails. No offense intended.
Of course - Anyone may feel free to email me ( if it does or doesn't work for you!
Looking forward to a Windows release.
Comment 87 by, Oct 4 2012
This is an extremely important feature with wide-ranging disruptive potential for Internet communications. Don't be evil. Implement this feature, please!
Comment 88 by, Oct 4 2012
Sorry for the massive chain of emails, everyone!  To followup, I agree that this feature would be an amazing influence in the world.  However, not doing something good is not necessarily the same as being evil.

Please, take your time to ensure the quality of this feature, instead of rushing things out. Thanks!
Live-audio input (i.e., getUserMedia hooked up with Web Audio) is now supported in latest Chrome Canary (24.0.1288.0) on Windows. The Web Audio Input flag must be enabled and both audio devices (play & rec) must use the same sample rate and channel count. Windows XP is not supported.

Try this demo as an example:
Thanks for the great work, does this mean it will go to Chrome 24 (Stable version) finally? We'll have to schedule our software release accordingly.
If it works well on trunk, a merge request for M23 will be done. If the request is not approved, M24 will contain this feature instead.
To clarify, M25 seems like the most realistic release where we can see it without a flag.
Comment 93 by Deleted ...@, Oct 12 2012
For this example:
I am using win64 and the latest version of canary with the Web Audio Input flag enabled, however it doesn't work. I have tried both with the internal mic and with a USB headset. 
It asks you to allow the mic, then says it is using the microphone in the notifications area, but nothing echoes back or appears in any of the displays (frequency, waveform etc).
Any ideas?

To support live-audio input on Windows, both default audio devices (playout
and recording) must use the same channel configuration and sample rate.
Check Properties->Advanced for your default devices and verify that
settings are the same for both directions.
Will this restriction (same sample rate for playback and recording device) be removed in the final version? I found that most of the time they are not the same.
We will do our best to make it as user friendly as possible before
releasing without a flag.
Thanks, it's really quite picky with those settings. Any idea when the feature will come out in Chrome for Android?
Canary for Mac is working great. The same can't be said for the Windows version. The same code running on both gives excellent results on Mac, but recordings on Windows buzz and hiss. It'll be great when this is fixed!

Note: same hardware for each OS - so not a hardware problem.
Comment 99 Deleted
Can you describe your hardware setup and add a link to the script you are running so we can try to reproduce it?
Comment 101 Deleted
MacBook Pro
Mac OS X v10.7.4
Intel Core i7

Dell Inspiron N4117
Windows 7 Home Premium
Intel Core i5-2430m
x64 based.

I notice that my previous comment implied that the computing hardware was consistent. What I meant was that the audio capture hardware (external microphone) was the same for both. 
[Attachment: 4.8 KB]
Do you experience bad audio using this demo as well?
I have noticed a small issue with Chrome Canary on Mac OS 10.6.8. There are options for all of my input devices, but no matter which one I select, it still defaults to using whichever one is defined in System Preferences.

When I select Scarlett 2i2 here and "Internal microphone" is selected in System Preferences / Sound / Input it uses the internal microphone in Chrome.

This was tested using
[Attachment: 16.5 KB]
It is a known limitation that the device selection is ignored. We are working on this issue and the goal is to have full support once released without a flag.
I have an issue with the security settings. It appears that you have to allow the microphone every time you reload the page, or presumably load any other page that uses it. This will be a major problem for the language learning app we want to build, because when a user goes to the next exercise, they will have to re-allow the mic, every single time.
There needs to be an option to allow the mic for the current domain (rather than just the current page) for the current session to stop this hassle.

If the site is hosted by https://, the user will be allowed to remember the use of the microphone.
OK thanks, that's a help. Any idea when that will be implemented, and has a decision been made about http sites? As you probably know, Flash lets you remember settings for those etc.

Comment 110 by Deleted ...@, Oct 22 2012
Faster, guys :)

Persistent consent for HTTP sites will not be supported.
RE: Do you experience bad audio using this demo as well

No I don't. However, when I create a JavaScriptAudioNode and obtain the channel data, the resulting buffer is very buzzy.

Again this works fine on Mac.
Brendan, I looked at audioProcessing.js and noticed you're using a "JavaScriptAudioNode" with a very low buffer size of 256.  This will not work on Windows:
    jsnode = con.createJavaScriptNode(256,1,1); // We could change this to stereo

I know the Web Audio spec says this is a possible value, but in practice these small buffer sizes are not practical for use on the main JavaScript thread.

Please try 1024 or higher and see if that helps.
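To put rough numbers on the advice above (bufferLatencyMs is our own illustrative helper, not part of any API): each onaudioprocess callback must produce bufferSize frames before the hardware consumes them, so a bigger buffer means more headroom per callback.

```javascript
// How much audio one JavaScriptAudioNode callback covers, in ms.
// bufferLatencyMs is an illustrative helper, not a Web Audio API.
function bufferLatencyMs(bufferSize, sampleRate) {
  return (bufferSize / sampleRate) * 1000;
}

// At 44.1 kHz: 256 frames is roughly 5.8 ms per callback, while
// 1024 frames is roughly 23 ms. The larger buffer trades latency
// for glitch resistance on the busy main JavaScript thread.

// Browser-only part (prefixed API of the time):
if (typeof window !== 'undefined' && window.webkitAudioContext) {
  var con = new webkitAudioContext();
  var jsnode = con.createJavaScriptNode(1024, 1, 1); // was 256
}
```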
Comment 114 by, Nov 9 2012
Any changes here with new beta in repository?
With the audio recording issues in Pepper Flash in M23 (, it seems to me that this should be a high-priority release. Sure hope this comes to M24.
any news about actually storing the recorded audio? 
Does anyone have working code for getUserMedia({audio:true},function(stream) { /* get stream.audioTracks as buffer, byte array, or anything? */ });  I swear I saw blob = createObjectURL(stream); new FileReader().readAsArrayBuffer(blob);  somewhere.  

I just need bytes!!!  :)
Comment 118 by Deleted ...@, Nov 19 2012
If anyone know how to play the audio stream on browser? :(
I really really want this WebRTC audio only to work.  Does anyone have any idea when that might happen?  I just want to see a simple "Hello World" example.  Something along the lines of 2 PCs with 2 People with 2 Headsets and mics.  All of a sudden "Watson come here . . . " and the rest is history!

When might this be possible?  We were planning development 6 months ago that anticipated this simple task would be working by now.  We had already seen the interactive video/audio working example from the Google folks.

Does anyone have a timeframe of when this simple "Hello World" example might be available?
Henrik, dupe if needed.
Comment 121 by, Nov 28 2012
I've got this flying on Windows 7 Canary. With a little comet server magic and a lot of patience you can read raw PCM data from the Web Audio API, process it with Speex compiled to JavaScript with Emscripten, and Bob's your uncle. I integrated a working, low-latency version into our HTML5 web conference software in little more than a day.

All the bits are there; as the API matures, less 'hacky' ways of doing this will inevitably emerge, but it works great for now. Just need widespread browser adoption now :)
Mergedinto: 157142
Status: Duplicate
Comment 123 by Deleted ...@, Dec 1 2012
This is how I broadcast microphone input to clients with HTML5:
1) Use WebKitAudio on Windows 7 Canary
2) Capture microphone input every second and stream it as PCM using
3) Send the blob over a WebSocket (PCM is 1.4 Mbit/s)
4) Put the PCM binary stream in a temporary .wav file.
5) Convert it with ffmpeg to very low quality MP3 -> 3 KB/s
6) Send it to all clients and play it

I know this is a very 'hacky' way and the conversion takes 0.1 sec for every single file. This produces a delay, which will eventually be solved by streaming a buffer. Is there a better solution?
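Step 4 of the pipeline above (wrapping the raw PCM so ffmpeg can read it) can even be done client-side with a minimal 44-byte RIFF/WAV header. A sketch, assuming 16-bit little-endian PCM; buildWavHeader and writeString are our own illustrative helpers, not library calls:

```javascript
// Build a minimal 44-byte WAV (RIFF) header for 16-bit PCM data,
// following the canonical PCM WAV layout. numChannels/sampleRate
// are whatever the capture side used.
function buildWavHeader(pcmByteLength, numChannels, sampleRate) {
  var header = new ArrayBuffer(44);
  var v = new DataView(header);
  var byteRate = sampleRate * numChannels * 2; // 16-bit = 2 bytes/sample
  writeString(v, 0, 'RIFF');
  v.setUint32(4, 36 + pcmByteLength, true);    // RIFF chunk size
  writeString(v, 8, 'WAVE');
  writeString(v, 12, 'fmt ');
  v.setUint32(16, 16, true);                   // fmt chunk size
  v.setUint16(20, 1, true);                    // audio format: PCM
  v.setUint16(22, numChannels, true);
  v.setUint32(24, sampleRate, true);
  v.setUint32(28, byteRate, true);
  v.setUint16(32, numChannels * 2, true);      // block align
  v.setUint16(34, 16, true);                   // bits per sample
  writeString(v, 36, 'data');
  v.setUint32(40, pcmByteLength, true);        // data chunk size
  return header;
}

// Write an ASCII tag into the header at the given byte offset.
function writeString(view, offset, str) {
  for (var i = 0; i < str.length; i++) {
    view.setUint8(offset + i, str.charCodeAt(i));
  }
}
```

Concatenating this header with the captured PCM bytes yields a .wav blob that ffmpeg (or an <audio> element) can consume directly.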
I am waiting
nikrofilizm: The requested functionality originally asked for in this thread has been added to Chrome and exists in Canary.

Comment 126 by Deleted ...@, Dec 30 2012
Comment 127 by Deleted ...@, Jan 14 2013
I'm able to make this work in 24.0.1312.52 on Mac by enabling it in chrome://flags. I can make it work in Canary (26.0.1382.0) on Windows 8 by doing the same. Windows 7 does not work for me; the difference is that I'm using a Logitech USB webcam (Orbit) with an integrated mic. I'm using that mic as the primary input on the Windows 7 box, and I'm not able to get any audio. Using to test.
clayrongulick@: The most probable reason why it does not work for you on Win 7 is that the USB webcam uses a different capture sample rate than your output device (that's my theory at least). The current implementation of live audio on Windows requires that both sides use the same sample rate. If possible, try changing the sample rate(s) and ensure that they are the same in both directions.
thank you!

Actually, I'm not using a webcam mic. I have a studio microphone with an M-Audio Fast Track Pro audio card; maybe this will be helpful?

I have tried all kinds of examples (also none of them work on my Chrome 24.0.1312.52 and Chrome Canary 26.0.1382.0 on Win 7. I have successfully added the stream and sent it through a peer connection and the sound is very good, but the problem is I can't analyze the sound using the Web Audio API with a MediaTrack. I am sure that my mic is working well using WebRTC. Tried on 4 different computers with the same specifications and versions. What can be the problem? Thank you!
Can you check what sample rate (and channel configuration) you are using in
both directions and report back?
I am using 2 channel, 16 bit, 44100 Hz [CD Quality] sample rate. For mic and for speaker. Thanks!
I don't understand
On Jan 14, 2013 at 15:48, <> wrote:
Note that any usage of live-audio input requires that the experimental flag Web Audio Input is enabled (see chrome://flags). 

With that enabled, and in combination with identical sample rates for in and out, demos like should work.
I have already enabled this:

Web Audio Input Mac, Windows, Linux, Chrome OS
Enables live audio input using getUserMedia() and the Web Audio API.

It enables getUserMedia and the Web Audio API, so there is no need to enable Web Audio Input separately.

I am really stuck..
The flag does not "enable getUserMedia and Web Audio API", it enables "live audio input using getUserMedia() and the Web Audio API" and that is not the same thing.

Let me check again:

- you are on Windows 7;
- with the Web Audio Input flag enabled;
- and using

Next, you grant access to the default microphone which is configured to run at the same sample rate as the default output device.

Then what happens?
Nothing happens; no graphic changes on the visualizer even if I scream into the mic. Thanks!
Please note that the mic and camera you approve using chrome://settings/content and the Media section are not the ones that will be utilized in this demo. Instead, the default microphone will always be used. Just checking to ensure that this is the one you have verified as having the same sample rate and channel config as the default output device.
Yes, I pick the mic from the list in the Options when the Allow/Deny toolbar comes up.

By the way, I can't find any information about my mic and webcam at chrome://settings/content; it only gives a security option to ask or deny access to my webcam and mic.

It is supported in latest Canary but what I am trying to say is that your selection is ignored in combination with live audio input. I.e., the default microphone will always be utilized. All you do with getUserMedia is to allow access to any microphone.
Using the same Chrome Canary I have successfully connected with a peer connection to another Chrome browser and the sound can be heard very well, but using the same stream with a webkitAudioContext and analyser gives nothing. Using the same Chrome Canary and same mic settings I used the demo, and nothing happened. So in conclusion, the audio is transmitted within the stream, but it is not picked up by the Web Audio API for some reason.
I might be doing a bad job explaining here, but what I mean is that the two cases you describe have very few parts in common. Hence, saying that #1 works gives no clues as to why #2 does not. The live audio part is under a flag and experimental only. That's why there are restrictions on things like sample rate and imperfect device selection.
Yes, I understand that it is experimental; I just want to know what I should do to make it work, because the demo is not working on any of my 4 PCs with Windows 7 or my 1 Mac.
Can you experiment with another (USB) headset and use that as the default?
All the PCs have different headsets; the MacBook has a built-in one, and the others are using headsets with audio jacks (not USB).
I have tried Chrome 26.0.1382.0 (Official Build 176602) canary with enable-webaudio-input flag set and can run the demo on my Windows 7 machine. Using a USB headset, 2 channels at 48kHz in both directions.

I am currently unable to say why it does not work for you.
OK, I will continue my investigations and will let you know if I find any problem. Also let me know if you remember anything that might solve this problem, thanks!
Comment 148 by Deleted ...@, Jan 14 2013
henrika: Thanks for your work on this and your assistance! I'm excited about this functionality because I have a client who's interested in implementing an online music application, and we were hoping to avoid writing it in Flash. We're trying to decide if  we should start down the html5/web audio path in the hope that by the time the application is complete, Chrome will have a stable implementation of real-time audio capabilities (we're ok with requiring chrome).

I did double check that the sample rate (48k) was the same on both input and output on Windows 7, and I'm still not having any luck - but it's equally probable that the issue here is with the Logitech drivers and not with Chrome. The UI says I'm recording at 48k, but who knows what's actually being sent to Chrome through the driver chain and Windows audio. I can see it work on my Mac and Windows 8 box (with built-in mic).

Rather than trying to debug the specific issue with my usb mic, I'd like to know in a broader sense what your recommendation is for the direction we should take our application. Do you see real-time audio processing with various input sources and the web audio api being stable and released in the next year? Or is this more of an experimental "this is cool to play with" kind of technology?

Should we place a bet on html5? Or take the safe road with Flash? It looks like Mozilla is a long way off still: they appear to be much more focused on WebRTC, but like I mentioned, I'm ok with requiring our users to install Chrome.

Thanks again!
Status: Fixed

Let me start with an admin issue. This thread is actually about adding support for local rendering in combination with getUserMedia. The issue is solved and marked as fixed in 157142. Any input added here also affects 157142 which can be confusing. To break this dependency, I will mark this issue as Fixed instead and then continue the discussion.

Marked as Fixed. See for more details.

claytongulick@: I understand your concern and would like to provide a clear answer but feel that Chris Rogers is more suitable since he is the main developer and editor of the Web Audio spec (
henrika: Like claytongulick, I am also trying to get something working through basic webcams on Windows 7. My focus has been widely available webcams I can find at big-box retail outlets. I started with Logitech, but I'll pick up some HP and Microsoft devices today, too.

So far, the Logitech C920 works on Mac, but not on Windows, assuming it's set as default (as the thread suggests). The C525 doesn't work on either Mac or Windows, but I'm suspicious of how well the UVC drivers support the C525. The C920 doesn't offer much in the way of sampling rates (only 32kHz), so given the MS default playback rate, I couldn't match them. The C525 was set to 48kHz in both directions.

Also of note: on Windows 7, when I disabled the primary output device, I got "aw shucks". From what I can tell, it looks like that's where you pull your sample rate. I can report my findings on other webcams if you'd like.
This thread is moving forward well -- let's get this working!
The discussion has drifted too much from the original topic (which is now solved). All input is appreciated, but could you (e.g. jason.a@ or claytongulick@) start a fresh issue report and re-apply the last input? A more suitable heading could be "Issues with the experimental live-audio implementation (MediaStreamSource)". It would allow us to provide feedback that would make sense to more users who search for answers to their questions.

Thank you.  I have created a fresh issue report, titled as suggested:

Comment 154 by Deleted ...@, Jan 22 2013
Comment 155 by Deleted ...@, Jan 26 2013
come on guys
ackise@: please elaborate; exactly which functionality are you waiting for?
Comment 158 by Deleted ...@, Feb 16 2013

I have been using the Web Audio API for weeks, in particular to record user audio from the built-in microphone, and suddenly it stopped working. Has anything changed recently in this API's implementation?

I checked that default sound input and output sound devices are correctly set and have the same rate (48kHz). Whatever the configuration, the microphone outputs only null values. 

I cannot run a simple example such as the following, which should echo the audio input on the audio output:

var liveSource;

// creates an audio context and hooks up the audio input
function connectAudioInToSpeakers() {
  var context = new webkitAudioContext();
  navigator.webkitGetUserMedia({audio: true}, function(stream) {
    console.log("Connected live audio input");
    liveSource = context.createMediaStreamSource(stream);
    liveSource.connect(context.destination);
  }, function(e) {
    console.log("getUserMedia error: " + e);
  });
}

// disconnects the audio input
function makeItStop() {
  console.log("killing audio!");
  liveSource.disconnect();
}

// run this when the page loads
connectAudioInToSpeakers();

Any help would be appreciated.

Regards, Gilles

I assume you are on Windows and rely on Canary. Live-audio has been broken for a while on Windows, but it was fixed last Friday and should work in the latest Canary.

Please try again and report back.
Comment 160 by Deleted ...@, Feb 16 2013
Thanks for your help, 

Indeed, I am on Windows 7 and I was running Chrome Version 26.0.1410.5 dev-m.
I installed Chrome Canary (Version 27.0.1413.0 canary), but the problem is still the same. I tried on other computers as well (different HW and different releases of Chrome) without success. I'm not sure where the problem comes from. Are there some sample programs or websites that demo the audio recording part?

Regards, Gilles

and ensure that the Web Audio Input flag is enabled.
Comment 162 by Deleted ...@, Feb 16 2013
Thanks for the link. I have Web Audio enabled but I have no sound recorded. I guess I should see nice waves on the demo page, but there is none (cf. picture attached).

I've been using this feature for months and now I cannot make it work... It's a mystery.
39.9 KB View Download
I'm having the same problem as Gilles.  Seemed to just start happening.  Not working in Canary.  
Should have read above.  Web Audio Input has to be enabled in about:flags.
tsargent@: could you please check which version and platform you are
running using chrome://version.
Chrome version: 24.0.1312.57 (Official Build 178923)
Platform: OS X 10.8.2
Comment 168 by Deleted ...@, Feb 19 2013
I'm having the same problem as Gilles as well; it started 2 days ago, and the sample code I used had been OK for months.
Chrome version:26.0.1410.5 (Official Build 182376) dev-m
platform: Windows 7
Please try the latest Canary. It works on Windows 7 on
Comment 170 by Deleted ...@, Feb 22 2013
For browser-based audio production tools, such as a drum sequencer that one might be building, it would be great to record a stream of audio from samples that have been previously loaded. In the case of a sequencer, there are many WAV files being handled in different buffers, but I'm still searching for a way to feed those buffers to a single output for a stereo mixdown. This mixdown would then be bounced/rendered to disk and saved to the client machine as a PCM WAV file at the desired bit depth. Hmmm. Any ideas?
For recording you can use a ScriptProcessorNode to intercept the audio stream and record.

I wrote a small library to help with recording WAV files client-side: </shameless self-promotion>
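For anyone landing here for the client-side WAV step: below is a minimal sketch (not the library linked above, and not Chrome code; all names are illustrative) of how captured mono Float32 samples can be encoded into a 16-bit PCM WAV buffer.

```javascript
// Sketch: encode mono Float32 samples (range [-1, 1]) as a 16-bit PCM
// WAV file. Returns a DataView over the finished RIFF/WAVE buffer.
function encodeWav(samples, sampleRate) {
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);
  function writeString(offset, str) {
    for (var i = 0; i < str.length; i++) {
      view.setUint8(offset + i, str.charCodeAt(i));
    }
  }
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true); // RIFF chunk size
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                     // fmt chunk size
  view.setUint16(20, 1, true);                      // format: PCM
  view.setUint16(22, 1, true);                      // channels: mono
  view.setUint32(24, sampleRate, true);             // sample rate
  view.setUint32(28, sampleRate * 2, true);         // byte rate
  view.setUint16(32, 2, true);                      // block align
  view.setUint16(34, 16, true);                     // bits per sample
  writeString(36, 'data');
  view.setUint32(40, samples.length * 2, true);     // data chunk size
  for (var i = 0; i < samples.length; i++) {
    var s = Math.max(-1, Math.min(1, samples[i]));  // clamp to [-1, 1]
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
  return view;
}
```

In a recorder built on a ScriptProcessorNode, the Float32 samples would be collected in the node's onaudioprocess callback (e.g. from inputBuffer.getChannelData(0)) and concatenated before being passed to a function like this.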
Comment 173 Deleted
Comment 175 by Deleted ...@, Mar 1 2013
I am unable to get any page with web audio input to work in Chrome Version 27.0.1426.0 canary on 
Mac OS X 10.7.5  (e.g.

Web Audio Input flag is enabled, microphone has been allowed
No output in the Dev Tools console

The icon in the address bar is a video camera - not a microphone

Project Member Comment 176 by, Mar 11 2013
Labels: -Area-WebKit -Feature-WebRTC Cr-Content Cr-Internals-WebRTC
Comment 177 by Deleted ...@, Mar 13 2013
Please, i  really want  video and audio messages ^_____^
jones@: please try latest Canary. You should no longer have to set the flag since WebAudio input is supported by default. The demo you refer to works well for me on Mac 10.8.6.
Comment 179 by Deleted ...@, Mar 19 2013
Hey guys,

Awesome work. I just got curious and tested the API. I tried my best, but it was not working. After going through all the comments I finally realized that the issue was with matching both the input and the output settings. Could someone post the minimum requirements to run the API on the project homepage?

Project Member Comment 180 by, Apr 6 2013
Labels: -Cr-Content Cr-Blink
Comment 181 by Deleted ...@, Apr 11 2013
I have the latest Chrome Canary and audio capture does NOT work:

Google Chrome:	28.0.1473.0 (Official Build 193311) canary
Operating System:	Windows 7

I have the live microphone input, utilizing the analyser to visualize the sound, working fine in Chrome 24.  The same code shows the mic input levels at zero as of Chrome 26.  Can anyone verify if this is a Chrome bug?
Ok, this page and my previous code both work correctly for me in the current Chrome Beta (Version 27.0.1453.47 beta-m) on Windows 7.  However, I have a virtual machine running XP 64 and another physical machine running windows 7, that do not work on either example running the same latest Chrome beta (Version 27.0.1453.47 beta-m).  So, what could it be? (This is in reference to live webaudio input and the analyser node.) Also, is there somewhere else I should post this? Thanks.
Most probable reason for failure on Windows is that you don't use identical sample rates for input and output. It is currently a requirement. There are no guarantees that virtual machines are supported.
Any idea when the sample rate requirement will no longer apply?
Simply check settings for the default input device and output device. If they differ, live-audio input will not work on Windows.
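To make that check concrete, here is a hedged sketch (the helper name is made up): the output rate is what an AudioContext reports via its sampleRate property, while the input device's rate has to be read off the Windows sound control panel, so in script the check can only be advisory.

```javascript
// Sketch: given the OS-configured input rate (from the Windows sound
// control panel) and the output rate (what AudioContext.sampleRate
// reports), return a warning string when the same-rate requirement
// would block live-audio input on Windows, or null when it should work.
function checkSampleRates(inputRate, outputRate) {
  if (inputRate !== outputRate) {
    return 'Mismatch: input is ' + inputRate + ' Hz, output is ' +
           outputRate + ' Hz. Set both default devices to the same ' +
           'rate in the sound control panel.';
  }
  return null; // rates match; live-audio input should work
}
```

For example, checkSampleRates(48000, 44100) returns the mismatch message, while checkSampleRates(44100, 44100) returns null.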
Comment 187 by Deleted ...@, May 2 2013
I configured the same sample rates on my Windows 7 machine and I can capture audio!
What about Windows XP?
getUserMedia() is supported on Win XP but not live-audio input.
OK, so I still cannot get Win7 64-bit working with Chrome 27.0.1453.81 beta-m. I have gone to the Windows Control Panel and set the MIC to 44.1k, 16-bit, 2-channel and AUDIO playback to 44.1k, 16-bit. (I attached a screenshot of the settings.)

I was using the test app: 

Am I missing something?  
36.7 KB View Download
joe@: hard to tell; it should work given the settings you describe. We have recently made substantial modifications related to WebAudio on Windows. Could you please try the same demo using Chrome Canary on your machine. Thanks.
@henrika - I was finally able to get it to work. I needed to set Chrome content settings to the default mic. I will now play with settings. I will also get the latest "production" Chrome too.
I forgot to mention that only the default mic is supported. The restriction on sample rates (same in both directions) is removed in Canary. Hence, you can select any combination of devices and sample rates as long as they are default. 
@henrika - I downloaded the Chrome production build "Version 27.0.1453.81 m" -- it works fine. If the sample rate is set too high (192k, 16-bit) it fails; 44.1k, 48k and 96k at 16-bit all work. But that should be fine. Thanks - I did not try Canary. Hopefully in the future mismatched settings will still work; it may be tough to have end users fiddle around with the settings.
Canary will support many more combinations and will reach Dev, Beta and Stable after some time.
Thanks for working on this! I just tried Canary on Windows, and when the sample rates differ between the input and output device, sound does indeed work! However, it wobbles; it sounds like varispeed. Maybe related to:

bjorn.melinder@: you must have golden ears to detect the effect of matching the different sample rates ;-) What combination of sample rates are you using? Can you send an output recording? I am guessing you are a musician and are listening to a specific tone where you know your reference really well. Can you hear any effects using pure voice as well?
Henrik, if necessary we can tune the smoothing to reduce any potential artifacts there...
henrika@: Golden ears, haha, I don't know if golden ears are required for this :) Please listen to the attachment around 0:04. It's an electric guitar through a cheap onboard soundcard on a PC. Input is set to 48kHz, output 44.1kHz. When changing both to 44.1kHz, the vibrato goes away. 

After that, Canary audio input stopped working so I can't tell whether speech is affected. Will reboot machine and try again after that.
varispeed windows.wav
2.8 MB Download
Got it. Might need some tuning from our side there, as Chris states. In the meantime, I hope you are OK with the same-sample-rate mode on Windows.
Of course - a lot better than no sound at all, it's a big usability improvement. Thanks!
Project Member Comment 201 by, May 24 2013
Labels: -Cr-Internals-WebRTC Cr-Blink-WebRTC
Comment 202 by Deleted ...@, Jul 3 2013
We are waiting for this feature to be added ASAP!!
Using CEF 3.1650.1503 (Chromium 31.0.1650.16) and the latest .NET/Mono (CEF3), this is still not working. Any update, please? Thanks in advance!
whindes@: It is not clear to me what the issue is. Are you saying that getUserMedia fails for you on Chrome M31? If so, please add more details about what demo/application you are testing. Also add platform specifics. I am unable to assist on issues related to CEF.
henrika@:  Yes, here are some details -->

Not sure how to configure/set flags if necessary.

Using Source from:
 Update to CEF 3.1650.1503 (Chromium 31.0.1650.16) on 2013-11-03

Using Visual Studio 2010 32 bit - Windows 7

Testing CefGlue.Demo.WinForm using URL

function getUserMedia(dictionary, callback) {
  try {
    navigator.webkitGetUserMedia(dictionary, callback, error);
  } catch (e) {
    alert('webkitGetUserMedia threw exception: ' + e);
    /* webkitGetUserMedia threw exception: TypeError: Object #<Navigator> has no method 'webkitGetUserMedia' */
  }
}
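That TypeError simply means the embedding browser does not expose webkitGetUserMedia at all. As a sketch (the function name is illustrative, not a Chrome API), feature-detecting the prefixed variants avoids the exception and makes the failure explicit:

```javascript
// Sketch: look up whichever getUserMedia variant the browser exposes,
// instead of calling navigator.webkitGetUserMedia unconditionally.
// Returns the function bound to the navigator object, or null if the
// browser (or embedder, like CEF) does not support getUserMedia.
function findGetUserMedia(nav) {
  var fn = nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia;
  return fn ? fn.bind(nav) : null;
}

// Usage in a page (browser only):
//   var gum = findGetUserMedia(navigator);
//   if (!gum) {
//     alert('getUserMedia is not supported in this browser');
//   } else {
//     gum({audio: true}, gotStream, onError);
//   }
```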
Can you please try a regular Chrome and avoid mixing in CEF. This thread
deals with pure Chrome related issues.
Status: Verified
Using just Chrome 31.0.1650.16 with the page works fine; the visualizer reacts to microphone input, and the microphone input is looped and played back.

visualizer-live.html calls getUserMedia with audio only as a constraint:

function init() {
    getUserMedia({audio: true}, gotStream);
}

Hence the original issue (read the initial report) is fixed. I suggest creating new bugs if you still encounter problems; they will be more likely to get fixed as well. Golden rule: one bug/issue per bug report.

Marking this as verified.
Comment 208 by Deleted ...@, May 9 2014
Please add the kovunec in the toolbar
martyanovdenv@: I am unable to see how your comment is related to this issue (which is closed, BTW). If you want to report a problem, please open a new issue and file it with a suitable title.
Comment 212 by Deleted ...@, Jun 10 2014