Starred by 80 users

Issue metadata

Status: Assigned
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 1
Type: Bug-Regression


common AudioContext/getUserMedia pattern broken due to changes in autoplay policy

Project Member Reported by, Apr 23

Issue description

Chrome Version: 66.0.3359.106
OS: (e.g. Win10, MacOS 10.12, etc...)

What steps will reproduce the problem?
(1) go to
(2) observe the bars while speaking

What is the expected result?
They move.

What happens instead?
A console warning is shown:
The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page.
Props to QA for actually noticing this:

This breaks a number of websites, including google meet, that use AudioContext after getUserMedia to visualize the microphone level.

Adding the `--autoplay-policy=no-user-gesture-required` command-line flag reverts to the old behaviour.

Is blocking AudioContext required when there is an active MediaStream which suggests a somewhat high level of trust?
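For reference, the broken pattern looks roughly like this. A minimal sketch, not any particular site's code: the function names and the level calculation are illustrative. The point is that the `AudioContext` created after `getUserMedia` starts out suspended without a gesture, so the analyser never produces data.

```javascript
// Sketch of the common AudioContext/getUserMedia level-meter pattern.
// computeLevel is pure so it can be reasoned about in isolation.

// Average deviation from the 128 midpoint of 8-bit time-domain
// samples, normalised to 0..1 (0 = silence).
function computeLevel(timeDomainData) {
  let sum = 0;
  for (const sample of timeDomainData) {
    sum += Math.abs(sample - 128);
  }
  return sum / (timeDomainData.length * 128);
}

async function startMeter(onLevel) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  // Under the Chrome 66 policy this context starts 'suspended' when
  // there has been no user gesture, so the loop below reports silence.
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  source.connect(analyser);
  const data = new Uint8Array(analyser.fftSize);
  (function tick() {
    analyser.getByteTimeDomainData(data);
    onLevel(computeLevel(data));
    requestAnimationFrame(tick);
  })();
}
```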
Components: -Blink>GetUserMedia Blink>Media>Autoplay
Labels: -Pri-3 Pri-2
Status: Assigned (was: Untriaged)
Safari seems to have a policy to allow this if getUserMedia is on:
There are a bunch of sites impacted by this bug. The user has already given explicit permission to use the microphone, but we still can't access it unless they click on something on the page every time they visit the page.
Labels: -Pri-2 Pri-1
Labels: OS-Chrome
Labels: -Type-Bug Type-Bug-Regression
Status: WontFix (was: Assigned)
I can reproduce this on Chrome and it looks like we are blocking the AudioContext because there is no user gesture and Safari is allowing it because they allow AudioContext to play if getUserMedia is on.

The problem with the Safari approach is that they are simply allowing any AudioContext to make noise if getUserMedia is on. This is quite risky as it means that a website could potentially pipe any audio through that context.

Our autoplay policy has a number of ways to be allowed to autoplay which should be sufficient for WebRTC apps including:
 - Having a user gesture on the page. This is also passed through navigation so long as the page is on the same eTLD+1. Most WebRTC apps should be able to capture a user gesture here.
 - High Media Engagement. We have seen a number of WebRTC apps able to successfully achieve a high MEI score.
 - Apps, Extensions and Hosted Apps are all unaffected by the autoplay restriction.
 - URLs can be whitelisted for autoplay through Chrome managed policies.
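On the managed-policy point, a sketch of what that could look like: the `AutoplayAllowed` and `AutoplayWhitelist` Chrome enterprise policies (both available from Chrome 66) combined in the managed-policy JSON. The URL below is a placeholder, not a real deployment.

```json
{
  "AutoplayAllowed": false,
  "AutoplayWhitelist": ["https://meet.example.com"]
}
```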
Let's discuss this better before jumping to wontfix. "Enable legitimate uses of autoplay without complicated workarounds" was one of the design goals.
Status: Assigned (was: WontFix)
Agree with Huib.

Sorry to change the status back like this; I would like for affected parties to be in agreement before we mark the bug as fixed/wontfix or decide on the next steps.
Thanks for reopening.

FWIW, Jitsi Meet is suffering from this, and one would think its engagement
score is high enough, but that doesn't seem to be the case for a fresh user
profile, resulting in a craptacular user experience.

As for gestures, if you use a full URL which includes the room, the user is
dropped directly into the conference, so zero clicks / gestures are
involved; expecting one is a flawed assumption, as not every
conferencing service has a "hair check" screen.


Our recommendation here is to require a user interaction before playing. If these sites cannot autoplay because they don't have a gesture, they should start in a muted mode or display something to the user to acquire the gesture.
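That recommendation translates into something like the following sketch. The element and function names are illustrative; the mechanism is the real one: `AudioContext.resume()` succeeds when called from inside a user-gesture handler.

```javascript
// Sketch of the recommended workaround: stay "muted" until a user
// gesture, then resume the suspended AudioContext from inside the
// gesture handler. needsResume is pure so it can be tested alone.

function needsResume(state) {
  // AudioContext.state is 'suspended', 'running' or 'closed'.
  return state === 'suspended';
}

function resumeOnGesture(ctx, element) {
  element.addEventListener('click', () => {
    if (needsResume(ctx.state)) {
      // resume() returns a promise; it is allowed to succeed here
      // because we are inside a user-gesture handler.
      ctx.resume();
    }
  }, { once: true });
}
```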
The problem I have with our product ( is that we're just trying to grab the microphone to listen for a short high frequency audio sample. I'm not even playing audio and getting penalized for this. If there's another workaround, I'd certainly like to hear it (No pun intended :) 
beccahughes: unfortunately this was not communicated in public to the WebRTC ecosystem at all. And the recommendations you link do not mention WebRTC.

This change breaks a number of well-known sites, including:
* (uses it in a pre-call haircheck)
* jitsi meet (uses it during the call; see Saul's comment above)
* (uses it during the call; tokbox is one of the largest webrtc platform providers)
* -- google hosted sample application for how to use this; no updates
* (see
* (oddly not in 66 but only 67?!; uses this in the haircheck and in the call)

Note that we are talking about a use-case where the user has given permission to the microphone. This requires either a click on the permission dialog which should be counted as an interaction or a persistent permission which implies an even greater level of familiarity with the site.
First-time user interaction may include a button (see e.g. the behaviour of when you enter without permission) but if a persistent permission is detected this goes down a different path.
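The persistent-permission branch described above could be detected with the Permissions API, roughly like this. A sketch: the flow names are made up for illustration, and the `'microphone'` permission descriptor is supported in Chrome but not in every browser.

```javascript
// Sketch: choose a first-run flow based on whether a persistent
// microphone permission already exists. describeFlow is pure.

function describeFlow(permissionState) {
  // 'granted' implies a persistent permission, i.e. no dialog will be
  // shown; 'prompt' and 'denied' fall back to an explicit first click.
  return permissionState === 'granted' ? 'skip-haircheck' : 'show-haircheck';
}

async function chooseFlow() {
  const status = await navigator.permissions.query({ name: 'microphone' });
  return describeFlow(status.state);
}
```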

Note that the video elements used by WebRTC typically do not even have controls, which severely limits the user's ability to interact with them. Pausing them (as done in 824866) makes no sense as they are realtime.

When you say 
 - High Media Engagement. We have seen a number of WebRTC apps able to successfully achieve a high MEI score.
can you give some examples?

I would not mind making changes. However, this has demonstrably been flying under the radar and not been communicated properly and running into this as a regression when Chrome 66 is rolling out as stable is not a good way to get acceptance for this.
@fippo - In terms of communications we announced this back in September so that sites could get ready for the upcoming changes and provide feedback. We are unable to disclose MEI data for specific sites.
This is affecting all of our customers at TokBox. We have an API built on top of using the AudioContext to surface an "audioLevelUpdated" event. We also display the audio level to users when they are in audio-only mode. None of this is working.

You can see an example at
The audio level does not move in Chrome 66 but it did previously.

I agree with fippo that giving device permission should be enough for the AudioContext to work, as it is with Safari. This also caught us by surprise.

This is also affecting our customers at Twilio. We built an API on top of AudioContext surfacing a `volume` event with `input` and `output` values for a call:

Our examples repo with audio level indicators is found here, and has issues on M66:
> The problem with the Safari approach is that they are simply allowing any AudioContext to make noise if getUserMedia is on. This is quite risky as it means that a website could potentially pipe any audio through that context.

I don't agree that this is a problem. If the page has been granted permission to the microphone and camera, the page has the ability to record the user, as well as pipe that stream to pretty much anything, including broadcasting the user to millions of people if you're really nefarious. That the user might get audio playback is, quite frankly, the least of the user's worries at that point.

What I'm trying to say is that if the page is trusted enough by the user to be allowed to do that, then implicitly its MEI score should go through the roof, and subsequently allow playback of audio (though allowing analysis of audio with no audio output should be allowed anyway, imo; we just don't have a good system for that).
I talked to Webex today that's also experiencing problems with this, I've asked them to add their input on this bug. For applications like this it's very common that a calendar/email link brings you directly into a meeting without any user action on the web page.
This issue would simply go away if the getUserMedia user interactions and settings were respected in relation to autoplay. When you do a getUserMedia call to access camera or microphone, the user has to approve access to their device. If that interaction was taken into account for the autoplay decision, that would fix this issue.
We also have Chromebox for Meetings devices, where it is difficult to always require users to interact with the page for audio to work. Besides, the mute proposal won't work, because the unmute action is done via the speakerphone, which probably won't count as an interaction with the page.
Can't boxes set an enterprise policy to disable autoplay permission?
It would also be useful to have a synchronous way to tell whether a video can be played with sound or not.
In a web conference there may be participants who don't activate the mic. Everything is async anyway, and having to deal with an artificial limitation complicates things. For example, by the time we get an error that a video can't be played with sound, the stream may already be gone.
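Until a synchronous API exists, the closest signal is asynchronous: the promise returned by `HTMLMediaElement.play()` rejects with a `NotAllowedError` when playback is blocked by the policy. A sketch of classifying that failure; the category strings are made up for illustration.

```javascript
// Sketch: detect blocked playback via the play() promise.
// classifyPlayError is pure so it can be tested without a browser.

function classifyPlayError(err) {
  if (err && err.name === 'NotAllowedError') return 'blocked-by-policy';
  if (err && err.name === 'NotSupportedError') return 'unsupported-source';
  return 'other';
}

async function tryPlay(video) {
  try {
    await video.play();
    return 'playing';
  } catch (err) {
    // The rejection arrives asynchronously, so by this point the
    // stream may already be gone, as noted above.
    return classifyPlayError(err);
  }
}
```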

Comment 30 Deleted

Comment 31 Deleted

Labels: Hotlist-Enterprise ReleaseBlock-Stable
I'm flagging this bug as a blocker for some of the Google services. The Apps/Hangouts team may need help from Chrome team to find workarounds. [ b/78484818 ]

Comment 34 Deleted

Labels: -ReleaseBlock-Stable
#29: we are working with Apple and Mozilla on a new API to do this.

#33: royans@, autoplay restrictions are known to impact Google properties as much as other websites. We shouldn't block Chrome release based only on this. Happy to discuss what Hangouts can do offline. I'm not sure we should use the public bug tracker for internal products.

Comment 36 Deleted

Comment 37 Deleted

so great to get updates about this issue... from

(I assume most people here are aware of issue 840866, but mentioning it for completeness' sake)
As noted in issue 840866, the changes to autoplay behavior have been partially rolled back in Chrome 66. Details:

> We've updated Chrome 66 to temporarily remove the autoplay policy for the Web Audio API.
> This change does not affect most media playback on the web, as the autoplay policy will remain in effect for <video> and <audio>.
> We're doing this to give Web Audio API developers (e.g. gaming, audio applications, some RTC features) more time to update their code. The team here is working hard to improve things for users and developers, but in this case we didn't do a good job of communicating the impact of the new autoplay policy to developers using the Web Audio API.
> The policy will be re-applied to the Web Audio API in Chrome 70 (October). Developers should update their code based on the recommendations at:

From the WebRTC side, we are going to work on landing some of the improvements discussed in this thread, including We will try to communicate the status of this work regularly.
