Starred by 215 users


Issue metadata

Status: Assigned
EstimatedDays: ----
NextAction: ----
OS: Linux, Windows, Mac
Pri: 2
Type: Bug


Allow audio to autoplay, but optionally mute output

Reported by, May 8

Issue description

UserAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36

Steps to reproduce the problem:
Autoplay restrictions on the web have long been inconsistent and served only to impede legitimate use cases. Now that Chrome 66 has extended the restrictions to desktop, a lot of content is needlessly broken.

I've described this in detail in the following blog:

What is the expected behavior?
Allow audio playback on page load without any user interaction. Just mute the master output, and unmute it on the first interaction, or when whatever other heuristics deem it OK to play audio.

This still doesn't allow audio to be heard on page load. It does not relax the restrictions at all. It doesn't make it any easier to abuse audio playback. It just means there are no code changes necessary and existing web content keeps working.
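For illustration, here is a minimal sketch of how the proposed mute-then-unmute behavior can be approximated in application code today, assuming the page routes its graph through a single master GainNode (the names and event wiring here are illustrative, not part of any spec):

```javascript
// Sketch: start with the master gain muted, unmute on the first user gesture.
// The gate logic is kept pure so it can be tested independent of Web Audio.
function createMuteGate() {
  let unmuted = false;
  const callbacks = [];
  return {
    get unmuted() { return unmuted; },
    // Run cb when unmuted; immediately if we already are.
    onUnmute(cb) { unmuted ? cb() : callbacks.push(cb); },
    unmute() {
      if (unmuted) return;          // only fire callbacks once
      unmuted = true;
      callbacks.splice(0).forEach(cb => cb());
    },
  };
}

// Browser wiring (illustrative):
// const audioCtx = new AudioContext();
// const masterGain = audioCtx.createGain();
// masterGain.gain.value = 0;            // audio "plays" silently from load
// masterGain.connect(audioCtx.destination);
// const gate = createMuteGate();
// gate.onUnmute(() => { masterGain.gain.value = 1; });
// ['pointerup', 'keydown'].forEach(ev =>
//   document.addEventListener(ev, () => gate.unmute(), { once: true }));
```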

What went wrong?
Web content should get audio playback with automatic unmuting by default, and opt in to queuing audio until the first input event. Instead it is broken by default and has to opt in to getting any playback at all. This makes no sense and has caused a lot of unnecessary breakage on the web. It also makes it unnecessarily difficult to get audio playback to work at all.

Did this work before? N/A 

Does this work in other browsers? N/A

Chrome version: 66.0.3359.139  Channel: stable
OS Version: 10.0
Flash Version: 

Abusive content will just blare out audio at the first opportunity. They already can do so in that first user input event. All other restrictions only impede legitimate use cases. There is no reason to require legitimate developers to jump through hoops to get playback to work. The unmuting approach can already be implemented in JS land, but it requires extra complexity to implement a queuing system. If the browser does it, it's backwards compatible with the vast amount of existing web content, and removes the extra complexity from future JS developers.
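As a sketch of the queuing complexity mentioned above, a library today has to buffer play requests until the context is unlocked and then flush them (a minimal illustration, not any particular library's implementation):

```javascript
// Minimal sketch of the queuing a library needs under the current policy:
// buffer play requests until the context is unlocked, then flush in order.
class PlaybackQueue {
  constructor() {
    this.unlocked = false;
    this.pending = [];
  }
  // play(fn): fn performs the actual playback (e.g. source.start()).
  play(fn) {
    this.unlocked ? fn() : this.pending.push(fn);
  }
  // Call from a user-gesture handler once the AudioContext has resumed.
  unlock() {
    this.unlocked = true;
    this.pending.splice(0).forEach(fn => fn());
  }
}
```

If the browser queued (or silently played) audio itself, none of this bookkeeping would be needed in application code.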
Showing comments 39 - 138 of 138
Thanks for opening this ticket. I've been making interactive audio sites for five years and they've been doing just fine until this change. As was mentioned earlier in the thread, my site stopped working (even though it is driven by user gestures!) and I'm getting tons of emails about it.

I run a record label that specializes in interactive audio websites. Thankfully only one of them broke. I don't want to build a business on the premise that I need to constantly maintain a growing list of websites. It's *completely* unacceptable to assume that developers can just pop in and update every website they've ever built.

Apart from my obvious disappointment with this change, I want to throw it out there that adding a play button or waiting for a gesture is not an acceptable fix, and that Chrome should not be in the business of telling me how to design interactive websites. I want to have the flexibility to establish an experience however I want. I don't think that's an unreasonable request.
I have no stake in or personal connection to Soundslice, but it's one of the most innovative web applications I've ever seen, it's a real business with paying customers, and this change negatively impacts them.
Author of howler.js here. I can confirm that I'm seeing lots of reports of issues of various kinds due to this change. We are seeing issues in our own games as others have stated as well. The thing that makes it even worse is that there doesn't seem to be a way to detect that the audio is locked. The promise doesn't resolve or reject when calling AudioContext.resume() for example, which makes handling this within our library a huge issue.
That context.resume() doesn't resolve or reject would be a bug. The only reason I can think of for resume() not resolving successfully is if the audio device is not working.

It would be great if you could file a bug with a repro case for this.
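Until the promise behavior is clarified, one workaround for detecting a context stuck in the locked state is to race resume() against a timeout (a sketch; the timeout value is an arbitrary assumption):

```javascript
// Race ctx.resume() against a timeout to detect a context whose resume()
// promise never settles (the symptom described above).
function resumeOrTimeout(ctx, ms = 250) {
  return Promise.race([
    ctx.resume().then(() => 'running'),
    new Promise(resolve => setTimeout(() => resolve('locked'), ms)),
  ]);
}
```

A library could fall back to its own queuing when this resolves to 'locked'.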
As an educator who relies on the WebAudio API as an appealing aspect of my course, I've found that my students are now less excited about web application development, which for many of them is their first and only experience with coding. I would hate for this issue to be the obstacle to a career in coding for any of my students.

Comment 47 Deleted

Beyond audiocontext, I'm very concerned that User Gesture Required is being integrated into many of the new APIs being created.

There are a lot of use cases that this restriction disables, and while I'm in favor of the security and quality of life enhancements we're trying to realize by having User Gesture Required be this new default mode of the web, some space must be made for more advanced web pages to perform more complex actions at an at-will basis.

We're seeing a host of problems crop up as more and more of the web is moved behind this new gate that browsers have suddenly, rapidly started pushing on the web experience. For instance, users who use async code are having critical issues trying to maintain the functioning systems they once built:

The pendulum has been swinging further & further towards User Gesture Required, and I find it extremely ominous & terrifying that this has become so accepted, so dominant & so fast a driver of change for the web, in spite of having so little time to test it, so little sensitivity towards the breakage it's brought, & so little talk of finding explicit points where we can open things up again, so that not all systems require interactivity to function. It seems like this has been the default response to any form of API abuse (ex:, and it's severely crippling countless creative new uses that the web platform ought to be able to serve, and no longer can.

Something has to be done to permit web pages some of the autonomy they once enjoyed. User Gesture Required can not be permitted to cripple the web platform like it has been. The trail of damage reported in this bug is sad, but it's only the first signs of what we've already started to lose. Over time, it's the countless new things we could have created, that we no longer will be able to, that will vastly outweigh this sad sad bug report, if User Gesture Required is permitted to expand with no way to trust or negotiate to a higher level of autonomy for a website. The User Gesture Required Intervention needs an intervention, needs to create some carefully defined areas where this strict behavior can be relaxed, so that the web can remain competitive with other software platforms.
Luckily my web applications are all user-gesture based and it was straightforward to change my underlying sound logic to handle the resume. It's right here if it helps anyone else who stumbles on this thread:

These are my highest-trafficked sites, each with visitors in the low millions per month:

But, as you can see by popping open the console I still create the AudioContext on page load and get the audio warning that other sites receive.

Professionally I make the website for and I think this change also has implications for immersive web consumption. While not yet fully implemented in Chrome, there are discussions about being able to go from link to link in an HMD or other presented context. For many immersive experiences, sound is not only integral but expected on initialization, as a way to ground users and help them get their bearings in the virtual space. Forcing a user gesture strongly inhibits a web application's ability to perform to the high standards expected of native applications.

Anyway, it's great to see such awesome use-cases of the Web Audio API! Aside from this audio change I haven't touched the Patatap client-side code in 5 years: a great testament to the web's stability.
Yeah, the warning comes when the context is created. That's probably not the best place, because it's not really an error to create a context. But determining when you've actually started audio is quite a bit more difficult to do reliably without manually instrumenting everywhere it can happen.

Comment 51 Deleted

This has completely broken my open-source audio tools:

It has significantly degraded the experience on some of my open-source toys and games:

I have other games currently in development this has affected.

I want to second Dave's comment above that API changes should be all or nothing. In the US, 83% of voters support net neutrality. The same reasoning applies here as there.
I just want to point out that, without auto-starting web audio, it would be impossible to do a complete HTML5 remake of the "Real Ultimate Power" website - or any other modern rendition of that 90s <bgsound src="foo.mid"> aesthetic. A tragic loss to web culture indeed. This piece of 90s web art is broken by the change.
Chrome Product Manager for desktop here. Thank you for posting these examples, they're super helpful to the team. We didn't intend to break all this awesome existing content that relies on WebAudio, and we are investigating paths forward now. More updates will follow on this bug.
Maybe you didn't intend to break everything, but you didn't listen either when I reported the impending massive breakage 6 months ago. It was as easy as activating the relevant #autoplay-policy flag in Chrome and trying any web audio demo (95% of which were already broken).
See also:
It does feel like Google's been ignoring developers on this one. Look at issue 280161 and see me trying to point out some of this nearly five years ago and getting nowhere. This could have been avoided if the original mobile restrictions had made any sense before they got transferred to desktop.
These are the three issues I have with the changes:

1) Although it was only intended to deal with autoplay, this update also broke a tremendous amount of user-initiated audio. Infinite Jukebox is a great example: . The site is not programmed to make a single sound until the user voluntarily hits "play", but this update still broke it all the same.

2) No interface was provided for the user to manually allow audio that's been flagged. The site simply breaks. Providing a notification where the user can say "Yes" or "No" or "Just Once" to a website trying to play sound would be vastly preferable. This solution would still block abusive cases of autoplay, while fixing benign audio use.

3) The whitelist system implicitly acknowledges that there can be use cases where users reasonably expect and accept autoplay, such as a media site like YouTube. But the criteria for whitelisting are based on popularity rather than design, essentially creating a two-tiered internet for audio. How is it appropriate for small websites to function differently and be held to more stringent coding practices than large websites?
This has broken the audio component for the text adventure games that have it hosted at 

Comment 63 Deleted

Playback on my website at is broken. Even after clicking for songs, I get an error "Uncaught (in promise) DOMException: play() failed because the user didn't interact with the document first.". The error message is false, because playback doesn't work even after user interaction.

Playback at fails with the same error.
This website was broken by the recent changes:
Google's search page timer functionality also breaks due to this change:
Almost every PICO-8 game published on the web has been broken by the recent changes: e.g.

Comment 68 Deleted

"Can you please provide more concrete links to sites that are affected"

Here's my site:
An example from the site:
It's all free html5 toys, some were made with Emscripten+SDL1 and some were made with a Nim+TypeScript+WebGL+Tone.JS stack. *All* of the pages on the site are broken by this change (if you followed a link from outside the site to get there, the page is permanently silent, the sound never starts).

There are some things about this site that I think are interesting in context of this change.

1. Most pages on the site are noninteractive animations

I'm basically making WebGL movies. Most of the pages have a DOM that consists entirely of one canvas element, and don't have *any* event code. Some of the pages are extremely short. A couple of them would have their code potentially nearly doubled by the requirement to add interactivity just to generate an event to enable WebAudio permissions.

2. I tend to link the pages directly from off-site

If you visit the links above in Chrome 66, you'll find something interesting. If you visit first, then click on one of the animations, you get audio. If you visit the moiresea animation directly, no audio. The way I mostly use my site is I link the pages directly on Twitter, so this impacts my main use case. The way it worked before Chrome 66 is I have a Twitter card for each page, previewing the animation; if the user clicks on the card, they are taken to my site and animation starts immediately. In Chrome's new world, I'm expected to set things up where they click my twitter card, then click a second popup DOM element to get audio, then they watch the animation. That is kind of awkward.

You might say that the user might not want audio, so it's good that I have to throw up this DOM popup to confirm they want audio before showing the animation. But there's something interesting here. Let's say that instead of posting my animation as WebGL, I recorded it somehow and posted a 5 minute video to YouTube. In this case, if the user clicked from Twitter to the YouTube video I had posted, *the video would start playing immediately, whether they wanted sound or not*. In other words, this policy change by Google is essentially punishing me for hosting my movies on my own website instead of on YouTube. That is very weird.

3. The first five seconds of sound are important

Again, these are WebGL movies. Several of the animations on the site begin with sounds in the first few seconds which are interesting and are not repeated. This means a solution where tabs start muted and the user takes action to unmute them won't work great for me. One way or another I need a solution where the animation itself does not start until the audio permission issue is resolved.

Based on all this, for my personal site's use case, the ideal solution would be a synchronous popup like alert() or prompt() that allows me to request audio permissions. This would mean I could detect a Chrome user, halt the page load to display the synchronous popup, have (or be correctly denied) WebAudio permissions by the time the AudioContext is created, and I would not have to rewrite my actual app code. If in this process I could somehow request a persistent site permission, so it still applies the next time someone visits the page, that would be even better.

One more note. As Google seeks a solution to this, it would be ideal if they could work through the web standards process. Right now you have to add browser-specific workarounds for both Safari and Chrome in order to get sound in a fully standards-compliant WebAudio app, and Firefox is likely to add a third path at some point. The standards process exists to avoid this exact situation. Google should work with Apple and Mozilla to modify the standard so it anticipates the existence of autoplay restrictions and gives app developers a set of steps we can follow that allows us to play sound on all of these browsers with one code path.
This broke our recently released social WebVR product
It also broke the automated bots we use via puppeteer to test the product.
I had to update ALL the WebAudio examples from the MOOC HTML5 Apps and Games at, which has several thousand students. I got tons of messages in the forum complaining that the examples stopped working. I also use WebAudio in multiple examples of the "Multimedia" chapter of another MOOC at (HTML5 coding essentials). For my research purposes I also developed several WebAudio apps such as or or that stopped working. I had to update a multitrack WebAudio player at too...
YouTube iframe API doesn't provide a way to tell that autoplay failed.
If you embed a YouTube video on a website and use the JavaScript API to play it, you have to play it muted, because there is no way to tell whether user interaction is needed.
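The muted-autoplay workaround described above can be sketched like this, assuming the IFrame Player API (`VIDEO_ID` and the element wiring are placeholders; note this still gives no signal about whether unmuted playback would have been allowed):

```javascript
// Player configuration: request muted autoplay so playback isn't blocked.
const playerConfig = {
  videoId: 'VIDEO_ID',                  // placeholder video id
  playerVars: { autoplay: 1, mute: 1 }, // start muted so autoplay succeeds
};

// Browser wiring (illustrative; YT is the IFrame API global):
// const player = new YT.Player('player', { ...playerConfig, events: {
//   onReady: () => document.addEventListener(
//     'click', () => player.unMute(), { once: true }),  // unmute on first gesture
// }});
```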
There is a separate issue tracking WebRTC-impacted pages here: Several large WebRTC pages are affected, amongst them Google Meet, Hangouts and Chromebox for meetings. Services based on Tokbox, Jitsi and Twilio are also affected.

Our own service Confrere ( has problems playing interface sounds, such as the incoming-call sound, a new user entering the waiting room, etc. I suspect this is true for other applications as well, especially chat applications.

With this change Google are basically saying "forget about apps on the web". If we can't even have simple things such as interface sounds to help give notice on events changing in the background, then what's the point?
There is a separate blog post detailing impact on WebRTC services here:
My tool was broken by this. It's worth noting that the site never even plays sound, only uses an AudioContext to show a volume meter so the user can check their microphone levels.
I think, given that the product manager ( claims that they didn't intend to break all these projects - and given that such a huge range of sites is affected, including many of Google's own pages - it would be appropriate to recall this version of Chrome while a more permanent solution is discussed. 

My reasoning:
1) On the one hand, the clear lesson from this debacle should be that developers need to be consulted properly (rather than just informed via an edit to a single blog post) before shipping a change to Chrome that breaks websites.

2) But 1) implies you can't just rapidly ship a fix, since you'll risk doing even more harm if you do not consult people properly about the fix.

3) Yet, you can't possibly leave this bug in place for much longer. You are doing real harm every day to people's businesses, including Google, harm to students and harm to artists and game designers, not to mention the giant pile of cultural work that is mangled by this change. Forcing people to wait while you decide what to do would be wildly unethical.

4) The only ethical option right now is to revert the change. You should do that immediately.
While I don't have a link to share, I run a small network (a few thousand) of kiosk devices that I can't upgrade to the latest Chrome or our company's app will cease to work. We've had to hold back Chrome in the past due to regressions in Linux video drivers, but this is a new precedent.
The interactive documentary "Criers of Medellín" also has its intro broken.
The web sample editor is broken too.
Current Condition, a non-interactive art piece, loses sound in the current version of Chrome.

Momo Pixel's Hair Nah — a web-based game featured in Vice, Rolling Stone, The Cut, Teen Vogue, Fast Company, Slate, CNN, Mashable, Newsweek, Polygon, Fader, Essence, Mic, and Allure, among many other publications — no longer has audio in Chrome.
Any site that used audio as complementary to the intentionally-interactive experience. For example, Maine Office of Tourism make the "Maine Quarterly," which are interactive editorials/experiences. Here's the most recent one:

The web is something people experience in more ways than just visiting crappy media outlets that autoplay video, and this change makes little logical sense regardless.
Also broke :

Sound does not play on homepage

Comment 89 Deleted

This has broken notification alerts for the chat tool Flowdock:

Adding a UI element to engage with to enable sound doesn't make sense since the notification alert is based on other people's actions, not the current client's. In other words, there is no "play" button.

This is crippling for any chat tool that isn't on the whitelist of sites that get autoplay enabled by default.

This musically-augmented article from Google which is featured at the bottom of my New Tab page today is partly broken by the current autoplay policy:

I say "partly" because the sound is only blocked as long as I scroll through the page with the wheel on my mouse (I presume that's also the case for scrolling through with a touchpad, though I don't have one on the desktop machine I'm currently sitting at).

However it appears that a key press or a mouse click anywhere on the page is enough to allow subsequent playback to start, and that includes all of the ways I more commonly scroll through webpages (especially long ones like this) - with the keyboard, the scrollbar, or with a middle-button 'autoscroll' that is provided by an extension that I installed.

The most common place I encounter unwanted autoplaying video is on news websites. How long before the inconsiderate ones adapt to use the same trick as this Google article and retry their audio-starting attempts on every click or key press, thus catching a proportion of users like me who were only trying to scroll down the page anyway? For me, the situation is not much different from where we started, except that we're now all poorer for the good stuff being caught in this crossfire.
I'm not affiliated with them, but I was looking at and they have a background autoplaying video that doesn't load under chrome. It plays no sound, but this change broke the site nonetheless. 
You really should change the behavior. I own a site with lots of HTML5 games. All of the old games are broken, especially where developers will not fix them anymore, which is the case about 85% of the time. You should change the audio policy to work for HTML5 games. First you kill Flash, then you lower the quality of HTML5 games with this. How is a site supposed to compete with the games in app stores nowadays? Change the behavior so that the sound starts when a user interacts with the page afterwards, instead of killing it completely. Mute it to 0 or so, and if a user interacts (clicks) within the game, the sound comes back. Or something similar. But it can't be that thousands of games need updates and fixes because of your changes. Change it in a way that they don't have to be re-coded at all!

It's really unsatisfying. First you kill Flash, then you use a standard which breaks the user experience of the games. What happens if you kill a JavaScript function in the future which is used by many HTML5 game frameworks? Will those games be broken then? Very bad company policy. More and more unsatisfied with Chrome and Google.
One more thing: almost every game has a mute/unmute button in the game itself, but you don't even react to that. It would be fine if audio resumed after you clicked it twice or something like that. But the behavior is: if you didn't click anything before the audio loaded, you can never resume it by simply clicking.
> Can you please provide more concrete links to sites that are affected by the recent audio automuting policy change? Thanks.

Although this is just demo content, it's representative of games use-cases:

the whole experience is muted.
Guys, it's been a week. Can you please provide some kind of update on how and when you're planning to address this problem? 
Seconding the call for an update. I'm really hoping that breaking critical components of a huge swathe of web art and history is not going to stand, as I'm reasonably sure that wasn't the team's intent.
My recommendation would be to allow the user to click the speaker icon in the tab (which is crossed out by default) then the audio resumes when they click it.
Another case that may have slipped under the radar: gamepad controlled games never have a user gesture, since they rely on a polling API rather than firing events. This means gamepad-controlled games are now permanently muted even when the user interacts with them via the gamepad. You have to reach for the keyboard and press a key to unmute it, making it an awkward experience to play web games via gamepad.

I also filed this over 3 years ago for gamepad-controlled Android games in issue 454849, which still does not appear to be resolved. Now it applies to desktop too. It appears the gamepadconnected event may count as a user gesture in future, but I have never seen anybody even consider this point, and it's another event that must be specially coded in to ensure audio playback is unmuted, so it seems a particularly easy one to miss.
Further to my previous comment, it appears these are the events we should attach to in order to unmute audio at the first opportunity:

pointerup, touchend, click, keydown, gamepadconnected

I am reasonably confident there are more, but it's not clear which they are. I have not seen any documentation or guidance attempting to answer this question. The list also has changed over time, e.g. touchstart used to work, but no longer does; it only works in touchend now. This confusion over when exactly web pages have permission to unmute audio deepens the problem by making it harder to completely work around, and easy to miss cases such as the proposed gamepadconnected event. Having just one event to listen to would be more reasonable (but still break a ton of existing content), but I think having to listen for an undocumented list of magic events is an unreasonable requirement and further demonstrates this hasn't been well thought out.
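Assuming the event list above is correct (it is undocumented, so treat it as a best guess), wiring up a one-shot unlock handler looks roughly like this; the event target is passed in rather than hard-coded to `document` so the helper can be tested in isolation:

```javascript
// Attach one-shot listeners for every gesture event believed to unlock audio.
// `target` is injectable (document or window in a real page).
function attachUnlockListeners(target, onFirstGesture) {
  // Undocumented list gathered from observed behavior; may change over time.
  const events = ['pointerup', 'touchend', 'click', 'keydown', 'gamepadconnected'];
  let fired = false;
  const handler = () => {
    if (fired) return;
    fired = true;
    events.forEach(ev => target.removeEventListener(ev, handler));
    onFirstGesture();  // e.g. audioCtx.resume() and unmute the master gain
  };
  events.forEach(ev => target.addEventListener(ev, handler));
}

// Usage in a page (illustrative):
// attachUnlockListeners(document, () => audioCtx.resume());
```

The fact that this list has to be maintained by observation, and changed between releases (touchstart vs. touchend), is exactly the problem described above.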
According to, there is a new user gesture rule that counts when "any connected gamepad has a button more than 75% pressed" which can only be checked by regular polling. I think this shows how bizarre and complicated the rules around this can get!
Waiting for action from Google.

This affects my project: educational interactive content for schools, using HTML5 canvas and JS.
Thank you everyone for the examples, they were helpful to our investigation.

We've updated Chrome 66 to temporarily remove the autoplay policy for the Web Audio API. This change does not affect most media playback on the web, as the autoplay policy will remain in effect for <video> and <audio>.

We’re doing this to give Web Audio API developers (e.g. gaming, audio applications, some RTC features) more time to update their code. The team here is working hard to improve things for users and developers, but in this case we didn’t do a good job of communicating the impact of the new autoplay policy to developers using the Web Audio API.

The policy will be re-applied to the Web Audio API in Chrome 70 (October). Developers should update their code based on the recommendations at:

This report was originally filed with a user interface suggestion for controlling autoplay. As others have pointed out, this is a non-trivial user interface challenge with a lot of nuances. We are still exploring options to enable great audio experiences for users, and we will post more detailed thoughts on that topic here later.
I appreciate that the Chrome team is taking this issue seriously and taking some action in response to the community's reaction to this change.

But simply delaying the enacting of this policy doesn't solve any of the major concerns that have been raised.

Come October, any existing software which utilizes sound and which is no longer maintained, or cannot be maintained, will be broken.

Additionally, these changes are not in the spirit of a free and open web, as Google controls the formula which decides which sites will be affected and which will not.

The primary job of a web browser is to support web standards. As it stands, Chrome is changing itself to *not* support web standards across certain blurry and arbitrary lines.

I agree that what Chrome is trying to do will be welcomed by some subset of users, but whether this feature -- which is going to break some of the most creative, interesting, fun and exciting parts of the web -- is enabled should be a choice a user makes knowing the drawbacks and benefits.

I would suggest *not* enabling this policy by default, and allowing a user to enable it by choice in Chrome's settings.

If Google is worried about what the adoption rate would be (since clearly users enabling this feature would help Google financially) I would suggest having some introduction to the feature upon loading up Chrome 70 (or whichever version) for the first time that says, "Welcome, Chrome has a new feature which will block auto-playing videos and sound. It may also impact legacy websites. Would you like to enable the feature?" Or something along those lines.
Either make it silent until the first user interaction 
OR show a prompt explaining the user what's going on.

Keeping it the way it is is just nonsense and you guys know it.

And all websites must follow the same rule. 
ALL of them, otherwise it's not a standard, right?
I also appreciate the revert, that was definitely the decent thing to do in the short term.

Unfortunately, the great majority of existing work will not be updated by October, or ever, and so we still face the effective cultural erasure of those works in October. You guys definitely have the power to break everyone's work, should you wish to exercise that power, but you do not have the power to make people add workarounds to code that they are not able to alter (for all the various reasons that have been given here). Nobody has that power.

"We are still exploring options to enable great audio experiences for users" does not read as a promise to do the right thing. If you are sincere in your claim that the side effects of the policy were unintended and unwanted, you should commit - in clear, straightforward language - to finding other alternatives which do not break vast swathes of cultural work that was developed and distributed on the open web.

Comment 107 Deleted

Whatever solution is arrived at for the October update, I strongly believe that it MUST:
-Provide the user some indication that the site is attempting audio playback
-Provide the user with the option to temporarily or permanently enable audio on that website
-Discard the double-standard media whitelist system where popular sites get to play by different rules

Otherwise, the pending update in October will once again irreversibly break well-behaved audio sites in a way that the end user will be unable to correct.

The fact is, most sites affected will not be updated on Google's say-so, and perhaps will never be updated at all. This may be because they're very old projects, or because the creators weren't aware of the change, or because those creators don't have the know-how to implement the needed fix. As such, you MUST give the user a way to override the policy and continue to enjoy "non-compliant" audio sites.

I'm glad you decided to roll back the WebAudio change. I think that given the primary target of the change was autoplaying video, you can afford to delay any changes to WebAudio while you seek the best solution.

It is dismaying, however, that if I am reading the comment above right, your current plan is to reinstate the Chrome 66 policy rather than seeking a less disruptive one. I believe Chrome *could* find a policy which accommodates developers while still protecting the principle that users should explicitly authorize websites to play sound. The Chrome 66 policy, on the other hand, nearly seems designed to maximize disruption to both future and legacy software. Please reconsider. The delay you have announced is a great opportunity to get things right this time.

Above you say: "Developers should update their code based on the recommendations at:" Honestly, in my case I am not very likely to do this. I am much more likely to alter my page to redirect traffic to a warning page explaining that due to changes to Chrome my games require Firefox. This is because a browser-gate/redirect can be done by <script>-including one file, and accommodating the Chrome 66 policy means bespoke patching and recompiling seven separate web apps.

Fundamentally, delay or no, Chrome has decided they're going to implement AudioContext in a nonstandard way, and is now requiring developers to contort their architecture in order to get any sound at all. That is still not reasonable.
One more thing, you point us back to the page, but this page is *still* wildly inadequate:

- The documentation treats <audio> elements and WebAudio as strictly separate concerns. This is not really the case; they can interact. For clarity, you should explicitly document the behavior of audio objects (e.g. new Audio()) and createMediaElementSource with respect to autoplay.

- You should explicitly document what conditions enable use of WebAudio. The document says "user interacts with the page (e.g., user clicked a button)" and "the document received a user gesture". This is vague. Give us a list of qualifying events. Also, does "alert()"/"prompt()" count as interaction? If not, why not?

- The document near the top says: "Autoplay with sound is allowed if: User has interacted with the domain (click, tap, etc.)." What does this mean? In my testing with Chrome 66, it appears to mean that if you click a link from one page on a site to another, autoplay permission is granted until the tab closes, and is then lost on the next visit. This kind of sucks as a policy (just because I clicked a link on doesn't mean I want CNN to play a video), but you need to clearly document it. Does this magic work with anchor links? Does it work with javascript: links? If a click runs JavaScript that calls location.reload(), do I keep autoplay permission when the page reloads?

...also I have to ask, at some point, are you going to proofread this page? Bits like "you’ll have to call resume() later when user interacts with the page" and "only when user interacts wit the page" are not valid grammar. This is a really minor point, but it does not make me feel confident that the Chrome policy is a product of carefully-considered thought when the public messaging about it contains glaring spelling errors.

I continue to think that Chrome should not be making this change, but *if* you are going to make a complex change like this, it is reasonable to expect clear, useful documentation and developer tools. (Why can't I either query or change "autoplay permission" status from the Chrome Developer Tools?)
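To illustrate the first documentation gap above: the interaction between media elements and WebAudio is just a few lines of code, which is exactly why its autoplay behaviour needs to be spelled out. A sketch (the wrapper function is mine; createMediaElementSource and AudioContext exist only in browsers):

```javascript
// Route an <audio> element through a Web Audio graph. Once connected this
// way, the element's output passes through the AudioContext, so the autoplay
// policies for the element and for the context can interact.
function routeThroughWebAudio(ctx, mediaElement) {
  const source = ctx.createMediaElementSource(mediaElement);
  source.connect(ctx.destination);
  return source;
}

// In a browser this would be used roughly as:
//   const ctx = new AudioContext();
//   const el = new Audio('track.mp3');
//   routeThroughWebAudio(ctx, el);
//   el.play();
```

Which policy wins in that situation (the element's, the context's, or both) is precisely what the documentation should state.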
re #103:
Thanks for the relatively prompt revert. It helps a lot and hopefully a small silver lining here is that the temporary breakage alerted some developers to this change that would have otherwise not known about it until the breakage became permanent.

On to the negative part:
You state correctly that UI for controlling autoplay is "a non-trivial user-interface challenge with a lot of nuances", and say that you are still exploring options. Given that, the issue would still clearly seem to be unresolved, yet this revert is "temporary" and you have already determined when WebAudio will break again: in October, with release 70. If the issue is not resolved, why is the revert not indefinite until the UX issues are worked out?

We all know and believe that the people behind the Blink audio stack, and Chrome as a product overall, are working hard to improve things for users and developers. We appreciate your apology for not communicating well about the new autoplay policy. However, for that apology to mean anything and for the intent of the team to come through, you need to improve on that, and right now the essence of the team's communication comes across as 'oopsies! we'll break it in october instead', which can't possibly be your intent?

At present this feels like a repeat of when Flash and Unity (separate events) were each shot in the head - for reasons well understood by everyone, but at dates set via some opaque process internally involving unclear stakeholders, and the dates were set in stone by the time it was possible for any end-user or even Blink developer outside of the inner sanctum to do anything about it. In both cases no good alternative was truly ready to replace the deprecated runtimes, and right now there is still no good solution ready for the problems introduced by this autoplay policy change. Is there at least a plan in place to introduce a comprehensive solution by October? If not, will the autoplay policy be deferred past October if the problem is not solved?

Many elements of the autoplay changes rolled out in 66 are still confusing and strange even after the revert, so if all of those are left intact when Web Audio breaks again in 70, it will still be a mess even if many games/apps have been updated. The MEI system is deeply confusing, the hard-coded whitelist feels unfair and arbitrary, and the actual process to debug failures around autoplay is a nightmare. It's confusing for both end-users and developers. Temporarily reverting the change is helpful but those issues need to be addressed by the time you break autoplay again as well.

Ultimately, this change needs a postmortem process, with the result disclosed (at least partially) to the public so that end-users and developers understand how this happened and know the general details of how it will be addressed in the future, because it's not the first time a project management failure of this magnitude has occurred in Chrome (or in Chrome's audio support, in particular).
Can we please get some dialog from the Chrome team?

#103 above says "we reverted but will reapply exactly as is"

Why are none of the alternatives considered? #1 above points out that you could solve the entire thing by just muting audio until the first user gesture. That would mean no need for most apps to do anything or change anything.

Why is that not a possibility instead of just reenabling the restrictions in a few months? That seems like a great option as existing content mostly just works.

If that's not good enough, why not a prompt that allows the user to enable autoplay for the current site?
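For what it's worth, the mute-until-first-gesture behaviour proposed in #1 can already be approximated in userland, which is part of why it seems odd that the browser won't do it itself. A minimal sketch (the helper name and event list are my assumptions, not an official API; in a browser, target would be document):

```javascript
// Resume a suspended AudioContext on the first qualifying user gesture,
// then remove the listeners so this only ever runs once.
function unlockOnFirstGesture(ctx, target,
    events = ['click', 'keydown', 'touchstart']) {
  const unlock = () => {
    if (ctx.state === 'suspended') ctx.resume();
    for (const type of events) target.removeEventListener(type, unlock);
  };
  for (const type of events) target.addEventListener(type, unlock);
}
```

The complaint in this thread is precisely that every site must now ship some variant of this boilerplate (called as, e.g., unlockOnFirstGesture(new AudioContext(), document)) instead of the browser resuming the context on its own.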

Comment 113 Deleted


Comment 115 Deleted

Comment 116 Deleted

Comment 117 Deleted

Also, we've mainly been discussing how the implementation should have been done; however, I've come across a related issue that arises from whitelisting some websites so that they don't have to deal with this restriction at all. This runs completely contrary to net neutrality.

Comment 119 Deleted

#103 is not proposing any fix or solution, except to delay the issue by a few months.

I agree with comment #112 — can we please hear from the Chrome development team, and open a dialogue to work toward a better solution? Currently it feels like the issue is being triaged by a PR/marketing team with no care for the scope of the problem.
I appreciate that you undid the change once you realized the damage it was causing, but it's disappointing that the solution to this "non-trivial user-interface challenge with a lot of nuances" still seems to be: thousands of developers who have been using the WebAudio API spec for years should change their code. I hope you really consider alternative solutions that don't break the web.

Here's a great section on the Google Developers blog itself about why you shouldn't break the web:
And that's from a situation where the developers whose code broke actually did something non-standard.

As it stands now, if someone develops a site *today* simply following the WebAudio API specs, Chrome will probably still break their site in October, unless they did one of two things:

1. Got lucky and happened to wire up the start events in a very specific way.
2. Got lucky and managed to find and correctly interpret a document that seems to only exist as a subsection of a 2017 blog post and is missing a lot of details. (e.g. what exactly does Chrome consider a "user gesture"? I read the page up and down and couldn't find a clear definition).

It's great that you acknowledge that you haven't communicated the policy well thus far, but you still haven't, so if you really must go forward with this web-breaking implementation (you shouldn't), please do take some steps to communicate this change to the wider development community that go beyond an edit to an old blog post.

johnpallett@: This is totally bananas. You've not solved any of the problems, or even suggested how they might be solved. You also misunderstood my original report. I did not ask for any UI changes. I appreciate the revert, but this only provides some extra time until you do exactly the same thing, with largely the same consequences.

Here is a list of problems with what's happened and still aren't solved:

Problem #1: there is no point requiring code changes to unmute AudioContext. The browser could automatically unmute in the first user gesture. This does not enable any abuse and would probably have avoided the majority of the breakage. I explained this in my blog post that I linked to in the original report. I appreciate that this is a long and potentially confusing thread, so to emphasise this approach that you appear to have misunderstood and discuss it independently, I've filed issue 843555.

Problem #2: there is an undocumented, non-obvious, changing list of magic events that you need to listen to in order to reliably unmute audio. I've posted a suggestion for a "usergesture" event here to solve this:

Problem #3: it permanently requires all web apps using audio to implement a queuing system in case they try to play audio before the first user gesture. Combined with #2, this significantly raises the complexity required to develop reliable audio playback on the web, even when building new content with the restrictions in mind.

Problem #4: some input types, e.g. gamepad, don't yet count as user gestures. These can never be used to unmute audio. See issue 454849.

Problem #5: allowing autoplay for the top 1000 sites according to Media Engagement Index is an arbitrary limit. If the web doubles in size, will you double it to the top 2000 sites? Probably not. It unfairly advantages larger sites - and Google itself - and disadvantages smaller independent publishers. The fact there is an exception for the top N sites appears to imply there is value in allowing autoplay, or that the changes are indeed expected (or even intended?) to break websites and Google have decided to minimise this by only breaking smaller sites. This would be a very worrying attitude.

Problem #6: rather than trying to solve the above problems, Google is just going to wait a bit longer, then cause them again. It does not appear that Google is taking this feedback seriously yet. Google has a track record of ignoring other reports about this (e.g. issue 280161, issue 454849), has just misunderstood the original proposal, and still does not appear to be taking it seriously. When everything gets broken again later this year, will we have to go through this all again? Or will someone actually pay attention?

Problem #7: the fact UI changes are non-trivial has been brought up apparently as an excuse to avoid changing this. If it does require UI changes, shouldn't Google take the time to get this right before making the change, rather than breaking websites and telling web developers to deal with it? Is it that nobody can be bothered to do the necessary work so you're breaking websites as a shortcut? Has Google become that irresponsible with web compatibility now?
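The queuing system described in Problem #3 might look like the following in userland. This is a rough sketch under the assumption that the app funnels all playback through one helper; GatedPlayer and its method names are mine, not any proposed API:

```javascript
// Buffer play requests issued before the first user gesture, then flush
// them once the context is allowed to run. Purely illustrative.
class GatedPlayer {
  constructor(ctx) {
    this.ctx = ctx;       // an AudioContext (or a stand-in for testing)
    this.pending = [];    // play callbacks queued before the first gesture
    this.unlocked = false;
  }
  play(startFn) {
    if (this.unlocked) {
      startFn(this.ctx);            // allowed: start immediately
    } else {
      this.pending.push(startFn);   // blocked: queue until a gesture arrives
    }
  }
  onFirstGesture() {
    this.unlocked = true;
    if (this.ctx.state === 'suspended') this.ctx.resume();
    for (const fn of this.pending.splice(0)) fn(this.ctx);
  }
}
```

This is exactly the kind of permanent boilerplate every audio-using page now has to carry.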

I invite a Chrome product manager to separately address each of the seven problems listed above.
I just wanted to add, because the rollback was not applied to <audio> tags, many WebAudio projects are still broken. Using <audio> is a standard way to stream audio tracks (MP3, OGG, etc) with WebAudio, so this rollback has only fixed a fraction of all sites using WebAudio.

Some examples of my own WebAudio projects which are still broken after the rollback:!/experience
Anyone have some data that substantiates the theory that users want muted videos to autoplay by default (rather than, say, on a per-domain basis like cookie retention and "Clear cookies on exit" settings)?

It's entirely anecdotal (that's why I'm asking for data…), but everyone I know can't stand muted autoplay. Google didn't provide any reasoning or substantiation for why "Muted autoplay is always allowed". Advertisers and consumer-facing sites abuse that privilege by autoplaying muted, unrelated videos.

Clearly some users want autoplay for some domains. My question is whether at least a meaningful percentage of users would prefer to disable autoplay entirely, even for muted videos, at least for some domains.

Related discussion:
We still have issues with our ebook application, which uses some basic game templates with audio :-( In 18 of our 34 templates the audio fails to autoplay and then disables some of the buttons. We have developers looking at our code, but we would ask for the audio changes to be rolled back too, please.
Would it be possible to at least inform the user what is happening when autoplay audio is blocked, and give the user the ability to override the block? 
Safari on iOS seems to have the same behaviour as the latest changes in Chrome, except for one thing: the AudioContext automatically resumes when any connected AudioBufferSourceNode starts playing from a user gesture.

I believe implementing this same system would solve most issues.
The solutions proposed in issue 843555 sound good as well.
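To spell out the Safari behaviour described above: starting a connected source node from inside a user gesture implicitly resumes the context. The explicit desktop equivalent would be roughly this (the helper name is mine; the resume call is the step Safari performs for you):

```javascript
// Play a decoded AudioBuffer from inside a user-gesture handler. On iOS
// Safari the resume() step happens implicitly when start() is called from
// a gesture; on desktop Chrome 66 it must be done explicitly.
function playFromGesture(ctx, buffer) {
  if (ctx.state === 'suspended') ctx.resume();
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0);
  return source;
}
```

Adopting the implicit-resume behaviour would at least make the two platforms consistent, though as noted below it still requires code changes on desktop.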
Using the Safari iOS approach will have similar problems on desktop, because it still requires code changes. The mobile restrictions have been around pretty much since the implementation of these APIs so everyone just coded around them. The problem now is nobody did that for desktop since it always just worked, so introducing new restrictions will break a large amount of existing content.
I develop a commercial application, affected by this change. In anticipation of the release of Chrome 66 we had to change to having a start-up screen to workaround the new policy.

A general problem seems to be that browser policy is leaking into APIs and application design, burdening developers with a need to be intricately aware of corners of the policy. A real solution would separate policy from API.

It seems far better design to mute the tab but leave the web page seeing success as it makes API calls. Then prompt the user "this web page wants to play audio" allowing the tab to be unmuted (unseen by the web page)

This is easy to explain, with less API complexity and fewer states to deal with. It would eliminate the problem of the complex ordering of events that results when permissions checks are interleaved with everything else (see points 1,2,3 from comment #124)

The browser would then have an ongoing freedom to refine and adjust the policy where necessary at any time in the future.

In its current state the policy ends up intertwined into web application code. That's effectively a deadlock to future changes when along comes a new use case or a new category of device.

One corner case that illustrates this: I could not create an AudioContext without a user gesture, yet on its own an AudioContext does not make sound. This proposal side-steps deliberating over arbitrary choices like these on a case-by-case basis. It could be a compelling design for microphone input, too.

Ultimately the end user only cares to stop antisocial audio coming from their speakers, so stop the audio at that point.
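From the application side, the closest available approximation of this mute-the-tab design is a master gain node that all output routes through. A sketch (the helper is mine; note that it does not by itself satisfy Chrome 66's policy, since the context stays suspended regardless — it only illustrates the muting half of the idea):

```javascript
// Route all output through one master gain node so the app can start
// silent and simply ramp the gain up once the user interacts.
function createMutedMaster(ctx) {
  const master = ctx.createGain();
  master.gain.value = 0;            // start muted
  master.connect(ctx.destination);
  return {
    input: master,                  // connect all sources to this node
    unmute() { master.gain.value = 1; }
  };
}
```

The point of the proposal above is that if the *browser* owned this mute switch, the page would never need to know about it.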

Is a muted AudioContext going to be allowed?
The project linked in comment #125 is still broken because it uses an <audio> element instead of the Web Audio API.


Comment 134 by, May 21 (6 days ago)

I work on a website (embeds for 3rd-party services such as YouTube and SoundCloud, both with autoplay permission delegated) that allows people to share music and listen collaboratively. We were also affected by this change.

Personally I'd like to have another permission we can request (like notifications) OR unmute on first user interaction as suggested many times before. The way blocking autoplay is currently implemented does not work for me.

My personal website is also broken (using <audio> elements).

Comment 135 by, May 23 (5 days ago)

Just to reiterate, where can we get some official dialog from the Chrome team?

I think many chat apps have a very valid use case that is not addressed here. They need to be able to play a sound without a user gesture. Requiring a user gesture to start is not an option, since they will often be launched in the background when Chrome restarts. So they'll sit there, wanting to notify the user but with no way to do so. Note: the user should not be required to turn on notifications (the Notifications API), as notifications have other issues (for example, the message might be sensitive and the user may not want it to appear on screen until they are sure no one is looking).

It really does seem like a permission dialog is a better solution. Then the person using the chat page can be asked "can we play sounds, y/n" the first time they visit, and subsequent visits will just work, even if the page is launched when tabs are restored on Chrome startup.

Comment 136 by, May 25 (2 days ago)

Comment #135 has stated exactly the problem we face now. In our case, our WebRTC app waits for incoming calls. If a call arrives, the app should play a ringtone. Mr. Google, your policy change has fundamentally broken the incoming-call alert sound on Android Chrome. Even after we "Add to Home screen", the <audio> still does not play. That fails to fulfill your own design goals.

Please, someone, if you know a workaround, leave a message here.

Comment 137 by, May 26 (2 days ago)

The workaround for now is to use the WebAudio API directly. There is no other way unless an official clarification is made. 

Comment 138 by, Yesterday (34 hours ago)

This exception looks cold and harsh. Mr. Google, please have some tender pity for those visually impaired people. They must hear some short prompts delivered by the "play()" before they can start to interact with the document. Now, Chrome refuses to play before the interaction, and they cannot interact before the play. What a vicious cycle!

Uncaught (in promise) DOMException: play() failed because the user didn't interact with the document first.
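That rejection is at least catchable, so a page can fall back to showing a manual play control instead of failing with an uncaught error. A minimal sketch (the helper name and callback are mine, not an API):

```javascript
// Attempt playback and report whether it was allowed, instead of letting
// the rejected promise propagate as an uncaught DOMException.
function tryPlay(el, onBlocked) {
  const result = el.play();
  if (result && typeof result.then === 'function') {
    return result.then(
      () => true,
      err => { onBlocked(err); return false; }  // autoplay was blocked
    );
  }
  return Promise.resolve(true);  // older browsers: play() returns undefined
}
```

The onBlocked callback is where a site would surface a visible "tap to play" control, which matters doubly for the accessibility case described above.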

I suggest you drop the entire autoplay policy concept. I propose something different. Since there are always good websites and abusive/malicious websites, the Google search engine could introduce some sort of rating system to blacklist websites as soon as they are proven to abuse autoplay. Just as we, as individuals, value our personal credit history, websites should too. Banning a bad website from Google search results forever would be a very strong deterrent against bad behavior. The approach would benefit Google too, because it would further strengthen user loyalty to your search engine.

I am not suggesting you rate the websites politically, socially, etc. You rate them only if they misbehave technically: for example, autoplay abuse, full-screen hijacking, and so on.