[audioworklet] Consider using real-time priority thread for AudioWorklet operation
Issue description: The WebAudio rendering thread for AudioWorklet uses a "DISPLAY"-priority thread, which causes audio glitches when other graphics tasks (outside of Chrome) dominate the system. Consider using a real-time priority thread for audio rendering.
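For context, the deadline the rendering thread has to meet is easy to compute: WebAudio processes audio in 128-frame render quanta, so at a 48 kHz sample rate the callback must finish inside roughly 2.67 ms or the output glitches. A minimal sketch (the function name is ours, for illustration only):

```javascript
// Back-of-envelope deadline for the audio rendering thread: one
// render quantum is 128 frames, so the time budget per callback is
// quantumFrames / sampleRate seconds. Missing this budget repeatedly
// is what is heard as glitching.
function renderQuantumBudgetMs(sampleRate, quantumFrames = 128) {
  return (quantumFrames / sampleRate) * 1000;
}
```

For example, `renderQuantumBudgetMs(48000)` gives about 2.67 ms and `renderQuantumBudgetMs(44100)` about 2.90 ms, which is why a thread that can be preempted by graphics work so easily misses the deadline.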
Feb 22 2018
Feb 23 2018
Why does this block issue 469639? AudioWorklet doesn't require real-time priority, even though that would be really nice.
Feb 23 2018
Okay, I'll remove this from blocking issues, but it is closely related. Marking this issue as 'ExternalDependency' because we need wider review and spec discussion.
Jun 24 2018
I understand the security concerns behind not enabling arbitrary JS code to run on a real-time priority thread. However, unless you want to condemn Chrome's implementation of the Web Audio API to an eternity of being the base for tech-demo toy apps, and nothing more, AudioWorklet isn't going to work without a real-time priority thread.

To put it simply: serious audio processing REQUIRES a real-time thread (or a comparable equivalent). There is no workaround here: either one is available and audio can be processed reliably, or there isn't, and you can forget about any non-trivial audio application.

For the time being, would you consider allowing real-time priority AudioWorklets to be enabled through a flag? This should be associated with a Chrome-specific, vendor-prefixed API to query whether AudioWorklets can run on a real-time thread or not (because a serious app should refuse to even start without a real-time audio thread available, but at least it can be displayed to the user that the flag should be enabled).
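The query proposed above does not exist in any browser; as a purely hypothetical sketch of what such feature detection could look like (the property name is invented here for illustration, not a real API):

```javascript
// Hypothetical capability check, in the spirit of the vendor-prefixed
// query the comment proposes. "webkitAudioWorkletIsRealtime" is an
// invented name; no browser exposes it. The check is written against
// a passed-in constructor so it can be exercised outside a browser.
function supportsRealtimeWorklet(audioContextCtor) {
  return typeof audioContextCtor === "function" &&
         "webkitAudioWorkletIsRealtime" in audioContextCtor.prototype;
}
```

An app following the comment's suggestion would call this at startup and refuse to initialize its DSP path when it returns false.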
Sep 18
To put it simply: there is an audio graph given by the WebAudio API. With this graph you can create "nodes" and connect them to each other in order to build a huge graph with a unique destination: speakers or buffers. So, no, you don't need a real-time audio thread if you already have a fast library that handles it for you. However, I agree that other native nodes have to be added before calling this API perfect, obviously. Wavetable synthesis is missing, for example. But I would never ask them for a real-time gate for that; this is a dirty design. Are you asking the exact same thing of Edge and Firefox? Cross-platform?
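The wavetable synthesis mentioned above as a missing native node reduces to a table lookup driven by a phase accumulator. A plain-JS sketch (function names are ours; this is not a WebAudio API, just the core algorithm):

```javascript
// Build one cycle of a sine into a lookup table. A real wavetable
// oscillator would store arbitrary single-cycle waveforms the same way.
function makeSineTable(size = 2048) {
  const table = new Float32Array(size);
  for (let i = 0; i < size; i++) {
    table[i] = Math.sin((2 * Math.PI * i) / size);
  }
  return table;
}

// Read the table back at a given frequency using a phase accumulator:
// each output sample advances the read position by
// freq * tableLength / sampleRate entries (truncating lookup, no
// interpolation, to keep the sketch short).
function renderWavetable(table, freq, sampleRate, frames) {
  const out = new Float32Array(frames);
  const phaseInc = (freq * table.length) / sampleRate;
  let phase = 0;
  for (let i = 0; i < frames; i++) {
    out[i] = table[Math.floor(phase) % table.length];
    phase += phaseInc;
  }
  return out;
}
```

A production oscillator would add interpolation and band-limited tables, but the inner loop is exactly this kind of per-sample work, which is also what makes the scheduling of the thread running it matter.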
Sep 19
thomasto...@gmail.com : we are speaking of the "WebAudio Rendering Thread for AudioWorklet", not the "native" nodes, which already run with RT priority.
Sep 19
thomasto...

> To put it simply: There is an audio graph given by the WebAudio API. With this graph you can create "nodes" and connect them to each other in order to build a huge graph with a unique destination: speakers or buffers.

I appreciate the clarification, but as the author of what I firmly believe is the single most sophisticated system built on top of the Web Audio API by far, I am well aware of the concepts and implementation details (as well as *limitations*) behind the API.

> So, no, you don't need a real time audio thread if you already have a fast library who handle it for you.

I'm afraid you are missing the point. We are talking about custom DSP (audio/signal processing) inside an AudioWorkletProcessor instance. While you are indeed right that numerous algorithms can be broken down into subtrees of natively implemented AudioNode *engines*, this only accounts for a tiny fraction of all use cases in signal/audio processing (primarily a limited set of math expressions).
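To make "custom DSP inside an AudioWorkletProcessor" concrete, here is a sketch of a one-pole lowpass written the way process() sees audio: one 128-frame block at a time, with filter state persisting across blocks. It is plain JS (the class name is ours) so it runs outside a worklet; inside one, the same loop would live in process(inputs, outputs, parameters):

```javascript
// One-pole lowpass operating on 128-frame blocks, mirroring the shape
// of an AudioWorkletProcessor's inner loop. The recursive state (z)
// carries across blocks, which is why this cannot be expressed as a
// small subtree of native AudioNodes in the general case.
class OnePoleLowpass {
  constructor(cutoffHz, sampleRate) {
    // Standard one-pole smoothing coefficient.
    this.a = Math.exp((-2 * Math.PI * cutoffHz) / sampleRate);
    this.z = 0;
  }
  processBlock(input, output) {
    for (let i = 0; i < input.length; i++) {
      this.z = (1 - this.a) * input[i] + this.a * this.z;
      output[i] = this.z;
    }
  }
}
```

Every render quantum, a loop like this must complete within the budget discussed above; the thread's scheduling class decides whether it reliably does.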
Sep 19
thomasto... Additionally, setting the audio rendering thread's process priority to HIGH and the thread's priority to TIME_CRITICAL not only yields a massive stability improvement (less glitching) when your app is a complex production-grade system with non-trivial UI, but also increases the overall threshold range before glitching starts occurring.
Sep 19
jana.tb1992@ Let me chime in here. I am the implementer of AudioWorklet and the owner of the Web Audio API.

> However, unless you want to condemn Chrome's implementation of the Web Audio API to an eternity of being the base for tech-demo toy apps, and nothing more, AudioWorklet isn't going to work without a real-time priority thread.

I have to disagree. There are feature-rich DAWs written with the Web Audio API (and with AudioWorklet) on the market. Perhaps they are not up to your expectations, but they do have valid use cases and a sizable user base. Discrediting them all does not sound fair to me.

> To put it simply: serious audio processing REQUIRES a real-time thread (or a comparable equivalent).

How do you differentiate "serious audio processing" from what is not? Furthermore, this all goes in vain if your device is not powerful enough for your purpose. Everything is relative, and the argument is meaningless in a different context. Also, there are platforms that do not allow the application to use a high-priority thread at all. How do you handle such a case?

> For the time being, would you consider allowing real-time priority AudioWorklets to be enabled through a flag?

I have given it thought, but my current answer is no. I think we might have a better path than putting it into the experimental bucket. Some ideas will be explored in the discussion of the next version of the Web Audio API.

> This should be associated with a Chrome-specific, vendor-prefixed API to query whether AudioWorklets can run on a real-time thread or not (because a serious app should refuse to even start without a real-time audio thread available, but at least it can be displayed to the user that the flag should be enabled).

The idea of "vendor-prefixed" APIs failed spectacularly several years ago. I am very confident the Chrome launch process will shut the idea down as soon as it goes on the table. Also, anything that can be queried will be abused for fingerprinting. I can already picture some evil pages running aggressive code to mine bitcoins when the realtime thread is available. Any serious programmer knows a realtime-priority thread is good for many things. Yes, it is good for many things, including potentially harmful things. You need a lot more than "fewer glitches" to justify the adoption of such a feature on the web platform.

With that said, I value your input on this issue. We'll need more voices from the community (especially people with an audio programming background) when we start working on this idea.
Comment 1 by l...@grame.fr, Feb 20 2018