Cannot play sound through multiple headphones
Issue description

Chrome Version: 71.0.3578.94 (Official Build) (64-bit)
Chrome OS Version: 11151.59.0 (Official Build) stable-channel nocturne
Chrome OS Platform: Pixel Slate

Steps To Reproduce:
(1) Plug two USB-C headphone adapters into the device, each with headphones attached.

Expected Result: Sound is output through both pairs of headphones.
Actual Result: Sound can only be directed to a single output.

How frequently does this problem reproduce? (Always, sometimes, hard to reproduce?)
Always

What is the impact to the user, and is there a workaround? If so, what is it?
Two users cannot share a Chromebook to watch a movie on a plane. This is very frustrating since the hardware suggests it ought to work, and Chromebooks are otherwise great travel devices. (It's especially embarrassing to crawl over someone in an aisle seat to get out the Chromebook and headphones, only to find it doesn't work.)
Jan 7
Hi Brentons, thanks for the feedback. The CRAS server supports the D-Bus API AddActiveOutputNode. This function is supported on the system, but the selection is not exposed in the UI; it is currently only used by Chrome for Meet to enable multiple USB speakers.

Hi Jenny, do you think we should add this feature for the general use case? The use case Brenton mentioned is sometimes useful. There would be some UI changes:

1. Let the user select active output nodes.
2. Let the user adjust the volume of each node.

An issue is that not all nodes can work independently. For example, on some boards, the speaker and headphone cannot be used at the same time. To rule out this kind of issue, we should limit support to nodes on different sound cards. That is to say, if two nodes are on two different sound cards, they can be selected at the same time. What do you think? Thanks!
Jan 7
I don't know what a sound card is in this context, but a reasonable MVP would be to support multiple pairs of headphones without mixing headphones and speakers. I'm not sure a user would ever want to play through both the device speakers and Bluetooth/aux speakers, and I have a hard time imagining headphones + speakers being a useful combo. This approach seems like an acceptable UX:

- Aux (headphones)
  - Node A
  - Node B
- Device speakers
  - Node A
- Bluetooth
  - Node A
  - Node B

A user can modally choose between aux, device, and Bluetooth. Any nodes that fall under the selected category would receive sound. This would cover the most useful case (sharing a device in a private setting like a plane) very well.
Jan 7
FYI, development of this feature will be expensive. We should avoid spending much time on it until it has been prioritized above other issues.
Jan 8
cychiang has mentioned the audio server already supports multiple outputs, so it sounds like this is primarily a presentation issue. Would it sufficiently reduce scope to split this into two tasks?

1. Unify the aux nodes in the output picker, and send audio to all aux nodes when that output is selected.
2. Add a UI to change the volume of each aux node independently.

That might allow #1 to solve this immediate issue without being gated on the complexity of #2.
Jan 8
How would the aux nodes be kept in sync to prevent echo? How would the echo canceller work with two echo paths to cancel? The only thing that uses multiple concurrent outputs today is GVC, and that works because we control a list of approved devices, and that list is limited to a few devices that aren't allowed to be mixed on the same device.
Jan 8
I don't pretend to know this space as well as you do, but I suspect someone who plugs multiple aux jacks into a Chromebook is a lot more likely to be trying to share media across multiple sets of headphones than to be video conferencing with multiple sets of external speakers. Chromebooks are already fantastic travel devices. As aux-to-USB-C adapters become popular (e.g. the past two generations of phones) and Chromebooks ship with multiple USB-C jacks, this is becoming a pretty obvious affordance, which makes it really frustrating and disappointing as a user to discover it's unsupported. I don't intend to be stubborn; I just want to ensure we don't let perfect be the enemy of good.
Jan 8
Sorry, I'm not trying to be too negative. I think it would be a nice feature to support; I just want to make sure we prioritize it against the other features asking for our time this quarter and next. cychiang, can you add this to our Q2 backlog? We will need some new UI too, so we'll have to make sure that anything we decide fits on that team's schedule as well.
Jan 8
Assigning to conrad to help prioritize.
Jan 8
Re #7: Hi Dylan, totally agree we should prioritize it before making any change. Let me summarize the concerns, current status, and possible efforts.

1. We rely on the rate estimator to sync two output nodes. It works well on two USB speakers of the same Jabra model, but we have not tested it thoroughly on two speakers of different models. That testing is where the effort would go.
2. Echo cancellation in CRAS / WebRTC only supports one echo reference; it cannot handle two echo sources. This is actually not limited to the multiple-output case. For example, when a set of speakers has separate left and right units for the left and right channels, they are two audio sources to the microphone, and echo cancellation will be degraded. Even if the latency of the two speakers is synced perfectly, the distances from the two speakers to the microphone are still different.
3. Echo will not be a problem when the user uses two headphones, but we cannot tell whether a USB audio device is a headset or a speaker, so it will be hard to limit this option based on audio device type.
4. Playing sound to multiple outputs is already supported through the JavaScript API since we added pinned-stream support: a user can pin two streams to two different devices. Even if we don't want users to use multiple outputs because of possible issues like latency and echo cancellation, it is already available if the user writes some JavaScript, e.g. open two tabs of https://webrtc.github.io/samples/src/content/devices/input-output/ and select different output nodes.

If we want to do this, the main effort at the system layer seems to be testing how well we can sync two different outputs. Thanks!
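For reference, a minimal sketch of the pinned-stream workaround from point 4, using the standard `enumerateDevices()` and `HTMLMediaElement.setSinkId()` web APIs. The helper names (`pickTwoOutputs`, `playOnTwoOutputs`) are hypothetical, and this assumes the page has been granted media permission so output `deviceId`s are populated:

```javascript
// Pure helper: return the first two distinct audio-output deviceIds
// from an enumerateDevices()-style list, or null if fewer than two exist.
function pickTwoOutputs(devices) {
  const ids = devices
    .filter((d) => d.kind === 'audiooutput')
    .map((d) => d.deviceId);
  return ids.length >= 2 ? [ids[0], ids[1]] : null;
}

// Browser-only part: play the same clip on two output devices by pinning
// one <audio> element's stream to each device via setSinkId().
async function playOnTwoOutputs(url) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const pair = pickTwoOutputs(devices);
  if (!pair) throw new Error('need at least two audio output devices');
  for (const id of pair) {
    const audio = new Audio(url);
    await audio.setSinkId(id); // pin this element to one output node
    await audio.play();
  }
}
```

Note the two elements start independently, so this sketch makes no attempt at sample-accurate sync; it only demonstrates that concurrent multi-output playback is already reachable from the web platform.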
Jan 8
Agree with dgreid on #5. We should get the feature defined correctly first, and be careful with all the side effects of enabling this. It will add complexity to the logic for maintaining active audio devices, which is already quite complex, with many special cases for handling different edge cases.
Comment 1 by brentons@google.com, Jan 7