Switch media/cast/net/udp_transport_impl.h to using NetworkService's UDP socket API
Issue description: media/cast/net/udp_transport_impl.h uses net::UDPSocket. As part of network servicification, all interactions with //net need to go through mojo APIs. Network Service has a mojo UDP socket API (see //services/network/public/interfaces/udp_socket.mojom). media/cast/net/udp_transport_impl.h needs to be migrated to use it.
Jan 30 2018
Thanks! I have a CL out for review to add multicast options (https://chromium-review.googlesource.com/c/chromium/src/+/891447). Let me know if there are other features that cast needs.
Jan 31 2018
xjz: Does NetworkService's UDP socket API provide the semantics we require, in terms of low latency and packet dropping to avoid buffering?
Jan 31 2018
miu@: The API provides the capability to send/receive UDP packets with the UdpSocketPtr acquired from the Network Service. It sounds pretty similar to what we need. When the client requests to send a UDP packet, it will always get a callback after the send completes (either successfully or with an error). The UDP socket impl can currently buffer up to 32 requests; after that, all requests fail immediately. Performance-wise, the sending of each packet is initiated by a mojo message. Not sure how this may affect performance compared to using the data pipe as in our impl.
Jan 31 2018
If cast needs any tweaks (e.g., dropping packets on the floor), I am happy to add them. Network Service is the future, so we will need to use these mojo APIs to access networking functionality from outside of //net. If there is any performance issue with this API, I will investigate and try to fix it. I am currently migrating //extensions over to this UDP API. Cast has a more complicated use case, so I filed this bug for it.
Jan 31 2018
We'll have to run any patch through our perf tests and lab tests to see if there is a regression, which will require staffing from our team. What is the timeline for this migration?
Jan 31 2018
+jam@: John, could you comment on #6?
Feb 1 2018
I took a quick look at the API with xjz@ earlier today. My main concern is around the extra buffering (32 packets' worth) and that the "done callbacks" need to propagate back to the client before the next send can start. These are very likely to diminish the performance of our protocol because they will add significant latency to the feedback path between sender and receiver (and back). This would affect packet re-prioritization, re-transmission, and drop heuristics.

I'm wondering if the NetworkService could adopt the design xjz@ recently landed in the code base. Or, if it's too late to change the existing design/API, how about adding it alongside as a special "media streaming" API? The design in a nutshell:

1. The data flow is one-way, from mojo client to service, without any explicit feedback from the service back to the client: when the client wants to send a UDP packet, it tries to insert it into a mojo data pipe. If the pipe is too full, the client can reschedule the packet for a later attempt (or maybe decide never to send it again).

2. The data pipe itself is allocated as small as possible "but no smaller." Meaning, we try to minimize any buffering as much as possible. IIRC, we were thinking of the data pipe having only 3 typically-sized packets' worth of capacity. (But we will also need to test that "3" is the right amount.)

3. Service-side, the usual net::UDPSocket::SendTo() API is used: the impl pulls one packet at a time out of the data pipe and SendTo()'s it. When this async operation completes, the next packet is pulled from the data pipe and sent, and so on.

FWIW, I believe the WebRTC folks would also benefit from this scheme, for similar "protocol" reasons. So, this wouldn't just be a special one-off for us. :)
Feb 1 2018
Thanks for the background. It's not too late to change the API; I am happy to make the necessary adjustments to support this use case. xjz@ sent over a pointer to the new design (https://chromium-review.googlesource.com/c/chromium/src/+/890302). I might be missing something, but at a glance I think the current udp_socket.mojom is equivalent to having two data pipes (a send data pipe and a receive data pipe). The kMaxPendingSendRequests = 32 is an arbitrary value; we can expose a configuration option for media streaming to buffer only N packets.

It seems to me that a sending queue buffering N packets in the impl class (services/network/udp_socket.cc) is equivalent to a mojo data pipe buffering N packets. The caller can send up to N packets without needing to wait for "done callbacks." The impl class takes a packet from the send queue as soon as the previous packet is processed, without waiting for the callback to propagate back to the client. Let's talk more over GVC this afternoon and work out a solution :)
May 29 2018
xjz@: Any update? What's the timeline for media streaming to switch to components/mirroring/service/udp_socket_client.h?
Jun 13 2018
This is currently blocking network service from going to canary in M69. We would like this done by end of June. We can discuss how to reprioritize things.
Jun 13 2018
Per Comment 14, I will start refactoring media::cast::UdpTransportImpl to use the new network mojo API.
Jun 13 2018
I chatted with jam@ and xjz@ offline. It's not necessary to restrict all networking access for Network Service's canary launch. Given that media/cast/net/ only needs net::UDPSocket and doesn't depend on any state in the networking stack, it will remain functional with Network Service's launch. xjz@ is working on the replacement, which uses udp_socket.mojom. Removing the Proj-Servicification-Canary label and lowering priority.
Sep 5
The implementation for the Mirroring Service is complete and uses the NetworkService's API. media::cast::UdpTransportImpl will no longer be needed once the Mirroring Service is fully rolled out.
Oct 3
miu: could you update us on when the mirroring service is completely rolled out?
Oct 8
Mirroring Service is scheduled for M72, assuming other external dependencies fall into place as scheduled.
Comment 1 by x...@chromium.org
Jan 29 2018
Owner: x...@chromium.org
Status: Assigned (was: Untriaged)