Consider using an actual Mojo interface for message ports

Issue description

Today I found out that the message port code serializes messages onto the message pipe instead of using the generated bindings: https://chromium-review.googlesource.com/c/590082/5/content/common/message_port.cc

From talking to mek@, my understanding is that this is because Mojo interfaces are unidirectional (only allowing requests in one direction, and callbacks in the reverse), while message ports want bidirectional communication. To work around that, message ports create a raw message pipe and directly read/write the wire bytes.

I wanted to see if it's feasible to use an associated interface on each endpoint: an associated interface would preserve the single message pipe we have today, and would let message ports use the more familiar generated bindings. However, one thing I'm not sure about is whether this will make it harder to reduce the number of copies.
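A hedged sketch of what the associated-interface idea might look like in mojom, using the era's `associated`/`&` syntax. These names are illustrative only, not an existing Chromium interface:

```mojom
// Illustrative only. Each endpoint sends on an associated remote and
// receives on an associated receiver bound to the same underlying pipe,
// so one message pipe still backs both directions of the port.
interface MessagePortEndpoint {
  DeliverMessage(array<uint8> encoded_message, array<handle> handles);
};

struct MessagePortHandle {
  associated MessagePortEndpoint peer;    // where this endpoint sends
  associated MessagePortEndpoint& self;   // where this endpoint receives
};
```

The appeal is that both directions stay on one pipe (preserving message ordering with other traffic), while each direction gets a normal generated interface.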
Comment 2, Dec 19 2017
Also worth noting for context that this is especially relevant now because we want to take advantage of lazy serialization for MessagePort messages, as they may need to carry more than just already-serialized byte data.
Comment 3, Dec 19 2017
WRT reducing copies: we could use mojo.common.mojom.ReadOnlyBuffer. It is typemapped to base::span<const uint8_t>, and on the receiver side the span points directly into the received message. I think that could mean even fewer copies depending on how we use the buffer. I see your point about one side needing to queue outgoing messages until it receives the SendPeer call. I wonder how bad it would be to use two message pipes for a message port? We would pass two message pipe handles (one for send, the other for receive) when setting up the message port. That should solve the problem. Introducing BidirectionalInterfacePtr is an interesting idea, but I would think that one use case may not justify such a new feature. Maybe there are other use cases that also require it; I would be happy to learn about them.
Comment 4, Dec 20 2017
If we went with two message pipes for a message port, would we be able to add a response to Receive(...) and use that response for error handling on the sending end?
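Concretely, that might look like the following in mojom, reusing the Receive/MessageData names from the strawman in comment 1; the reply signature here is invented for illustration:

```mojom
// Hypothetical: a reply lets the sender observe delivery failure.
interface MessagePort {
  Receive(MessageData data, array<handle> handles) => (bool delivered);
};
```

This only works once each direction is a proper one-way interface pipe, since replies require the request/response pairing the generated bindings provide.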
Comment 5, Dec 20 2017
Regarding reducing copies, on the blink side (which is really the only side that matters) MessagePort (and other postMessage) code already uses a struct that is typemapped to have minimal copies to/from SerializedScriptValue.
BidirectionalInterfacePtr would work, but would be effectively what we have now anyway, and as long as this is the only user of it I'm not sure what the benefit of it would be. I imagined a two-pipe MessagePort implementation to be something like:
  interface MessagePort {
    PostMessage(blink.mojom.TransferableMessage message);
  };

  struct MessageChannel {
    MessagePort outgoing;
    MessagePort& incoming;
  };
and update TransferableMessage to represent ports as MessageChannel rather than handle<message_pipe>.
No need for any SendPeer-like mechanism or associated interfaces, since the two directions of the channel can be independent anyway. We just have to always transfer two pipes when transferring a port, but that's not the end of the world.
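The last step, representing ports as MessageChannel inside TransferableMessage, might look roughly like this. A sketch only: the real blink.mojom.TransferableMessage carries more fields than shown here:

```mojom
// Sketch only. The real blink.mojom.TransferableMessage has more fields.
struct TransferableMessage {
  // ... serialized message payload, blobs, etc. ...
  array<MessageChannel> ports;  // previously array<handle<message_pipe>>
};
```

Transferring a port then means moving both pipe handles of its MessageChannel, rather than one raw message pipe handle.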
Comment 6, Dec 20 2017
Hmm, what problem is being solved? Is this about further code simplification?
Comment 7, Dec 20 2017
Reply to #4: Yes. That is doable.
Comment 8, Dec 20 2017
darin@ -- I'm not sure about the original motivation for this bug, but it's getting a new look because we'd like to use lazy serialization for passing SABs (SharedArrayBuffers) over message channels.
Comment 9, Dec 20 2017
Reply to #6: IIUC, code simplification was the motivation of this proposal. Using Mojo interfaces avoids manual serialization as well as watching message pipe readability. But the gain may not justify the cost of using two message pipes for a single message port, or introducing a new Mojo bindings feature.
Comment 10, Dec 20 2017
Re #6: my original motivation for this bug was from a security perspective: I'd like to minimize the amount of code that manually serializes/deserializes data for IPC.
Comment 11, Dec 20 2017
Re #8: using lazy serialization for message ports seems more or less unrelated to this. All you'd need is some way to get the (bindings-generated) TransferableMessage::SerializeAsMessage and DeserializeFromMessage to deal with lazy serialization, and that should be fairly simple.
Comment 12, Dec 21 2017
Re #11: SerializeAsMessage is not suitable for this as-is: it returns a new Message object. A custom lazy serialization routine must modify the existing message object if asked to serialize its context. The best we could do with SerializeAsMessage is serialize-and-copy, and that is not great. While we could also hack on SerializeAsMessage to make it more suitable, I think we should just find a way to use bindings instead.

Re the others: treating a MessagePort as two pipes is OK; it's just kind of sad, since message pipes are already a pretty close analog to MessageChannels. It feels silly to treat them as unidirectional here despite the model we've established for bindings so far. I don't think a bidi primitive is that crazy or complex to implement even if it's of limited use, but I would settle for a pair of pipes if people are strongly opposed to the bidi thing.
Comment 13, Dec 21 2017
https://chromium-review.googlesource.com/c/chromium/src/+/840606 is something like what I was thinking of re lazy serialization. It seems to work, but is admittedly a bit hacky (it could be cleaned up, and seems like a fairly non-intrusive way to achieve lazy serialization for MessagePort without blocking on any kind of major refactor like this bug proposes). I'm not against adding a bidi primitive to the mojo interface bindings, but it does seem like a very big hammer if there are no other use cases. I do think there is still plenty of room for cleanup/simplification in the MessagePort code without introducing new primitives in the mojo bindings code.
Comment 14, Dec 21 2017
Re #13: that does the trick for my SAB use case.
Comment 15, Dec 22 2017
Re #12: Actually, in the very early days the bindings supported two interfaces on the two directions of a single message pipe. Something like:

  [Client=Bar]  // Bar could be the same interface as Foo.
  interface Foo {};

But at some point the team decided to remove it for simplicity. We could probably still find discussion threads on the old mailing list, and people spent some time converting all the uses of [Client=] to separate pipes.
I'm not saying that it shouldn't be reconsidered; I just think that we probably want some fairly compelling reasons to resurrect this feature.
Comment 16, Dec 22 2017
What Yuzhu said. The removal was motivated by jamesr@ implementing Go bindings, where modeling IPCs as f(x) was far more idiomatic. The "[Client=Bar]" syntax led to reasonable C++ interfaces, as we are very accustomed to classes having "client" interfaces associated with them, and the bindings system treated this as a first-class thing. I'm not sure we should re-introduce that.

Btw, the reason for modeling a MessagePort as a MessagePipeHandle was also to help expose the lower-level MessagePipeHandle primitive with a thin veneer to the web platform. Imagine something like Mojo bindings on top of MessagePort. That shouldn't be crazy, but it does seem like an unfortunate thing if MessagePort becomes more heavyweight. I would make sure the motivation for changing MessagePort's implementation to something more involved is really worth it. It seems OK for MessagePort to be special given that it is a low-level primitive for the web platform. Just my 2c.
Comment 17, Dec 22 2017
One thing I've been playing with/thinking about (which in some sense moves further away from using actual Mojo interfaces) is to get rid of blink::MessagePortChannel and its awkward interface, and instead use mojo::Connector directly from blink::MessagePort. The mojo::Connector interface is much easier to use and would let us get rid of hundreds of lines of fragile threading and low-level mojo code (https://chromium-review.googlesource.com/c/chromium/src/+/842856 just changes blink::MessagePort to no longer use MessagePortChannel; it doesn't do any of the follow-up cleanup yet).

That seems to basically get rid of all the subtleties caused by not having an actual Mojo interface, and thus might also address the original concerns that prompted this bug. I can't think of much that would be simplified further by using an actual Mojo interface. The only difference, maybe, is that instead of having Accept() methods that take a mojo::Message and making a single call to convert that to/from blink::BlinkTransferableMessage, we would have Accept() methods that take a blink::BlinkTransferableMessage directly. And that seems like a pretty minor difference.
Comment 18, Jan 4 2018
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/0a002495ffc9bb326b362fdedf93620dd8fc553e

commit 0a002495ffc9bb326b362fdedf93620dd8fc553e
Author: Marijn Kruisselbrink <mek@chromium.org>
Date: Thu Jan 04 18:50:53 2018

    Simplify MessagePort by using mojo::Connector.

    Bug: 750468
    Change-Id: I4b6367418a8f11de8a6702cb137c02343be3b233
    Reviewed-on: https://chromium-review.googlesource.com/842856
    Commit-Queue: Marijn Kruisselbrink <mek@chromium.org>
    Reviewed-by: Daniel Cheng <dcheng@chromium.org>
    Reviewed-by: Jeremy Roman <jbroman@chromium.org>
    Cr-Commit-Position: refs/heads/master@{#527050}

[modify] https://crrev.com/0a002495ffc9bb326b362fdedf93620dd8fc553e/third_party/WebKit/Source/core/messaging/MessagePort.cpp
[modify] https://crrev.com/0a002495ffc9bb326b362fdedf93620dd8fc553e/third_party/WebKit/Source/core/messaging/MessagePort.h
Comment 1 by roc...@chromium.org, Dec 19 2017

A naive (mojom-only) implementation will increase the number of copies from 1 to 2, because we pass arrays by const ref. We can reduce it back down to 1 fairly easily by using a move-only typemapped type for the array data. What I am trying to figure out, though, is what the interface would look like. First, a strawman:

  // typemapped as move-only
  struct MessageData {
    array<uint8> bytes;
  };

  interface MessagePort {
    Receive(MessageData data, array<handle> handles);
    SendPeer(associated MessagePort peer);
  };

This technically works, but it requires one side to queue outgoing messages until it receives a SendPeer(), which is IMO not an acceptable compromise.

My crazy idea would be to introduce (dun dun dun) a new bindings primitive: BidirectionalInterfacePtr. The semantics are that both ends of the pipe function as both a dispatcher and a sender for messages on the same interface T. For simplicity we can constrain it to support only interfaces that have no messages expecting responses.

The drawbacks are that we either introduce (minimal) new mojom syntax for bidirectional interfaces and a new BidirectionalInterfacePtrInfo type, or we lose some type information at the bindings layer and the developer takes on slightly more responsibility for getting things right. If we went with this solution, I'd prefer to do it right and introduce the mojom syntax. WDYT? Also +yzshen@ and Malcom