MCVD currently creates new chrome GL Texture objects per frame and attaches them to a VideoFrame in the GPU process. The frames are decoded on the mojo thread, but Texture object creation must happen on the GPU main thread. So the decoded frame hops mojo -> main -> mojo (to send the VideoFrame back to the renderer), which incurs a large time penalty.
Instead, we should create a pool of Texture objects on the main thread and attach each decoded frame to one of them on the mojo thread. Luckily, the operations that we need can be done safely on the mojo thread -- we just need to tell CodecImage about the MediaCodec buffer that holds the current frame. When the resulting VideoFrame is destroyed, we take back its Texture object and return it to the pool.
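To make the ownership story concrete, here's a minimal sketch of what such a pool might look like. The `Texture` struct and `TexturePool` class names are hypothetical placeholders, not the real chrome types; the real version would also need to handle an empty pool (grow on the main thread, or drop frames) and thread-affine destruction:

```cpp
#include <deque>
#include <memory>
#include <mutex>

// Hypothetical stand-in for the chrome GL Texture object.
struct Texture {
  int id = 0;
};

// Sketch of the proposed pool. Textures are created up front (on the
// GPU main thread in the real design), then acquired on the mojo
// thread when a frame is decoded and released back when the
// VideoFrame is destroyed. The lock makes acquire/release safe from
// either thread.
class TexturePool {
 public:
  // Main thread: pre-populate the pool with textures.
  void Add(std::unique_ptr<Texture> texture) {
    std::lock_guard<std::mutex> lock(lock_);
    free_.push_back(std::move(texture));
  }

  // Mojo thread: take a texture for a newly decoded frame.
  // Returns nullptr if the pool is exhausted.
  std::unique_ptr<Texture> Acquire() {
    std::lock_guard<std::mutex> lock(lock_);
    if (free_.empty())
      return nullptr;
    std::unique_ptr<Texture> texture = std::move(free_.front());
    free_.pop_front();
    return texture;
  }

  // Called when the VideoFrame is destroyed: recycle the texture.
  void Release(std::unique_ptr<Texture> texture) {
    std::lock_guard<std::mutex> lock(lock_);
    free_.push_back(std::move(texture));
  }

 private:
  std::mutex lock_;
  std::deque<std::unique_ptr<Texture>> free_;
};
```

In the real decoder the `Release` call would be wired to the VideoFrame's destruction callback so recycling is automatic.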
While it's not directly related to v1 of this work, there are two other things to keep in mind:
- D3D11VideoDecoder, and probably other mojo video decoders, will want to use this.
- the Texture object is basically an implementation detail of chrome's validating GL command decoder. There is a newer implementation, the passthrough GL decoder, which doesn't have a Texture. After discussion with the graphics team, it turns out that we'll want some new abstraction that's shared by both GL command decoders. The pool will hold that new type rather than Texture, though probably not in v1 of this work.
Comment 1 by liber...@chromium.org, Mar 7 2018