Currently, WebVR per-frame rendering works something like this:
- Renderer process uses its command buffer GL interface to issue WebGL drawing commands to the GPU process
- GPU process executes the drawing commands; the canvas's drawing destination is a framebuffer object (FBO)
- GPU process exposes the FBO's backing texture as a texture mailbox + sync token (see the mailbox handoff sketch after this list)
- Renderer process passes mailbox + sync token to Browser process
- Browser process uses its command buffer GL interface (in MailboxToSurfaceBridge) to send copy-to-Surface commands
- GPU process copies the texture referred to by the mailbox to a Surface
- Browser process receives an OnFrameAvailable event for the SurfaceTexture (GLConsumer) corresponding to the Surface
- Browser process (VrShellGl::DrawFrame) uses a local GL context to copy the SurfaceTexture into a GVR-provided FBO (see the GVR submit sketch after this list).
- Browser process submits the frame to GVR.
- GVR queues the frame, and later uses async reprojection to apply lens distortion correction while copying it to the screen.
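
For reference, here is a rough sketch of the mailbox handoff above, written against the command-buffer GLES2 interface of roughly this era. Entry-point signatures have shifted across Chromium versions, and DrawTextureQuad() is a hypothetical stand-in for the bridge's copy shader, so treat this as illustrative rather than the actual implementation:

    #include "gpu/command_buffer/client/gles2_interface.h"
    #include "gpu/command_buffer/common/mailbox.h"
    #include "gpu/command_buffer/common/sync_token.h"

    // Hypothetical helper: binds |texture| and draws a fullscreen quad
    // into the currently bound framebuffer. Not shown here.
    void DrawTextureQuad(GLuint texture);

    // Producer side: share the WebGL FBO's color texture via a mailbox
    // and create a sync token so the consumer can order against us.
    void ProduceFrame(gpu::gles2::GLES2Interface* gl,
                      GLuint webgl_texture,
                      gpu::Mailbox* mailbox,
                      gpu::SyncToken* sync_token) {
      gl->GenMailboxCHROMIUM(mailbox->name);
      gl->ProduceTextureDirectCHROMIUM(webgl_texture, GL_TEXTURE_2D,
                                       mailbox->name);
      const GLuint64 fence = gl->InsertFenceSyncCHROMIUM();
      gl->ShallowFlushCHROMIUM();
      gl->GenSyncTokenCHROMIUM(fence, sync_token->GetData());
    }

    // Consumer side (the browser's MailboxToSurfaceBridge context):
    // wait on the sync token, consume the mailbox, and draw the texture
    // into the Surface-backed default framebuffer.
    void CopyMailboxToSurface(gpu::gles2::GLES2Interface* gl,
                              const gpu::Mailbox& mailbox,
                              const gpu::SyncToken& sync_token) {
      gl->WaitSyncTokenCHROMIUM(sync_token.GetConstData());
      GLuint texture =
          gl->CreateAndConsumeTextureCHROMIUM(GL_TEXTURE_2D, mailbox.name);
      DrawTextureQuad(texture);  // This is the first explicit pixel copy.
      gl->DeleteTextures(1, &texture);
    }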
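The browser-side copy and submit (the last three steps) look roughly like the sketch below, written against the GVR NDK C API. The DrawFrame signature and DrawTexturedQuad() helper are hypothetical simplifications of VrShellGl:

    #include <GLES2/gl2.h>
    #include "ui/gl/android/surface_texture.h"
    #include "vr/gvr/capi/include/gvr.h"

    // Hypothetical helper: draws a fullscreen quad sampling the
    // GL_TEXTURE_EXTERNAL_OES texture. Not shown here.
    void DrawTexturedQuad(GLuint external_texture);

    void DrawFrame(gvr_swap_chain* swap_chain,
                   gvr_buffer_viewport_list* viewports,
                   gl::SurfaceTexture* surface_texture,
                   GLuint external_texture,
                   const gvr_mat4f& head_pose) {
      // Latch the newest frame delivered via OnFrameAvailable.
      surface_texture->UpdateTexImage();

      // Acquire a GVR frame; BindBuffer makes its FBO current.
      gvr_frame* frame = gvr_swap_chain_acquire_frame(swap_chain);
      gvr_frame_bind_buffer(frame, 0);

      // Second explicit pixel copy: SurfaceTexture -> GVR FBO.
      DrawTexturedQuad(external_texture);

      gvr_frame_unbind(frame);
      // Hand the frame to GVR; async reprojection distorts and scans
      // it out later.
      gvr_frame_submit(&frame, viewports, head_pose);
    }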
The pipeline above involves two explicit 1:1 pixel copies, plus the distort+draw pass from async reprojection. The two explicit copies are redundant work and should be eliminated.
The original plan for avoiding the first copy was to have the GPU process's GL context render WebVR content directly onto a custom Surface. See issue 655733 "WebVR mobile: render directly to the surface that we want". This is blocked on synchronization issues: drawing to a custom Surface is incompatible with virtualized GL contexts, and we have to use virtualized contexts on Qualcomm chipsets to avoid severe tearing in non-VR WebGL applications.
The new plan is to use AHardwareBuffer-backed GpuMemoryBuffer objects instead of the Surface/SurfaceTexture pair; see issue 761432.
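
For background on why this avoids the copies: an AHardwareBuffer can be wrapped in an EGLImage and bound as a GL texture, so producer and consumer contexts can reference the same pixel memory directly. A minimal NDK-level sketch (Android O / API 26+, error handling omitted; this illustrates the sharing mechanism, not the actual GpuMemoryBuffer plumbing):

    #define EGL_EGLEXT_PROTOTYPES 1
    #define GL_GLEXT_PROTOTYPES 1
    #include <android/hardware_buffer.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Allocate a GPU-renderable, GPU-sampleable AHardwareBuffer and
    // bind it to a GL texture via an EGLImage. Any context that wraps
    // the same buffer this way sees the same pixels, with no
    // intermediate copy.
    GLuint BindHardwareBufferToTexture(EGLDisplay display,
                                       uint32_t width, uint32_t height) {
      AHardwareBuffer_Desc desc = {};
      desc.width = width;
      desc.height = height;
      desc.layers = 1;
      desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
      desc.usage = AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE |
                   AHARDWAREBUFFER_USAGE_GPU_COLOR_OUTPUT;
      AHardwareBuffer* buffer = nullptr;
      AHardwareBuffer_allocate(&desc, &buffer);

      EGLClientBuffer client_buffer =
          eglGetNativeClientBufferANDROID(buffer);
      const EGLint attribs[] = {EGL_IMAGE_PRESERVED_KHR, EGL_TRUE,
                                EGL_NONE};
      EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                            EGL_NATIVE_BUFFER_ANDROID,
                                            client_buffer, attribs);

      GLuint texture = 0;
      glGenTextures(1, &texture);
      glBindTexture(GL_TEXTURE_2D, texture);
      glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
      return texture;
    }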
Comment 1 by klausw@chromium.org, Sep 1 2017