Android O's NDK supports AHardwareBuffer objects for cross-process and cross-GL-context shareable images. This seems like a natural fit for the existing GpuMemoryBuffer abstraction that's used on other platforms, and would enable a more efficient WebVR rendering path.
From https://developer.android.com/ndk/guides/stable_apis.html#a26
The native hardware buffer API (<android/hardware_buffer.h>) lets you
directly allocate buffers to create your own pipelines for cross-process
buffer management. You can allocate an AHardwareBuffer struct and use it
to obtain an EGLClientBuffer resource type via the
eglGetNativeClientBufferANDROID extension. You can pass that buffer to
eglCreateImageKHR to create an EGLImage resource type, which may then be
bound to a texture via glEGLImageTargetTexture2DOES on supported
devices. This can be useful for creating textures that may be shared
cross-process.
See also https://developer.android.com/ndk/reference/hardware__buffer_8h.html
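To make the quoted pipeline concrete, here's a minimal C++ sketch of the allocate/import steps. CreateSharedTexture and its parameters are hypothetical names for illustration; the entry points and constants come from the EGL_ANDROID_get_native_client_buffer, EGL_ANDROID_image_native_buffer, EGL_KHR_image_base, and GL_OES_EGL_image extensions, and real code would need to verify those extensions are exposed and handle errors.

#include <android/hardware_buffer.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// Extension entry points, looked up at runtime via eglGetProcAddress.
static PFNEGLGETNATIVECLIENTBUFFERANDROIDPROC p_eglGetNativeClientBufferANDROID;
static PFNEGLCREATEIMAGEKHRPROC p_eglCreateImageKHR;
static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC p_glEGLImageTargetTexture2DOES;

GLuint CreateSharedTexture(EGLDisplay display, uint32_t width, uint32_t height) {
  p_eglGetNativeClientBufferANDROID =
      reinterpret_cast<PFNEGLGETNATIVECLIENTBUFFERANDROIDPROC>(
          eglGetProcAddress("eglGetNativeClientBufferANDROID"));
  p_eglCreateImageKHR = reinterpret_cast<PFNEGLCREATEIMAGEKHRPROC>(
      eglGetProcAddress("eglCreateImageKHR"));
  p_glEGLImageTargetTexture2DOES =
      reinterpret_cast<PFNGLEGLIMAGETARGETTEXTURE2DOESPROC>(
          eglGetProcAddress("glEGLImageTargetTexture2DOES"));

  // 1. Allocate the buffer. Usage flags request both sampling and color
  //    output so one context can render into it while another samples it.
  AHardwareBuffer_Desc desc = {};
  desc.width = width;
  desc.height = height;
  desc.layers = 1;
  desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
  desc.usage = AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE |
               AHARDWAREBUFFER_USAGE_GPU_COLOR_OUTPUT;
  AHardwareBuffer* buffer = nullptr;
  if (AHardwareBuffer_allocate(&desc, &buffer) != 0)
    return 0;

  // 2. Wrap it as an EGLClientBuffer, then as an EGLImage. EGL_NO_CONTEXT
  //    is required for the EGL_NATIVE_BUFFER_ANDROID target.
  EGLClientBuffer client_buffer = p_eglGetNativeClientBufferANDROID(buffer);
  const EGLint attribs[] = {EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};
  EGLImageKHR image = p_eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                          EGL_NATIVE_BUFFER_ANDROID,
                                          client_buffer, attribs);

  // 3. Bind the image to a texture in the current context.
  GLuint texture = 0;
  glGenTextures(1, &texture);
  glBindTexture(GL_TEXTURE_2D, texture);
  p_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D,
                                 static_cast<GLeglImageOES>(image));
  return texture;
}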
Currently, Chrome's cross-context image sharing works via Surface/SurfaceTexture pairs (GLConsumer). This has the major restriction that drawing to the Surface must use the current context's default render buffer; it can't be bound to a framebuffer object (FBO). Changing the active Surface is only possible if properties such as depth size and multisample count match the current context's properties, and this conflicts with context virtualization, which is required for Qualcomm chipsets (issue 691102).
Using AHardwareBuffer removes this restriction: the imported textures can be bound to an FBO without any context compatibility restrictions (see the sketch below). This would make it possible to use them for WebVR rendering within a virtualized context, without needing changes to non-VR WebGL rendering.
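As a rough sketch of the FBO path, continuing from the hypothetical snippet above (reusing its texture, width, and height), assuming a current GLES 2.0 context:

// Attach the imported texture to an FBO and render. This works in any
// context that can access the EGLImage; there is no EGLConfig
// compatibility requirement, unlike switching the current Surface.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       texture, 0);  // 'texture' from the sketch above
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
  glViewport(0, 0, width, height);
  // ... issue WebVR draw calls here ...
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the default framebuffer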
Comment 1 by klausw@chromium.org, Sep 1 2017