Currently we create two GrContexts in the GPU process.
(1) To detect whether the GPU has the necessary extensions, early during GpuInit. This determines whether we can support OOPR, which is plumbed to the client side via GpuFeatureInfo. The GrContext is then thrown away. GPU workarounds aren't plumbed into this GrContext.
(2) The RasterDecoder creates a GrContext for OOPR. GPU workarounds are plumbed into this GrContext.
Longer term, we want to share a single GrContext across all RasterDecoders, and eventually even with the Viz DisplayCompositor (as a way of eliminating InProcCmdBuffer). This could be the context from (1), and it will need the GPU workarounds plumbed in.
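Plumbing the workarounds into that shared context should mostly be a matter of passing them through GrContextOptions when the context is created. Below is a minimal sketch, not the actual Chromium code: it assumes Skia's GrContextOptions exposes a fDriverBugWorkarounds field constructible from the workaround IDs in GpuFeatureInfo::enabled_gpu_driver_bug_workarounds, and CreateSharedGrContext is a hypothetical helper name.

#include "gpu/config/gpu_feature_info.h"
#include "third_party/skia/include/gpu/GrContext.h"
#include "third_party/skia/include/gpu/GrContextOptions.h"
#include "third_party/skia/include/gpu/GrDriverBugWorkarounds.h"
#include "third_party/skia/include/gpu/gl/GrGLInterface.h"

// Hypothetical helper: create the one GrContext that all RasterDecoders (and
// eventually the Viz DisplayCompositor) would share, with the driver bug
// workarounds plumbed through GrContextOptions at creation time.
sk_sp<GrContext> CreateSharedGrContext(
    sk_sp<const GrGLInterface> gl_interface,
    const gpu::GpuFeatureInfo& gpu_feature_info) {
  GrContextOptions options;
  // Hand Skia the same workaround list that the GL decoders use, rather than
  // creating the context before the workarounds are known.
  options.fDriverBugWorkarounds = GrDriverBugWorkarounds(
      gpu_feature_info.enabled_gpu_driver_bug_workarounds);
  return GrContext::MakeGL(std::move(gl_interface), options);
}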
Relevant comment from crrev.com/c/1083883:
"""
It would require a refactoring to check the workarounds because of how the code is currently structured during GpuInit.
1) Create GLContext. Grab some GPUInfo from glGetString (among others). Try to create GrContext for OOP-R. Destroy GLContext.
2) Compute GpuFeatureInfo from GPUInfo, which includes the workarounds and whether we support OOPR.
AFAICT, creating a GrContext checks for GL version and extension support. Modifying the GL version presented to Skia may cause Skia to look for different extensions. So if we change the version based on workarounds, we would have to compute the workarounds before the point where we create the GrContext.
"""
Comment 1 by piman@chromium.org, Jun 14 2018
Components: -Internals>GPU Internals>GPU>Internals