Mus+Ash: Determine how video capture will work and where FrameSubscriber should live.
Issue description

Copy requests of surfaces are now possible over Mojo. Copy requests are a sensitive operation, so we've only exposed them on the private interface to CompositorFrameSink in the window server. These copy requests are then passed over to FrameSubscriber for video capture in the browser process; this is likely used for Hangouts, Cast, and so on. In a Mus+Ash / servicified world, the browser will not be a privileged process. Thus, it cannot be the recipient of copy requests to children (can it?). We need to consider how FrameSubscriber (web_contents_video_capture_device) will work in Mus+Ash.
Feb 25 2017

Feb 28 2017

Feb 28 2017
> a "passive" approach where we issue copy requests after a "sampling decision," and *hope* they execute (which only tends to happen when a full browser composite occurs) I think this is referring to the fact copy requests go thru the UI compositor's layer tree right now. We should design this to be built directly around surfaces - either top level surfaces (application UI) or child surfaces (tab contents) or unparented/non-visible surfaces (offscreen). Then they'll be captured on each draw instead of being tied to the commit/submit rate from the browser UI like today. > It's not entirely clear to me how much of the object graph we want to move out of the browser process here. I think one key thought is that this will now be used for capturing general applications, it's no longer chrome-specific. It can use feedback signals from the App (from the browser for chrome) but it will also be used for android apps and whatever else chromeos could potentially run that are shown thru the display compositor. > 5. Throttling: It would be great, again for performance reasons, to have a reliable feedback mechanism as to how hard capture is hitting the GPU Gpu scheduler should give us some good ways to get feedback in the gpu process where display compositor and this stack would live. > As part of this migration, it would be great to have dedicated support for offscreen tab rendering and capture; If everything is done in terms of surfaces this should be very reasonable and come naturally I think. I suspect the hacks in aura are because copy requests are going thru the UI compositor's layer tree. If in terms of surfaces instead, in the display compositor, there's no need to ensure things are part of the "scene" but hidden and such.
Mar 1 2017
> > As part of this migration, it would be great to have dedicated support for offscreen tab rendering and capture;
> If everything is done in terms of surfaces, this should be very reasonable and come naturally, I think. I suspect the hacks in Aura are because copy requests are going thru the UI compositor's layer tree. If done in terms of surfaces instead, in the display compositor, there's no need to ensure things are part of the "scene" but hidden, and such.

We currently have multiple display compositors, so we'd need to be able to pick one to draw the offscreen tab, and to choose when to draw it independently vs. as part of drawing an onscreen frame. I agree that doing this based around surfaces should allow for a simpler mechanism, though.
Apr 19 2017

May 3 2017

May 3 2017
Taking ownership of prototype and initial design.

It seems like we've settled on the general approach discussed in the prior comments on this bug. We had several face-to-face meetings and a long e-mail thread discussion among team members. To summarize:

First, there was an idea to introduce a separate "capture cc::Display," cc::OutputSurface, cc::DisplayScheduler, etc., that would run alongside the normal cc::Display. This may still be the right long-term solution, but it is not the one we are going to pursue for now (mostly due to logistics). There are numerous new concepts that would have to be accounted for to support it, such as:

1) Sharing the results of RenderPasses across multiple Displays' draws, regardless of which Display executed the draw of the original RenderPass, to prevent redundant compositing.

2) Sharing a single BeginFrameSource, and resolving different desired framerates.

The goal with capture is that, in the Mus world, we want the VIZ process to capture any Surface, subject to certain framerate/resolution constraints and policy, and emit results to an external process (e.g., a video encoder running in a render process). Therefore, we're going back to an earlier idea: we will simply add a "Frame Subscriber" concept to Surfaces, with the subscriber also running in the VIZ process alongside everything else. Subscribers would be registered on a Surface, and SurfaceAggregator::Aggregate() would:

1. Prewalk the tree to identify the damage rect (as it does now).

2. Query any subscribers it encounters to determine whether they want a texture containing a copy of the re-drawn region.

3. If so, add CopyOutputRequests to the root RenderPass (to have DirectRenderer execute them during Draw()). The copy result would be passed back to the subscriber as a texture mailbox handle. (See the sketch after this comment.)

Downstream of this, we would execute the scaling and I420/YUV conversion, and handle all the other cross-process integration concerns. Basically, we would take the relevant parts of the code from DelegatedFrameHost, content::GLHelper, and media/capture/content, and consolidate them into a simpler chunk of code that runs in the VIZ process.

All of the above is focused on changes in the compositor, but there is still design work to do to figure out how to integrate this with new Mus UI concepts and cross-process APIs. I'll begin prototyping on my desktop to figure this out, and then write up a proposal design doc that encompasses all changes at all layers/boundaries: 1) compositing/surface changes; 2) Mus APIs; 3) integration with the media::VideoCapture stack.
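A sketch of the "Frame Subscriber" hook, illustrating steps 1-3 above. The interface and function names here are hypothetical, for illustration only, not a landed API:

  #include "base/bind.h"
  #include "base/time/time.h"
  #include "cc/output/copy_output_request.h"
  #include "cc/output/copy_output_result.h"
  #include "cc/quads/render_pass.h"
  #include "ui/gfx/geometry/rect.h"

  class FrameSubscriber {
   public:
    virtual ~FrameSubscriber() {}

    // Step 2: called during the Aggregate() prewalk; the subscriber makes
    // its "sampling decision" (e.g., based on desired framerate and the
    // damage rect) and returns true if it wants a copy of this frame.
    virtual bool ShouldCaptureFrame(const gfx::Rect& damage_rect,
                                    base::TimeTicks frame_time) = 0;

    // Receives the copy result (a texture mailbox) for downstream scaling
    // and I420/YUV conversion.
    virtual void DidReceiveFrame(
        std::unique_ptr<cc::CopyOutputResult> result) = 0;
  };

  // Step 3: if the subscriber wants the frame, attach a CopyOutputRequest
  // to the root RenderPass so DirectRenderer executes it during Draw().
  void MaybeRequestCopyForSubscriber(FrameSubscriber* subscriber,
                                     const gfx::Rect& damage_rect,
                                     base::TimeTicks frame_time,
                                     cc::RenderPass* root_pass) {
    if (!subscriber->ShouldCaptureFrame(damage_rect, frame_time))
      return;
    root_pass->copy_requests.push_back(cc::CopyOutputRequest::CreateRequest(
        base::Bind(&FrameSubscriber::DidReceiveFrame,
                   base::Unretained(subscriber))));
  }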
May 3 2017

May 3 2017

May 4 2017

May 23 2017

Jun 8 2017

Jun 8 2017
Updates:

1. For interfacing (Mojo), I wanted to make sure that both the Mus and non-Mus code could share the same capturer implementation (i.e., driven either from the window service or from the likes of DelegatedFrameHost). I'm currently working on Mojo interfaces that define the APIs needed to capture video of a CompositorFrameSink, and to switch to capturing different CompositorFrameSinks during operation (e.g., as RenderWidgetHostViews are created and destroyed during tab navigation).

2. I've reviewed the existing implementation of capture, GPU readback, the YUV conversion, etc., and talked with a few stakeholders to get the "tribal knowledge" behind these. I've also discussed performance concerns with Hubbe, as well as color space handling (and possible future support for HDR).

3. I'm going to begin prototyping the expansion of cc::CopyOutputRequest to produce YUV textures. This will involve pulling out some of the code from viz::GLHelper such that the YUV conversion code paths are woven into cc::GLRenderer's draw of a render pass. This should greatly improve performance and GPU memory usage, since we will no longer need to make a full RGB texture copy after a render pass like we do now. Instead, we will be able to read from the original texture directly and do the YUV conversion from that. More details as I discover them... :-) (A sketch of the API direction follows this comment.)

4. Assuming #3 pans out, I'm almost certain the remaining "glue" will fall into place nicely. I'll be in a good position to write a design doc for wider-audience review. Ideas/approaches have changed somewhat since c#8.
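A hypothetical sketch of the cc::CopyOutputRequest expansion described in item 3. The enum values and factory signature below are assumptions about the shape of the change, not a final API:

  #include <memory>

  #include "base/callback.h"

  namespace cc {

  class CopyOutputResult;

  class CopyOutputRequest {
   public:
    enum class ResultFormat {
      RGBA_TEXTURE,  // Existing behavior: one full RGBA texture copy.
      I420_PLANES,   // New: Y/U/V planes, converted during the pass draw
                     // by reading the source texture directly.
    };

    using ResultCallback =
        base::Callback<void(std::unique_ptr<CopyOutputResult>)>;

    // The requester picks the format up front; GLRenderer uses it to
    // decide whether to weave YUV conversion into its render pass draw.
    static std::unique_ptr<CopyOutputRequest> CreateRequest(
        ResultFormat format,
        const ResultCallback& callback);
  };

  }  // namespace cc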
Jun 8 2017

Jun 8 2017

Jun 12 2017

Jun 13 2017

Jun 19 2017

Jun 19 2017
Update: Was OOO last week. In the few days prior to that, I made a ton of progress on prototyping. The only thing left to check is how to move the YUV conversion into the direct renderer for the CopyOutputRequests. I've already figured out a simple way to extend the existing CopyOutputRequest API to request YUV texture results.
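A hypothetical caller-side example, building on the ResultFormat sketch from the Jun 8 comment. The set_area()/SetScaleRatio() calls and the factory signature are assumptions here, shown only to illustrate how a subscriber might request scaled I420 output in one draw:

  void OnI420PlanesReady(std::unique_ptr<cc::CopyOutputResult> result) {
    // Feed the three planes into the media::VideoCapture pipeline.
  }

  void RequestVideoFrame(cc::Surface* surface,
                         const gfx::Rect& contents_rect,
                         const gfx::Size& source_size,
                         const gfx::Size& capture_size) {
    auto request = cc::CopyOutputRequest::CreateRequest(
        cc::CopyOutputRequest::ResultFormat::I420_PLANES,
        base::Bind(&OnI420PlanesReady));
    // Restrict readback to the tab-contents sub-rect...
    request->set_area(contents_rect);
    // ...and let the renderer scale during the same draw, so no separate
    // scaling pass (or extra GPU memory) is needed downstream.
    request->SetScaleRatio(source_size, capture_size);
    surface->RequestCopyOfOutput(std::move(request));
  }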
Nov 10 2017

Jan 17 2018

Jan 25 2018
Closing out this old bug. Since we designed the thing and the implementation at this point is mostly landed, I'd say we achieved the goal stated in the Summary. :)