Support GPU hardware in headless mode

Issue description: We should make it possible to use a hardware GPU (as opposed to the software SwiftShader) in headless mode.
|
|
Comment 1 by skyos...@chromium.org, Sep 14 2017
Mar 12 2018
Use case: we need to render approximately 500,000 WebGL scenes at various resolutions. These renders take over 20 seconds each in SwiftShader, but on a GPU they are completely dominated by loading time. Also, some of the shaders in some of the scenes fall back to lower quality settings on SwiftShader than on GPUs.
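For a sense of scale, some back-of-the-envelope arithmetic on that workload (the 20 s/scene figure comes from the comment above; the 1 s/scene GPU figure is purely an illustrative assumption):

```python
# Rough total compute for 500,000 scene renders.
# 20 s/scene: the SwiftShader figure quoted above.
# 1 s/scene: illustrative assumption for a loading-dominated GPU render.
SCENES = 500_000
SECONDS_PER_DAY = 86_400

swiftshader_days = SCENES * 20 / SECONDS_PER_DAY
gpu_days = SCENES * 1 / SECONDS_PER_DAY

print(f"SwiftShader: ~{swiftshader_days:.0f} days of compute")  # ~116 days
print(f"Hardware GPU: ~{gpu_days:.1f} days of compute")         # ~5.8 days
```

Even with heavy parallelism across machines, a roughly 20x gap of this size dominates the cost of the batch.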
Mar 13 2018
It would enable using Chromium as an offscreen WebGL GPU renderer for video rendering; for instance, it could enable live-streaming WebGL games. That is something that, IMO, does not exist yet.
Mar 13 2018
Re #3 "Also some of the shaders in some of the scenes use lower quality settings on SwiftShader than on GPU's." We intend for SwiftShader to be high quality and feature complete. Please file a bug at https://bugs.chromium.org/p/swiftshader if the output is not visually identical to rendering with a GPU.
Mar 13 2018
IIRC, the problem isn't quality, it's that SwiftShader isn't fast enough. We turn off features in our renderer based on extension support, browser fingerprint and timing information. The goal is a solid 30fps on most mobile browsers. That's not necessary for offscreen renders, but we want to use the same code as we do in our online renderer.
Apr 23 2018
Use cases: Chromium within gaming engines for HUD integrations, VR applications, as well as within streaming or broadcast video graphics. All of these applications require real-time performance of 60 fps+ using off-screen rendering, such that the output can be added to an upstream video/graphics pipeline.
Apr 24 2018
+1, SwiftShader isn't fast enough for complex scenes. E.g. a convolution shader is very slow in SwiftShader (minutes to render) vs. seconds on a GPU.
May 20 2018
I'll put a strong +1 on this: using the GPU for WebGL in headless Chrome was our primary use case. We're going to have to rethink our architecture.
May 22 2018
Any feedback on this? Is it likely to be implemented any time soon? Thanks.
Jun 4 2018

Jun 25 2018

Jun 26 2018
+1, SwiftShader isn't fast enough for the majority of our test cases.
Jun 28 2018

Jun 28 2018
Have received feedback from Amazon and Zynga over the past week that they need GPU acceleration support for WebGL in headless Chrome. Upgrading to P2.
Jul 2

Jul 4
skyostil@ pointed out in an email conversation that NVIDIA's drivers support EGL for OpenGL context creation without an X server: https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/ . If this code path were added to headless Chromium on Linux, would it satisfy customers' use cases?
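If that init path were added, one could imagine selecting it through Chromium's existing GL backend switch. A hypothetical invocation follows; note that --use-gl=egl is a real switch, but headless currently forces the software path, so this is a sketch of the desired end state, not something that works today:

```shell
# Hypothetical, once an EGL-without-X init path exists in headless:
# point the GL backend at EGL instead of the software rasterizer.
chrome --headless --use-gl=egl --disable-software-rasterizer \
       --remote-debugging-port=9222 about:blank
```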
Jul 26
I'm running instances on Amazon EC2 which seem to support NVIDIA GPUs, so I'd be over the moon with anything that allowed headless Chrome to use 3D acceleration. Right now the only solution is to throw massive CPU power at the SwiftShader renderer, to try to make up for the lack of GPU 3D acceleration for WebGL.
Jul 27
duke, we just run an Xdummy X server on EC2 and run Chrome without the --headless switch. Puppeteer works fine without headless; whatever you're using might too.
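For reference, that workaround looks roughly like this (a sketch; the display number and the config file name are assumptions, and real GPU acceleration requires an xorg.conf using the vendor driver rather than a plain virtual framebuffer):

```shell
# Sketch of the Xdummy-style workaround: start an X server on a spare
# display, then run full (non-headless) Chrome against it.
# xorg-dummy.conf is a placeholder name for your Xdummy/driver config.
Xorg :99 -config xorg-dummy.conf &
export DISPLAY=:99
google-chrome --remote-debugging-port=9222 &
```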
Aug 4
>> we just run an Xdummy X server on EC2 and run Chrome without the --headless switch.

Thanks for the tip, but I need functionality that is only available in headless mode.
Aug 5
Any feedback as to when this is likely to get some movement? Sorry to harass but right now I'm doing some rendering that is pretty much real time with hardware GPU and taking 55 seconds per frame with headless Chrome. I'd love to see a bump in that speed. :-)
Aug 6
>> we just run an Xdummy X server on EC2 and run Chrome without the --headless switch.

Would you mind sharing your setup? I'm trying the same (with Xorg), but it crashes.
Aug 20

Aug 21
Eric, are there any code pointers you could share with Ken? He was open to looking into it and assessing the complexity of the work, but he needs to better understand where compositing of surfaces is wired up in headless, and so on.
Aug 22
Hm, Sami may have more insights into the GPU hookup; here are some pointers I can give. Generally, we currently use runtime conditionals to avoid initializing the GPU in a few places right now, see e.g. [1]. We're supporting two different build configurations for headless right now. They differ in the graphics backend they use:

(a) Bundled with Chrome (--headless). This uses the default build args for the platform. On Linux and Windows, this uses aura (with the default backend) and a HeadlessWindowTreeHost [2]. On Mac, it isn't really fully headless and creates a hidden view [3].

(b) Standalone headless_shell build using //build/args/headless.gn. This is only supported on Linux and uses ozone + its "headless" platform underneath aura, to avoid any dependencies on a window server.

[1] https://cs.chromium.org/search/?q=kheadless+f:(gpu%7Cviz%7Cbrowser_main_loop)
[2] https://cs.chromium.org/chromium/src/headless/lib/browser/headless_browser_impl_aura.cc?l=39
[3] https://cs.chromium.org/chromium/src/headless/lib/browser/headless_browser_impl_mac.mm?l=75
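For completeness, configuration (b) is produced by importing that args file when generating the build; a typical invocation (assuming a standard Chromium checkout, run from src/) looks like:

```shell
# Generate and build the standalone headless_shell (configuration (b)).
gn gen out/Headless --args='import("//build/args/headless.gn")'
ninja -C out/Headless headless_shell
```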
Aug 22
Thanks for the pointers Eric. Based on your reading of customers' feedback above, it sounds like they're using build configuration (a), right? (Full Chrome, run with --headless?)
Aug 23
Not sure. I think ideally we'd want to support GPUs in environments without an X server, and AFAIU (a) links against X server libs, so it requires one to be around in some form. Folks running headless Chrome in the cloud like #18 would probably use (b). But I don't know how feasible it is to add the init path mentioned in #17?
Aug 23
It's all feasible. We need to know which platforms to prioritize. I've emailed some of the commenters above for their feedback.
Aug 24
Platform-wise, we are interested in Linux with --headless; host GPU hardware would be Intel-based (i915).
Aug 24
Similar to #18, we're running in GCE or GKE with a GPU attached. Linux.
Aug 26
I'd like to use headless Chrome with GPU on Linux on Amazon EC2 and Google Compute Engine GPU instances.

Amazon: EC2 P2 instances are intended for general-purpose GPU compute applications. Features:
* High frequency Intel Xeon E5-2686 v4 (Broadwell) processors
* High-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12 GiB of GPU memory
* Supports GPUDirect™ for peer-to-peer GPU communications

Google: https://cloud.google.com/compute/docs/gpus/

Thanks!
Aug 31
Based on feedback from one customer, they would need the hardware-accelerated video encoding framework (vaapi) on Linux working too, so that WebGL can be fed into a MediaRecorder and remain hardware-accelerated. They also mentioned they would like window.requestAnimationFrame – which is usually tied to the display's refresh rate – to either fire as quickly as possible, or to fire at a target fixed framerate.
Aug 31
Feedback from another customer: Currently running on Windows, but looking to move to Linux. Currently using an X server there but would like to move to --headless.
Aug 31
Would like hardware WebGL with BeginFrame support.
Sep 1
#32: BeginFrameControl should allow controlling the animation frame rate in theory. It might not work reliably with surfaces other than renderer frames though. See bit.ly/headless-rendering for details.
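For reference, the BeginFrameControl flow over the DevTools protocol looks roughly like the following (field values are illustrative, and the browser must be launched with BeginFrame control enabled; see the linked doc for the exact switches):

```json
{"id": 1, "method": "HeadlessExperimental.enable"}
{"id": 2, "method": "HeadlessExperimental.beginFrame",
 "params": {"interval": 16.667,
            "screenshot": {"format": "png"}}}
```

Each beginFrame call drives one compositor frame, which is what lets the client pin the effective animation frame rate instead of relying on a display's vsync.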
Sep 19

Sep 19
We could enable ozone-gbm for headless_shell. In this case, we could test all the hardware acceleration features, but this is not actually headless because it launches a chromeless webview. I've only tested it on an Intel GPU, and it works well.
Sep 23
What we need for our use case:
* access to the GPU memory where it's rendered
* a way to control when the frame is rendered, so we can control the FPS
* Windows support

At the moment we're using cef-mixer (https://github.com/daktronics/cef-mixer) to get the OSR performance we require. The cef-mixer PR hits all our requirements, but the frame control can be a bit annoying to use. Save for accessing the GPU memory, which is DirectX-centric, the rest should be cross-platform. Perhaps it would be a good start for solving this issue.
Sep 25
>> access to the gpu memory where it's rendered
Do you want to dump the GPU memory as a file?

>> a way to control when the frame is rendered, so we can control the FPS
Do you want to speed up/down the FPS?

>> windows support
Do you want to see a webview on the screen?
Sep 25
>> Do you want to dump the GPU memory as a file?
We would like access to the memory in the lowest-latency way possible, such as a GPU texture handle. That way we can use it for many different purposes, like converting to YUV, compositing with other renders, rendering to a custom display, or sending over the network.

>> Do you want to speed up/down the FPS?
Yes.

>> Do you want to see a webview on the screen?
No. Apologies, I meant support for the Microsoft Windows OS.
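As an aside on the YUV conversion step mentioned above: the per-pixel math (using BT.601 full-range coefficients, one common choice) is simple enough to sketch, though in practice it would run as a GPU shader over the whole texture:

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to BT.601 full-range YUV.

    Scalar illustration only; a real pipeline does this on the GPU
    (or in a library like libyuv) across the entire frame.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)

print(rgb_to_yuv(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_yuv(0, 0, 0))        # black -> (0, 128, 128)
```

Whether full-range or limited-range (and BT.601 vs. BT.709) applies depends on the downstream video pipeline.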
Jan 3
Are there any updates on this? We want to use puppeteer for intensive WebGL rendering, but for now this doesn't seem possible with GPU acceleration except in headful mode.
Jan 3
#42: No updates - no engineers available to work on this at the moment. How do you aim to get the rendering results out of the headless Chrome instance? We want to make sure that all of the code paths are GPU-accelerated; doing a glReadPixels at the end of the frame will destroy performance. Thanks.
Jan 4
We just require a screenshot of the rendered scene; this shouldn't affect the performance.
Jan 7
Jan 19
Our requirement is similar to comments #18 and #30.