context.drawImage() slows down massively when drawing a few large images
Reported by ja...@vectormagic.com, Jan 23 2017
Issue description

UserAgent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36

Steps to reproduce the problem:
To reproduce, go to: https://clippingmagic.com/images/22790420/edit/peobgkl5j8a5s1so8ntk2185agh403na8gric6g4rle0jjscuik2
That link should open up our background-removal app with a test image of a purse on a table. It will have two view panes: the left shows the original image along with some red and green highlighter markings, and the right shows the same image but with the background removed and only the purse visible. The default tool is the hand tool, which lets you click and drag on either view pane to pan the image in the view.
To reproduce the bug, click and drag on the image; the panning operation will be very choppy and slow.
I have reproduced this on three different computers, all running Windows 10 and Chrome version 55.0.2883.87 m. The problem does not manifest on older versions of Chrome (I believe I tested version 50 or 51 and it worked fine) on Windows 10, nor does it manifest on the latest versions of Firefox or Edge for Windows. I have also tested the latest versions of Firefox, Safari, and Chrome on Mac and the problem does not appear.
The sample image provided in that link is 8 megapixels. There are two view-pane canvas elements for on-screen display. On the left pane, three 8-megapixel canvas elements are drawn to the on-screen canvas element. On the right pane, a single 8-megapixel canvas element is drawn to the on-screen canvas element. The problem does not appear for smaller test images, and in my development environment I can make the problem go away by commenting out the drawing of all the layers except the original image and the result image. That is, the problem comes from drawing multiple large canvas elements to the same target canvas element.

What is the expected behavior?
It should pan the view pane at 30 to 60 fps.

What went wrong?
It is really slow and delivers about 1 to 2 fps.

Did this work before? Yes, in 50 or 51; I'm not 100% certain which one. It may have worked after those versions as well.

Does this work in other browsers? Yes

Chrome version: 55.0.2883.87  Channel: stable
OS Version: 10.0
Flash Version: Shockwave Flash 24.0 r0
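A minimal sketch of the drawing pattern described above (the three-layer count and rough canvas size come from this report and the comments; the element names, offsets, and view size are illustrative, not taken from the actual app):

// Offscreen layers, each roughly 8 megapixels, composited on every pan update.
const LAYER_COUNT = 3;
const WIDTH = 2896, HEIGHT = 2896;   // ~8.4 MP per layer

const layers: HTMLCanvasElement[] = [];
for (let i = 0; i < LAYER_COUNT; i++) {
  const layer = document.createElement("canvas");
  layer.width = WIDTH;
  layer.height = HEIGHT;
  layer.getContext("2d")!.fillRect(0, 0, WIDTH, HEIGHT); // stand-in for real layer content
  layers.push(layer);
}

const view = document.createElement("canvas");
view.width = 1200;
view.height = 800;
document.body.appendChild(view);
const ctx = view.getContext("2d")!;

// Simulated panning: redraw all layers with a new offset each frame.
let offset = 0;
function frame() {
  offset = (offset + 4) % 500;
  ctx.clearRect(0, 0, view.width, view.height);
  for (const layer of layers) {
    ctx.drawImage(layer, -offset, -offset); // this is the call that becomes slow
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);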
Jan 23 2017
Repro confirmed. Texture uploads seem abnormally slow. I am seeing GPU texture uploads taking 35 to 40ms for a 2896-by-2896 bitmap.
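A quick way to see the same thing from script (rough sketch only, not how the 35-40ms figure above was measured; the getImageData call is just there to force the pending GPU work to complete before reading the clock, and it adds its own readback cost):

// Rough per-call timing of drawImage on a 2D canvas context.
function timeDrawImage(src: HTMLCanvasElement, dst: CanvasRenderingContext2D): number {
  const start = performance.now();
  dst.drawImage(src, 0, 0);
  dst.getImageData(0, 0, 1, 1); // force a flush/readback so the draw is actually finished
  return performance.now() - start;
}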
Jan 23 2017
Is it possible to offer a bounty for this bug to be fixed promptly? Chrome is the #1 browser among our site's users and we'd like to get this solved for them as quickly as possible. Right now all we can do is to suggest switching to Firefox, and while we don't really have a dog in the browser fights, we'd rather not ask our users to change their habits on our behalf.
Jan 23 2017
10/4 Priority increased.
Feb 21 2017
There is a related bug on Android 7.0+ phones with Chromium prior to M45: canvas's ctx.drawImage causes the screen to get messed up.
It turns out there is a way to bypass it using a software fallback:
PassOwnPtr<ImageBufferSurface> HTMLCanvasElement::createImageBufferSurface(const IntSize& deviceSize, int* msaaSampleCount)
{
    VLOG(0) << __FUNCTION__ << " *** enter";
    OpacityMode opacityMode = !m_context || m_context->hasAlpha() ? NonOpaque : Opaque;
    *msaaSampleCount = 0;
    if (is3D()) {
        VLOG(0) << __FUNCTION__ << " *** is3D()";
        // If 3d, but the use of the canvas will be for non-accelerated content
        // (such as -webkit-canvas), then make a non-accelerated ImageBuffer.
        // This means copying the internal Image will require a pixel readback,
        // but that is unavoidable in this case.
        // FIXME: Actually, avoid setting m_accelerationDisabled at all when
        // doing GPU-based rasterization.
        if (m_accelerationDisabled)
            return adoptPtr(new UnacceleratedImageBufferSurface(deviceSize, opacityMode));
        return adoptPtr(new AcceleratedImageBufferSurface(deviceSize, opacityMode));
    }
    if (shouldAccelerate(deviceSize)) {
        VLOG(0) << __FUNCTION__ << " *** shouldAccelerate";
        if (document().settings())
            *msaaSampleCount = document().settings()->accelerated2dCanvasMSAASampleCount();
        OwnPtr<ImageBufferSurface> surface = adoptPtr(new Canvas2DImageBufferSurface(deviceSize, opacityMode, *msaaSampleCount));
        if (surface->isValid())
            return surface.release();
    }
    OwnPtr<RecordingImageBufferFallbackSurfaceFactory> surfaceFactory = adoptPtr(new UnacceleratedSurfaceFactory());
    if (shouldUseDisplayList(deviceSize)) {
        VLOG(0) << __FUNCTION__ << " *** shouldUseDisplayList";
        OwnPtr<ImageBufferSurface> surface = adoptPtr(new RecordingImageBufferSurface(deviceSize, surfaceFactory.release(), opacityMode));
        if (surface->isValid())
            return surface.release();
        surfaceFactory = adoptPtr(new UnacceleratedSurfaceFactory()); // recreate because previous one was released
    }
    VLOG(0) << __FUNCTION__ << " *** default createSurface";
    return surfaceFactory->createSurface(deviceSize, opacityMode);
}
I thought maybe the GPU-to-GPU sub-texture transfer mechanism is affected by Android 7.0's new Skia library version, or by GPU driver compatibility problems?
Feb 21 2017
My team encountered this issue in our application, which uses Openlayers 3. We noticed a dramatic reduction in tile rendering performance in Chrome 54 so I tested a few versions to narrow down where the regression may have occurred. In Chrome 53 and earlier, drawImage calls while panning the map were ~4% of the CPU trace. In Chrome 54 and later, drawImage was ~85% of the CPU trace. I also noticed clip calls jump from 4% to 25% of the trace in Chrome 50, so there may have been a performance regression there as well.
Apr 18 2017
Any progress on this? It is a huge deal for our customers, and something we'd be willing to pay to get fixed sooner rather than later. Our only work-around at this point is telling our customers to use Firefox or Edge, both of which don't have any problems with this.
Apr 18 2017
Hmm, I thought I had commented on this in the past with some additional information on how it affected us and our workarounds... Maybe I was mistaken (or it was the deleted comment). @ja...@vectormagic.com: Our most recent workaround was switching to WebGL. Luckily for us it was seamless, since the canvas lib we use (CreateJS) has a new branch for WebGL support (similar to how Pixi.js handles it). But I know this may not work for everyone, as not all libraries provide this feature (and not every user's machine will necessarily have WebGL support).
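For anyone else on CreateJS, the switch is roughly this (a sketch only: StageGL is the WebGL-backed stage in the EaselJS WebGL branch, and the exact constructor options and fallback behavior should be checked against the EaselJS docs; the canvas size and layer object below are illustrative):

// Assumes the EaselJS global from the WebGL-capable branch.
declare const createjs: any;

const layerCanvas = document.createElement("canvas"); // large offscreen layer, as in the bug
layerCanvas.width = 2896;
layerCanvas.height = 2896;

// Old, Context2D-backed stage (hits this bug):
//   const stage = new createjs.Stage("view-canvas");
// WebGL-backed stage used as the workaround:
const stage = new createjs.StageGL("view-canvas");
stage.addChild(new createjs.Bitmap(layerCanvas)); // same display-list API as Stage
createjs.Ticker.addEventListener("tick", stage);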
Apr 18 2017
WebGL is hard for us to switch to. We use the raw 2d canvas context, and the WebGL API is considerably different, especially regarding the drawing of bezier curves (if we were just compositing bitmaps, it would be a different story). I can't imagine this is all that complicated of a bug to fix, and it is clearly a regression bug, so we were hoping that it would be fixed promptly. If that is not the case, and we can expect this to remain broken for a long time, we'll need to actively direct our users to switch browsers, something we'd strongly prefer not to do.
Apr 24 2017
junov, is there a reason we still have to live with a "96 * 1024 * 1024" hard coded limit -- that I think is the root cause of the problem? I understand that there is probably a better way to permanently fix this -- but why can't we also get a 'quick fix' that increases that limit (dramatically) on systems that have the memory?
Apr 24 2017
The hard-coded cache limits will go away when we can coordinate GPU memory consumption across process boundaries. Work on this change to Chrome's GPU infrastructure has started. See issue 706456.
Apr 24 2017
The GPU infrastructure improvements might still take a long time to complete. Looking into a temporary mitigation: boosting the cache limit on non-low-end devices (i.e. machines with more than 512MB of physical memory).
Apr 24 2017
We would very much appreciate temporary mitigation. It can, of course, be dismissed as a corner case, but where it does apply, it completely destroys the user experience. And it is a regression bug, which should presumably get high priority, since companies and users have come to expect a certain level of performance.
Apr 24 2017
I'm not sure we should just increase the memory like in https://codereview.chromium.org/2836063003/. Really we consider devices in the 1~2GB range to be pretty low memory, and 512MB to be extremely low. vmpstr@ increased image cache limits, I think for 4GB+ devices. I'd be more comfortable following the same pattern here.
Apr 25 2017
Chromium has been stuck at 96MB (24 MP) for a long time (see issue 305617 from 2013). During this time frame, systems have become more and more powerful (with far more memory) and displays have grown much larger, and while other web browsers handle this just fine (even IE10), Chromium remains stuck way in the past at 24 MP. An HD screen is 2.0 MP. A 4k screen is 8.3 MP. An 8k screen is 33.1 MP. Back when the 96MB limit was first hard coded, it probably made sense given the standards of the day, but that limit is woefully inadequate for most systems made in the last four years. Instead of hard coding a 192MB limit for 'powerful' systems, please have the limit be a calculation based upon various factors. If I am on a powerful desktop system, why is there a low hard-coded limit? The limit should be so high that I will not run into it.
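For reference, the 96MB-to-24MP conversion is just 4 bytes per pixel (a quick arithmetic check, assuming uncompressed RGBA textures and keeping the same binary "mega" units):

// 96 MB / 4 bytes per RGBA pixel = 24 M pixels.
const cacheBytes = 96 * 1024 * 1024;
const pixels = cacheBytes / 4;            // 25,165,824 pixels
console.log(pixels / (1024 * 1024));      // 24 -> the "24 MP" figure above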
Apr 25 2017
@#15: Good point. For devices in the IsLowEndDevice() category, the limit should probably be even lower than 96MB. I will try something a bit less binary.
Apr 25 2017
junov, less binary: how about a formula based upon AmountOfPhysicalMemoryMB(), with a possible min/max to bound it to a range?
Apr 25 2017
In general, we prefer a small number of device categories rather than a continuous formula because it makes it easier to replicate bugs reported by users (you just need a machine that falls into the same category rather than an exact HW configuration match).
Apr 25 2017
junov/#19: That feels like one step forward and two steps back, and the concerns can easily be addressed. How about (1) keeping a formula for kMaxGaneshResourceCacheBytes, something like: bytes = 1024*1024 * min(2000, max(96, AmountOfPhysicalMemoryMB()/10)) [a very effective temporary workaround on powerful systems], as sketched below; (2) reporting that Ganesh cache size as part of something commonly included in bug reports (about://gpu? or something else); and (3) adding a "--MaxGaneshResourceCacheBytes=#" command-line option so that testers can run with the exact same Ganesh memory configuration as the person reporting the bug [this addresses the 'replicate bugs' concern]. Both issues resolved.
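Spelled out, the formula in (1) is a clamped fraction of physical RAM (sketch with the numbers suggested above; these are this comment's proposal, not anything Chromium actually ships):

// Proposed Ganesh resource cache budget: 10% of physical RAM, clamped to [96 MB, 2000 MB].
function proposedGaneshCacheBytes(physicalMemoryMB: number): number {
  const cacheMB = Math.min(2000, Math.max(96, physicalMemoryMB / 10));
  return cacheMB * 1024 * 1024;
}

// e.g. a 16 GB machine gets ~1.6 GB of cache; a 512 MB device stays at the 96 MB floor.
console.log(proposedGaneshCacheBytes(16384) / (1024 * 1024)); // 1638.4
console.log(proposedGaneshCacheBytes(512) / (1024 * 1024));   // 96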
Apr 25 2017
+cc vmiura, vmpstr WDYT?
Apr 25 2017
Are we talking about this limit: https://cs.chromium.org/chromium/src/content/renderer/gpu/render_widget_compositor.cc?l=471 (for the compositor image cache)? I think GPU discardable should alleviate a lot of these issues by allowing us to have larger limits in individual compositors. For the short-term solution, I think it makes sense to have a sliding value. Is the problem for machines between 1GB and 4GB, or is the 256MB limit not sufficient for >4GB machines as well? I'm a bit concerned about tying RAM to GPU memory in these calculations, and the 2GB limit proposed in #20 seems a bit large. +ericrk.
Apr 25 2017
Nevermind, I misunderstood which cache we're talking about. The rest of my comment still stands though.
Apr 25 2017
#21: I think in the short term the 3-tier model sounds OK, and longer term we can increase it with the discardable texture cache work. #20: This proposal is good in many ways, though as junov@ mentioned we try to bucket things into tiers. It's not just for debugging; having more write-once, run-everywhere behavior is generally good for web developers and their users. We also want to be thoughtful with users' memory, since users run many Chrome tabs and other apps at the same time. Sadly, we more often hear that we have to reduce memory than the opposite.
Apr 26 2017
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/c78b8bf07f4819342ee6d99796100cc62880718b

commit c78b8bf07f4819342ee6d99796100cc62880718b
Author: junov <junov@chromium.org>
Date: Wed Apr 26 16:16:55 2017

Set Ganesh cache limit based on system physical RAM

This is a temporary mitigation to alleviate web developer complaints that canvas rendering slows down with large working sets of images. It also reduces the cache size for very low end devices.

BUG=683994
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel

Review-Url: https://codereview.chromium.org/2836063003
Cr-Commit-Position: refs/heads/master@{#467338}

[modify] https://crrev.com/c78b8bf07f4819342ee6d99796100cc62880718b/gpu/skia_bindings/grcontext_for_gles2_interface.cc
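For readers skimming the thread: the commit message describes a tiered limit based on physical RAM rather than the continuous formula from #20. A rough sketch of that shape is below; the tier boundaries and sizes here are placeholders, not the values in the actual CL (those live in gpu/skia_bindings/grcontext_for_gles2_interface.cc).

// Illustrative only: choose a Ganesh resource cache budget from a small set of
// tiers based on physical RAM. All numbers below are placeholders.
function ganeshCacheLimitBytes(physicalMemoryMB: number): number {
  const MB = 1024 * 1024;
  if (physicalMemoryMB <= 512) return 32 * MB;   // placeholder "very low end" tier
  if (physicalMemoryMB < 4096) return 96 * MB;   // placeholder default tier
  return 256 * MB;                               // placeholder "lots of RAM" tier
}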
Apr 26 2017
ja...@vectormagic.com: Visit https://commondatastorage.googleapis.com/chromium-browser-snapshots/index.html?prefix=Win_x64/, click on any item after 467338, download "chrome-win32.zip", extract it into a temp folder, run chrome.exe, and test.
Apr 26 2017
Just tested, and it solved the issues that we were seeing. When can we expect this mitigation to reach end users of Chrome? That is, what version will this ship with?
Apr 28 2017
I think we should merge this, if possible, as it has high impact on developers. Am I right in understanding that it is not completely fixed? If so, once we get a merge decision I think the priority can be downgraded.
Apr 28 2017
Your change meets the bar and is auto-approved for M59. Please go ahead and merge the CL to branch 3071 manually. Please contact milestone owner if you have questions. Owners: amineer@(Android), cmasso@(iOS), gkihumba@(ChromeOS), Abdul Syed@(Desktop) For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
May 1 2017
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/5e293f6b0e7c1d0731e5b87e66417e2545248ba1

commit 5e293f6b0e7c1d0731e5b87e66417e2545248ba1
Author: Stephen Chenney <schenney@chromium.org>
Date: Mon May 01 19:45:27 2017

Set Ganesh cache limit based on system physical RAM

Merge to M-59

This is a temporary mitigation to alleviate web developer complaints that canvas rendering slows down with large working sets of images. It also reduces the cache size for very low end devices.

BUG=683994
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel

Review-Url: https://codereview.chromium.org/2836063003
Cr-Commit-Position: refs/heads/master@{#467338}
(cherry picked from commit c78b8bf07f4819342ee6d99796100cc62880718b)

Review-Url: https://codereview.chromium.org/2854623003
Cr-Commit-Position: refs/branch-heads/3071@{#333}
Cr-Branched-From: a106f0abbf69dad349d4aaf4bcc4f5d376dd2377-refs/heads/master@{#464641}

[modify] https://crrev.com/5e293f6b0e7c1d0731e5b87e66417e2545248ba1/gpu/skia_bindings/grcontext_for_gles2_interface.cc
May 1 2017
Leaving open in the hopes of a better fix, as per the "temporary mitigation" remark in the change.
Jul 25
Sep 10
This issue has come back in a major way with version 69. Our customers are reporting seriously degraded performance, and weird buggy behavior where canvas elements completely disappear or turn black. This is a significant regression.
Jan 17
We're also seeing this on Electron v4 (Chromium v69.0.3497.106). It seems like drawImage is taking a long time, and in many cases there's a major GC at the end of it. You need a certain number of canvases with 2D contexts for this to happen (until some limit is hit, it seems); in my particular case, it happens after creating three 3000x2000 canvases. Hope this helps.
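In case it helps triage, the trigger described here boils down to roughly this (the canvas count and size come from this comment; the fill is illustrative):

// Creating a few large 2D canvases and touching their contexts is enough
// to push past the texture budget described earlier in this thread.
const canvases: HTMLCanvasElement[] = [];
for (let i = 0; i < 3; i++) {
  const c = document.createElement("canvas");
  c.width = 3000;
  c.height = 2000;                       // 3 canvases of 6 MP, ~24 MB of RGBA each
  const ctx = c.getContext("2d")!;
  ctx.fillStyle = "#808080";
  ctx.fillRect(0, 0, c.width, c.height); // stand-in for real content
  canvases.push(c);
}
// Subsequent drawImage calls from these canvases are where the slowdown shows up.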