
Issue 835299

Starred by 3 users

Issue metadata

Status: Fixed
Closed: May 2018
EstimatedDays: ----
NextAction: ----
OS: Linux, Windows, Mac
Pri: 1
Type: Bug-Security


Security: Integer overflow in SwiftShader texture allocation

Project Member Reported by, Apr 20 2018

Issue description


There's a remotely triggerable memory corruption issue in SwiftShader, reachable from WebGL, resulting from an integer overflow.

In the GPU process, the sizes passed to texture creation functions are validated to ensure that they can't cause an overflow. However, SwiftShader separately rounds render-target sizes up to the next even size, which allows this validation to be bypassed.

(Note the additional +4 in the allocation below, which is also unsafe, since it's unexpected by the Chrome GPU process, but in practice shouldn't cause an issue.)

	void *Surface::allocateBuffer(int width, int height, int depth, int border, int samples, Format format)
	{
		// Render targets require 2x2 quads
		int width2 = (width + 1) & ~1;
		int height2 = (height + 1) & ~1;

		// FIXME: Unpacking byte4 to short4 in the sampler currently involves reading 8 bytes,
		// and stencil operations also read 8 bytes per four 8-bit stencil values,
		// so we have to allocate 4 extra bytes to avoid buffer overruns.
		return allocate(size(width2, height2, depth, border, samples, format) + 4);
	}

Size calculation takes place here:

unsigned int Surface::size(int width, int height, int depth, int border, int samples, Format format)
{
	width += 2 * border;
	height += 2 * border;

	// Dimensions rounded up to multiples of 4, used for compressed formats
	int width4 = align(width, 4);
	int height4 = align(height, 4);

	// ... snip ...

	return bytes(format) * width * height * depth * samples;
}

The maximum value for bytes(format) is 16, with something like GL_RGBA32F, and samples is 1.

We can't cause this value to overflow if we have to provide the texture contents, since allocating a sufficiently large Float32Array would exceed the renderer's memory limits; but we can use glTexStorage3D, which allocates texture storage without supplying data, to trigger the overflow.

We need to meet the following conditions:

1 <= width <= 0x2000
1 <= height <= 0x2000
1 <= depth <= 0x2000

16 * width * height * depth <= 0x100000000ull;

If these conditions are met, and we can also produce values such that:

16 * ((width + 1)  & ~1) * ((height + 1)  & ~1) * depth >= 0x100000000ull;

Then we'll get an integer overflow during size calculation, and end up with a small buffer for a large texture.

If we use the glTexSubImage3D path to initialize the texture, this will zero out the size Chrome expects for the texture (~4 GiB) starting at the (260-byte) allocation, which may make exploitation awkward; but especially in a context like the GPU process, with multiple threads interacting, it's likely possible to exploit this issue. There may also be alternative paths which avoid the wild memset, but I'm reporting now so that work on a fix can start.

Note that it is possible for an attacker to force use of the SwiftShader backend for WebGL rendering simply by crashing the GPU process a few times (for a platform-dependent value of 'few'). The attached PoC uses 4 domains and cycles between them to trigger 3 (hardware-accelerated) GPU process crashes due to OOM (on my workstation, at least), which will then be followed by the (software-accelerated) GPU process hitting this bug. Mileage may vary with different GPU drivers/OpenGL implementations.

Crashes with the PoC will be fairly random - whatever you'd expect for zeroing out your entire heap...

Thread 1 "chrome" received signal SIGSEGV, Segmentation fault.
0x00007fe697e94551 in egl::Image::loadImageData(egl::Context*, int, int, int, int, int, int, unsigned int, unsigned int, egl::Image::UnpackInfo const&, void const*) ()
   from src/out/non-asan/swiftshader/
(gdb) bt
#0  0x00007fe697e94551 in egl::Image::loadImageData(egl::Context*, int, int, int, int, int, int, unsigned int, unsigned int, egl::Image::UnpackInfo const&, void const*) ()
    at src/out/non-asan/swiftshader/
#1  0x0000000000000000 in  ()
(gdb) x/10i $pc
=> 0x7fe697e94551 <_ZN3egl5Image13loadImageDataEPNS_7ContextEiiiiiijjRKNS0_10UnpackInfoEPKv+9911>:      jmpq   *0x28(%rax)
   0x7fe697e94554 <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv>:        push   %rbp
   0x7fe697e94555 <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+1>:      push   %r15
   0x7fe697e94557 <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+3>:      push   %r14
   0x7fe697e94559 <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+5>:      push   %r13
   0x7fe697e9455b <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+7>:      push   %r12
   0x7fe697e9455d <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+9>:      push   %rbx
   0x7fe697e9455e <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+10>:     sub    $0x48,%rsp
   0x7fe697e94562 <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+14>:     mov    %r8d,0xc(%rsp)
   0x7fe697e94567 <_ZN12_GLOBAL__N_113LoadImageDataILNS_8DataTypeE1EEEviiiiiiiiiiPKvPv+19>:     test   %r9d,%r9d
(gdb) i r
rax            0x0      0
rbx            0x8814   34836
rcx            0x1      1
rdx            0x10     16
rsi            0x30b20f90860    3346332715104
rdi            0x30b2081f500    3346324911360
rbp            0x1406   0x1406
rsp            0x7ffdafd862c8   0x7ffdafd862c8
r8             0xfffffffffffffff0       -16
r9             0x1      1
r10            0x75     117
r11            0x30b2088ee90    3346325368464
r12            0xc2     194
r13            0x2a3    675
r14            0x8814   34836
r15            0x0      0
rip            0x7fe697e94551   0x7fe697e94551 <egl::Image::loadImageData(egl::Context*, int, int, int, int, int, int, unsigned int, unsigned int, egl::Image::UnpackInfo const&, void const*)+9911>
eflags         0x10202  [ IF RF ]
cs             0x33     51
ss             0x2b     43
ds             0x0      0
es             0x0      0
fs             0x0      0
gs             0x0      0

See crash-id d0573792cf03341d for a crash on the current stable branch.

To test using the attached PoC, either run chrome with --disable-gpu to force software rendering, or create 4 aliases to localhost,, and in your /etc/hosts file and run ./ <port_number> and point your browser to<port_number>.

This bug is subject to a 90 day disclosure deadline. After 90 days elapse
or a patch has been made broadly available, the bug report will become
visible to the public.

Chrome Version: [66.0.3359.117] + [stable]
Operating System: [linux]


Type of crash: GPU process
Crash State: d0573792cf03341d
Client ID (if relevant): [see link above]
Attachments: 3 files (901 bytes, 636 bytes, 272 bytes)

Comment 1 by, Apr 20 2018

Mark, thanks for the report!

I wonder why existing fuzz targets haven't found that, as we seem to cover the vulnerable code:

CC'd Marty and Jonathan in case they're interested, + GPU folks of course.

Comment 2 by, Apr 20 2018

There is another integer overflow though: issue 835306.
Probably just a case of needing a very particular set of three values to get through, since you need to be "small enough" to survive the command buffer validation checks and "large enough" to trigger the bug when the rounding up happens. I'd think even a fairly smart fuzzer would struggle to identify such a narrow portion of the state space, since it doesn't look (pathwise or such) any different from any other inputs.

Comment 4 by, Apr 20 2018

Yeah, good point regarding strict range of values needed to trigger that...

Btw, have you found it manually?
Yep, I found it looking through the code.

Comment 6 by, Apr 20 2018

Labels: Pri-2
Status: Assigned (was: Unconfirmed)
Thanks for reporting this! That's a really nice find for manual inspection.

Note that our tiny SwiftShader team generally operates on the garbage-in-garbage-out principle, to the extent allowed by the OpenGL specification. :) It's up to Chrome's GPU command buffer to make it safe for web use, which isn't unreasonable given that GPU drivers also have limitations and bugs which aren't under our direct control. Anyway, it's very arguable whether attempting to allocate a ~4 GB texture is asking for trouble or a legitimate thing. I'll treat it as the latter and fix it by producing a GL_OUT_OF_MEMORY error, but note that GPU drivers may have similar issues, and our only defense against those is the command buffer validation.

Comment 7 by, Apr 20 2018

At the command buffer level, we do compute the texture size, and validate using safe math, e.g. via GLES2Util::ComputeImageDataSizesES3

Note that I would generally expect drivers to set a GL_MAX_TEXTURE_SIZE / GL_MAX_3D_TEXTURE_SIZE that doesn't cause overflow in the driver when squared/cubed. Is that unreasonable for SwiftShader?
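For illustration, overflow-checked size computation along these lines can be sketched as follows (our own simplified code, not the actual GLES2Util::ComputeImageDataSizesES3 implementation):

```cpp
#include <cstdint>
#include <limits>

// Our own sketch of overflow-checked size math, in the spirit of the
// command buffer's safe-math validation; not Chrome's actual code.
bool checkedMul(uint32_t a, uint32_t b, uint32_t* out) {
    if (b != 0 && a > std::numeric_limits<uint32_t>::max() / b)
        return false;  // a * b would not fit in 32 bits
    *out = a * b;
    return true;
}

// Returns false (i.e. rejects the request) on any intermediate overflow.
bool computeImageSize(uint32_t width, uint32_t height, uint32_t depth,
                      uint32_t bytesPerPixel, uint32_t* size) {
    uint32_t wh, whd;
    return checkedMul(width, height, &wh) &&
           checkedMul(wh, depth, &whd) &&
           checkedMul(whd, bytesPerPixel, size);
}
```

Note that this checks the raw dimensions; it cannot catch the rounded-up sizes SwiftShader computes internally, which is the crux of this bug.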

Comment 8 by, Apr 20 2018

Status: Started (was: Assigned)
No, that is totally reasonable. I wasn't aware that there's a separate GL enum for the 3D maximum size. This makes it a trivial fix. Thanks!

Comment 9 by, Apr 20 2018

Actually, most drivers report a maximum size of 2048, which will cause 32-bit overflow when cubed:

I don't really want to lower it down to 512 to be able to also support RGBA32F. It's quite reasonable for someone to request e.g. a 1024x1024x8 texture.

Since Surface::allocateBuffer() is called lazily on the first lock, and we have internal/external buffer pairs in case a format isn't supported 'natively', it's not easy to do the validation at that point and still inform the application about it. So instead I think we should compute the required size with 64-bit variables at texture image specification time, and produce a GL_OUT_OF_MEMORY if it's over a reasonable amount.

Our generated sampler code computes 32-bit offsets, so there's no point in trying to allow larger than 4 GB allocations even if technically Surface::allocateBuffer() itself could on a 64-bit platform, nor do I think any GPU supports this.
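A minimal sketch of that approach, using hypothetical names of our own (this is not SwiftShader's actual API): compute the fully padded size in 64 bits at image specification time and reject anything not addressable by a signed 32-bit offset.

```cpp
#include <cstdint>

constexpr uint32_t GL_NO_ERROR_CODE = 0;
constexpr uint32_t GL_OUT_OF_MEMORY_CODE = 0x0505;

// Hypothetical validation at texture image specification time: mirror the
// even-size rounding and +4 padding allocateBuffer() will apply later, but
// in 64-bit arithmetic, so the check cannot itself overflow.
uint32_t validateTextureSize(int width, int height, int depth,
                             int border, uint64_t bytesPerPixel) {
    uint64_t w2 = ((uint64_t)width + 2 * border + 1) & ~1ull;   // round up to even
    uint64_t h2 = ((uint64_t)height + 2 * border + 1) & ~1ull;
    uint64_t size = bytesPerPixel * w2 * h2 * (uint64_t)depth + 4;

    // Sampler offsets are signed 32-bit, so refuse anything >= 2 GiB
    // regardless of how much memory is actually available.
    return (size >= (1ull << 31)) ? GL_OUT_OF_MEMORY_CODE : GL_NO_ERROR_CODE;
}
```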

Comment 10 by, Apr 20 2018

Components: Internals>GPU

Comment 11 by, Apr 20 2018

Components: -Internals>GPU Internals>GPU>SwiftShader

Comment 12 by, Apr 21 2018

Labels: Security_Severity-Medium Security_Impact-Stable OS-Linux OS-Mac OS-Windows

Comment 13 by, Apr 21 2018

Labels: M-66

Comment 14 by, Apr 21 2018

Labels: -Pri-2 Pri-1

Comment 15 by, May 17 2018

The following revision refers to this bug:

commit efdf103a3db442f2c27423aa5db7e6803f275ae6
Author: Nicolas Capens <>
Date: Thu May 17 21:03:02 2018

Refactor maximum texture dimensions.

OpenGL has separate implementation-defined texture size limits, for
3D textures and array textures. For now just give them the same value.

 Bug chromium:835299 

Change-Id: Ifaf537511f016e21992388f56598d5ec12a393b8
Tested-by: Nicolas Capens <>
Reviewed-by: Alexis Hétu <>



Comment 16 by, May 17 2018

The following revision refers to this bug:

commit 419a5806a9ff678fba83f6df6783b87d013d3581
Author: Nicolas Capens <>
Date: Thu May 17 21:03:27 2018

Refactor surface buffer size calculation.

This eliminates the duplication between pitchB() and size(), and
ensures that the latter is the full allocation size.

 Bug chromium:835299 

Change-Id: Icf555ad497fb3b92fd00e9a3e6ced6810b2d310d
Tested-by: Nicolas Capens <>
Reviewed-by: Alexis Hétu <>



Comment 17 by, May 17 2018

The following revision refers to this bug:

commit 607771b444748a7c1e5a6e4f8220053559870728
Author: Nicolas Capens <>
Date: Thu May 17 21:14:52 2018

Prevent 32-bit numeric overflow on image allocation.

We assume the pixels of an image can be addressed with a signed 32-bit
offset, including any padding. For a 3D image it's possible to exceed
this without exceeding the per-dimension limits. Lowering the per-
dimension limit so the allocation is always less than 2 GiB makes them
unreasonably small, so instead we must check the total size.

Use 1 GiB as the soft limit in OpenGL.

 Bug chromium:835299 

Change-Id: I9c5184002c1710e3923b549f8c21e7f6a516e1c7
Reviewed-by: Nicolas Capens <>
Tested-by: Nicolas Capens <>



Comment 18 by, May 18 2018

The following revision refers to this bug:

commit 997bfdbc7f496a31e1675680d6b8ca098cd3884c
Author: Nicolas Capens <>
Date: Fri May 18 21:19:56 2018

Roll SwiftShader c8403ec..cbb80f5

BUG= chromium:835299 , chromium:825545



Change-Id: Ic37afa51eaa67f7a61902d2a3e2d8227065ae909
Reviewed-by: Alexis Hétu <>
Commit-Queue: Nicolas Capens <>
Cr-Commit-Position: refs/heads/master@{#560042}

Comment 19 by, May 18 2018

Status: Fixed (was: Started)

Comment 20 by, May 19 2018

Labels: -Restrict-View-SecurityTeam Restrict-View-SecurityNotify
Labels: -M-66 M-68
Labels: Release-0-M68

Comment 23 by, Aug 25

Labels: -Restrict-View-SecurityNotify allpublic
This bug has been closed for more than 14 weeks. Removing security view restrictions.

For more details visit - Your friendly Sheriffbot
Labels: CVE-2018-6174 CVE_description-missing
