fix 64 to 32 bit implicit conversions in GPU command buffer
Issue description
The following warnings are generated when compiling a 64-bit build with -Wshorten-64-to-32:
In file included from ../../gpu/command_buffer/client/transfer_buffer.cc:7:
In file included from ../..\gpu/command_buffer/client/transfer_buffer.h:17:
../..\gpu/command_buffer/client/ring_buffer.h(93,42): warning: implicit conversion loses integer precision: 'long long' to 'RingBuffer::Offset' (aka 'unsigned int') [-Wshorten-64-to-32]
return static_cast<int8_t*>(pointer) - static_cast<int8_t*>(base_);
~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gpu/command_buffer/client/transfer_buffer.cc(50,26): warning: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long long') to 'unsigned int' [-Wshorten-64-to-32]
default_buffer_size_ = base::bits::Align(default_buffer_size, alignment);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gpu/command_buffer/client/transfer_buffer.cc(51,22): warning: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long long') to 'unsigned int' [-Wshorten-64-to-32]
min_buffer_size_ = base::bits::Align(min_buffer_size, alignment);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gpu/command_buffer/client/transfer_buffer.cc(52,22): warning: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long long') to 'unsigned int' [-Wshorten-64-to-32]
max_buffer_size_ = base::bits::Align(max_buffer_size, alignment);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gpu/command_buffer/client/transfer_buffer.cc(125,24): warning: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long long') to 'unsigned int' [-Wshorten-64-to-32]
max_buffer_size_ = base::bits::Align(size / 2, alignment_);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gpu/command_buffer/client/transfer_buffer.cc(144,55): warning: implicit conversion loses integer precision: 'size_t' (aka 'unsigned long long') to 'unsigned int' [-Wshorten-64-to-32]
unsigned int current_size = HaveBuffer() ? buffer_->size() : 0;
~~~~~~~~~~~~ ~~~~~~~~~^~~~~~
6 warnings generated.
In file included from ../../gpu/command_buffer/client/ring_buffer.cc:7:
../..\gpu/command_buffer/client/ring_buffer.h(93,42): warning: implicit conversion loses integer precision: 'long long' to 'RingBuffer::Offset' (aka 'unsigned int') [-Wshorten-64-to-32]
return static_cast<int8_t*>(pointer) - static_cast<int8_t*>(base_);
~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
I've made two failed attempts to fix this. Every time my instinct is to swap the unsigned int for a size_t and plumb that up, but then I hit a roadblock because certain parts of the GPU command buffer expect 32-bit values.
It seems like the lowest levels of the GPU command buffer should use platform-sized types, and the layers above that must always use 32-bit values should do an explicit checked_cast or range check before passing sizes to/from the memory buffer layer, but I can't quite see the right place to do this.
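For illustration, this is roughly the kind of explicit cast I have in mind at the transfer_buffer.cc call sites (untested sketch using base::checked_cast from base/numerics/safe_conversions.h; I'm not sure this is the right layer to put it):

  // Sketch only: keep the members 32-bit and make the narrowing explicit
  // and checked at the boundary, instead of implicit.
  // Needs: #include "base/numerics/safe_conversions.h"
  default_buffer_size_ = base::checked_cast<unsigned int>(
      base::bits::Align(default_buffer_size, alignment));
  min_buffer_size_ = base::checked_cast<unsigned int>(
      base::bits::Align(min_buffer_size, alignment));
  max_buffer_size_ = base::checked_cast<unsigned int>(
      base::bits::Align(max_buffer_size, alignment));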
Help needed...?
My previous attempt(s) are contained in CL https://chromium-review.googlesource.com/c/chromium/src/+/1199773 - look at PS6 and PS7: PS6 was 'just allocator' and PS7 was 'everything' (PS7 failed miserably).
+piman and +kbr to help...?
Sep 14
From discussions with piman a few years ago, many of the data structures (and maximum buffer sizes, etc.) in Chrome's command buffer are deliberately 32-bit – to improve web application portability between 32- and 64-bit platforms, to make it harder to kill the system by allocating tons of large GPU resources, etc. I'm not sure what we should do here. Maybe explicitly use base::checked_cast to 32-bit types for some of the ptrdiff_t values you've found? That might incur more overhead than we can tolerate, though.
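For the ring_buffer.h warning, that would look roughly like this (untested sketch; I'm assuming the offending line lives in the accessor that converts a pointer back into a RingBuffer::Offset, and GetOffset is my guess at the name):

  // Sketch only: checked_cast CHECK-fails if the pointer difference ever
  // exceeds what Offset (unsigned int) can hold, instead of silently
  // truncating. Needs: #include "base/numerics/safe_conversions.h"
  Offset GetOffset(void* pointer) const {
    return base::checked_cast<Offset>(static_cast<int8_t*>(pointer) -
                                      static_cast<int8_t*>(base_));
  }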
Sep 14
We still ship pepper, which may have a different bitness than the GPU process. Also, TIL, WebView may have different bitness between the browser and the renderer (in which case one is different than the GPU process). Either way, we need anything referenced in the command buffer (which includes shared memory offsets) to be of a fixed bitness, and we picked 32. That implies all shared memory is allocated with a size that's expressed within 32 bits, so it should always be safe to cast difference of pointers within that shared memory to 32 bits.
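So if the CHECK in checked_cast ever shows up on hot paths, a plain static_cast guarded by DCHECKs leans on that same invariant; something like this sketch (ToOffset32 is just a hypothetical helper name):

  // Sketch only: relies on the invariant that command buffer shared memory
  // is always sized within 32 bits, so the pointer difference always fits.
  // Needs: <stddef.h>, <stdint.h>, <limits>, "base/logging.h"
  inline uint32_t ToOffset32(const void* pointer, const void* base) {
    ptrdiff_t diff = static_cast<const int8_t*>(pointer) -
                     static_cast<const int8_t*>(base);
    DCHECK_GE(diff, 0);
    DCHECK_LE(diff, static_cast<ptrdiff_t>(
                        std::numeric_limits<uint32_t>::max()));
    return static_cast<uint32_t>(diff);
  }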
Sep 14
okay, if everything really does need to be 32-bit and 32-bit only, then there's not much point making any of the allocator routines handle size_t... I'll go with the plan in #0 and add a set of casts, and I'll consider fixing Align to take a template. While I have you: I found that 'token' is unsigned int in some places and signed in others, e.g. void FreePendingToken(Offset offset, int32_t token) in gpu/command_buffer/client/fenced_allocator.h and struct Block in the same file... the unsigned -> signed transition happens in mapped_memory.h in this call: https://cs.chromium.org/chromium/src/gpu/command_buffer/client/mapped_memory.h?l=90 - are these different tokens, and should they be signed or unsigned?
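Something like this is what I have in mind for the templated Align (sketch only; assumes power-of-two alignments like the current base::bits::Align, and the name/home are TBD):

  // Sketch: a type-preserving align-up, so assigning the result to an
  // unsigned int member no longer narrows when the input is already 32-bit.
  // Needs: <type_traits>
  template <typename T>
  constexpr T AlignUp(T value, T alignment) {
    static_assert(std::is_unsigned<T>::value,
                  "AlignUp requires an unsigned type");
    // Assumes |alignment| is a power of two, as base::bits::Align does.
    return (value + alignment - 1) & ~(alignment - 1);
  }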
Sep 14
They are the same token, and should almost certainly be uint32_t everywhere (we expect them to wrap around and handle the case)
Sep 14
oh, nvm, -1 is used as a sentinel value, and we explicitly wrap around. Though I think the code is wrong/UB, sigh.
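(To spell out the UB part: incrementing a signed token past INT32_MAX is undefined behavior, while unsigned arithmetic wraps modulo 2^32 by definition. Sketch, not the actual command buffer code:)

  // Needs: <stdint.h>
  // Signed overflow is undefined behavior; the compiler may assume it never
  // happens when optimizing.
  int32_t NextTokenSigned(int32_t token) {
    return token + 1;  // UB once token == INT32_MAX.
  }

  // Unsigned wraparound is well defined, and 0xFFFFFFFF can still play the
  // role of the -1 sentinel if needed.
  uint32_t NextTokenUnsigned(uint32_t token) {
    return token + 1;  // Wraps to 0 after 0xFFFFFFFF.
  }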
Sep 14
Looks like wfh@ is on the case; please reassign if I'm wrong.
Sep 14
ya, I'm on the case; I'll draft a new CL to address #0. I'll probably deal with the unsigned/signed token at a later date :)