
Issue 722979


Issue metadata

Status: WontFix
Owner:
Closed: May 2017
Cc:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug-Regression




17kb regression in resource_sizes (MonochromePublic.apk) at 471361:471361

Reported by agrieve@chromium.org (Project Member), May 16 2017

Issue description

Caused by: Reland of gpu: GPU service scheduler
https://codereview.chromium.org/2876913003
 
All graphs for this bug:
  https://chromeperf.appspot.com/group_report?bug_id=722979

Original alerts at time of bug-filing:
  https://chromeperf.appspot.com/group_report?keys=agxzfmNocm9tZXBlcmZyFAsSB0Fub21hbHkYgIDg6uD17gsM


Bot(s) for this bug's original alert(s):

Android Builder
Comment 3 by 42576172...@developer.gserviceaccount.com (Project Member), May 16 2017


=== BISECT JOB RESULTS ===
Bisect failed for unknown reasons

Please contact the team (see below) and report the error.


Bisect Details
  Configuration: android_nexus7_perf_bisect
  Benchmark    : resource_sizes
  Metric       : MonochromePublic.apk_Specifics/normalized apk size


To Run This Test
  src/build/android/resource_sizes.py --chromium-output-directory {CHROMIUM_OUTPUT_DIR} --chartjson {CHROMIUM_OUTPUT_DIR}/apks/MonochromePublic.apk

Debug Info
  https://chromeperf.appspot.com/buildbucket_job_status/8979409325240274688

Is this bisect wrong?
  https://chromeperf.appspot.com/bad_bisect?try_job_id=5893554079006720


| O O | Visit http://www.chromium.org/developers/speed-infra/perf-bug-faq
|  X  | for more information addressing perf regression bugs. For feedback,
| / \ | file a bug with component Speed>Bisection.  Thank you!
Comment 5 by 42576172...@developer.gserviceaccount.com (Project Member), May 16 2017


=== BISECT JOB RESULTS ===
Bisect failed for unknown reasons

Please contact the team (see below) and report the error.


Bisect Details
  Configuration: android_nexus7_perf_bisect
  Benchmark    : resource_sizes
  Metric       : MonochromePublic.apk_Specifics/normalized apk size


To Run This Test
  src/build/android/resource_sizes.py --chromium-output-directory {CHROMIUM_OUTPUT_DIR} --chartjson {CHROMIUM_OUTPUT_DIR}/apks/MonochromePublic.apk

Debug Info
  https://chromeperf.appspot.com/buildbucket_job_status/8979397597290455504

Is this bisect wrong?
  https://chromeperf.appspot.com/bad_bisect?try_job_id=5893554079006720


| O O | Visit http://www.chromium.org/developers/speed-infra/perf-bug-faq
|  X  | for more information addressing perf regression bugs. For feedback,
| / \ | file a bug with component Speed>Bisection.  Thank you!
Cc: sunn...@chromium.org
Looking at the CL, I'd imagine this size jump is expected. sunnyps@: see the size diff below and check whether anything is unexpected or could be reduced.

Section Sizes (Total=12.7kb (12956 bytes)):
    .bss: 0 bytes (0 bytes) (not included in totals)
    .data: 0 bytes (0 bytes) (0.0%)
    .data.rel.ro: 0 bytes (0 bytes) (0.0%)
    .data.rel.ro.local: 16 bytes (16 bytes) (0.1%)
    .rodata: 256 bytes (256 bytes) (2.0%)
    .text: 12.4kb (12684 bytes) (97.9%)
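
As a sanity check, the percentages in the list above follow directly from the byte counts (with .bss excluded from the total, as noted); a minimal sketch:

```python
# Recompute the section-size percentages from the reported byte counts.
# .bss is excluded from the total, matching the note above.
sections = {
    ".data": 0,
    ".data.rel.ro": 0,
    ".data.rel.ro.local": 16,
    ".rodata": 256,
    ".text": 12684,
}
total = 12956  # reported total in bytes

for name, size in sections.items():
    print(f"{name}: {size} bytes ({size / total:.1%})")
# .text comes out at 97.9%, matching the listing.
```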

160 symbols added (+), 53 changed (~), 77 removed (-), 381386 unchanged (not shown)
2 paths added, 0 removed, 19 changed

Showing 290 symbols (290 unique) with total pss: 12536 bytes
.text=12.0kb     .rodata=256 bytes  .data*=16 bytes   .bss=3 bytes    total=12.2kb
Number of unique paths: 25

Index, Running Total, Section@Address, PSS
------------------------------------------------------------
+ 0)       1044 (8.3%)  t@0x1808808  1044    gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::RunNextTask
+ 1)       1760 (14.0%) t@0x18080bc  716     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::deque::__add_back_capacity
+ 2)       2318 (18.5%) t@0x188dfb0  558     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::vector::emplace
+ 3)       2838 (22.6%) t@0x1807e4c  520     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::deque::__add_front_capacity
+ 4)       3350 (26.7%) t@0x18078ac  512     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::TryScheduleSequence
+ 5)       3858 (30.8%) t@0x1808fc0  508     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::vector::insert
- 6)       3380 (27.0%) t@0x188b0a6  -478    gpu/ipc/service/gpu_channel.cc
               std::__ndk1::__hash_table::__node_insert_unique
+ 7)       3808 (30.4%) t@0x188ebcc  428     gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannel::GpuChannel
+ 8)       4224 (33.7%) t@0x1808c94  416     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::ScheduleTask
+ 9)       4624 (36.9%) t@0x24e7c58  400     content/renderer/render_thread_impl.cc
               content::CreateOffscreenContext
- 10)      4224 (33.7%) t@0x24e4b38  -400    content/renderer/render_thread_impl.cc
               content::CreateOffscreenContext
- 11)      3840 (30.6%) t@0x188bbfc  -384    gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannel::GpuChannel
+ 12)      4216 (33.6%) t@0x87d7e0   376     gpu/ipc/client/command_buffer_proxy_impl.cc
               gpu::CommandBufferProxyImpl::Create
- 13)      3840 (30.6%) t@0x87d7a4   -376    gpu/ipc/client/command_buffer_proxy_impl.cc
               gpu::CommandBufferProxyImpl::Create
+ 14)      4206 (33.6%) t@0x188d920  366     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::vector::emplace
+ 15)      4570 (36.5%) t@0x188dc48  364     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::vector::emplace
~ 16)      4890 (39.0%) t@0x188e7ac  320     gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannelMessageFilter::OnMessageReceived
+ 17)      5206 (41.5%) t@0x1808e84  316     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__split_buffer::push_back
+ 18)      5514 (44.0%) t@0x1807028  308     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::~Sequence
+ 19)      5806 (46.3%) t@0x188b674  292     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::vector::assign
+ 20)      6092 (48.6%) t@0x188f212  286     gpu/ipc/service/gpu_channel_manager.cc
               gpu::GpuChannelManager::GpuChannelManager
- 21)      5818 (46.4%) t@0x188c302  -274    gpu/ipc/service/gpu_channel_manager.cc
               gpu::GpuChannelManager::GpuChannelManager
+ 22)      6086 (48.5%) t@0x18077a0  268     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::RebuildSchedulingQueue
+ 23)      6350 (50.7%) t@0x180752c  264     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::remove
+ 24)      6582 (52.5%) t@0x180863c  232     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::ShouldYield
+ 25)      6794 (54.2%) t@0x1807c4c  212     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__split_buffer::push_back
+ 26)      7006 (55.9%) t@0x1807d78  212     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__split_buffer::push_back
+ 27)      7214 (57.5%) t@0x1808408  208     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__push_heap_front
+ 28)      7422 (59.2%) t@0x1807aac  208     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__split_buffer::push_front
+ 29)      7630 (60.9%) t@0x1807b7c  208     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__split_buffer::push_front
+ 30)      7838 (62.5%) t@0x180765c  208     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::vector::__push_back_slow_path
+ 31)      8046 (64.2%) t@0xf5bc80   208     services/ui/public/cpp/gpu/context_provider_command_buffer.cc
               ui::ContextProviderCommandBuffer::ContextProviderCommandBuffer
- 32)      7838 (62.5%) t@0xf5bc0c   -208    services/ui/public/cpp/gpu/context_provider_command_buffer.cc
               ui::ContextProviderCommandBuffer::ContextProviderCommandBuffer
+ 33)      8038 (64.1%) t@0x18091bc  200     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::CreateSequence
- 34)      7840 (62.5%) t@0x188afe0  -198    gpu/ipc/service/gpu_channel.cc
               std::__ndk1::__hash_table::__rehash
+ 35)      8028 (64.0%) t@0x18073f0  188     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::vector::__push_back_slow_path
- 36)      7844 (62.6%) t@0x1888384  -184    gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannelMessageQueue::GpuChannelMessageQueue
+ 37)      8026 (64.0%) t@0x188a980  182     gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannelMessageQueue::GpuChannelMessageQueue
+ 38)      8206 (65.5%) t@0x1807280  180     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::BeginTask
~ 39)      8385 (66.9%) r@Group      179     {{no path}}
               ** merge strings (count=6)
+ 40)      8555 (68.2%) t@0x188d546  170     gpu/ipc/service/gpu_channel.cc
               base::internal::flat_tree::erase
~ 41)      8391 (66.9%) t@0x188d70c  -164    gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannel::OnDestroyCommandBuffer
+ 42)      8551 (68.2%) t@Group      160     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::~Scheduler (count=2)
+ 43)      8707 (69.5%) t@0x180853c  156     gpu/command_buffer/service/scheduler.cc
               base::internal::flat_tree::erase
+ 44)      8863 (70.7%) t@0x180876c  156     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::SyncTokenFenceReleased
+ 45)      8999 (71.8%) t@0x188efb8  136     gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannel::HandleMessage
+ 46)      9127 (72.8%) t@0x1808388  128     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::ScheduleTask
+ 47)      9247 (73.8%) t@0x1808c1c  120     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::ContinueTask
+ 48)      9367 (74.7%) t@0x1807334  120     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::FinishTask
+ 49)      9483 (75.6%) t@0x180772c  116     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::__push_heap_back
+ 50)      9587 (76.5%) t@0x1808054  104     gpu/command_buffer/service/scheduler.cc
               std::__ndk1::deque::push_front
+ 51)      9691 (77.3%) t@0x188d8b6  104     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::vector::__swap_out_circular_buffer
+ 52)      9791 (78.1%) t@0x188ae2c  100     gpu/ipc/service/gpu_channel.cc
               base::BindOnce
+ 53)      9891 (78.9%) t@0x1806fc4  100     gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::SchedulingState::AsValue const
+ 54)      9987 (79.7%) t@0x1806f10  96      gpu/command_buffer/service/scheduler.cc
               base::internal::Invoker::Run
~ 55)     10083 (80.4%) t@0x1893358  96      gpu/ipc/service/gpu_command_buffer_stub.cc
               gpu::GpuCommandBufferStub::Initialize
~ 56)     10179 (81.2%) t@0x16e62f8  96      services/ui/gpu/gpu_service.cc
               ui::GpuService::InitializeWithHost
- 57)     10085 (80.4%) t@0x1888f28  -94     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::__hash_table::find
+ 58)     10169 (81.1%) t@0x1806f70  84      gpu/command_buffer/service/scheduler.cc
               base::internal::Invoker::Run
~ 59)     10253 (81.8%) t@0x188e1dc  84      gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannel::OnCreateCommandBuffer
+ 60)     10337 (82.5%) t@0x188a7c4  84      gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannelMessageFilter::GpuChannelMessageFilter
- 61)     10253 (81.8%) t@0x188bd9a  -84     gpu/ipc/service/gpu_channel.cc
               std::__ndk1::deque::pop_front
+ 62)     10333 (82.4%) t@0x1808e34  80      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::DestroySequence
+ 63)     10411 (83.0%) t@0x76d28c   78      gpu/command_buffer/common/scheduling_priority.cc
               gpu::SchedulingPriorityToString
+ 64)     10485 (83.6%) t@0x188da8c  74      gpu/ipc/service/gpu_channel.cc
               base::flat_map::operator[]
+ 65)     10559 (84.2%) t@0x188e3da  74      gpu/ipc/service/gpu_channel.cc
               base::internal::Invoker::RunOnce
- 66)     10487 (83.7%) t@0x1888198  -72     gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannelMessageFilter::GpuChannelMessageFilter
+ 67)     10559 (84.2%) t@0x1808724  72      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::EnableSequence
~ 68)     10627 (84.8%) t@0x188e5f8  68      gpu/ipc/service/gpu_channel.cc
               gpu::GpuChannelMessageQueue::FinishMessageProcessing
+ 69)     10695 (85.3%) t@0x18085f8  68      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::DisableSequence
+ 70)     10763 (85.9%) t@0x18073ac  68      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Scheduler
+ 71)     10831 (86.4%) t@0x180723c  68      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::UpdateSchedulingState
+ 72)     10899 (86.9%) t@0x188d824  68      gpu/ipc/service/gpu_channel.cc
               std::__ndk1::vector::__move_range
+ 73)     10963 (87.5%) t@0x18074ec  64      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::AddReleaseFence
+ 74)     11027 (88.0%) t@0x18074ac  64      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::AddWaitFence
+ 75)     11091 (88.5%) t@0x18071fc  64      gpu/command_buffer/service/scheduler.cc
               gpu::Scheduler::Sequence::IsRunnable const
+ 76)     11155 (89.0%) t@0x180c288  64      gpu/command_buffer/service/sync_point_manager.cc
               gpu::SyncPointManager::GetSyncTokenReleaseSequenceId
+ 77)     11217 (89.5%) t@0x188d688  62      gpu/ipc/service/gpu_channel.cc
               base::internal::flat_tree::find const


agrieve@ - one really weird thing is that an extra Android res .xml file was somehow added:

<       968  2001-01-01 00:00   res/color/bookmark_drawer_text_color.xml

Only 1 KB, but still probably worth looking into.
Attachment: diff_results.txt (50.8 KB)
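
One way to track down an unexpected file like that is to diff the entry listings of the before/after APKs (an APK is a zip archive). A hypothetical sketch; the demo builds two tiny in-memory archives standing in for the real APKs, which you would instead read from disk:

```python
import io
import zipfile


def apk_listing(data):
    """Map of entry name -> uncompressed size for a zip given as bytes."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        return {info.filename: info.file_size for info in z.infolist()}


def added_files(before, after):
    """Entry names present in `after` but not in `before`."""
    return sorted(set(after) - set(before))


def make_zip(names):
    # Helper for the demo: build a small archive with dummy contents.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        for name in names:
            z.writestr(name, b"x")
    return buf.getvalue()


before = apk_listing(make_zip(["classes.dex"]))
after = apk_listing(make_zip(["classes.dex",
                              "res/color/bookmark_drawer_text_color.xml"]))
print(added_files(before, after))
# → ['res/color/bookmark_drawer_text_color.xml']
```

For real APKs, replace the `make_zip` demo with `open("MonochromePublic.apk", "rb").read()` for each build.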
Please see c#6 :).
Nothing looks out of place. I guess RunNextTask and TryScheduleSequence are big because the vector and heap calls are inlined into them. The other contributions are from deque, vector, flat_map, etc., and I'm not sure what we can do about those.
Oh, one more thing: we'll be removing some code as a result of this work. The GpuChannelMessageQueue class and some methods on GpuChannel will go away, so maybe ultimately it'll be a wash?
Cc: estevenson@chromium.org
Status: WontFix (was: Assigned)
Sounds good to me, we can close this one then. It would be great if you commented here once some of those changes have landed so we can verify. Thanks!