WebGL hangs on Chrome Android
Reported by ranb...@gmail.com, Jan 2 2018
Issue description

Steps to reproduce the problem:
1. On an Android device, go to this link: http://vb.hexa3d.io/webplayer.html?load=/views/production/item/2017220/1095408110516591/1095408110516591.json&autorotate=true&texturepreload=zipper_col.png,TO039EWNSZ36_col.png
2. The Chrome application will crash (not just the tab). If it doesn't crash, try several times.

What is the expected behavior?
I would expect the browser not to crash, as it behaved on earlier versions and as it behaves on every other browser and platform. I would also like to know the exact reason it crashes (nothing shows up in chrome://crashes/).

What went wrong?
For the last few weeks this web page: http://vb.hexa3d.io/webplayer.html?load=/views/production/item/2017220/1095408110516591/1095408110516591.json&autorotate=true&texturepreload=zipper_col.png,TO039EWNSZ36_col.png has been crashing (almost always) on every Android device (Chrome only).

Did this work before? Yes. The issue has been occurring for about a month now.

Does this work in other browsers? Yes.

Chrome version: 63.0.3239.111
Channel: stable
OS Version: 7.0.0
Flash Version:

I'm not even sure this is a WebGL problem; I'm only guessing. In any case, the application should never crash; at worst only the tab should. And since no other browser crashes (and no other platform), and the same code used to work in the past, I'm fairly sure the problem is in the browser itself.
Comment 1 by pnangunoori@chromium.org, Jan 3 2018
Tested the issue on Android and was able to reproduce it.

Steps followed:
1. Launched the Chrome browser.
2. Navigated to the URL: vb.hexa3d.io/webplayer.html?load=/views/production/item/2017220/1095408110516591/1095408110516591.json&autorotate=true&texturepreload=zipper_col.png,TO039EWNSZ36_col.png
3. Observed that Chrome hangs within a few seconds and the user has to kill the Chrome application.

Chrome versions tested: 63.0.3239.111 (Stable), 65.0.3309.0 (Canary)
OS: Android 8
Android devices: Pixel 2

Using the per-revision bisect, here are the bisect results:
Good build: 63.0.3238.0 (508208)
Bad build: 63.0.3239.111 (508578)
You are looking for a change made after 508359 (GOOD), but before 508360 (BAD).
CHANGELOG URL: https://chromium.googlesource.com/chromium/src/+/c15174fe589f664d882bdf37d6f4706a4622d69d
(The script might not always return a single CL as the suspect, since some perf builds may be missing due to failures.)

From the CL above, assigning the issue to the owner concerned. @fserb: Could you please look into the issue? Pardon me if it has nothing to do with your changes; if possible, please assign it to the owner concerned.

Please navigate to the link below for logs and video: go/chrome-androidlogs/798400

Note: This issue is not observed on Desktop.
Jan 5 2018
junov@ -- CC'ing the reviewer of the CL, as fserb@'s last visit is displayed as 16 days ago.
Jan 5 2018
Can we get this fixed for M64?
Jan 5 2018
I also pinged fserb@. Hope they look into this issue as soon as possible.
Jan 6 2018
That CL is absolutely not the problem. It reduces the size threshold for accelerated vs. non-accelerated 2D canvas; it doesn't have anything to do with WebGL. I can investigate more once I'm back in the office next week, but it's definitely not that CL.
Jan 8 2018
@fserb -- Thanks for looking into this issue. When running the bisect script, it gives different results on each run, and the results seem inconsistent within the range provided in comment #2. Attaching screenshots for reference. I have also attached the behavior of the good build 63.0.3238.0 here: go/chrome-androidlogs/798400. Testing on a Pixel with Android 8.1 gave the same behavior. Thanks!
Jan 8 2018
It could also be that this is NOT WebGL related at all. I can't see any WebGL errors in the log (I do see a SQLite one), and I don't have a Pixel 2 to repro this. I've added a couple of components that may be related, in the hope that someone on triage can understand this better.
Jan 8 2018
Not totally reliable for me, but reproduces on my Pixel. I'll see what I can find.
Jan 8 2018
It took several tries to get this bisect right, but I finally have a result that seems correct (Nexus 6P): https://chromium.googlesource.com/chromium/src/+/9b8fb34db903643b759635086dcf0da90b1e3082 - assigning sunnyps. This has been in since 62.0.3202.66, per https://chromiumdash-staging.googleplex.com/commit/9b8fb34db903643b759635086dcf0da90b1e3082
Jan 8 2018
Forgot to say, removed RBS since this has been in stable for a while and doesn't seem to be a very widespread issue. Although it's hard to tell since we apparently don't have crash reporting on this. cc vmiura too, since sunnyps is OOO.
Jan 15 2018
I can confirm crashes on Samsung Galaxy Note 3 and Xiaomi Note 4.
Jan 15 2018
Also Google Nexus 7
Jan 22 2018
This wasn't reproducing for me on ToT (24ef5b4438b894245f73e77ff8f95bc6c15ee083) for many attempts. Then I tried Chrome stable 63.0.3239.111 and it reproduced instantly. Same on Chrome dev 65.0.3322.0. And now it's reproducing on the same ToT build every time.
Jan 22 2018
Browser hangs in this stack trace:
#0 0xecf98cc8 in syscall () from /tmp/adb-gdb-libs-FA79Y1A01125/system/lib/libc.so
#1 0xecfc7276 in __pthread_cond_timedwait(pthread_cond_internal_t*, pthread_mutex_t*, bool, timespec const*) ()
from /tmp/adb-gdb-libs-FA79Y1A01125/system/lib/libc.so
#2 0xcf39d246 in base::ConditionVariable::Wait (this=0xfff86ce4) at ../../base/synchronization/condition_variable_posix.cc:71
#3 0xcf39d6a4 in base::WaitableEvent::TimedWaitUntil (this=0xfff86d8c, end_time=...) at ../../base/synchronization/waitable_event_posix.cc:223
#4 0xcf39d5ca in base::WaitableEvent::Wait (this=0xfff86ce4) at ../../base/synchronization/waitable_event_posix.cc:154
#5 0xcc10e098 in gpu::GpuChannelHost::Send (this=<optimized out>, msg=<optimized out>) at ../../gpu/ipc/client/gpu_channel_host.cc:83
#6 0xcc10c2d2 in gpu::CommandBufferProxyImpl::Send (this=0xc45c3400, msg=0x89) at ../../gpu/ipc/client/command_buffer_proxy_impl.cc:715
#7 0xcc10c5f8 in gpu::CommandBufferProxyImpl::WaitForGetOffsetInRange (this=0xc45c3400, set_get_buffer_count=1, start=5817, end=5796)
at ../../gpu/ipc/client/command_buffer_proxy_impl.cc:372
#8 0xcc0f76d4 in gpu::CommandBufferHelper::WaitForGetOffsetInRange (this=0xc45e0fd0, start=0, end=<optimized out>)
at ../../gpu/command_buffer/client/cmd_buffer_helper.cc:183
#9 0xcc0f7a36 in gpu::CommandBufferHelper::WaitForAvailableEntries (this=0xc45e0fd0, count=20) at ../../gpu/command_buffer/client/cmd_buffer_helper.cc:340
#10 0xcbb742b8 in gpu::CommandBufferHelper::GetSpace (this=0xc45e0fd0, entries=<optimized out>) at ../../gpu/command_buffer/client/cmd_buffer_helper.h:135
#11 0xcbb82962 in gpu::gles2::GLES2CmdHelper::UniformMatrix4fvImmediate (this=0xfff86ce4, location=1, count=<optimized out>, transpose=<optimized out>,
value=0xfff87050) at ../../gpu/command_buffer/client/gles2_cmd_helper_autogen.h:1972
#12 0xca6db3f0 in viz::GLRenderer::SetShaderMatrix (this=<optimized out>, transform=...) at ../../components/viz/service/display/gl_renderer.cc:2744
#13 viz::GLRenderer::DrawContentQuadNoAA (this=0xc4243e00, quad=<optimized out>, resource_id=<optimized out>, clip_region=<optimized out>)
at ../../components/viz/service/display/gl_renderer.cc:2198
#14 0xca6dab76 in viz::GLRenderer::DrawContentQuad (this=0xc4243e00, quad=<optimized out>, resource_id=<optimized out>, clip_region=<optimized out>)
at ../../components/viz/service/display/gl_renderer.cc:1983
#15 0xca6cc7d6 in viz::DirectRenderer::DoDrawPolygon (this=0xc4243e00, poly=..., render_pass_scissor=..., use_render_pass_scissor=<optimized out>)
at ../../components/viz/service/display/direct_renderer.cc:414
#16 0xca6c8048 in viz::BspWalkActionDrawPolygon::operator() (this=0xfff8736c, item=<optimized out>)
at ../../components/viz/service/display/bsp_walk_action.cc:32
#17 0xca6ce29a in viz::BspTree::WalkInOrderAction<viz::BspWalkActionDrawPolygon> (action_handler=<optimized out>, item=0x89, this=<optimized out>)
at ../../components/viz/service/display/bsp_tree.h:58
#18 viz::BspTree::WalkInOrderVisitNodes<viz::BspWalkActionDrawPolygon> (this=<optimized out>, action_handler=<optimized out>, node=0xc3c06620,
first_child=<optimized out>, second_child=<optimized out>, first_coplanars=..., second_coplanars=...)
at ../../components/viz/service/display/bsp_tree.h:77
#19 0xca6ce254 in viz::BspTree::WalkInOrderVisitNodes<viz::BspWalkActionDrawPolygon> (this=<optimized out>, action_handler=<optimized out>,
node=<optimized out>, first_child=0xc3c06508, second_child=<optimized out>, first_coplanars=..., second_coplanars=...)
at ../../components/viz/service/display/bsp_tree.h:92
#20 0xca6ce254 in viz::BspTree::WalkInOrderVisitNodes<viz::BspWalkActionDrawPolygon> (this=<optimized out>, action_handler=<optimized out>,
node=<optimized out>, first_child=0xc3c04848, second_child=<optimized out>, first_coplanars=..., second_coplanars=...)
at ../../components/viz/service/display/bsp_tree.h:92
#21 0xca6ce254 in viz::BspTree::WalkInOrderVisitNodes<viz::BspWalkActionDrawPolygon> (this=<optimized out>, action_handler=<optimized out>,
node=<optimized out>, first_child=0xc4d66758, second_child=<optimized out>, first_coplanars=..., second_coplanars=...)
at ../../components/viz/service/display/bsp_tree.h:92
#22 0xca6ce254 in viz::BspTree::WalkInOrderVisitNodes<viz::BspWalkActionDrawPolygon> (this=<optimized out>, action_handler=<optimized out>,
node=<optimized out>, first_child=0xc4d66780, second_child=<optimized out>, first_coplanars=..., second_coplanars=...)
at ../../components/viz/service/display/bsp_tree.h:92
#23 0xca6cc924 in viz::DirectRenderer::FlushPolygons (this=0xc4243e00, poly_list=<optimized out>, render_pass_scissor=...,
use_render_pass_scissor=<optimized out>) at ../../components/viz/service/display/bsp_tree.h:92
#24 0xca6ccbfa in viz::DirectRenderer::DrawRenderPass (this=0xc4243e00, render_pass=<optimized out>)
at ../../components/viz/service/display/direct_renderer.cc:548
#25 0xca6cc52c in viz::DirectRenderer::DrawRenderPassAndExecuteCopyRequests (this=0xc4243e00, render_pass=<optimized out>)
at ../../components/viz/service/display/direct_renderer.cc:455
#26 0xca6cc398 in viz::DirectRenderer::DrawFrame (this=0xc4243e00, render_passes_in_draw_order=<optimized out>, device_scale_factor=<optimized out>,
device_viewport_size=...) at ../../components/viz/service/display/direct_renderer.cc:331
#27 0xca6cf264 in viz::Display::DrawAndSwap (this=0xc442af80) at ../../components/viz/service/display/display.cc:351
#28 0xca6d1888 in viz::DisplayScheduler::DrawAndSwap (this=0xcacea400) at ../../components/viz/service/display/display_scheduler.cc:202
#29 0xca6d0d84 in viz::DisplayScheduler::OnBeginFrameDeadline (this=0xcacea400) at ../../components/viz/service/display/display_scheduler.cc:489
#30 0xcf364960 in base::OnceCallback<void ()>::Run() && (this=<optimized out>) at ../../base/callback.h:65
#31 base::debug::TaskAnnotator::RunTask (this=0xea5731a8, queue_function=0xcf3e83eb "MessageLoop::PostTask", pending_task=0xfff87a58)
at ../../base/debug/task_annotator.cc:55
#32 0xcf379d92 in base::MessageLoop::RunTask (this=0xc8faf180, pending_task=0xfff87a58) at ../../base/message_loop/message_loop.cc:399
#33 0xcf37a0fe in base::MessageLoop::DeferOrRunPendingTask (
pending_task=From ScheduleBeginFrameDeadline()@../../components/viz/service/display/display_scheduler.cc:460 = {...}, this=<optimized out>)
at ../../base/message_loop/message_loop.cc:411
#34 base::MessageLoop::DoWork (this=0xc8faf180) at ../../base/message_loop/message_loop.cc:455
#35 0xcf37a84e in base::MessagePumpForUI::DoRunLoopOnce (this=<optimized out>, env=<optimized out>, obj=..., delayed=<optimized out>)
at ../../base/message_loop/message_pump_android.cc:60
#36 Java_org_chromium_base_SystemMessageHandler_nativeDoRunLoopOnce (env=<optimized out>, jcaller=<optimized out>, nativeMessagePumpForUI=<optimized out>,
delayed=<optimized out>) at gen/base/base_jni_headers/base/jni/SystemMessageHandler_jni.h:46
#37 0xd168d12e in ?? ()
Jan 22 2018
I managed to get a snapshot of scheduler state when the hang happens:
Scheduler::PrintDebugInfo
running_ = 0
rebuild_scheduling_queue_ = 0
sequences_ = {
seq_id = 2, {
sequence_id_ = 2
enabled_ = 1
running_state_ = IDLE
scheduling_state_ = {seq_id = 2, pri = High, order_num = 2142}
default_priority_ = High
current_priority_ = High
tasks_ = []
wait_fences_ = {}
waiting_priority_counts_ = {
High : 0
Normal : 0
Low : 0
}
}
seq_id = 6, {
sequence_id_ = 6
enabled_ = 1
running_state_ = IDLE
scheduling_state_ = {seq_id = 6, pri = Low, order_num = 2147}
default_priority_ = Low
current_priority_ = Low
tasks_ = []
wait_fences_ = {}
waiting_priority_counts_ = {
High : 0
Normal : 0
Low : 0
}
}
seq_id = 7, {
sequence_id_ = 7
enabled_ = 1
running_state_ = IDLE
scheduling_state_ = {seq_id = 7, pri = Normal, order_num = 2148}
default_priority_ = Normal
current_priority_ = Normal
tasks_ = []
wait_fences_ = {}
waiting_priority_counts_ = {
High : 0
Normal : 0
Low : 0
}
}
}
scheduling_queue_ = []
It looks like the task gets lost or doesn't get requeued.
Jan 23 2018
OTOH, WaitForGetOffset/Token are out-of-order messages that aren't enqueued in the scheduler.
Jan 23 2018
service side definitely sees the wait for get offset:
command_buffer_id_ = 4294967297
sequence_id_ = 2
cmd_buf_state: {get_offset = generation = 48206, wait_for_get_offset_: start = 15756, end = 15749, set_get_buffer_count = 1}
Jan 23 2018
oops pasted that wrong
command_buffer_id_ = 4294967297
sequence_id_ = 2
cmd_buf_state: {get_offset = 15755, token = 47784, release_count = 18, error = 0, context_lost_reason = 2, generation = 48206, set_get_buffer_count = 1}
wait_for_get_offset_: start = 15756, end = 15749, set_get_buffer_count = 1
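Note that start = 15756 > end = 15749 above, i.e. the wait range wraps around the ring buffer. A minimal sketch of that circular-range check (my assumption of the semantics, simplified; not the actual gpu::CommandBuffer code):

#include <cstdint>
#include <iostream>

// When start > end, the range wraps past the end of the ring buffer:
// [15756, 15749] covers every offset EXCEPT those strictly between
// 15749 and 15756.
bool InWrappedRange(int32_t start, int32_t end, int32_t value) {
  if (start <= end)
    return start <= value && value <= end;  // ordinary range
  return start <= value || value <= end;    // wrapped range
}

int main() {
  // The logged get_offset = 15755 is not in the wrapped range
  // [15756, 15749], so the client keeps waiting until the service
  // consumes past 15755.
  std::cout << InWrappedRange(15756, 15749, 15755) << "\n";  // prints 0
}

With get_offset = 15755 the check fails, so the client stays blocked until the service's get offset advances into the wrapped range.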
Jan 23 2018
Hmm, context_lost_reason = 2? That sounds suspicious, and a plausible reason why our bookkeeping is getting messed up.
Jan 23 2018
This happens because of incorrect last_put_sent_ tracking in CommandBufferHelper. Specifically, when we do a Flush followed by an OrderingBarrier, last_put_sent_ is equal to the put offset of the flush. If we wrap around and come back to the same put offset by chance, and have to do a WaitForGetOffsetInRange because of insufficient space, the flush is skipped because the new put offset matches last_put_sent_.

Example from logging:

Flush put = 13964, last_put_sent = 13964
...
OrderingBarrier put = 13970 (flushed to service by another context or verify sync token etc.), last_put_sent is still 13964
...
Wrap around put = 0, cached get = 13970
...
WaitForAvailableEntries count = 20, put = 13964, cached get = 13970
Flush put = 13964, skipped because put == last_put_sent
WaitForGetOffsetInRange start = 13985, end = 13964 (means we're waiting for the get offset to be outside this range)
...

Service goes idle because there's no flush; client hangs because WaitForGetOffsetInRange doesn't return.

P.S. context_lost_reason = 2 is kUnknown, which is the default value.
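A self-contained simulation of that sequence (illustrative only: the buffer size, the Advance helper, and the direct flush inside OrderingBarrier are assumptions mirroring cmd_buffer_helper.h, not the actual Chromium code):

#include <cstdint>
#include <iostream>

// Stand-in for the GPU service: it only advances past entries it has
// actually been told about via a flush.
struct FakeService {
  int32_t flushed_put = 0;
  void Flush(int32_t put) { flushed_put = put; }
};

// Mirrors the pre-fix CommandBufferHelper bookkeeping.
class HelperSketch {
 public:
  explicit HelperSketch(FakeService* service) : service_(service) {}

  // Explicit flush with the "lazy" dedup that causes the bug.
  void Flush() {
    if (put_ == last_put_sent_)
      return;  // skipped even though an OrderingBarrier moved put_
    service_->Flush(put_);
    last_put_sent_ = put_;
  }

  // Reaches the service (in reality the barrier is flushed later,
  // batched with other contexts), but leaves last_put_sent_ stale.
  void OrderingBarrier() { service_->Flush(put_); }

  // Simplification of writing entries; the real wrap goes through the
  // nop-fill path discussed below.
  void Advance(int32_t entries) { put_ = (put_ + entries) % kTotalEntries; }

  static constexpr int32_t kTotalEntries = 14000;  // assumed size

 private:
  FakeService* service_;
  int32_t put_ = 0;
  int32_t last_put_sent_ = 0;
};

int main() {
  FakeService service;
  HelperSketch helper(&service);

  helper.Advance(13964);
  helper.Flush();            // last_put_sent_ = 13964
  helper.Advance(6);
  helper.OrderingBarrier();  // service sees 13970; tracking is stale
  helper.Advance(13994);     // wrap around: put_ lands on 13964 again

  helper.Flush();  // out of space -> lazy flush, but it is skipped!

  // The service idles at 13970 forever, so the client's subsequent
  // WaitForGetOffsetInRange never returns: the observed hang.
  std::cout << "service saw put = " << service.flushed_put << "\n";  // 13970
}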
Jan 24 2018
Interestingly, might have seen this on Linux in this tryjob: https://ci.chromium.org/buildbot/tryserver.chromium.linux/linux_optional_gpu_tests_rel/18478

WebglConformance_deqp_framework_opengl_simplereference_referencecontext timed out, and we crashed the renderer process and symbolized the stack: https://chromium-swarm.appspot.com/task?id=3b3e1d55d6e05e10&refresh=10&show_raw=1

The main thread is waiting for BeginMainFrame to finish:

Thread 0
0 libpthread-2.19.so + 0xc404
1 chrome!TimedWaitUntil [waitable_event_posix.cc : 223 + 0xc]
2 chrome!Wait [waitable_event_posix.cc : 154 + 0x5]
3 chrome!Wait [completion_event.h : 43 + 0x8]
4 chrome!BeginMainFrame [proxy_main.cc : 326 + 0x8]
5 chrome!MakeItSo<void (cc::ProxyMain::*)(std::__1::unique_ptr<cc::BeginMainFrameAndCommitState, std::__1::default_delete<cc::BeginMainFrameAndCommitState> >), base::WeakPtr<cc::ProxyMain>, std::__1::unique_ptr<cc::BeginMainFrameAndCommitState, std::__1::default_delete<cc::BeginMainFrameAndCommitState> > > [bind_internal.h : 211 + 0x3]
6 chrome!RunOnce [bind_internal.h : 368 + 0xb]
7 chrome!RunTask [callback.h : 65 + 0x3]
8 chrome!ProcessTaskFromWorkQueue [task_queue_manager.cc : 543 + 0x13]
9 chrome!DoWork [task_queue_manager.cc : 343 + 0xf]
10 chrome!Run [bind_internal.h : 211 + 0x3]
11 chrome!RunTask [callback.h : 65 + 0x3]
...

...but the compositor thread is waiting in the command buffer client code:

Thread 8
0 libpthread-2.19.so + 0xc404
1 chrome!TimedWaitUntil [waitable_event_posix.cc : 223 + 0xc]
2 chrome!Wait [waitable_event_posix.cc : 154 + 0x5]
3 chrome!Send [gpu_channel_host.cc : 83 + 0x5]
4 chrome!Send [command_buffer_proxy_impl.cc : 715 + 0x8]
5 chrome!WaitForTokenInRange [command_buffer_proxy_impl.cc : 338 + 0xb]
6 chrome!WaitForToken [cmd_buffer_helper.cc : 290 + 0x5]
7 chrome!FreeOldestBlock [ring_buffer.cc : 43 + 0x5]
8 chrome!Alloc [ring_buffer.cc : 70 + 0x8]
9 chrome!AllocUpTo [transfer_buffer.cc : 170 + 0x5]
10 chrome!Reset [transfer_buffer.cc : 235 + 0x9]
11 chrome!TexSubImage2D [transfer_buffer.h : 167 + 0x8]
12 chrome!CopyToResource [layer_tree_resource_provider.cc : 549 + 0x25]
13 chrome!CreateUIResource [layer_tree_host_impl.cc : 4415 + 0x8]
14 chrome!ProcessUIResourceRequestQueue [layer_tree_impl.cc : 1654 + 0xb]
15 chrome!ActivateSyncTree [layer_tree_host_impl.cc : 2349 + 0x5]
16 chrome!ScheduledActionActivateSyncTree [proxy_impl.cc : 619 + 0x9]
17 chrome!ProcessScheduledActions [scheduler.cc : 746 + 0x6]
18 chrome!NotifyReadyToActivate [proxy_impl.cc : 339 + 0x5]
19 chrome!CheckAndIssueSignals [tile_manager.cc : 1388 + 0x5]
20 chrome!Notify [callback.h : 94 + 0x3]
21 chrome!Run [bind_internal.h : 211 + 0x14]
22 chrome!RunTask [callback.h : 65 + 0x3]
23 chrome!ProcessTaskFromWorkQueue [task_queue_manager.cc : 543 + 0x13]
...

This will be a good reliability fix overall.
Jan 24 2018
FWIW, while writing a test for this, I realized that another condition must be satisfied to trigger this bug: we must hit the "fill end of cmd buffer with nops" case. If we manage to fill the buffer exactly, the next flush will wrap put_ around to 0, which doesn't match last_put_sent_ unless we fill the command buffer fully every time.
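A small sketch of that wrap path (an assumed simplification of the logic in CommandBufferHelper::WaitForAvailableEntries, not the actual source):

#include <cstdint>
#include <iostream>

// If a command doesn't fit in the tail of the ring buffer, the tail
// is padded with noop commands and put wraps to 0. Only via this
// padding path can put later land back on an arbitrary earlier offset
// such as last_put_sent_; an exact fill would wrap put to exactly 0.
int32_t WrapIfNeeded(int32_t put, int32_t count, int32_t total_entries) {
  if (put + count > total_entries) {
    // (the real code emits noops covering [put, total_entries) here)
    put = 0;
  }
  return put + count;  // new put offset after writing the command
}

int main() {
  std::cout << WrapIfNeeded(13994, 20, 14000) << "\n";  // 20 (wrapped)
  std::cout << WrapIfNeeded(100, 20, 14000) << "\n";    // 120 (fits)
}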
Jan 25 2018
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/5acdc814d05bc2f6266fc62e036aafe3839e20d8

commit 5acdc814d05bc2f6266fc62e036aafe3839e20d8
Author: Sunny Sachanandani <sunnyps@chromium.org>
Date: Thu Jan 25 01:56:07 2018

gpu: Do not skip cmd buffer helper lazy flush after ordering barrier.

Command buffer helper supports "lazy" flush which is used for flushing when we run out of space, before calling WaitForGetOffsetInRange. The helper keeps track of last_put_sent_ to avoid redundant flushes. When a flush is followed by an ordering barrier, last_put_sent_ is not updated. If the buffer wraps around to the same put offset as the last explicit flush and it runs out of space at the same time, the flush before WaitForGetOffset is skipped. The service goes idle since the client hasn't flushed, and the client hangs in WaitForGetOffset.

To solve this, the helper keeps track of the last ordering barrier's put offset and if that doesn't match the last flush put offset, an explicit flush is performed. A unit test is included which simulates the conditions when this bug happens.

R=piman,kbr
BUG=798400
TEST=gpu_unittests CommandBufferHelperTest.TestWrapAroundAfterOrderingBarrier
Cq-Include-Trybots: master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Change-Id: I8771b5b527a737b54f0f82d793b35194bfc93c29
Reviewed-on: https://chromium-review.googlesource.com/882499
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Reviewed-by: Kenneth Russell <kbr@chromium.org>
Reviewed-by: Antoine Labour <piman@chromium.org>
Cr-Commit-Position: refs/heads/master@{#531781}

[modify] https://crrev.com/5acdc814d05bc2f6266fc62e036aafe3839e20d8/gpu/command_buffer/client/cmd_buffer_helper.cc
[modify] https://crrev.com/5acdc814d05bc2f6266fc62e036aafe3839e20d8/gpu/command_buffer/client/cmd_buffer_helper.h
[modify] https://crrev.com/5acdc814d05bc2f6266fc62e036aafe3839e20d8/gpu/command_buffer/client/cmd_buffer_helper_test.cc
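A minimal sketch of the fixed bookkeeping described in the commit message (member names here are illustrative assumptions, not necessarily those used in the CL; see the diff links above for the real change):

#include <cstdint>

struct FakeService {
  int32_t flushed_put = 0;
  void Flush(int32_t put) { flushed_put = put; }
};

class FixedHelperSketch {
 public:
  explicit FixedHelperSketch(FakeService* service) : service_(service) {}

  void Flush() {
    // Flush if put_ moved, OR if an ordering barrier flushed a put
    // offset that the explicit-flush tracking never saw; the second
    // condition is the one the pre-fix code was missing.
    if (put_ != last_put_sent_ || last_barrier_put_ != last_put_sent_) {
      service_->Flush(put_);
      last_put_sent_ = put_;
      last_barrier_put_ = put_;
    }
  }

  void OrderingBarrier() {
    service_->Flush(put_);
    last_barrier_put_ = put_;  // new: remember the barrier's put offset
  }

  void Advance(int32_t entries) { put_ = (put_ + entries) % 14000; }

 private:
  FakeService* service_;
  int32_t put_ = 0;
  int32_t last_put_sent_ = 0;
  int32_t last_barrier_put_ = 0;
};

int main() {
  FakeService service;
  FixedHelperSketch helper(&service);
  helper.Advance(13964);
  helper.Flush();            // last_put_sent_ = 13964
  helper.Advance(6);
  helper.OrderingBarrier();  // last_barrier_put_ = 13970
  helper.Advance(13994);     // wrap back to put_ = 13964
  helper.Flush();            // no longer skipped -> service advances
}

Replaying the sequence from the earlier logging: put_ again equals last_put_sent_ (13964), but last_barrier_put_ is 13970, so the lazy flush goes through and the service can make progress instead of idling.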
Jan 25 2018
Fixed in commit 5acdc814..., which initially landed in 66.0.3331.0. Asking for an M65 merge.
Jan 26 2018
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/1225f44ed3dc4b9b2b3e99c5751a1f356260125c

commit 1225f44ed3dc4b9b2b3e99c5751a1f356260125c
Author: Sunny Sachanandani <sunnyps@chromium.org>
Date: Fri Jan 26 01:36:24 2018

gpu: Do not skip cmd buffer helper lazy flush after ordering barrier.

Command buffer helper supports "lazy" flush which is used for flushing when we run out of space, before calling WaitForGetOffsetInRange. The helper keeps track of last_put_sent_ to avoid redundant flushes. When a flush is followed by an ordering barrier, last_put_sent_ is not updated. If the buffer wraps around to the same put offset as the last explicit flush and it runs out of space at the same time, the flush before WaitForGetOffset is skipped. The service goes idle since the client hasn't flushed, and the client hangs in WaitForGetOffset.

To solve this, the helper keeps track of the last ordering barrier's put offset and if that doesn't match the last flush put offset, an explicit flush is performed. A unit test is included which simulates the conditions when this bug happens.

R=piman,kbr
BUG=798400
TEST=gpu_unittests CommandBufferHelperTest.TestWrapAroundAfterOrderingBarrier
Cq-Include-Trybots: master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Change-Id: I8771b5b527a737b54f0f82d793b35194bfc93c29
Reviewed-on: https://chromium-review.googlesource.com/882499
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Reviewed-by: Kenneth Russell <kbr@chromium.org>
Reviewed-by: Antoine Labour <piman@chromium.org>
Cr-Original-Commit-Position: refs/heads/master@{#531781}
(cherry picked from commit 5acdc814d05bc2f6266fc62e036aafe3839e20d8)
Reviewed-on: https://chromium-review.googlesource.com/887968
Reviewed-by: Sunny Sachanandani <sunnyps@chromium.org>
Cr-Commit-Position: refs/branch-heads/3325@{#106}
Cr-Branched-From: bc084a8b5afa3744a74927344e304c02ae54189f-refs/heads/master@{#530369}

[modify] https://crrev.com/1225f44ed3dc4b9b2b3e99c5751a1f356260125c/gpu/command_buffer/client/cmd_buffer_helper.cc
[modify] https://crrev.com/1225f44ed3dc4b9b2b3e99c5751a1f356260125c/gpu/command_buffer/client/cmd_buffer_helper.h
[modify] https://crrev.com/1225f44ed3dc4b9b2b3e99c5751a1f356260125c/gpu/command_buffer/client/cmd_buffer_helper_test.cc
Jan 26 2018
I thought I saw a merge-approval email for this; I might have confused it with another bug. In any case, this has baked in the last canary, so the merge would be auto-approved tomorrow.
Jan 29 2018
Issue 805356 has been merged into this issue.
Feb 1 2018
When can I expect this fix to be live? At what version?
Feb 1 2018
It should be in the first M65 beta release, coming very soon, and M65 stable, early March.