Issue metadata
Chrome 62 freezes the whole page while performing CSS3 animations
Reported by
andr...@arabel.la,
Nov 6 2017
Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36

Steps to reproduce the problem:
1. Open this link: https://codepen.io/mi5qms/pen/qVNYpN
2. Wait for the page to render
3. Try to click on any button

What is the expected behavior?
Chrome shouldn't block the page while performing CSS3 animations.

What went wrong?
I have a web application written in React, and I experienced a problem with it after upgrading Chrome to version 62. The application has a table with some data (10 rows, 10 columns) and pagination, and several elements in the table and the paginator are animated. After changing the page (via the paginator) and rendering the new data, my tiny animation starts executing and Chrome freezes the whole page; I'm not able to click anywhere for a couple of seconds. When I remove all animations and transitions, I don't experience this problem. I also don't experience it in Firefox, Safari, or Chrome 61, which is why I suppose Chrome 62 has a bug and handles CSS3 animations improperly. I prepared a Codepen example which shows this problem (see the link in step 1).

Did this work before? Yes, in Chrome 61

Chrome version: 62.0.3202.75
Channel: stable
OS Version: OS X 10.12.6
Flash Version:
Nov 8 2017
Able to reproduce this issue on the reported version 62.0.3202.75 and the latest Canary 64.0.3262.0 on Windows 10 and Mac 10.12.6.

Manual bisect info:
Good Build: 60.0.3112.0
Bad Build: 61.0.3113.0

As per Comment #1, suspecting the same change: https://codereview.chromium.org/2890063002

@samans: Kindly take a look, and please help reassign this issue to the right owner if it is not related to this change.

Note: This issue is not reproducible on Ubuntu 14.04. Thanks!
Feb 14 2018
+ Ping - should the owner change?
Feb 14 2018
This bug sounds similar to crbug.com/771544. I believe cc::Scheduler is not doing a good job prioritizing user input vs. animation. I'll cc more people.
Feb 19 2018
Assuming the mojo change is related, I wonder if something also changed around task runner assignments on the cc thread. In any case, the next step here is to take a trace [1] and see what's happening.

[1] https://www.chromium.org/developers/how-tos/trace-event-profiling-tool/recording-tracing-runs
May 14 2018
Took a trace on Linux, 68.0.3423.2 (Official Build) dev (64-bit), using the test page from #1. The trace is entirely right-click events; while tracing I noticed not too much lag overall, but some right clicks definitely had a noticeable pause between the click and the menu appearing. For trace options I used rendering + input + cc.debug.scheduler. I'm not sure who to assign this to for a look, since samans@ just dropped this bug without comment.
May 14 2018
Attaching a shorter trace too, since I found the original one very laggy to work with in tracing tools. Same setup as above, but I only did 2-3 right clicks during the trace.
May 14 2018
Hopefully cc scheduler owners will have time to look into this. They're already cc'ed.
Jun 11 2018
Picking a cc/scheduler OWNER at random to assign to, because this bug clearly isn't getting traction despite being P1. It has now existed for more than 6 months. Brian, can you take a look at this or give some input as to the best next steps? Per #1 we have a solid bisect to samans@'s change to use mojo for BeginFrames, but per #7 and #8 the current suspicion is that the cc scheduler is not prioritizing correctly.
Jun 18 2018
Reassigning to sunnyps@. Sunny - same question to you as to Brian :)
Jun 19 2018
I'm able to repro on Canary on Windows using the test page in #1. I had a brief look at the trace in #10, and it looked like the main thread Commit was blocked for a long time. I thought it might be because the compositor is waiting for pending tree raster to complete in the GPU process before allowing the commit (the async GPU raster feature, enabled by the GPU scheduler), but I didn't see a corresponding PendingTree waiting trace. I'll keep investigating.
Jun 19 2018
It's definitely related to async GPU raster. In the attached trace there's a PendingTree::waiting trace event that coincides with the commit trace. Also, commands from the renderer don't seem to run in the GPU process for several frames at a time, even though we're rastering new tiles every frame.
Jun 19 2018
Captured another trace with extra trace events for async GPU raster (see TileManager::CheckPendingGpuWorkAndIssueSignals::RequireTiles/Callbacks). The required-for-activation / required-for-draw GPU pending work sets are always empty, so it's not async GPU raster that causes this. Instead, this seems to happen because we throw away pending tree work on every frame due to the animation, and thus never activate. I suspect this is the root cause of the BeginMainFrame hangs that we've been seeing since MainFrameBeforeActivation was rolled out. Not sure why this started happening in M62.
Jun 20 2018
Investigation so far: We see periodic calls to PrepareTiles but no DidFinishRunningRequiredForActivation/Draw/AllTasks callbacks. The nodes for running these callbacks are in the task graph and do post the callback to the compositor thread, but the weak ptrs are invalidated in the next PrepareTiles. This likely happens because the compositor thread is busy, and there's a race between the callbacks and the next PrepareTiles (part of draw). Therefore activation is delayed indefinitely. Async GPU raster isn't involved as there are no pending gpu work tiles or callbacks.

PrepareTiles
  -> ScheduleTasks (compositor thread)
  -> TaskGraphRunner (worker thread pool)
  -> TaskSetFinishedTaskImpl
  -> DidFinishRunningRequiredForActivationTasks (compositor thread)

Sending BeginFrames via mojo might have reduced a thread hop or PostTask for BeginFrame and made it run ahead of the callback.
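A minimal, self-contained sketch of the race described above, using std::weak_ptr as a stand-in for the weak pointers in cc (all names here are illustrative, not the real TileManager code): the queued completion callback holds a weak reference that the next PrepareTiles invalidates, so the "ready to activate" signal is silently dropped.

#include <functional>
#include <iostream>
#include <memory>
#include <vector>

struct SignalState {
  bool ready_to_activate = false;
};

int main() {
  // Stand-in for the compositor thread's task queue.
  std::vector<std::function<void()>> compositor_tasks;

  // PrepareTiles #1: raster work is scheduled; its completion callback is
  // bound to a weak handle on the current signal state.
  auto state = std::make_shared<SignalState>();
  std::weak_ptr<SignalState> weak_state = state;
  compositor_tasks.push_back([weak_state] {
    if (auto s = weak_state.lock())
      s->ready_to_activate = true;  // would trigger NotifyReadyToActivate
  });

  // The compositor thread stays busy with BeginFrames, and PrepareTiles #2
  // runs before the queued callback, replacing the state and invalidating
  // the weak handle.
  state = std::make_shared<SignalState>();

  // When the queued completion callback finally runs it is a no-op, so
  // activation never happens.
  for (auto& task : compositor_tasks) task();
  std::cout << "ready_to_activate = " << state->ready_to_activate << "\n";  // prints 0
}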
Jun 20 2018
I fixed the issue of not getting the DidFinishRunningRequiredForActivationTasks callback by invoking it in ScheduleTasks if required_for_activate_count == 0, but that doesn't fix the overall problem. The broader issue seems to be that we're running no tasks other than BeginFrame for long durations. For example, this trace event has only BeginFrame tasks inside it:

Title: ThreadControllerImpl::DoWork
Category: toplevel
User Friendly Category: other
Start: 7,220.168 ms
Wall Duration: 2,159.055 ms
CPU Duration: 2,149.628 ms
Self Time: 1.786 ms
CPU Self Time: 1.855 ms
Args:
  src_file: "../../mojo/public/cpp/system/simple_watcher.cc"
  src_func: "Notify"

If we don't run any other tasks, we won't see any raster task completion callbacks or NotifyReadyToActivate calls either (since DidFinishRunning... schedules signals_check_notifier_, which is another PostTask).
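A hypothetical sketch of the workaround mentioned above (illustrative stand-ins only, not the real cc interfaces): when nothing is required for activation, signal readiness directly from ScheduleTasks instead of waiting on a completion callback posted from the worker pool, which the next PrepareTiles may invalidate.

#include <iostream>

// Illustrative stand-in, not the real TileManager callback.
void DidFinishRunningRequiredForActivationTasks() {
  std::cout << "NotifyReadyToActivate can now be scheduled\n";
}

void ScheduleTasks(int required_for_activate_count) {
  if (required_for_activate_count == 0) {
    // Nothing to raster for activation: signal immediately instead of
    // relying on a completion callback posted back from the worker pool.
    DidFinishRunningRequiredForActivationTasks();
    return;
  }
  // ...otherwise enqueue raster work in the task graph; the worker pool
  // posts the completion callback back to the compositor thread...
}

int main() {
  ScheduleTasks(0);
}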
Jun 20 2018
Actually, never mind my last comment; there are other tasks in between BeginFrames. The BeginFrames take almost all of the time because the page is animating thousands of divs.
Jun 21 2018
Even though other tasks get to run, we get a flood of BeginFrames that tick animations and then don't do anything because nothing updated. Each BeginFrame consumes about 7ms on my Z840 just for ticking all the animations, and there are over a thousand tiles required for activation. That, compounded with the back-to-back BeginFrames, delays activation by a lot.
An easy fix is to always PostTask in Scheduler::OnBeginFrameDerivedImpl like we do for missed begin frames:
missed_begin_frame_task_.Reset(base::Bind(
    &Scheduler::BeginImplFrameWithDeadline, base::Unretained(this), args));
task_runner_->PostTask(FROM_HERE, missed_begin_frame_task_.callback());
With that change, the page is usable (still janky however) when scrolling, right clicking, or double clicking on text.
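A simplified, self-contained sketch of why the Reset() + PostTask pattern above mitigates the flood. CancelableTask here is an assumed stand-in for base::CancelableCallback, and the vector stands in for the compositor task runner; the point is that each new begin frame replaces the previously posted one, so when the thread finally drains its queue only the latest begin frame actually runs.

#include <functional>
#include <iostream>
#include <vector>

class CancelableTask {
 public:
  // Replaces the task; previously handed-out callbacks become no-ops.
  void Reset(std::function<void()> task) {
    ++generation_;
    task_ = std::move(task);
  }
  std::function<void()> callback() {
    int expected = generation_;
    return [this, expected] {
      if (expected == generation_ && task_) task_();  // stale copies do nothing
    };
  }

 private:
  int generation_ = 0;
  std::function<void()> task_;
};

int main() {
  std::vector<std::function<void()>> task_queue;  // stand-in for the compositor task runner
  CancelableTask begin_frame_task;

  // Three begin-frame messages arrive while the thread is busy; each Reset()
  // cancels the previously posted copy.
  for (int frame = 1; frame <= 3; ++frame) {
    begin_frame_task.Reset([frame] { std::cout << "BeginImplFrame " << frame << "\n"; });
    task_queue.push_back(begin_frame_task.callback());
  }

  // When the thread finally drains its queue, only the last begin frame runs.
  for (auto& task : task_queue) task();  // prints "BeginImplFrame 3" once
}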
c#1 bisected the change to when we switched begin frames to mojo. That makes sense because with IPC we had PostTask from IO thread to compositor thread, but with mojo the compositor thread message loop polls for messages. It's not clear to me how mojo messages are prioritized with respect to other tasks.
Even with the fix, we should look at how to optimize this by not rastering 1000s of tiles.
I'll send CLs with the begin frame flood and tile manager callback fixes tomorrow.
Jun 22 2018
Re #25: for IPCs, mojo posts tasks from the IO thread to the target thread. It's possible to customise the task runner by passing it to the Bind() call when initialising mojo interfaces. I can't see from a trace whether it's working as intended or not; it would be nice to get a trace with the "renderer.scheduler" and "mojom" categories enabled from a Chrome build with the "enable_mojo_tracing" gn argument. I was looking into prioritising input-related IPCs on the compositor thread, and it seems that we might benefit from something similar here. Is it safe to assume that we always want to prioritise ProxyImpl::DidReceiveCompositorFrameAckOnImplThread over Scheduler::BeginFrame?
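Not the mojo or scheduler API, just a self-contained illustration of the prioritisation question above: if frame-ack (or input) work sits on a higher-priority queue, it gets serviced before any backlog of begin frames instead of being stuck behind them.

#include <deque>
#include <functional>
#include <iostream>

int main() {
  // Two queues standing in for task sources on the compositor thread.
  std::deque<std::function<void()>> high_priority;    // e.g. frame acks, input
  std::deque<std::function<void()>> normal_priority;  // e.g. begin frames

  for (int frame = 1; frame <= 3; ++frame)
    normal_priority.push_back([frame] { std::cout << "BeginFrame " << frame << "\n"; });
  high_priority.push_back([] { std::cout << "DidReceiveCompositorFrameAck\n"; });

  // A priority-aware loop always drains high-priority work first, so the
  // ack is handled before the begin-frame backlog.
  while (!high_priority.empty() || !normal_priority.empty()) {
    auto& queue = !high_priority.empty() ? high_priority : normal_priority;
    queue.front()();
    queue.pop_front();
  }
}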
Jun 22 2018
Also, I can't quite repro this: when I open the link from #1, I see some CSS but no JS or HTML.
Jun 22 2018
The file attached to #1 has the necessary CSS and HTML inside.
Jun 28 2018
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/89b548a0bae43ef770c8e82a0e396d08c0cc53d4

commit 89b548a0bae43ef770c8e82a0e396d08c0cc53d4
Author: Sunny Sachanandani <sunnyps@chromium.org>
Date: Thu Jun 28 21:55:42 2018

cc: Post task on mojo begin frame to mitigate begin frame flood

Unlike PostTask from IO thread to compositor thread in Chrome IPC, mojo polls for messages on the compositor thread which means it can dequeue a large number of begin frame messages after the compositor thread has been busy for some time. All but the last begin frame cancels the previous begin frame, and is essentially a nop, but it still ticks animations. When a page has a large number of animations each begin frame can take a long time and push out other tasks such as tile manager callbacks stalling the pipeline.

Throttling the begin frames in viz doesn't fully solve the problem because we have to allow at least two begin frames in flight for pipelining, and so the client can still process two begin frames back to back.

Post a task to issue the begin frame to the source and its observers. The task is reset on every begin frame mojo message so only the last message is propagated to the source.

TEST=AsyncLayerTreeFrameSinkTest.MultipleBeginFrames
R=danakj
BUG= 782002
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
Change-Id: Ie98930a3078dde5ff99b42148596cf564c079220
Reviewed-on: https://chromium-review.googlesource.com/1111474
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Reviewed-by: Antoine Labour <piman@chromium.org>
Reviewed-by: Sadrul Chowdhury <sadrul@chromium.org>
Reviewed-by: danakj <danakj@chromium.org>
Cr-Commit-Position: refs/heads/master@{#571282}

[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/cc/mojo_embedder/async_layer_tree_frame_sink.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/cc/mojo_embedder/async_layer_tree_frame_sink.h
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/cc/mojo_embedder/async_layer_tree_frame_sink_unittest.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/cc/trees/layer_tree_frame_sink.h
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/components/viz/common/frame_sinks/begin_frame_source.h
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/content/browser/renderer_host/compositor_impl_android.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/content/renderer/mus/renderer_window_tree_client.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/content/renderer/mus/renderer_window_tree_client.h
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/content/renderer/render_thread_impl.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/ui/aura/mus/mus_context_factory.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/ui/aura/mus/window_port_mus.cc
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/ui/aura/mus/window_port_mus.h
[modify] https://crrev.com/89b548a0bae43ef770c8e82a0e396d08c0cc53d4/ui/compositor/host/host_context_factory_private.cc
Jun 29 2018
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/5a7793c70583c57d485709a21c38bcc1145e272c

commit 5a7793c70583c57d485709a21c38bcc1145e272c
Author: Henrik Grunell <grunell@chromium.org>
Date: Fri Jun 29 14:06:15 2018

Revert "cc: Post task on mojo begin frame to mitigate begin frame flood"

This reverts commit 89b548a0bae43ef770c8e82a0e396d08c0cc53d4.

Reason for revert: Likely to cause flakiness of test WebViewTest.SelectShowHide.

Original change's description:
> cc: Post task on mojo begin frame to mitigate begin frame flood
>
> Unlike PostTask from IO thread to compositor thread in Chrome IPC, mojo
> polls for messages on the compositor thread which means it can dequeue a
> large number of begin frame messages after the compositor thread has
> been busy for some time. All but the last begin frame cancels the
> previous begin frame, and is essentially a nop, but it still ticks
> animations. When a page has a large number of animations each begin
> frame can take a long time and push out other tasks such as tile manager
> callbacks stalling the pipeline.
>
> Throttling the begin frames in viz doesn't fully solve the problem
> because we have to allow at least two begin frames in flight for
> pipelining, and so the client can still process two begin frames back to
> back.
>
> Post a task to issue the begin frame to the source and its observers.
> The task is reset on every begin frame mojo message so only the last
> message is propagated to the source.
>
> TEST=AsyncLayerTreeFrameSinkTest.MultipleBeginFrames
> R=danakj
> BUG= 782002
>
> Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
> Change-Id: Ie98930a3078dde5ff99b42148596cf564c079220
> Reviewed-on: https://chromium-review.googlesource.com/1111474
> Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
> Reviewed-by: Antoine Labour <piman@chromium.org>
> Reviewed-by: Sadrul Chowdhury <sadrul@chromium.org>
> Reviewed-by: danakj <danakj@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#571282}

TBR=sadrul@chromium.org,danakj@chromium.org,fsamuel@chromium.org,sunnyps@chromium.org,piman@chromium.org

Change-Id: I5f184c5bde6384aa8b40feb2941e259ace7d7ac3
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Bug: 782002
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
Reviewed-on: https://chromium-review.googlesource.com/1120305
Reviewed-by: Henrik Grunell <grunell@chromium.org>
Commit-Queue: Henrik Grunell <grunell@chromium.org>
Cr-Commit-Position: refs/heads/master@{#571462}

[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/cc/mojo_embedder/async_layer_tree_frame_sink.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/cc/mojo_embedder/async_layer_tree_frame_sink.h
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/cc/mojo_embedder/async_layer_tree_frame_sink_unittest.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/cc/trees/layer_tree_frame_sink.h
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/components/viz/common/frame_sinks/begin_frame_source.h
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/content/browser/renderer_host/compositor_impl_android.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/content/renderer/mus/renderer_window_tree_client.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/content/renderer/mus/renderer_window_tree_client.h
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/content/renderer/render_thread_impl.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/ui/aura/mus/mus_context_factory.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/ui/aura/mus/window_port_mus.cc
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/ui/aura/mus/window_port_mus.h
[modify] https://crrev.com/5a7793c70583c57d485709a21c38bcc1145e272c/ui/compositor/host/host_context_factory_private.cc
Jul 9
Per the comment on https://chromium-review.googlesource.com/1120305, the revert was done because the original CL caused Issue 859061.
Jul 21
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15

commit 6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15
Author: Sunny Sachanandani <sunnyps@chromium.org>
Date: Sat Jul 21 00:36:57 2018

cc: Throttle incoming begin frames in scheduler

Unlike PostTask from IO thread to compositor thread in Chrome IPC, mojo polls for messages on the compositor thread which means it can dequeue a large number of begin frame messages after the compositor thread has been busy for some time. All but the last begin frame cancels the previous begin frame, and is essentially a nop, but it still ticks animations. When a page has a large number of animations each begin frame can take a long time and push out other tasks such as tile manager callbacks stalling the pipeline.

Throttling the begin frames in viz doesn't fully solve the problem because we have to allow at least two begin frames in flight for pipelining, and so the client can still process two begin frames back to back.

Throttling in AsyncLTFS causes issues with LTFS lifetime and ordering with respect to other messages.

Saving incoming begin frame in scheduler and posting a task works and ensures that only one begin frame is outstanding at any time.

R=danakj
BUG= 782002
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
Change-Id: I247a87ad7475d33f878a215ce87056d20482f88c
Reviewed-on: https://chromium-review.googlesource.com/1130082
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Reviewed-by: danakj <danakj@chromium.org>
Cr-Commit-Position: refs/heads/master@{#577046}

[modify] https://crrev.com/6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15/cc/scheduler/scheduler.cc
[modify] https://crrev.com/6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15/cc/scheduler/scheduler.h
[modify] https://crrev.com/6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15/cc/scheduler/scheduler_state_machine.cc
[modify] https://crrev.com/6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15/cc/scheduler/scheduler_state_machine.h
[modify] https://crrev.com/6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15/cc/scheduler/scheduler_unittest.cc
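A rough, self-contained sketch of the scheduler-side approach this CL describes (assumed names, not the real cc::Scheduler): the incoming begin frame is saved and a handling task is posted; a newer begin frame simply overwrites the saved args, so at most one begin frame is outstanding at any time and only the latest one is processed.

#include <functional>
#include <iostream>
#include <optional>
#include <vector>

struct BeginFrameArgs { int sequence_number = 0; };

class Scheduler {
 public:
  explicit Scheduler(std::vector<std::function<void()>>* task_runner)
      : task_runner_(task_runner) {}

  // Called for every incoming begin frame mojo message.
  void OnBeginFrame(const BeginFrameArgs& args) {
    bool task_already_pending = pending_begin_frame_.has_value();
    pending_begin_frame_ = args;  // newer args overwrite older ones
    if (task_already_pending)
      return;  // only one handling task is ever outstanding
    task_runner_->push_back([this] { HandlePendingBeginFrame(); });
  }

 private:
  void HandlePendingBeginFrame() {
    std::cout << "BeginImplFrame " << pending_begin_frame_->sequence_number << "\n";
    pending_begin_frame_.reset();
  }

  std::optional<BeginFrameArgs> pending_begin_frame_;
  std::vector<std::function<void()>>* task_runner_;
};

int main() {
  std::vector<std::function<void()>> task_runner;  // stand-in compositor queue
  Scheduler scheduler(&task_runner);

  // Three begin frames arrive while the thread is busy; only one task is
  // posted and it sees the latest args.
  for (int n = 1; n <= 3; ++n)
    scheduler.OnBeginFrame({n});

  for (auto& task : task_runner) task();  // prints "BeginImplFrame 3" once
}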
Jul 21
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/a9dba4b8b1cdcce440ca53340ee5bd68a68978a5

commit a9dba4b8b1cdcce440ca53340ee5bd68a68978a5
Author: Sunny Sachanandani <sunnyps@chromium.org>
Date: Sat Jul 21 17:57:21 2018

Revert "cc: Throttle incoming begin frames in scheduler"

This reverts commit 6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15.

Reason for revert: Causes test failures e.g. https://chromium-swarm.appspot.com/task?id=3ed41ac799e41d10&refresh=10&show_raw=1

Original change's description:
> cc: Throttle incoming begin frames in scheduler
>
> Unlike PostTask from IO thread to compositor thread in Chrome IPC, mojo
> polls for messages on the compositor thread which means it can dequeue a
> large number of begin frame messages after the compositor thread has
> been busy for some time. All but the last begin frame cancels the
> previous begin frame, and is essentially a nop, but it still ticks
> animations. When a page has a large number of animations each begin
> frame can take a long time and push out other tasks such as tile manager
> callbacks stalling the pipeline.
>
> Throttling the begin frames in viz doesn't fully solve the problem
> because we have to allow at least two begin frames in flight for
> pipelining, and so the client can still process two begin frames back to
> back.
>
> Throttling in AsyncLTFS causes issues with LTFS lifetime and ordering
> with respect to other messages.
>
> Saving incoming begin frame in scheduler and posting a task works and
> ensures that only one begin frame is outstanding at any time.
>
> R=danakj
> BUG= 782002
>
> Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
> Change-Id: I247a87ad7475d33f878a215ce87056d20482f88c
> Reviewed-on: https://chromium-review.googlesource.com/1130082
> Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
> Reviewed-by: danakj <danakj@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#577046}

TBR=danakj@chromium.org,sunnyps@chromium.org

Change-Id: I0ce088ef781f93cc4b392c347f443771f84249a6
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Bug: 782002
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
Reviewed-on: https://chromium-review.googlesource.com/1146082
Reviewed-by: Sunny Sachanandani <sunnyps@chromium.org>
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Cr-Commit-Position: refs/heads/master@{#577082}

[modify] https://crrev.com/a9dba4b8b1cdcce440ca53340ee5bd68a68978a5/cc/scheduler/scheduler.cc
[modify] https://crrev.com/a9dba4b8b1cdcce440ca53340ee5bd68a68978a5/cc/scheduler/scheduler.h
[modify] https://crrev.com/a9dba4b8b1cdcce440ca53340ee5bd68a68978a5/cc/scheduler/scheduler_state_machine.cc
[modify] https://crrev.com/a9dba4b8b1cdcce440ca53340ee5bd68a68978a5/cc/scheduler/scheduler_state_machine.h
[modify] https://crrev.com/a9dba4b8b1cdcce440ca53340ee5bd68a68978a5/cc/scheduler/scheduler_unittest.cc
Jul 23
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/1607d9124b4df6bf0175c3ec69b7aef6a981ddb6

commit 1607d9124b4df6bf0175c3ec69b7aef6a981ddb6
Author: Sunny Sachanandani <sunnyps@chromium.org>
Date: Mon Jul 23 23:43:07 2018

Reland "cc: Throttle incoming begin frames in scheduler"

This is a reland of 6beef1e62e5f9c2074e7b08bdd05c20dd65cdd15

Removed incorrect DCHECKs that were added in the original change.

Original change's description:
> cc: Throttle incoming begin frames in scheduler
>
> Unlike PostTask from IO thread to compositor thread in Chrome IPC, mojo
> polls for messages on the compositor thread which means it can dequeue a
> large number of begin frame messages after the compositor thread has
> been busy for some time. All but the last begin frame cancels the
> previous begin frame, and is essentially a nop, but it still ticks
> animations. When a page has a large number of animations each begin
> frame can take a long time and push out other tasks such as tile manager
> callbacks stalling the pipeline.
>
> Throttling the begin frames in viz doesn't fully solve the problem
> because we have to allow at least two begin frames in flight for
> pipelining, and so the client can still process two begin frames back to
> back.
>
> Throttling in AsyncLTFS causes issues with LTFS lifetime and ordering
> with respect to other messages.
>
> Saving incoming begin frame in scheduler and posting a task works and
> ensures that only one begin frame is outstanding at any time.
>
> R=danakj
> BUG= 782002
>
> Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
> Change-Id: I247a87ad7475d33f878a215ce87056d20482f88c
> Reviewed-on: https://chromium-review.googlesource.com/1130082
> Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
> Reviewed-by: danakj <danakj@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#577046}

TBR=danakj
BUG= 782002

Change-Id: Iec7bd9e421bdb372f101ecebc6cb71835dcb27bf
Cq-Include-Trybots: luci.chromium.try:android_optional_gpu_tests_rel;master.tryserver.blink:linux_trusty_blink_rel
Reviewed-on: https://chromium-review.googlesource.com/1147082
Commit-Queue: Sunny Sachanandani <sunnyps@chromium.org>
Reviewed-by: Sunny Sachanandani <sunnyps@chromium.org>
Cr-Commit-Position: refs/heads/master@{#577332}

[modify] https://crrev.com/1607d9124b4df6bf0175c3ec69b7aef6a981ddb6/cc/scheduler/scheduler.cc
[modify] https://crrev.com/1607d9124b4df6bf0175c3ec69b7aef6a981ddb6/cc/scheduler/scheduler.h
[modify] https://crrev.com/1607d9124b4df6bf0175c3ec69b7aef6a981ddb6/cc/scheduler/scheduler_state_machine.cc
[modify] https://crrev.com/1607d9124b4df6bf0175c3ec69b7aef6a981ddb6/cc/scheduler/scheduler_state_machine.h
[modify] https://crrev.com/1607d9124b4df6bf0175c3ec69b7aef6a981ddb6/cc/scheduler/scheduler_unittest.cc
Dec 14
I tried this on M71 with the test case attached in c#1; the reported behavior is no longer there. Closing as fixed.
Comment 1 by woxxom@gmail.com, Nov 7 2017: attachment (182 KB)