Issue 771805

Starred by 3 users

Issue metadata

Status: Fixed
Owner:
Closed: Oct 2017
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Mac
Pri: 3
Type: Bug
Hotlist-MemoryInfra



Deterministically reproducible renderer hang on shutdown.

Project Member Reported by erikc...@chromium.org, Oct 4 2017

Issue description

Long post. Please go to bottom ["Action Items"] for detailed requests.

################### Repro steps ###################
OS: macOS 10.12.6
gn args:
"""
 is_component_build = false
 use_goma = true
 is_debug = false
 symbol_level = 1
 enable_nacl = false
 is_asan = true
"""

Chrome ToT [54ec4f23d7db25d5f2ad90346705534f0ba1585e] with the following CL patched in: https://chromium-review.googlesource.com/c/chromium/src/+/646047

Run:
"""
 ./out/gn/browser_tests --gtest_filter=SitePerProcess/TaskManagerOOPIFBrowserTest.SubframeHistoryNavigation/0
"""

Observe that the test hangs, and then fails with a timeout.

################### Analysis of problem ###################

The test checks that the task manager reports memory metrics for all processes. Under my CL, the task manager uses memory_instrumentation to get memory metrics. Because a renderer is hung in shutdown, it never responds to the memory_instrumentation coordinator, and all future memory_instrumentation callbacks are queued/stuck behind it.
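
To make the failure mode concrete, here is a minimal, self-contained sketch of why one unresponsive client stalls every later request when requests are serviced strictly one at a time. This is not the actual memory_instrumentation coordinator code; the class and method names are illustrative.
"""
#include <functional>
#include <queue>
#include <utility>

// Illustrative only: a coordinator that services dump requests strictly
// serially. A request completes by invoking the |done| closure handed to it.
class SerialCoordinator {
 public:
  using DoneClosure = std::function<void()>;
  using Request = std::function<void(DoneClosure done)>;

  void EnqueueDumpRequest(Request request) {
    queue_.push(std::move(request));
    if (!busy_)
      StartNext();
  }

 private:
  void StartNext() {
    if (queue_.empty()) {
      busy_ = false;
      return;
    }
    busy_ = true;
    Request request = std::move(queue_.front());
    queue_.pop();
    // If this request's |done| closure is never invoked -- e.g. the renderer
    // being dumped is hung in shutdown -- busy_ stays true forever and every
    // request queued behind it is stuck.
    request([this] { StartNext(); });
  }

  std::queue<Request> queue_;
  bool busy_ = false;
};
"""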

The main thread is waiting for the compositor thread:
"""
 thread #1, name = 'CrRendererMain', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x00007fff9d60c34a libsystem_kernel.dylib`mach_msg_trap + 10
    frame #1: 0x00007fff9d60b797 libsystem_kernel.dylib`mach_msg + 55
    frame #2: 0x000000011a216105 Chromium Framework`::TimedWaitUntil() at waitable_event_mac.cc:145 [opt]
    frame #3: 0x000000011a215e39 Chromium Framework`::Wait() at waitable_event_mac.cc:105 [opt]
    frame #4: 0x000000011d96ccbb Chromium Framework`::Stop() [inlined] Wait at completion_event.h:43 [opt]
    frame #5: 0x000000011d96ccb3 Chromium Framework`::Stop() at proxy_main.cc:451 [opt]
    frame #6: 0x000000011d8757be Chromium Framework`::~LayerTreeHost() at layer_tree_host.cc:190 [opt]
    frame #7: 0x000000011d87694e Chromium Framework`::~LayerTreeHost() [inlined] ~LayerTreeHost at layer_tree_host.cc:167 [opt]
    frame #8: 0x000000011d876949 Chromium Framework`::~LayerTreeHost() at layer_tree_host.cc:167 [opt]
    frame #9: 0x0000000128774174 Chromium Framework`::~RenderWidgetCompositor() [inlined] operator() at memory:2233 [opt]
    frame #10: 0x0000000128774149 Chromium Framework`::~RenderWidgetCompositor() [inlined] reset at memory:2546 [opt]
    frame #11: 0x0000000128774123 Chromium Framework`::~RenderWidgetCompositor() [inlined] ~unique_ptr at memory:2500 [opt]
    frame #12: 0x0000000128774123 Chromium Framework`::~RenderWidgetCompositor() [inlined] ~unique_ptr at memory:2500 [opt]
    frame #13: 0x0000000128774123 Chromium Framework`::~RenderWidgetCompositor() [inlined] ~RenderWidgetCompositor at render_widget_compositor.cc:299 [opt]
    frame #14: 0x00000001287740c4 Chromium Framework`::~RenderWidgetCompositor() [inlined] ~RenderWidgetCompositor at render_widget_compositor.cc:299 [opt]
    frame #15: 0x00000001287740c4 Chromium Framework`::~RenderWidgetCompositor() at render_widget_compositor.cc:299 [opt]
    frame #16: 0x00000001289a09a0 Chromium Framework`::Close() [inlined] operator() at memory:2233 [opt]
    frame #17: 0x00000001289a0971 Chromium Framework`::Close() [inlined] reset at memory:2546 [opt]
    frame #18: 0x00000001289a094b Chromium Framework`::Close() at render_widget.cc:1563 [opt]
    frame #19: 0x000000012897c2b7 Chromium Framework`::OnClose() at render_widget.cc:742 [opt]
    frame #20: 0x00000001288b56f1 Chromium Framework`::FrameDetached() at render_frame_impl.cc:3238 [opt]
    frame #21: 0x00000001258caa99 Chromium Framework`::Detached() at LocalFrameClientImpl.cpp:335 [opt]
    frame #22: 0x00000001259d6987 Chromium Framework`::Detach() at Frame.cpp:83 [opt]
    frame #23: 0x0000000125a15201 Chromium Framework`::Detach() at LocalFrame.cpp:331 [opt]
    frame #24: 0x0000000125aa1ce7 Chromium Framework`::DetachChildren() at RemoteFrame.cpp:178 [opt]
    frame #25: 0x0000000125aa1735 Chromium Framework`::Detach() at RemoteFrame.cpp:87 [opt]
    frame #26: 0x0000000125aa1ce7 Chromium Framework`::DetachChildren() at RemoteFrame.cpp:178 [opt]
    frame #27: 0x0000000125aa1735 Chromium Framework`::Detach() at RemoteFrame.cpp:87 [opt]
    frame #28: 0x0000000128907d54 Chromium Framework`::Dispatch<content::RenderFrameProxy, content::RenderFrameProxy, void, void (content::RenderFrameProxy::*)()>() [inlined] DispatchToMethodImpl<content::RenderFrameProxy *, void (content::RenderFrameProxy::*)(), std::__1::tuple<>> at tuple.h:56 [opt]
...
"""

The compositor thread is waiting for a tile worker:
"""
  thread #11, name = 'Compositor'
    frame #0: 0x00007fff9d613bf2 libsystem_kernel.dylib`__psynch_cvwait + 10
    frame #1: 0x00007fff9d6ff7fa libsystem_pthread.dylib`_pthread_cond_wait + 712
    frame #2: 0x000000011a213d50 Chromium Framework`::Wait() at condition_variable_posix.cc:71 [opt]
    frame #3: 0x00000001286fd9af Chromium Framework`::WaitForTasksToFinishRunning() at categorized_worker_pool.cc:309 [opt]
    frame #4: 0x000000011d838820 Chromium Framework`::Shutdown() at tile_task_manager.cc:53 [opt]
    frame #5: 0x000000011d815a91 Chromium Framework`::FinishTasksAndCleanUp() at tile_manager.cc:400 [opt]
    frame #6: 0x000000011d8ac5aa Chromium Framework`::CleanUpTileManagerAndUIResources() at layer_tree_host_impl.cc:2556 [opt]
    frame #7: 0x000000011d8b9a53 Chromium Framework`::ReleaseLayerTreeFrameSink() at layer_tree_host_impl.cc:2591 [opt]
    frame #8: 0x000000011d95a0df Chromium Framework`::~ProxyImpl() at proxy_impl.cc:102 [opt]
    frame #9: 0x000000011d95a5be Chromium Framework`::~ProxyImpl() [inlined] ~ProxyImpl at proxy_impl.cc:92 [opt]
    frame #10: 0x000000011d95a5b9 Chromium Framework`::~ProxyImpl() at proxy_impl.cc:92 [opt]
    frame #11: 0x000000011d966e20 Chromium Framework`::DestroyProxyImplOnImplThread() [inlined] operator() at memory:2233 [opt]
    frame #12: 0x000000011d966df8 Chromium Framework`::DestroyProxyImplOnImplThread() [inlined] reset at memory:2546 [opt]
    frame #13: 0x000000011d966dd6 Chromium Framework`::DestroyProxyImplOnImplThread() at proxy_main.cc:61 [opt]
    frame #14: 0x000000011a0aa919 Chromium Framework`::RunTask() [inlined] Run at callback.h:64 [opt]
...
"""

The tile worker is waiting for the GpuMemoryThread:
"""
  thread #13, name = 'CompositorTileWorker2/28931'
    frame #0: 0x00007fff9d60c34a libsystem_kernel.dylib`mach_msg_trap + 10
    frame #1: 0x00007fff9d60b797 libsystem_kernel.dylib`mach_msg + 55
    frame #2: 0x000000011a216105 Chromium Framework`::TimedWaitUntil() at waitable_event_mac.cc:145 [opt]
    frame #3: 0x000000011a215e39 Chromium Framework`::Wait() at waitable_event_mac.cc:105 [opt]
    frame #4: 0x00000001166d514c Chromium Framework`::CreateGpuMemoryBuffer() at client_gpu_memory_buffer_manager.cc:146 [opt]
    frame #5: 0x000000011d767b3d Chromium Framework`::AllocateGpuMemoryBuffer() at resource_provider.cc:1040 [opt]
    frame #6: 0x000000011d767962 Chromium Framework`::ConsumeTexture() [inlined] LazyAllocate at resource_provider.cc:1027 [opt]
    frame #7: 0x000000011d767902 Chromium Framework`::ConsumeTexture() at resource_provider.cc:1015 [opt]
    frame #8: 0x000000011d71163b Chromium Framework`::PlaybackOnWorkerThread() [inlined] RasterizeSource at gpu_raster_buffer_provider.cc:107 [opt]
    frame #9: 0x000000011d711626 Chromium Framework`::PlaybackOnWorkerThread() at gpu_raster_buffer_provider.cc:344 [opt]
    frame #10: 0x000000011d710845 Chromium Framework`::Playback() at gpu_raster_buffer_provider.cc:164 [opt]
    frame #11: 0x000000011d832a50 Chromium Framework`::RunOnWorkerThread() at tile_manager.cc:136 [opt]
...
"""

The GpuMemoryThread appears to be doing nothing.
"""
  thread #9, name = 'GpuMemoryThread'
    frame #0: 0x00007fff9d60c34a libsystem_kernel.dylib`mach_msg_trap + 10
    frame #1: 0x00007fff9d60b797 libsystem_kernel.dylib`mach_msg + 55
    frame #2: 0x00007fff877b3874 CoreFoundation`__CFRunLoopServiceMachPort + 212
    frame #3: 0x00007fff877b2cf1 CoreFoundation`__CFRunLoopRun + 1361
    frame #4: 0x00007fff877b2544 CoreFoundation`CFRunLoopRunSpecific + 420
    frame #5: 0x000000011a136211 Chromium Framework`::DoRun() at message_pump_mac.mm:670 [opt]
    frame #6: 0x000000011a1325e1 Chromium Framework`::Run() at message_pump_mac.mm:179 [opt]
    frame #7: 0x000000011a1bdfec Chromium Framework`::Run() at run_loop.cc:118 [opt]
    frame #8: 0x000000011a275df1 Chromium Framework`::ThreadMain() at thread.cc:338 [opt]
    frame #9: 0x000000011a25f48b Chromium Framework`::ThreadFunc() at platform_thread_posix.cc:75 [opt]
    frame #10: 0x00007fff9d6fe93b libsystem_pthread.dylib`_pthread_body + 180
    frame #11: 0x00007fff9d6fe887 libsystem_pthread.dylib`_pthread_start + 286
    frame #12: 0x00007fff9d6fe08d libsystem_pthread.dylib`thread_start + 13
"""

Weirdly enough, after attaching with lldb and "continuing" the process, it then unhangs and manages to quit. I've attached a sample which shows the same thing.

################### Action Items ###################
@primiano:
Memory-Infra should be robust to non-responding renderers. I'm guessing that memory-infra currently requires hopping to threads which are potentially DOS-able by web contents. I don't see a way to work around this, so we should probably make the coordinator robust to timeouts.
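
One possible shape for that robustness, sketched only (the class is hypothetical and the deadline is arbitrary): arm a per-request deadline when a dump request is sent to a client process, and synthesize a failure for any client that doesn't respond in time.
"""
#include "base/callback.h"
#include "base/timer/timer.h"

// Hypothetical sketch, not the real coordinator: each outstanding dump
// request to a client process arms a one-shot deadline. If the client
// replies first, the timer is stopped; if the timer fires first, the
// coordinator treats that client's dump as failed and moves on.
class PendingClientDump {
 public:
  void RequestSent(const base::Closure& on_timeout) {
    // 15 seconds is an arbitrary, illustrative deadline.
    timeout_.Start(FROM_HERE, base::TimeDelta::FromSeconds(15), on_timeout);
  }
  void ReplyReceived() { timeout_.Stop(); }

 private:
  base::OneShotTimer timeout_;
};
"""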

@rockot:
Mojo is aware that something has gone wrong.
"""
[28073:45571:1004/160856.322750:ERROR:node_channel.cc(823)] Error on sending mach ports. Remote process is likely gone. Dropping message.28090
"""
Can we add a UMA metric to track how often this happens? Alternatively, should we just shut down the connection on failure to broker handles/mach ports?
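For example, recording at the point where the message is currently dropped would be a one-liner (the histogram name here is invented for this sketch, not an existing metric):
"""
#include "base/metrics/histogram_macros.h"

// Hypothetical metric: count each message dropped because its Mach ports
// failed to broker, at the node_channel.cc spot that logs the error above.
UMA_HISTOGRAM_BOOLEAN("Mojo.MachPortRelay.BrokerageFailed", true);
"""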

@vmiura, @creis, @rsesek:
Not sure of the root cause. Could be the ASAN implementation [seems unlikely?]. Could be related to the compositing stack, or OOPIF, or a bug in WaitableEvent. Given that attaching lldb and "continue" fixes the issue, I'm leaning towards a bug in WaitableEvent or in the implementation of CreateGpuMemoryBuffer.
 
Attachment: hung_renderer_sample.txt (88.7 KB)
I was able to repro this. To test the WaitableEvent theory, I cut out waitable_event_mac.cc and switched back to waitable_event_posix.cc, and the problem persisted.
Re #1 about memory-infra.
I think the thing we can reasonably fix is to exclude hung renderers from the dump and continue with the other processes. The tracked bug for this is Issue 727785.
Excluding individual MDPs instead, and saving the rest of the dump for the process, will be harder: for various reasons we allow some MDPs to look at the results injected by other MDPs in the same dump, so we can't easily make the dump process non-serial (i.e. we can't easily turn the sequence of hops into a "broadcast tasks to all task runners and then join with timeout").
I don't know anything about the gpu architecture, but anecdotally renderer hangs like this (multiple threads stuck on a waitable event) are due to our code interlocking the task runners and getting into a deadlock.
A classic pattern I've seen in the past: TaskRunner A blocks itself on an event.Wait(), waiting for a task posted on TaskRunner B to signal it, while TaskRunner B is waiting for a task posted on A to execute --> deadlock (sketched below).
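
Concretely, that interlock looks something like the following. This is an illustrative sketch with invented function names, not code from the renderer in question; neither Wait() ever returns.
"""
#include "base/bind.h"
#include "base/location.h"
#include "base/single_thread_task_runner.h"
#include "base/synchronization/waitable_event.h"

// Illustrative only. Thread A blocks on an event that a task on TaskRunner B
// is supposed to signal, but that task first waits on an event that only
// thread A can signal after it wakes up.
void TaskOnB(base::WaitableEvent* a_may_proceed,
             base::WaitableEvent* b_may_proceed) {
  b_may_proceed->Wait();    // Waits on thread A, which is blocked below.
  a_may_proceed->Signal();  // Never reached.
}

void RunsOnThreadA(scoped_refptr<base::SingleThreadTaskRunner> runner_b) {
  base::WaitableEvent a_may_proceed(
      base::WaitableEvent::ResetPolicy::MANUAL,
      base::WaitableEvent::InitialState::NOT_SIGNALED);
  base::WaitableEvent b_may_proceed(
      base::WaitableEvent::ResetPolicy::MANUAL,
      base::WaitableEvent::InitialState::NOT_SIGNALED);

  runner_b->PostTask(
      FROM_HERE, base::Bind(&TaskOnB, &a_may_proceed, &b_may_proceed));
  a_may_proceed.Wait();    // Blocks thread A forever...
  b_may_proceed.Signal();  // ...so B is never released: deadlock.
}
"""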
Cc: ccameron@chromium.org piman@chromium.org ericrk@chromium.org
+ ericrk, ccameron, piman

No response from vmiura [and I've pinged], so ccing some other GPU folks. This seems like a potentially serious problem. 

Comment 5 by piman@chromium.org, Oct 6 2017

Cc: sadrul@chromium.org sunn...@chromium.org
Here's the flow:

1- ClientGpuMemoryBufferManager::CreateGpuMemoryBuffer posts a task to the GpuMemoryThread, calling AllocateGpuMemoryBufferOnThread, and waits for the result (WaitableEvent).
2- ClientGpuMemoryBufferManager::AllocateGpuMemoryBufferOnThread sends a message to the browser process, expecting an async reply
3- This request should be served by a content::GpuClient, running on the browser IO thread, owned by the RenderProcessHostImpl (see RenderProcessHostImpl::CreateMusGpuRequest).
4- GpuClient::CreateGpuMemoryBuffer talks to BrowserGpuMemoryBufferManager::AllocateGpuMemoryBufferForChildProcess, which on Mac should go through the "native buffer" path (assuming default flags).
5- BrowserGpuMemoryBufferManager should then connect to the GPU process (instantiating it if necessary, though that shouldn't be the case here unless it crashed) and send an async message (see GpuProcessHost::CreateGpuMemoryBuffer)
6- the message is serviced on the GPU process IO thread (see GpuServiceImpl::CreateGpuMemoryBuffer).

At this point the callback flow should send back the response.

7- GpuServiceImpl::CreateGpuMemoryBuffer will run the mojo callback, which should be received by GpuProcessHost::OnGpuMemoryBufferCreated on the browser IO thread, calling back BrowserGpuMemoryBufferManager (unless GpuProcessHost was destroyed because of a GPU crash, in which case the callback should have been called by GpuProcessHost::SendOutstandingReplies with an error). 
8- in case of error because the gpu process died, BrowserGpuMemoryBufferManager::GpuMemoryBufferCreatedOnIO may retry creating it (going back to step 5), but will otherwise call back GpuClient
9- GpuClient::OnCreateGpuMemoryBuffer should be called - unless the GpuClient was destroyed in the meantime, but see below - which will run the mojo callback.
10- ClientGpuMemoryBufferManager::OnGpuMemoryBufferAllocatedOnThread will be called on the GpuMemoryThread, which will signal the WaitableEvent and unlock CreateGpuMemoryBuffer.

If the GpuClient was destroyed in step 9, the mojo interface should be disconnected, and ClientGpuMemoryBufferManager::DisconnectGpuOnThread should be called which should signal the WaitableEvent and unlock CreateGpuMemoryBuffer.
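
For reference, the client-side blocking pattern in steps 1, 2, and 10 boils down to something like this. It's a simplified sketch, not the exact code in client_gpu_memory_buffer_manager.cc, and the helper name is invented:
"""
#include "base/bind.h"
#include "base/location.h"
#include "base/single_thread_task_runner.h"
#include "base/synchronization/waitable_event.h"

// Declared elsewhere in this sketch: sends the mojom::Gpu allocation request
// and arranges for |wait| to be signaled by either the reply callback or the
// connection error handler. The name is illustrative.
void AllocateOnGpuMemoryThread(base::WaitableEvent* wait);

// Simplified shape of steps 1-2 and 10.
void CreateGpuMemoryBufferBlocking(
    scoped_refptr<base::SingleThreadTaskRunner> gpu_memory_thread) {
  base::WaitableEvent wait(base::WaitableEvent::ResetPolicy::MANUAL,
                           base::WaitableEvent::InitialState::NOT_SIGNALED);

  // Step 1: hop to the GpuMemoryThread, which owns the mojom::Gpu pipe.
  gpu_memory_thread->PostTask(
      FROM_HERE, base::Bind(&AllocateOnGpuMemoryThread, &wait));

  // Steps 2-10 run asynchronously. Both the success path
  // (OnGpuMemoryBufferAllocatedOnThread) and the disconnect path
  // (DisconnectGpuOnThread) must end in wait.Signal(); if neither runs,
  // this Wait() blocks the calling tile worker forever -- the hang here.
  wait.Wait();
}
"""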



Obviously, a bug in any of these steps where a message gets lost without ensuring the corresponding callback is called could cause a situation like this, but the fact that attaching a debugger gets things going again makes me think that's not what's happening. Possibly a live-lock, or thread descheduling/starvation, or something that would just resolve itself once attaching the debugger let things cool off.

Some additional data that would be helpful:
1- in the blocked situation, do we have any evidence that any of the other involved threads (browser IO, GPU IO) are busy?
2- what triggers signaling the WaitableEvent after attaching the debugger? Is it ClientGpuMemoryBufferManager::OnGpuMemoryBufferAllocatedOnThread or ClientGpuMemoryBufferManager::DisconnectGpuOnThread?
Status: WontFix (was: Untriaged)
Thanks for the feedback!

If I let the test sit there for >1 min, it eventually completes. The browser process is spinning at one full core, so we're looking at a livelock. Yet sampling the process shows almost no CPU usage [very suspicious].

Doing some more debugging: the browser process tries to compute its own resident memory. On macOS, this crawls the virtual address space, which causes ASAN to blow up. That explains why the renderer never gets the callback it's expecting.
Owner: erikc...@chromium.org
Status: Started (was: WontFix)
Hm. There's more to this story. The hang [really more like a 70-second wait] still occurs, even with my fixes. I've confirmed we're no longer crawling the virtual address space. I'm going to see if this renderer hang happens even without my code.
Cc: samans@chromium.org
+ sadrul, samans

I can reproduce this issue non-deterministically on ToT [~50% of the time]. The easiest way to do so is to take the test in question, add a while(true){sleep(1000);} to the end, and then use "ps aux | grep Chromium" to see whether there's a renderer process sticking around that should have died.

I added a bunch of log statements. Here's what I observe:

In the renderer process:
 * The main thread is waiting in ProxyMain::Stop.
 * The impl thread is waiting in ProxyImpl::~ProxyImpl
 * The compositor tile worker is waiting in ClientGpuMemoryBufferManager::CreateGpuMemoryBuffer.
 * The IO thread and GpuMemoryThread are not blocked.

In the browser process:
 * The GpuClient receives the request, and sends a reply. 
 * The GpuClient is then destroyed.

In the renderer process:
 * The reply is not received, and the connection_error_handler is not called. This causes the deadlock.
 * AFAICT, the InterfacePtr is bound to the GpuMemoryThread, which is not blocked.

The only thing that's a little bit suspicious is that the InterfacePtr is first bound on the main thread, and then transferred to the GpuMemoryThread via PassInterface (see the sketch below). reillyg@ says this should work [as long as no messages are passed in the interim, which I've confirmed]. But then, there's a second InterfacePtr bound on the main thread to the same connector.

I tried cloning the connector that gets used for the GpuMemoryThread InterfacePtr, but that doesn't make a difference. I'm out of ideas.
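
For context, the bind/transfer/rebind sequence described above looks roughly like this under the 2017-era Mojo C++ bindings. This is a simplified sketch: ConnectToGpuService() and |gpu_memory_thread| are stand-ins for however the pipe is actually obtained and for the GpuMemoryThread's task runner.
"""
#include <utility>

#include "base/bind.h"
#include "base/location.h"
#include "base/single_thread_task_runner.h"
#include "services/ui/public/interfaces/gpu.mojom.h"

void ConnectToGpuService(ui::mojom::GpuRequest request);  // Hypothetical.

// Runs on the GpuMemoryThread: rebind the transferred pipe there.
void BindOnGpuMemoryThread(ui::mojom::GpuPtrInfo info) {
  ui::mojom::GpuPtr gpu;
  gpu.Bind(std::move(info));  // |gpu| is now bound to the GpuMemoryThread.
  // ... hand |gpu| off to an object owned by this thread ...
}

// Runs on the main thread: bind, unbind, and transfer. Per reillyg@, this is
// legal as long as no messages cross the pipe between unbind and rebind.
void SetUpGpuOnMainThread(
    scoped_refptr<base::SingleThreadTaskRunner> gpu_memory_thread) {
  ui::mojom::GpuPtr gpu;
  ConnectToGpuService(mojo::MakeRequest(&gpu));      // Bound to main thread.
  ui::mojom::GpuPtrInfo info = gpu.PassInterface();  // Unbinds the pipe.
  gpu_memory_thread->PostTask(
      FROM_HERE, base::Bind(&BindOnGpuMemoryThread, base::Passed(&info)));
}
"""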


Comment 9 by roc...@chromium.org, Oct 11 2017

I agree with your assessment that the GpuPtr is bound on the GpuMemoryThread. I also don't see any obvious races between signaling and ClientGpuMemoryBufferManager shutdown that aren't accounted for. I do know there have been bugs here in the past though, and the logic is a bit subtle.

Just to rule out one remotely possible class of surprises, can you verify that at the time of the deadlock, |gpu_| is still bound?

I added a repeating timer to the GpuMemoryThread that checks on gpu_.is_bound(). I've confirmed that:

1) The GpuMemoryThread is not blocked.
2) gpu_ is bound.

*sigh*. This hole keeps digging deeper.
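
For reference, the repeating timer looks roughly like this (the timer member and method names are invented for this sketch; it assumes the 2017-era base::Timer API):
"""
#include "base/bind.h"
#include "base/location.h"
#include "base/logging.h"
#include "base/timer/timer.h"

// Hypothetical debugging aid: a base::RepeatingTimer member (|debug_timer_|)
// running on the GpuMemoryThread, logging the binding state once a second
// while waiting for the hang to manifest.
void ClientGpuMemoryBufferManager::StartDebugTimerOnThread() {
  debug_timer_.Start(
      FROM_HERE, base::TimeDelta::FromSeconds(1),
      base::Bind(&ClientGpuMemoryBufferManager::LogStateOnThread,
                 base::Unretained(this)));
}

void ClientGpuMemoryBufferManager::LogStateOnThread() {
  LOG(ERROR) << "gpu_.is_bound() = " << gpu_.is_bound();
}
"""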

I think there may be several different issues. At least, I'm observing different symptoms when I dig down.

Question: if the pipe is disconnected, a task is added to MultiplexRouter::tasks_. Is it possible to deadlock if MultiplexRouter::ProcessTasks doesn't process the message [say, because something needs to touch the main thread]?

I think I'm seeing the following:
1) Browser process - GpuClient is destroyed.
2) Renderer process calls Connector::HandleError.
3) Renderer process calls MultiplexRouter::OnPipeConnectionError.
4) Renderer process calls MultiplexRouter::ProcessNotifyErrorTask.
5) Renderer process's ClientGpuMemoryBufferManager::DisconnectGpuOnThread is NOT called.
My repeating timer on the GpuMemoryThread shows that pending_allocation_waiters_ is 1, and gpu_ is bound. 

I'm also observing a different case where my timer on the GpuMemoryThread indicates that gpu_ is not bound, but the process is still not shutting down.
Okay, I have a theory. First, observation of timing for context:

1) Renderer process calls CreateGpuMemoryBuffer on the GpuMemoryThread.
2) Browser process receives the call, processes it, and invokes the callback. [the callback never makes it to the renderer process].
3) The renderer process begins to shut down. Main thread blocked, compositor thread blocked, etc. GpuMemoryThread still unblocked.
4) Browser process destroys RenderProcessHostImpl and GpuClient
5) Browser process destroys the mojom::Gpu Binding. This calls Node::ClosePort on the underlying ports.
6) Renderer process receives Node::OnObserveClosure for the same ports.
7) Renderer process begins dispatching callbacks for Node::OnObserveClosure, via the Dispatch/Watcher system.

This is where it gets weird.
8) There is a GpuInfo connector bound to the GpuMemoryThread. It has two watchers: handle_watcher_ and peer_remoteness_tracker_. The latter receives a callback on the GpuMemoryThread. No callback is queued for the former, even though IsWatching() is true, and it's armed [confirmed with logging]. 
9) Making the following change fixes the problem. The appropriate callback is queued and run on the appropriate thread. There are no hung renderer processes.

Before:
"""
void Connector::WaitToReadMore() {    
...
  MojoResult rv = handle_watcher_->Watch(
      message_pipe_.get(), MOJO_HANDLE_SIGNAL_READABLE,
      base::Bind(&Connector::OnWatcherHandleReady, base::Unretained(this)));
"""

After:
"""
void Connector::WaitToReadMore() {    
...
  MojoResult rv = handle_watcher_->Watch(
      message_pipe_.get(),
      MOJO_HANDLE_SIGNAL_READABLE | MOJO_HANDLE_SIGNAL_PEER_CLOSED,
      base::Bind(&Connector::OnWatcherHandleReady, base::Unretained(this)));
"""

This makes logical sense: handle_watcher_ should be watching for the PEER_CLOSED signal. But this raises the question: how does this work at all without the change I proposed? I've put up a CL to run through the CQ: https://chromium-review.googlesource.com/c/chromium/src/+/714370. I'm going to sleep on this, and confer more with rockot@ tomorrow.

Comment 13 by w...@chromium.org, Oct 12 2017

Cc: siggi@chromium.org
+siggi, FYI, since he just pinged me about hangs w/ certain sites, under SyzyASAN Canary.
We believe this has been tracked down to OS X-specific behavior around Mach port transfer, so it's probably not related to any SyzyASAN hangs.
And just to follow up on comment #12: most of that is correct, but |handle_watcher_| is fine as-is; it should not be watching for PEER_CLOSED.

The issue here was around how Mach port transfer is handled in Mojo internals, combined with pipe signaling semantics. A pipe with a closed peer remains readable as long as it knows there are still unreceived messages in transit.

Meanwhile, if a Mach port fails to transfer on the sending side for an outgoing message, we silently drop that message instead of sending it. The result in this case is that the renderer has been told to expect a message on the Gpu pipe, but that message is never going to arrive. It knows the peer is closed but still sees the pipe as readable, indefinitely.

The solution we discussed is to ensure that messages are still delivered in such cases, albeit with invalid mach ports attached.
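
In other words, the send path changes from "drop on brokerage failure" to roughly the following. This paraphrases the intent of the fix (landed below), not the literal mojo/edk diff; the helper names here are hypothetical.
"""
// Old behavior: a Mach port brokerage failure silently dropped the message,
// leaving the receiver waiting forever for a message it was told to expect.
//
// New behavior: send the message regardless, with its ports invalidated, so
// message ordering and peer-closure processing are preserved.
if (!BrokerMachPorts(message.get(), remote_process))
  InvalidateMachPorts(message.get());  // Receiver sees invalid handles.
SendMessage(std::move(message));       // Sent whether brokering succeeded.
"""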
Issue 762463 has been merged into this issue.
Project Member Comment 17 by bugdroid1@chromium.org, Oct 13 2017

The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/src.git/+/55090272269b587b30cc3eb277184644c4b16088

commit 55090272269b587b30cc3eb277184644c4b16088
Author: erikchen <erikchen@chromium.org>
Date: Fri Oct 13 18:41:28 2017

Messages with Mach Ports that fail to broker should still be sent.

Mojo enforces strict message ordering. Failing to send a message will prevent
all future messages from being read, and will also prevent the pipe disconnect
from being processed.

Bug:  771805 
Change-Id: I1d9db2b9d9f280eebd58bb8793df7b53eb4e77c1
Reviewed-on: https://chromium-review.googlesource.com/716857
Commit-Queue: Erik Chen <erikchen@chromium.org>
Reviewed-by: Ken Rockot <rockot@chromium.org>
Cr-Commit-Position: refs/heads/master@{#508765}
[modify] https://crrev.com/55090272269b587b30cc3eb277184644c4b16088/mojo/edk/system/mach_port_relay.cc
[modify] https://crrev.com/55090272269b587b30cc3eb277184644c4b16088/mojo/edk/system/mach_port_relay.h
[modify] https://crrev.com/55090272269b587b30cc3eb277184644c4b16088/mojo/edk/system/node_channel.cc
[modify] https://crrev.com/55090272269b587b30cc3eb277184644c4b16088/mojo/edk/system/node_controller.cc
[modify] https://crrev.com/55090272269b587b30cc3eb277184644c4b16088/mojo/public/cpp/system/platform_handle.cc

Status: Fixed (was: Started)
