Security: Information disclosure via "memory_instrumentation::mojom::Coordinator" interface in "resource_coordinator" service
Issue description

VULNERABILITY DETAILS

The "memory_instrumentation::mojom::Coordinator" mojo interface is exposed by the "resource_coordinator" service, running under the browser process. The interface requires the "app" capability (https://cs.chromium.org/chromium/src/services/resource_coordinator/manifest.json?l=8), which is provided to the "content_renderer" service (therefore allowing it to bind to the remote interface).

The "Coordinator" interface exposes several commands, including the ability to register new client processes from which "memory dumps" can be requested, and to produce dumps from the currently registered clients. The "memory dumps" do not contain the contents of the process's memory, but rather the metadata describing the currently mapped memory regions. On Linux this information is acquired using "/proc/<pid>/smaps".

No access control checks are done to ensure that unprivileged processes (such as the renderer) cannot acquire memory dumps from privileged processes (such as the browser). As a result, the renderer process can acquire the memory maps for all other registered processes (such as the "Browser", "GPU" and "Utility" processes), including the addresses of each mapped region, its protection masks, and the name of the backing file (if any). This allows compromised renderers to bypass ASLR when attacking any of these other processes. Furthermore, this might leak some information in case the name of a mapped file in the aforementioned processes contains data which should not be accessible to the renderer.

I believe the above interface should be split in two: an "Agent Registry" (allowing clients to register as memory dump sources) and a "Coordinator" (allowing clients to acquire memory dumps from the aforementioned sources). The former interface should be made accessible to all services with the "app" capability, while the latter would only be exposed to privileged callers (via some other capability). This approach is already utilised by the "tracing::mojom::AgentRegistry" and "tracing::mojom::Coordinator" interfaces exposed by the "resource_coordinator" service.

VERSION

Chromium 64.0.3282.0 64-bit
Revision dd12859a9c856c6919cedf6c35d13b8b22af94e1-refs/heads/master@{#520743}
OS Linux 4.4.0-97-generic

REPRODUCTION CASE

I'm attaching a small patch that adds code to the renderer process which binds to the aforementioned interface and requests a global memory dump from the Coordinator. Applying the patch and navigating to any page should result in the renderer process outputting the memory maps for all processes.

This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.
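To make concrete what a "memory dump" discloses, here is a minimal sketch (not Chromium code; the struct and function names are invented) of parsing the header line of a /proc/<pid>/smaps entry. The address range, protection bits, and backing file path are exactly the ASLR-relevant metadata described above.

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Illustration only (not Chromium code): each smaps entry begins with a
// header line describing one mapped region of the target process.
struct VmRegion {
  std::uint64_t start = 0;
  std::uint64_t end = 0;
  std::string perms;  // e.g. "r-xp"
  std::string path;   // empty for anonymous mappings
};

// Parses a line of the form "start-end perms offset dev inode [path]".
VmRegion ParseSmapsHeader(const std::string& line) {
  VmRegion region;
  std::istringstream in(line);
  std::string range, offset, dev, inode;
  in >> range >> region.perms >> offset >> dev >> inode >> region.path;
  const std::string::size_type dash = range.find('-');
  region.start = std::stoull(range.substr(0, dash), nullptr, 16);
  region.end = std::stoull(range.substr(dash + 1), nullptr, 16);
  return region;
}
```

A compromised renderer that can obtain such regions for the browser process learns the load address of every library, which defeats ASLR.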
Dec 20 2017
oysteine: Uh oh! This issue still open and hasn't been updated in the last 14 days. This is a serious vulnerability, and we want to ensure that there's progress. Could you please leave an update with the current status and any potential blockers? If you're not the right owner for this issue, could you please remove yourself as soon as possible or help us find the right one? If the issue is fixed or you can't reproduce it, please close the bug. If you've started working on a fix, please set the status to Started. Thanks for your time! To disable nags, add the Disable-Nags label. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Dec 20 2017
hjd: Would this be you or someone adjacent?
Dec 20 2017
cc: the TL primiano (OOO till Jan), and laitm, who also works on this code.

Splitting the interface will not be trivial, and most of the people who know this code are or will shortly be OOO until at least Jan 2nd. As a temporary and quicker fix, we could add an explicit check in coordinator_impl.h (which lives in the browser process) not to return smaps unless the request comes from the browser process? This would probably break the out-of-process heap profiler, since I think it lives in its own process and consumes the smaps (I will speak to erikchen to check). I think the other consumers all live in the browser process and should be unaffected.

(90 days from Dec 5 is March 5, if like me you needed help with the math.)
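The stop-gap suggested above could look roughly like the following standalone sketch. All type and function names here are invented for illustration; the real check would live in coordinator_impl.* and derive the caller's identity from the mojo connection.

```cpp
#include <string>
#include <vector>

// Hedged sketch, not a real patch: before returning a dump, the
// browser-side coordinator could strip the smaps-derived region list
// unless the requestor is the browser process itself.
struct VmRegion {
  std::string path;
};

struct ProcessMemoryDump {
  std::vector<VmRegion> vm_regions;  // populated from /proc/<pid>/smaps
  // ... non-sensitive dump fields elided ...
};

enum class ProcessType { kBrowser, kRenderer, kGpu, kUtility };

ProcessMemoryDump FilterDumpForRequestor(ProcessMemoryDump dump,
                                         ProcessType requestor) {
  // Unprivileged callers never see the memory maps.
  if (requestor != ProcessType::kBrowser)
    dump.vm_regions.clear();
  return dump;
}
```

As noted above, this simple rule would break the out-of-process heap profiler, which is exactly the subtlety the later comments work through.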
Dec 20 2017
Lalit and I discussed: some consumers (possibly from unprivileged processes) need to be able to trigger memory dumps. However, these consumers don't need or want smaps. The only consumer that currently does is OOP heap profiling, so rather than trying to split by producer/consumer, we think we will try to move just the GetVmRegionsForHeapProfiler method onto a new interface (only accessible via a new capability) and then give the profiling service that capability. Lalit is trying this now.
Dec 20 2017
Re #0: Thanks for the report. The split proposal makes sense, although there are some subtle cases, as explained in #7 and #8, that make this a bit more complex. My mental model is:

- RequestGlobalMemoryDumpAndAppendToTrace(): this is harmless because it doesn't return any result, so it could stay in the world-accessible interface. Realistically, the renderer process needs access to it because this command comes from the devtools backend in the renderer.
- RequestGlobalMemoryDump(): this instead should be usable only by the browser process and moved into a dedicated privileged interface, as suggested.
- GetVmRegionsForHeapProfiler(): this is a bit trickier, because it is required by a process other than the browser. However, I think that the heap profiler has its own service (see chrome/profiling/profiling_manifest.json; +erikchen to confirm that's the only thing that requires GetVmRegionsForHeapProfiler()). I think what we need there is yet another interface which whitelists only profiling::mojom::ProfilingService.

So overall, it looks like we should have 3 interfaces:

- Unprivileged (everybody can access): the registry and the RequestGlobalMemoryDumpAndAppendToTrace() method.
- Privileged (only the browser process can access): RequestGlobalMemoryDump().
- HeapProfiler (only profiling::mojom::ProfilingService can access): GetVmRegionsForHeapProfiler().

So I think it's solvable, with some refactoring. Timeline-wise, we are all out for the new year holidays. But given that we have (unfortunately) been in this state until now, I think this can wait until January.
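The access rules of the three-interface proposal can be sketched as a simple predicate. In reality the enforcement is done by the service manager via manifest capabilities, not by code like this; the service names ("content_browser", "profiling") and the checking function are assumptions for illustration only.

```cpp
#include <string>

// Hedged sketch of who may bind which interface under the proposal above.
enum class Interface {
  kUnprivileged,   // registry + RequestGlobalMemoryDumpAndAppendToTrace()
  kPrivileged,     // RequestGlobalMemoryDump()
  kHeapProfiler,   // GetVmRegionsForHeapProfiler()
};

bool CanBind(const std::string& service, Interface iface) {
  switch (iface) {
    case Interface::kUnprivileged:
      return true;  // everybody, including renderers (devtools needs it)
    case Interface::kPrivileged:
      return service == "content_browser";  // browser process only
    case Interface::kHeapProfiler:
      return service == "profiling";  // profiling::mojom::ProfilingService
  }
  return false;
}
```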
Jan 4 2018
Lalit wrote a fix before the holidays here: https://chromium-review.googlesource.com/c/chromium/src/+/836896 Waiting till Lalit and Primiano are back tomorrow to decide if we should land it.
Jan 8 2018
CC +ajwong as I just added him as a reviewer to the CL that fixes this
Jan 12 2018
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/b44e68087804e6543a99c87076ab7648d11d9b07

commit b44e68087804e6543a99c87076ab7648d11d9b07
Author: Lalit Maganti <lalitm@chromium.org>
Date: Fri Jan 12 20:56:57 2018

memory-infra: split up memory-infra coordinator service into two

This allows for heap profiler to use its own service with correct capabilities and all other instances to use the existing coordinator service.

Bug: 792028
Change-Id: I84e4ec71f5f1d00991c0516b1424ce7334bcd3cd
Reviewed-on: https://chromium-review.googlesource.com/836896
Commit-Queue: Lalit Maganti <lalitm@chromium.org>
Reviewed-by: Daniel Cheng <dcheng@chromium.org>
Reviewed-by: oysteine <oysteine@chromium.org>
Reviewed-by: Albert J. Wong <ajwong@chromium.org>
Reviewed-by: Hector Dearman <hjd@chromium.org>
Cr-Commit-Position: refs/heads/master@{#529059}

[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/chrome/profiling/profiling_manifest.json
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/chrome/profiling/profiling_service.cc
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/chrome/profiling/profiling_service.h
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/manifest.json
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/memory_instrumentation/coordinator_impl.cc
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/memory_instrumentation/coordinator_impl.h
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/public/cpp/memory_instrumentation/memory_instrumentation.cc
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/public/cpp/memory_instrumentation/memory_instrumentation.h
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/public/cpp/memory_instrumentation/tracing_integration_unittest.cc
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/public/interfaces/memory_instrumentation/memory_instrumentation.mojom
[modify] https://crrev.com/b44e68087804e6543a99c87076ab7648d11d9b07/services/resource_coordinator/resource_coordinator_service.cc
Jan 26 2018
Was there more to do here? If not I'll close.
Jan 26 2018
elawrence@: can I ask somebody from security to take a look at the new state and check that this is good to go, as per #13?
Jan 27 2018
Can someone refresh my memory on why the Request*Dump*() methods are still on Coordinator? Reading the implementation and comments carefully, I think it's OK (since the comments claim that OSMemDump won't include the vm regions if not requested via HeapProfilerHelper), but it's a bit hard to follow.
Jan 29 2018
We need them because any process can request a memory dump, because of devtools. Ideally we'd get rid of the VM_REGIONS_ONLY dump type, but as we use it for other purposes we can't do that for now. In any case, we reject all use of VM_REGIONS_ONLY on the coordinator unless it's coming through the HeapProfilerHelper.
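The rejection described above can be sketched as follows. This is not the shipped code; the enum and function names are invented, and the real check lives in the coordinator implementation.

```cpp
// Hedged sketch: the coordinator refuses VM_REGIONS_ONLY requests unless
// they arrive via the privileged HeapProfilerHelper interface.
enum class DumpType { kFull, kSummaryOnly, kVmRegionsOnly };

bool IsDumpRequestAllowed(DumpType type, bool via_heap_profiler_helper) {
  if (type == DumpType::kVmRegionsOnly)
    return via_heap_profiler_helper;  // mmaps only for the heap profiler
  return true;  // dumps without mmaps stay available (e.g. for devtools)
}
```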
Jan 29 2018
Re #16: the current split of services is:

1) Coordinator: acts both as the registry and as an endpoint to request dumps that don't disclose mmaps.
2) HeapProfilerHelper: an endpoint that discloses mmaps.

We went for this split rather than the initially proposed 3-way split (Registry, DumpsWithoutMmaps, DumpsWithMmaps) because that meant extra code complexity and binary size and, more importantly, stability risk without a clear benefit. We still need dumps (without mmaps) to be accessible from the renderers for devtools.
Feb 21 2018
Hi, The 90-day period for this vulnerability is due to expire on March 5 2018. Could you please let me know whether the issue will be fixed before the deadline is reached? Otherwise, is the fix expected to be available within a 14 day period from the original 90-day deadline? Thanks! Gal.
Feb 21 2018
The CL in #13 landed in 65.0.3320.0 and does not appear to have been merged anywhere. 65 Stable is slated for release the week of March 6th.
Feb 21 2018
#13 is a bit of a large refactoring and I'd be a bit worried about merging it into something that is already in beta and about to make it to stable. Can we do some risk evaluation here? We have been in this state for a while, I think, and this is not exploitable on its own (it doesn't grant any privileges or run any untrusted code), but it could be used to get ASLR info. Should we really risk destabilizing a release given the situation?
Mar 2 2018
Hey, just a quick reminder that the bug is currently scheduled to be derestricted on Monday (March 5th). Could you please let me know whether the fix will be merged into M65 stable? Thanks!
Mar 2 2018
Hi laginimaineb@. I can confirm that the fix from #13 will be included in 65 Stable, which is due to be released on Tuesday March 6th. If possible, would be good to allow an additional 7 days before disclosure, to give time for stable to be rolled out to a wider population.
Mar 6 2018
Thanks for the update! No problem, we'll wait a few more days before derestricting the issue to allow more users to update.
May 4 2018
This bug has been closed for more than 14 weeks. Removing security view restrictions. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Comment 1 by elawrence@chromium.org, Dec 5 2017
Labels: Security_Severity-Medium Security_Impact-Stable M-65 OS-Android OS-Chrome OS-Linux OS-Mac OS-Windows
Owner: oysteine@chromium.org
Status: Untriaged (was: Unconfirmed)