Security: renderer->extension privesc via sync
Issue description

This bug report shows how an attacker who has compromised a normal renderer process (from a website) can escalate into extension context with lots of permissions.

Prerequisites:
- the user is signed in
- sync is turned on

First, the attacker needs to obtain an oauth token with scope "https://www.googleapis.com/auth/chromesync". This can be performed using the following steps on the victim machine:
- navigate to data:text/html,<a href="https://developers.google.com/oauthplayground/">start</a>
- check the PID of the renderer process
- click start
- enter scope "https://www.googleapis.com/auth/chromesync"
- press "authorize APIs"
- verify that the PID is still the same
- press "allow"
- verify that the PID is still the same
- press "exchange authorization code for tokens"
- copy the access_token

Because the PID didn't change, it is clear that all of these operations have been performed in the original renderer process. Therefore, if the original renderer process has been compromised by the attacker, the attacker can perform all of these steps without user interaction.

The access_token that the attacker has obtained can then be used by the attacker to connect to Chrome's sync server and push arbitrary extensions to the victim machine. To do this, the attacker can build Chrome with a patch that lets the attacker connect to the victim's account using only the victim's email address and the access_token. The attacker patches Chrome as follows (the "XXXXXX" strings are placeholders for the victim's email address and the obtained access_token, you have to replace them with the real values):

diff --git a/components/sync/engine_impl/sync_manager_impl.cc b/components/sync/engine_impl/sync_manager_impl.cc
index 06720a0..be58182 100644
--- a/components/sync/engine_impl/sync_manager_impl.cc
+++ b/components/sync/engine_impl/sync_manager_impl.cc
@@ -495,10 +495,10 @@ void SyncManagerImpl::UpdateCredentials(const SyncCredentials& credentials) {
   DCHECK(!credentials.account_id.empty());
   DCHECK(!credentials.sync_token.empty());
   DCHECK(!credentials.scope_set.empty());
-  cycle_context_->set_account_name(credentials.email);
+  cycle_context_->set_account_name(std::string("XXXXXXXXXX@gmail.com"));
   observing_network_connectivity_changes_ = true;
-  if (!connection_manager_->SetAuthToken(credentials.sync_token))
+  if (!connection_manager_->SetAuthToken(std::string("ya29.XXXXXXXXXXXXXXXXX")))
     return;  // Auth token is known to be invalid, so exit early.
   scheduler_->OnCredentialsUpdated();

The attacker builds Chrome with the patch, runs it *on his own machine* and signs in *with his own credentials*. The attacker's patched Chrome will then synchronize its extension state with the victim's browser, and any extension the attacker installs from the webstore will also be installed in the victim's browser. Permission prompts during extension installation are only shown to the attacker, not the victim.

This demonstrates that, using only a compromised renderer, it is possible to effectively escalate into extension context with more or less all privileges that a normal extension can have.

VERSION
Chrome Version: 54.0.2840.100 stable
Operating System: Linux

This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.
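For reference, the "exchange authorization code for tokens" step above is just the standard OAuth2 code-for-token POST; nothing about it involves privileged browser UI. Below is a minimal sketch of the equivalent request, not taken from the original exploit: it assumes libcurl, uses Google's publicly documented OAuth2 token endpoint, and the client_id/client_secret/code/redirect_uri values are placeholders only.

// Sketch of a standard OAuth2 authorization-code exchange (placeholder values).
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t CollectBody(char* data, size_t size, size_t nmemb, void* out) {
  static_cast<std::string*>(out)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl)
    return 1;
  std::string response;
  const std::string body =
      "grant_type=authorization_code"
      "&code=AUTHORIZATION_CODE_FROM_CONSENT_STEP"  // placeholder
      "&client_id=PLACEHOLDER_CLIENT_ID"            // placeholder
      "&client_secret=PLACEHOLDER_CLIENT_SECRET"    // placeholder
      "&redirect_uri=PLACEHOLDER_REDIRECT_URI";     // placeholder
  curl_easy_setopt(curl, CURLOPT_URL, "https://oauth2.googleapis.com/token");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, CollectBody);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
  if (curl_easy_perform(curl) == CURLE_OK) {
    // On success the JSON response contains the "access_token" value,
    // which is what the rest of the attack copies out and reuses.
    std::cout << response << std::endl;
  }
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}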
,
Nov 19 2016
As far as I understand from looking at the code, CORS checks currently happen in the renderer only; no such checks have landed in the browser process so far (but they will be added for site isolation). Is that correct? If so, a compromised renderer would probably be able to get through the whole OAuth flow just using XHR IPC. (But I haven't tested that.) BTW: Another, although less urgent, concern here is IMO that even if the web->extension escalation is fixed, it should ideally also be ensured that an extension with https://* access can't use sync to escalate to more extension privileges.
,
Nov 22 2016
From my reading, you need a compromised renderer *and* physical access to a victim's machine in order to fully pull off this exploit (or at the very least, convincing the victim to follow the steps to generate the token and acquire it from them)? Assigning to creis@ for site isolation expertise - is palmer's suggestion appropriate here?
,
Nov 22 2016
> From my reading, you need a compromised renderer *and* physical access > to a victim's machine in order to fully pull off this exploit (or at > the very least, convincing the victim to follow the steps to generate > the token and acquire it from them)? No. All the user interaction happens inside the same renderer process in which the attacking site ran. Therefore, with a compromised renderer, you don't need any user interaction. No privileged browser UI is involved. Do you want me to send you a PoC for that (using disabled webkit/blink security checks)?
,
Nov 22 2016
Sorry, didn't see this before. This looks like a Sync vulnerability to me. zea@, can you help us understand this?

We do not provide process isolation for https://developers.google.com or https://www.googleapis.com, and that's not something Chrome's process model should be trying to do. The same logic would apply for the Play Store or anything else with remote install capabilities-- Chrome shouldn't be trying to manage a registry of all of these.

Instead, the fact that we have an OAuth API for Sync seems very scary to me. Why is this API provided? Can we require a reauth before granting it, so the user has to put in their password? (dcheng's idea)

(Adding Navigation label as well, per comment 1. Tentatively marking P0, since I think this is effectively a sandbox escape. Note also the 90 day deadline for disclosure, targeting mid-Feb.)
,
Nov 22 2016
From what I can tell, this exploit is just using sync's normal HTTP API (the same one all Chrome clients talk to). It looks like once "https://developers.google.com/oauthplayground/" is compromised, you can get an access token to any service on behalf of that user? That seems like a problem much larger than Sync. Digging in more...
,
Nov 22 2016
> It looks like once "https://developers.google.com/oauthplayground/" is compromised, > you can get an access token to any service on behalf of that user? That seems like > a problem much larger than Sync. No, the place where the OAuth permission is actually granted is the OAuth endpoint on https://accounts.google.com/ .
,
Nov 22 2016
+Devlin: what are the plans for mitigating the permissions of extensions installed via Sync? I recall discussion about not enabling by default when installed remotely. Is that happening?
,
Nov 22 2016
'Installed remotely' can refer to two things:
- Remote installation, where the user installed the extension from their phone. We sync the extension in a disabled state, and the user has to confirm they want it for it to be enabled.
- Synced from another location. We sync the extension in an enabled state as well as the version number. If we pull down that version on a new machine, it will be enabled.

It seems like this falls into the latter category - just as if I installed an extension on my laptop and it syncs to my desktop (or I sign in on a new machine); all my extensions will sync automatically and be enabled (assuming I have extension sync on and they were enabled before). AFAIK, it's always been like that - I don't recall ever syncing items in a purely disabled state...
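To make the difference between the two cases concrete, here is a rough sketch of the decision described above; the struct and function names are illustrative assumptions for explanation, not Chrome's actual ExtensionSyncService code.

#include <iostream>
#include <string>

// Simplified stand-in for the per-extension data carried by sync (assumed names).
struct SyncedExtensionRecord {
  std::string id;
  std::string version;
  bool enabled;         // enabled state reported by the uploading client
  bool remote_install;  // installed remotely, e.g. from the web store on a phone
};

void ApplySyncedExtension(const SyncedExtensionRecord& record) {
  if (record.remote_install) {
    // Remote installs arrive disabled; the user must confirm before enabling.
    std::cout << record.id << " " << record.version
              << ": installed disabled, pending user confirmation\n";
  } else {
    // Ordinary synced extensions keep the state from the uploading client, so a
    // record marked enabled on another machine is installed enabled here too.
    std::cout << record.id << " " << record.version << ": installed, enabled="
              << std::boolalpha << record.enabled << "\n";
  }
}

int main() {
  ApplySyncedExtension({"extension-a", "1.0", true, true});   // remote install path
  ApplySyncedExtension({"extension-b", "2.3", true, false});  // plain sync path
  return 0;
}

The attack in this report rides the second branch: the record synced from the attacker's signed-in Chrome is marked enabled, so it is installed and enabled on the victim's machine without any prompt.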
,
Nov 22 2016
,
Nov 23 2016
+Pavel, who is more familiar with how Sync uses OAuth
,
Nov 23 2016
,
Nov 23 2016
,
Nov 23 2016
I think a more general version of this attack is that a compromised renderer can steal the user's password (e.g., showing a real google.com login page, or via the password manager) or even just their login cookie, then sign in from another machine and manually sync extensions. It's not clear to me whether OAuth tokens are required at all, or are just a convenient way to do it. Would just stealing the cookie be sufficient?

Let's take a step back and think about the security model here. Chrome's current security model does not prevent compromised renderers from gaining access to the user's web accounts (see "The Security Architecture of the Chromium Browser" at http://seclab.stanford.edu/websec/chromium/). Compromised renderers are already treated as High severity vulnerabilities (P1), meaning we prioritize fixing them within 60 days (see https://www.chromium.org/developers/severity-guidelines). We should evaluate whether pushing extensions to the user's machine rates as even higher severity-- on the surface it does seem worse, but I'm not sure if it qualifies as Critical (P0). What are your thoughts, jschuh@?

Note that Site Isolation, which is not yet launched, aims to protect the user's web accounts from compromised renderers (http://www.chromium.org/developers/design-documents/site-isolation). That would rule out this class of attacks by not allowing the attacker to load google.com (or other logged in accounts) or access its cookies, but it's not ready yet-- we're only about to launch the first phase of it (--isolate-extensions). Out-of-process iframes still need more work before they're ready for arbitrary web pages.

I don't think special-casing certain Google origins would help here, in addition to the concerns I had about it in comment 5. We wouldn't be able to protect the login cookie, which needs to be available in other processes for things like the +1 button. (Site Isolation will support this case with OOPIFs.)

zea@ did clarify to me that OAuth is the standard way Sync tracks login, and that you're able to get a refresh token from the login cookie and access tokens from there. It seems unfortunate that you can steal a token from a compromised renderer and use it elsewhere without any reauth, but I'm obviously not familiar enough with the details to suggest any changes.

The main thing we should try to address in this bug is that compromising an account can let you push extensions to the client. That's elevation of privileges from a normal renderer to an extension process, as jannh@ points out. Assuming Site Isolation helps in the future with the account compromise aspect, we should see whether there's anything we can do to mitigate the extension push in the meantime.
,
Nov 28 2016
,
Nov 29 2016
Quick update: we've chatted with the Google federated login team that manages our OAuth tokens. The current plan is to disallow user approval of the chromesync scope. This will mean that the user will be required to do a full login in order to generate a chromesync-scoped token. They're gathering data on whether this will break any services before rolling this out.

This is only a partial mitigation, as theoretically an attacker could steal the user's plaintext password using the compromised renderer (either by exploiting the password manager or simply phishing the user with a phony auth page). We're also, in parallel, discussing possible mitigations on the extensions service side to not enable all synced extensions by default.
,
Nov 29 2016
,
Nov 29 2016
,
Nov 29 2016
It seems to me the root problem is that renderer-initiated navigations don't cause process swaps. I suspect this is not an easy fix, given it is one of the goals of the site isolation effort.

As a workaround for now, maybe we could isolate https://accounts.google.com only. According to Chrome's process model at http://www.chromium.org/developers/design-documents/process-models, this origin is generally considered part of the same "site" as https://*.google.com. However, gaia is not a general subdomain; for example, it does not allow other pages to iframe it. We should reach out to the gaia team to understand what, if anything, would break if it were not possible to iframe gaia, use postMessage() from *.google.com, etc.
,
Nov 29 2016
Comment 19: Please see comments 5 and 14. We do plan to provide process isolation when Site Isolation is ready, but that's not imminent. We do not intend to process isolate https://accounts.google.com in the meantime.
,
Nov 29 2016
(To explain the severity I assigned in #18) We usually assign high for sandbox escapes requiring a compromised renderer, as in this case. Critical is reserved for full exploit chains or remote -> unsandboxed code execution.
,
Nov 29 2016
Comment 21: Thanks. I'll change to P1 accordingly.
,
Nov 29 2016
Yup, already read those comments carefully :-) As I said in comment #19, I don't expect site isolation to come soon, so I was suggesting a possible interim fix that does not involve a registry of URLs. Only one origin needs special casing.
,
Nov 30 2016
,
Dec 1 2016
We are trying to move Chrome to a model where it does *not* special case any origins coming from the web, including the web store. Adding special treatment of accounts.google.com will go against this goal.
,
Dec 2 2016
,
Dec 13 2016
zea: Uh oh! This issue still open and hasn't been updated in the last 14 days. This is a serious vulnerability, and we want to ensure that there's progress. Could you please leave an update with the current status and any potential blockers? If you're not the right owner for this issue, could you please remove yourself as soon as possible or help us find the right one? If the issue is fixed or you can't reproduce it, please close the bug. If you've started working on a fix, please set the status to Started. Thanks for your time! To disable nags, add the Disable-Nags label. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
,
Dec 13 2016
FYI the server-side change to prevent user approval has landed and is in the process of rolling out. This should mitigate the issue somewhat, requiring the attacker to either phish the user or trick the password manager into signing in with the GAIA credentials.
,
Dec 16 2016
FYI the change has rolled out. Attempting to get the token now returns a 400 error, saying:

400. That’s an error.
Error: invalid_scope
Application: Google OAuth 2.0 Playground
You can email the developer of this application at: oauthplayground-eng@google.com
Some requested scopes cannot be shown: [https://www.googleapis.com/auth/chromesync]

On the Sync side, I think that's the best thing we can do at the moment. On the extensions side it's possible to continue to look into mitigations for remotely installed extensions, but that's a somewhat orthogonal issue at this point, I'd say. Given this now results in an error, marking as fixed.
,
Dec 17 2016
,
Dec 19 2016
,
Dec 20 2016
Your change meets the bar and is auto-approved for M56 (branch: 2924)
,
Dec 20 2016
There's no merge necessary here. This was a server-side change.
,
Mar 25 2017
This bug has been closed for more than 14 weeks. Removing security view restrictions. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot