In issue 268640, we're pursuing "cross-origin read blocking" (CORB; formerly known as cross-site document blocking / XSDB). This tries to prevent sensitive web data from being delivered to a cross-site renderer process, particularly when Site Isolation is enabled. It currently targets HTML, XML, and JSON data, with many exceptions for compatibility.
We're interested in getting data on how effective the current implementation is at protecting real-world resources, given that it depends on sniffing and/or a nosniff HTTP response header to be effective without breaking compatibility. This is hard to quantify in general, because it's hard to tell from a URL alone whether the response contains user-specific or sensitive data (and thus needs protection).
However, Nick and I were chatting about possible ways to identify a subset of URLs that seem likely to need such protection, which would let us automatically determine whether CORB would protect them and report the results via UMA. This wouldn't cover everything, and it may still make sense to manually audit some popular sites using something like the audit tool in issue 806070, but it might give us a start on understanding whether such resources are already commonly protected.
Nick's ideas included looking for the following headers on URL responses, which might indicate that the server wouldn't want the response delivered to a cross-site page:
- Access-Control-Allow-Origin: (anything but *)
- Vary: Origin
- Pragma: no-cache (or similar cache-control headers)
We should test this idea with a bit of manual browsing to see whether these headers are good predictors of "sensitive" resources, and if so, add a UMA metric to check whether CORB would protect such URLs in general.
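For concreteness, here's a rough sketch (not actual Chromium code) of what such a heuristic plus UMA recording could look like. It assumes roughly the net::HttpResponseHeaders and base::UmaHistogramBoolean APIs; the function names and the histogram name are made up for illustration.

  // Hypothetical sketch only; names and histogram are placeholders.
  #include <string>

  #include "base/metrics/histogram_functions.h"
  #include "net/http/http_response_headers.h"

  // Returns true if the response headers suggest the server tailored the
  // response to the requester, i.e. the resource is likely user-specific.
  bool LooksLikeSensitiveResponse(const net::HttpResponseHeaders& headers) {
    std::string value;

    // Access-Control-Allow-Origin with anything other than "*" implies the
    // server deliberately restricts which origins may read the response.
    if (headers.GetNormalizedHeader("Access-Control-Allow-Origin", &value) &&
        value != "*") {
      return true;
    }

    // Vary: Origin implies the body depends on who is asking.
    if (headers.HasHeaderValue("Vary", "Origin"))
      return true;

    // No-cache / no-store directives often accompany per-user data.
    if (headers.HasHeaderValue("Cache-Control", "no-store") ||
        headers.HasHeaderValue("Cache-Control", "no-cache") ||
        headers.HasHeaderValue("Pragma", "no-cache")) {
      return true;
    }

    return false;
  }

  // Would be called wherever CORB makes its blocking decision;
  // |corb_protected| is whether CORB would block this response for a
  // cross-site requester.
  void RecordCorbProtectionForLikelySensitiveResponse(
      const net::HttpResponseHeaders& headers, bool corb_protected) {
    if (!LooksLikeSensitiveResponse(headers))
      return;
    // Hypothetical histogram name, for illustration only.
    base::UmaHistogramBoolean(
        "SiteIsolation.CORB.ProtectedLikelySensitiveResponse", corb_protected);
  }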
Comment 1 by palmer@chromium.org, Feb 25 2018
Labels: Security