AmbientLightSensor API and devicelight events allow cross-origin information leaks
Reported by lukasz....@gmail.com, Apr 19
UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Firefox/52.0

Steps to reproduce the problem:

I’d like to bring to your attention the fact that the feature allowing websites to access the light level reported by a device, using either the old-style devicelight event or the new Ambient Light Sensor API, allows information leaks across origins. Specifically, it allows detection of the screen color, which enables “pixel-perfect” attacks (similar to https://www.contextis.com/documents/2/Browser_Timing_Attacks.pdf but without the timing vector). An attacker can steal the contents of cross-origin images or frames and detect the color of links, allowing her to determine whether a link has been visited by the user.

The attack is not affected by the precision of the light sensor readout (at least as long as there is sufficient precision to distinguish a white vs. black screen) or by the supported readout frequency. Chrome currently supports the AmbientLightSensor and devicelight APIs under the “Generic Sensor” and “Experimental Web Platform Features” flags respectively.

The issue is described and demonstrated here: https://blog.lukaszolejnik.com/stealing-sensitive-browser-data-with-the-w3c-ambient-light-sensor-api/

What is the expected behavior?

What went wrong?
tl;dr Please consider requiring browser permissions for access to light sensor readings.

Did this work before? N/A
Chrome version: <Copy from: 'about:version'>
Channel: n/a
OS Version: OS X 10.12
Flash Version:
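[Editorial note: a minimal sketch of the leak primitive the report describes. The threshold constant is hypothetical (a real attack would calibrate it per device by flashing known black and white frames), and the sensor hookup is shown only in comments, since it requires a browser.]

```javascript
// Classify one ambient-light reading (in lux) as "screen showing black" (0)
// or "screen showing white" (1). LUX_THRESHOLD is a hypothetical calibration
// constant, not a value from the report.
const LUX_THRESHOLD = 20;

function classifyReading(lux) {
  return lux >= LUX_THRESHOLD ? 1 : 0;
}

// In a browser, readings would arrive via the APIs named in the report, e.g.:
//   window.addEventListener('devicelight', e => bits.push(classifyReading(e.value)));
// or, with the Generic Sensor API:
//   const s = new AmbientLightSensor();
//   s.addEventListener('reading', () => bits.push(classifyReading(s.illuminance)));
//   s.start();

console.log(classifyReading(5));   // dark frame reflected off the room -> 0
console.log(classifyReading(60));  // bright frame -> 1
```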
As described, the proof of concept does not sound like an attack that real-world attackers are going to prioritize. The very slow exfiltration rate (1 bit per 500 ms) and the dependence on external factors (like screen angle and brightness) significantly reduce the risk. Nor do I think a permission is warranted to resolve the remaining risk.

I'm pushing for a very low sample rate and precision, along the lines of 1 to 4 Hz and 2 to 4 bits of precision, because so far that seems like enough to support the use cases. It sounds like 4 Hz/4 bits is likely to happen. +owencm?

We could also consider making the API available only to the top-level frame, and/or only to the currently focused tab. Those mechanisms would also reduce or eliminate the risk, while supporting the use cases I've heard of so far.

Generally, permission prompts are a last-resort defense mechanism, because they have bad second-order effects on the overall quality of the user experience (including negative security outcomes) and because part of their effect is to shift perceived blame from UA and application developers to the people who use them, which is backwards. And, generally, attackers have higher-reliability, higher-bandwidth means of attack than this, and we are much more concerned about those.
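[Editorial note: illustrative arithmetic only, assuming (as in the black/white-screen attack above) that each sensor sample yields at most one usable bit, so the sample-rate cap is the bandwidth cap.]

```javascript
// Time needed to exfiltrate `bits` bits when the sensor delivers at most
// one usable bit per sample at `sampleHz` samples per second.
function secondsToLeak(bits, sampleHz) {
  return bits / sampleHz;
}

// At the rate observed in the report (1 bit per 500 ms, i.e. 2 Hz effective):
console.log(secondsToLeak(16, 2)); // a 16-bit secret: 8 seconds
// Capping the API at 1 Hz only doubles the attacker's time:
console.log(secondsToLeak(16, 1)); // 16 seconds
```

This is why rate limiting alone mostly inconveniences rather than stops the attack: halving the rate merely doubles the (already long) exfiltration time.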
FWIW, on Chrome OS with the right about:flags flags set, https://arturjanc.com/ls/demo.html?demo=history just says "Calibrating..." forever. (I've waited more than 8 minutes so far.) Does the attack scenario require that the user watch a flashing screen for 8 minutes?
Chris, sorry, the PoC is likely broken on Chrome OS because we didn't have a device to test it on -- we focused on Android & Macbook Pro for the demos. Calibration should take 1-2 seconds (as shown in the videos at https://arturjanc.com/ls/), so I wouldn't necessarily use "I waited a long time and saw nothing" as an argument against the feasibility of the attack ;-)

More broadly, the fact that an attack doesn't *always* work likely doesn't mean that it *never* works; one of the outlined scenarios (user leaves phone on a shelf and goes to bed or leaves the room for a few minutes) probably isn't outrageously contrived. Similarly, the existence of other vectors for cross-origin data exfiltration shouldn't be a reason to introduce new mechanisms that will undermine eventual fixes to the attacks that we're more worried about.

Here are a few thoughts about the mitigations you suggest:

- Limiting to the focused tab or top-level frame doesn't affect this attack, because here the attacking website is top-level and in focus (it needs this to control what's displayed on the screen).
- Reducing the sample rate is at most a minor annoyance for an attacker -- if you can reliably get 1 bit per 500 ms, even 1 Hz won't slow you down much.
- Quantization can help, but I'm worried that 4 bits is too much granularity. For example, I've seen a ~40 lux difference between black/white screen readings, which might still be detectable at 4 bits. We should be alright at 2 bits, so I'd recommend that as the main, if not only, mitigation.

I'd rather not get into a philosophical discussion about permissions here, especially if we can quantize to reduce granularity sufficiently to fix attacks like this one. WDYT?
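[Editorial note: a sketch of the quantization mitigation discussed above. The 0-255 lux input range and the uniform bucketing are assumptions for illustration; real ambient-light sensors report much wider, non-uniform ranges, and a shipping implementation would choose its own quantization curve.]

```javascript
// Quantize a lux reading into 2^bits uniform buckets over [0, maxLux].
function quantize(lux, bits, maxLux = 255) {
  const buckets = 1 << bits; // 2^bits output levels
  const clamped = Math.min(Math.max(lux, 0), maxLux);
  return Math.min(buckets - 1, Math.floor((clamped / (maxLux + 1)) * buckets));
}

// A ~40 lux black/white delta (e.g. 10 lux vs 50 lux):
console.log(quantize(10, 4), quantize(50, 4)); // 4 bits: 0 vs 3 -- still distinguishable
console.log(quantize(10, 2), quantize(50, 2)); // 2 bits: 0 vs 0 -- collapsed
```

Under these (hypothetical) bucket widths, the same ~40 lux delta that survives 4-bit quantization disappears at 2 bits, which is the intuition behind recommending 2 bits as the main mitigation.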
To be clear, even if the PoC worked perfectly, I would still consider this a Security_Impact-None or at most Security_Impact-Low bug. By comparison, consider more reliable, higher-bandwidth/-throughput history leaking attacks: https://bugs.chromium.org/p/chromium/issues/list?can=1&q=reporter%3Ajenuis&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids

I do think that limiting API access affords good uses of the API and dis-affords bad uses. Basically, I do think the attack scenario is, if not *outrageously* contrived :) , at least marginal (relative to all the other attacks we are coping with).

2 bits of precision seems good to me, and I suggested it in a comment on https://docs.google.com/document/d/1Ml65ZdW5AgIsZTszk4mD_ohr40pcrdVFOIf0ZtWxDv0/edit#heading=h.gjdgxs (where they considered reducing precision to 10 lux).

I'm still pressing for real use cases (https://github.com/w3c/ambient-light/issues/23), which must drive the outcomes here; so far, I haven't been overwhelmed with compelling use cases that require anything more than https://developer.mozilla.org/en-US/docs/Web/CSS/@media/light-level would already enable. So, for the time being, I'm content to block on use cases. That's got to come first.
FWIW I consider leaking :visited styles as by far the *least* interesting part of this. I'm curious why you're not more worried about allowing attackers to read arbitrary pixels from cross-origin frames/images -- it seems like a potentially much more dangerous bypass of SOP restrictions.
#6: I am mildly concerned, but the low throughput of the leak means it takes even longer to get a result (e.g. "64x64 pixel image: 34 minutes 8 seconds"). The facts that the preconditions for a successful attack are complex, that reliability is low (I can't get the demos to work on Chrome 57 with a Nexus 5X either), and that throughput is low all add up to low exploitability and low severity of exploitation. And that's before we get into the sample precision issue.
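[Editorial note: the "34 minutes 8 seconds" figure quoted above follows directly from the 1-bit-per-500-ms rate discussed earlier in the thread; the arithmetic below checks it.]

```javascript
// A 64x64 cross-origin image read one monochrome pixel at a time is
// 4096 bits; at 1 bit per 500 ms (2 bits/s) that is 2048 seconds,
// i.e. 34 minutes 8 seconds -- matching the figure quoted in the comment.
const bits = 64 * 64;              // 4096 one-bit pixels
const bitsPerSecond = 2;           // 1 bit per 500 ms
const totalSeconds = bits / bitsPerSecond;

console.log(totalSeconds);                 // 2048
console.log(Math.floor(totalSeconds / 60), // 34 (minutes)
            totalSeconds % 60);            // 8 (seconds)
```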
palmer@: I'm explicitly assigning to you since you're leading the charge to figure out safe rate limits.
Making this bug public because it's really about deciding what policy to set, before fully shipping the feature. We should discuss that kind of thing in the open, IMO.
Adding allpublic to actually make this public. There's a rule in monorail which auto-applies Restrict-View-SecurityTeam for issues with Type-Bug-Security.