Security: XSS_via_data_Protocol_Handler
Reported by wildbugs...@gmail.com, Jul 12 2016
Issue description

VULNERABILITY DETAILS
* Please refer to the attached PDF-formatted vulnerability report *

In essence, the data: protocol handler can decode its data payload from the Base64-encoded data appended to it. This enables an attacker to store JavaScript payloads in Base64 and thus bypass a plethora of XSS filters and protections. The issue I see here is that the Chrome browser automatically executes any well-formed JavaScript when such a data: URL is navigated to. This likely stems from the fact that the data: protocol handler can set the content type as part of its specification. That is, when such a data: URL string is navigated to by, for example:

A. Typing it in the address bar and hitting enter
B. Embedding it in an href link that the user clicks on

the browser decodes the Base64 data and then automatically executes any well-formed JavaScript. This attack does not work in any other browser, which supports my claim that Chrome exhibits insecure behavior in these circumstances.

Additionally, I am submitting a 'tidbit' of info on a potential similar attack vector in which the data: protocol handler could be used to execute Drive-By Download attacks, and potentially Reflected File Download (RFD) attacks. I submit a means by which an attacker can fully control the contents of the file that automatically gets downloaded; currently, I have not figured out a means to control the file name in order to use it maliciously in an RFD attack. But I am working on it, and I figure I would give you this little shred of info I am working with in case you can get it to pop or turn it into an attack vector.

Cheers,
Charles Steeves

VERSION
Chrome version: 51.0.2704.103
Operating System: Windows 10 Pro

REPRODUCTION CASE
I have documented reproduction steps and evidence of the issue. I have also attached a file named Reproduction.html that contains a proof of concept for both issues reported here.
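(The attached Reproduction.html is not reproduced inline here; the kind of link the report describes looks roughly like the sketch below, where the Base64 string decodes to <script>alert(1)</script>.)

    <!-- Illustrative only. Clicking the link (or pasting the data: URL into the
         address bar) makes Chrome decode the Base64 payload and run the script. -->
    <a href="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==">
      innocuous-looking link
    </a>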
Comment 1 by dcheng@chromium.org, Jul 13 2016
Comment 2, Jul 13 2016
By definition this cannot be XSS, because data URLs execute in an isolated pseudo-origin, meaning there simply is no cross-site execution. More generally, as dcheng@ stated, this is just how data: URLs work, and it is a very intentional design decision.
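(A quick way to see that unique origin in action; this is an illustrative sketch, not taken from the attached PoC.)

    <!-- The data: page below just reports its own serialized origin. In Chrome it
         should show the opaque value "null" rather than any real site's origin,
         so the script cannot reach another site's cookies or DOM. -->
    <a href="data:text/html,<script>alert(location.origin)</script>">what origin am I?</a>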
Comment 3 by wildbugs...@gmail.com, Jul 16 2016
Hi,
Thanks for the feedback. I will work on finding a more concrete (more severe?) case where Chrome normally prevents the script from running but does not prevent it when the payload is within a Base64 data: URL. However, I believe I have already demonstrated this: if/when a user clicks on a link with a basic (detected) cross-site scripting payload, the Chrome browser prevents the JavaScript from executing, but when a user clicks on a link that is a data: URL with Base64 encoding, the same JavaScript is executed.
I would like to point out that, after thinking about it, I agree the last researcher's comment is correct - it cannot be XSS - since in this attack there is no origin and no other site. An attacker simply needs to get a victim to click on a link. Unless the average user is able to recognize and decode Base64, they would have no idea what the "origin" is (whereas a user would normally be able to "see" the domain (the "origin!") in the URL and make a trust decision based upon its domain/hostname).
I guess this is not XSS, but a whole new category (Onsite Data Reflected Script????)
Additionally, the first Google researcher inquired into a case "where the XSS filter blocks script in a non-base64 encoded data URL from executing". Please note that Chrome DOES prevent reflected XSS attacks where the URL contains something obvious like <script>alert(document.cookie)</script>. However, in the case of data: URLs, Chrome DOES NOT prevent the attack, as there is no cross-origin decision that the browser has to make. I am guessing this is where, in its operational logic, Chrome misses this.
Cheers,
Charles
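(A minimal sketch of the two link shapes being contrasted in the comment above; the host name and "q" parameter are made up for illustration, and the Base64 string decodes to <script>alert(1)</script>.)

    <!-- (a) Classic reflected XSS attempt: the payload travels through the
         server's response, so the XSS Auditor sees the reflection and blocks it. -->
    <a href="http://vulnerable.example/search?q=%3Cscript%3Ealert(1)%3C/script%3E">reflected payload</a>

    <!-- (b) Direct data: link: no server reflects anything, so the Auditor has
         nothing to compare against; the decoded script simply runs (in an
         opaque origin) when the link is followed. -->
    <a href="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==">data: payload</a>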
Comment 4 by wildbugs...@gmail.com, Jul 16 2016
Hi,

After some more thinking, the issue really boils down to the fact that Chrome does not decode data: URLs in the address bar in order to perform a JavaScript/XSS check upon them; which I guess makes sense, as this is the same as if a page had a javascript: URL, and pages are allowed to have/use such protocol handlers. Thus, it is up to the site to prevent such injections of protocol handlers... but then why not allow the gopher: or telnet: or other dangerous protocol handlers?

To be honest, I feel that Chrome should at least decode the URL during a mouse-over on the link, such that the status bar displays the decoded link for the user to preview the URL (as it does with most other URLs).

Still banging this one around my head - as a pen-tester I have seen this bypass sites' XSS filters too many times, and it can embed/store itself in a page; which again seems to indicate that the responsibility for mitigating this should be placed on the site. However, Chrome does a lot of additional security watch-dogging to further/better secure its users, which is why it has removed many RFC protocol handlers from its repertoire over the years, and why I feel it should decode the data: URL; if not for the browser's own use in defense decision making, then at least so that the user can preview any malicious material.

Cheers,
Charles
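(As an aside, the "decode for preview" step suggested above could look something like the sketch below; the helper name and logic are hypothetical and not anything Chrome actually implements.)

    // Hypothetical helper: given a data: URL, return the decoded payload so a
    // user or UI surface (e.g. a status bubble) could preview what it contains.
    function previewDataUrl(url) {
      var m = url.match(/^data:([^,]*?)(;base64)?,(.*)$/);
      if (!m) return url;                       // not a data: URL
      return m[2] ? atob(m[3])                  // Base64-encoded payload
                  : decodeURIComponent(m[3]);   // percent-encoded payload
    }

    previewDataUrl("data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==");
    // -> "<script>alert(1)</script>"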
Comment 5 by wildbugs...@gmail.com, Jul 18 2016
Hi,

I apologize for harping on this issue. (I swear it's not because I give a hoot about any bug bounty; honestly, I want to help the security posture of a product and a community that I interact with every day.) Following the quotes (excerpts) below from Microsoft, Wikipedia and Naked Security, I have outlined an additional Proof of Concept, which I have also attached to this comment.

EXCERPTS ON DATA URI SECURITY & MITIGATIVE EFFORTS BY BROWSERS
==================================================================

From https://nakedsecurity.sophos.com/2012/08/31/phishing-without-a-webpage-researcher-reveals-how-a-link-itself-can-be-malicious/:
Klevjer points out that Google's Chrome browser blocks redirection to data URIs, whereas other browsers have set ceilings on the amount of data that can be packed into a URI or URL. IE9 refused to load his sample attack page, which weighed in at 26KB.

From https://msdn.microsoft.com/ko-kr/library/cc848897(v=vs.85).aspx:
For security reasons, data URIs are restricted to downloaded resources. Data URIs cannot be used for navigation, for scripting, or to populate frame or iframe elements.

Additionally:
Internet Explorer 8: Microsoft has limited its support to certain "non-navigable" content for security reasons, including concerns that JavaScript embedded in a data URI may not be interpretable by script filters such as those used by web-based email clients. Data URIs must be smaller than 32 KB in Version 8.[3] Data URIs are supported only for the following elements and/or attributes:[4]
- object (images only)
- img
- input type=image
- link
- CSS declarations that accept a URL, such as background-image, background, list-style-type, list-style and similar.
Internet Explorer 9: Internet Explorer 9 does not have the 32 KB limitation and supports more elements.

From https://en.wikipedia.org/wiki/Data_URI_scheme#cite_note-MSDN-4:
The data URI can be utilized by criminals to construct attack pages that attempt to obtain usernames and passwords from unsuspecting web users. It can also be used to get around site cross-scripting restrictions, embedding the attack payload fully inside the address bar, and hosted via URL shortening services rather than needing a full website that is owned by the criminal. In IE 8 and 9 data URIs can only be used for images, but not for navigation or JavaScript generated file downloads.[7] Data URIs make it more difficult for security software to filter content.[9]

ADDITIONAL PROOF OF CONCEPT(S)
==================================================================

I have prepared another Proof of Concept (PoC) in the form of a couple of HTML pages. I had to host these on different web servers in order to demonstrate cross-origin/cross-site requests; thus, to reproduce these cases, you will need to host them on different web servers and also edit the file "Google_XSS_via_data_Reproduction_2.html" to contain the appropriate host address/name for the links. For ease of review, I have attached a screenshot of the "Example Webpage Vulnerable to Reflected XSS" taken after the link in "Google_XSS_via_data_Reproduction_2.html" was clicked.

Here are the demonstrative cases contained within the sample file(s):

Case 1: Chrome Browser Protects Against Reflected XSS
When the corresponding link is clicked, the JavaScript payload (an alert pop-up box) is sent in the URL of the request to the vulnerable page; it does not run, as Chrome's built-in XSS protection prevents it.

Case 2: Chrome Browser Protects Against Reflected XSS via the data: URI
When the corresponding link is clicked, the JavaScript payload that is Base64 encoded within the data URL is embedded into the vulnerable page; it does not run, as Chrome's built-in XSS protection prevents it.

Case 3: Chrome Browser Does NOT Protect Against XSS via a Direct Call to the data: URI with Base64 Encoding

Case 4: Potential RFD 'tidbit' via the data: Protocol Handler

Cheers,
Charles
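(For Case 4, a minimal sketch of the kind of attacker-controlled download described as the 'tidbit'; this is a hypothetical example, the Base64 string simply decodes to the text "hello", and the saved filename is chosen by the browser rather than the attacker, which is why this is not yet a workable RFD.)

    <!-- Navigating a data: URL with a non-renderable MIME type typically makes
         Chrome save the decoded bytes to disk instead of displaying them, so
         the downloaded file's contents are fully attacker-controlled. -->
    <a href="data:application/octet-stream;base64,aGVsbG8=">get file</a>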
Comment 6, Jul 19 2016
Just to be clear, this isn't really XSS: script in a data: URL runs in a unique origin in Chrome, so it can't actually access anything useful. I think case 3 might be WAI, but +tsepez, who knows much more about the XSS auditor than I do.
Comment 7, Oct 20 2016
This bug has been closed for more than 14 weeks. Removing security view restrictions. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot