
Issue 47416

Starred by 1439 users


Issue metadata

Status: WontFix
Owner: ----
Closed: Oct 2016
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Feature

  • Only users with EditIssue permission may comment.


Allow a directory tree to be treated as a single origin (loosen file: URL restrictions)

Project Member Reported by, Jun 24 2010

Issue description

The fix for  bug 4197  dramatically improved local file security, but it's generated legitimate complaints from web developers and people distributing local HTML bundles. We have a fix for frame navigation with  issue 39767 , but a lot of people will still be required to use the --allow-file-access-from-files switch, which is not an adequate solution.

So, we should be able to come up with a better way for a developer to mark a prepackaged directory tree as a single origin. Michal had a good suggestion about using a dot file to allow access, and denying download of dot files. Does this seem like a reasonable measure, or does anyone have better ideas?

Showing comments 9 - 108 of 108
If you can't just copy Firefox, then why not put such information in the manifest file?

Why is denying downloading of . prefixed files helpful?
curiousdannii - Copying Firefox still leaves security holes for folders like Documents and Downloads. As for dot files, we need to prevent a user from unintentionally downloading a file that would expose an entire predictable directory (which is even more important for automatic downloads).

Actually I can see the value in blocking specific . files. You may not want someone downloading a .bashrc file for example. But I don't think that such a restriction should be linked to the access file, nor should it necessarily block all . files.

"Automatic downloads"?

Comment 12 Deleted

Comment 13 by, Jun 28 2010

Sorry guys, Chichester Maritime Ltd. have now officially blocked Google Chrome as a usable 'platform' for the running of their CBT products, which are sold worldwide. They must not allow this to continue to impact their business or reputation.

I see that this issue has been rumbling on since  Issue 4197 , and probably earlier for all I know. It is not acceptable to expect users, some of whom hardly know one end of a PC from the other, to change a shortcut with the addition of the 'allow-file-access.....' switch. Some of them do not even know which browser they use!!

Your local file policy is 'over the top' as regards security, and I urge you to reconsider; please don't fall into the trap of making your browser so secure that it ceases to be useful or usable. Allow the user to decide, as Microsoft do, with a simple option choice or, God help me, another yellow bar.

Personally I think this is really a great shame, as Google Chrome has many great features, but the only way we can overcome this is a major re-write, placing thousands of files in one large directory.

If you want to show interest in having this behavior changed please star the issue. As I stated in the previous bugs, please refrain from "me too" posts and other comments that do not contribute to a resolution.

This also affects XHR using both "file:" and "chrome-extension:" protocol. See  issue 41024 .
Just so that we have a range of possible solutions: how about we enable sub-tree directory access for any location other than the default file download folder? This should leave Chromium secure against the attack I was most worried about.

Patches welcome :)
I think that would be an excellent solution.

Most users/developers are probably fine with the Firefox approach (sub-tree directory access) and don't really care what happens in the download folder. Maybe Firefox will copy the download folder restriction, since it is a legitimate security concern. Unified behavior across browsers would be nice :)

Comment 18 by, Jul 4 2010

This is a bit tricky for a couple of reasons; we would need to have special cases for the root directory (people sometimes save stuff there), default download directories of other browsers (a problem if Chrome is the default browser), temp directories (user- and system-wide, HTML often ends up there), "My documents", etc. In any of these cases, it would be rather undesirable for a single malicious document to crawl these directories.

On Windows, we should probably also read Zone.Identifier to make the call, but feeding this into WebKit may be tricky.

I suspect that the .permit_access hack is easier to accommodate; the only downside is that it would rely on non-Chrome browsers never auto-downloading dot files.

Comment 19 by, Jul 4 2010

In the age of JavaScript and desktop-class web applications, what's the fundamental difference between running a .exe file you receive in an email and running a .html file?
HTML files that people open locally are, and will increasingly continue to be, applications rather than just content-filled marked-up text.

Does anyone propose that we develop complex sets of rules on what .exe's can or cannot do? What OS's currently do is give you a message saying this has been downloaded from such and such a place, and ask whether you are sure you want to open it. Can you imagine the backlash if any OS put the kinds of restrictions that are being talked about here on traditional apps?

The reason there is no backlash from the general public ("the volume feels low relative to our millions of users userbase") to this change in Chrome is that, due to the nascent condition of the market, the general public is clueless about the power of a new age of web apps. We are still in the part of this era where the majority of users who see something like 280slides (which is some years old) or GitHubIssues will be surprised rather than treat it as par for the course. That doesn't mean these restrictions are not as stifling and unreasonable as restricting what a .exe can do when run locally on a machine.

These restrictions will stifle innovation and prevent developers from realizing the full potential of web apps. Not to mention they are unnecessary, since by now, any user who is not already cognizant of the dangers of opening attached files is bound to get infiltrated anyway.

Comment 20 by, Jul 4 2010

meelash: The fundamental difference between EXE and HTML is that web applications already have an imperfect, but useful, security compartmentalization mechanism that prevents them from reading your mail or messing with your OS. The reason why this mechanism does not extend to file:/// in most browsers appears to be just a matter of oversight, and not a conscious design decision.

Even clued-in users who download HTML files are highly unlikely to expect them to suddenly gain these privileges just because they are loaded via 'save as' rather than 'open'. Therefore, we strongly believe that the dangers of this design far outweigh the benefit of allowing full compatibility with applications that decided to exploit this undocumented and unintentional property to implement legitimate features.

That said, I do think there is room for improvement in how the mechanism is currently designed, so if you have constructive suggestions, please include them in this bug.

Good grief, @meelash. .exe's and .html are worlds apart. A browser executes .html within a carefully sandboxed environment.

HTML5 is -- thankfully -- opening up system access in a way that maintains the sandbox. You might want to read about HTML5 AppCache, HTML5 LocalStorage / WebDB, HTML5 File API.

These new HTML5 features enable the building of rich and powerful applications. Opening up full system access to .html files is not necessary. In fact, it would obviously be lunacy.

Comment 22 by, Jul 4 2010

@scarybeasts, keep your pants on, I'm just thinking outside of your carefully sandboxed environment, I know it's *dangerous*, but still....

@lcamtuf, @scarybeasts, that is not a *fundamental* difference; that is a superficial difference in implementation that has come about mostly by accident. .exe's are also run in an environment, and it is theoretically possible for that to be just as sandboxed and restricted. But look at all the bellyaching by devs over the relatively loose environment of iOS, which is nowhere near as restricted as the environment web apps are being forced to run in.

The point is, there are people who think web apps will eventually supplant traditional applications. When I look around my desktop, I can't think of a single application I run that doesn't violate one of these security measures that you are forcing on web apps. So you're basically telling those people to go suck it, and the world will never move beyond desktop applications doing desktop-y things, and web apps doing "rich and powerful" things using a little bit of HTML5 local storage space. What you think web apps *should* be doing is all you're going to let them do.

Rather than blocking access, the browser should just build in mechanisms to let us be aware when an app is doing something like accessing something from another domain, or accessing something on our local machine. Let us have the choice to allow it or disallow it, and give developers the chance to build really amazing things that push the boundaries of what you think is possible.

How do you know what kinds of uses developers may move web apps in the future if given the opportunity? You're basically declaring that this is the limit of what you think is possible, and nothing else is worth it.

You're limiting the future of web apps to a small set of current uses that happen to be satisfying to you as "rich and powerful applications".

Comment 23 by, Jul 4 2010

There is no meaningful, commonly used security compartmentalization mechanism for local executables; and contrary to what you are saying, the same-origin policy is not there "mostly by accident".

I also want to assure you that you would not want to live in a world where visiting an arbitrary website is equivalent to downloading and running an arbitrary executable on your system.

Again, if you see another good way to improve the design (the options discussed earlier aside), you are welcome to contribute. If you simply want us to revert to the unsafe behavior, I believe that is highly unlikely to happen.

Comment 24 by, Jul 4 2010

@lcamtuf: that's a red herring.
First, the same-origin policy is not being violated by one local file accessing another local file, and that's what you're blocking.

Secondly, I never said visiting an arbitrary website should be equivalent to downloading and running an arbitrary executable. I want to know why downloading and running a web app is different from downloading and running an arbitrary executable. I am referring, for example, to the example given in that blog post:

Thirdly, in the future why shouldn't a cloud based web app be capable of doing everything a desktop app can, after the browser warns you and gets your permission? Again, you're imposing your view of how the world should be now, on the future.
Let's agree that there is value in preventing file access from the file protocol. Let's also agree that there are a large number of legitimate usages of the old model. Our goal is to remain more secure than before while not shutting out all of those legitimate usages. Currently the only option is the --allow-file-access-from-files flag, which is hardly satisfactory. Other suggestions have been a toggle (checkbox) for an unsafe/developer mode, a same-origin policy based on the launched file's directory, and a dot file indicating a directory is safe. The first suggestion locks out non-developer users and compromises the enhanced security on "developer" machines. The second suggestion creates a tangle of platform-specific exemptions. The third requires users or distributing developers to be aware and add the dot file, and is susceptible to a dot file ending up in a directory via another vector.

My vote is for the second option which, while difficult, best emulates the previous functionality with the least hassle to the users.

Another aspect of this issue is the need for a better error message ( Issue 42481 ). That should have *preceded* such a drastic change in security policy. As it is, things just stop working for no apparent reason (amplified by Chrome's auto-update).

Comment 26 by, Jul 5 2010

Thank you @meelash and @Airpowersurge - common sense at last. @Airpowersurge, your option 2 is to me the obvious way to go, for what it is worth it has my vote.

Comment 27 by, Jul 5 2010

I think I saw on one of the other threads on the issue some reasons why a pop-up or top-of-the-window warning was not a good solution? Could anyone remind me what those were....

I feel two things are necessary to make warning boxes work:
1. The warning message should have a check box or something to optionally whitelist a trusted url/file so that you wouldn't have to click a message every time. This lowers the reflexive-okay-pressing that comes with too many warnings.
2. Coupled with the one-time warning, there needs to be some kind of constant monitor of suspicious activity accessible, kind of like how the iPhone has an icon in the bar whenever any app is accessing location data. In the browser it could be an unobtrusive icon somewhere that mouseover or clicking revealed a list of all the past file://'s accessed by the page.

(Also, to the devs, please don't confuse my belligerent debate style ;) for ingratitude for the great work you guys do... just trying to make sure I get my point across)
It seems as though a lot of unnecessary fuss is being made over this issue; the bottom line is that Chrome has stepped over the line as far as local file access security is concerned. XHR no longer works when the file: protocol is used. How can that be a reasonable or sensible security measure? It prevents XHR-dependent browser-based apps from working locally in Chrome.

The end-user shouldn't have to do anything to allow these apps to run locally, there's no need for .whatever files, additional exe arguments, changes to the browser settings or anything else. All Chrome needs to do is prevent cross-protocol file access (with the exception of http to/from https), and prevent file: access to certain sensitive local directories. Also, and this seems to be causing the problem, XHR and Chrome Extensions shouldn't use the same security model, that makes no sense whatsoever.

XHR (when running under and using the file: protocol) should have access to (a) the directory in which the HTML file containing the JS file resides, and (b) any sub-directories within that directory. Exceptions: Forbid file access to core operating system directories and user directories. For example...

    Forbid: /windows/*
    Forbid: /My Documents/*
    Forbid: /My Documents/My Downloads/*
    Allow : /My Documents/My Downloads/some-folder/* (if the HTML file is located in this directory)

Alternatively, forbid access to core operating system directories AND require developers to declare the directories they wish to access (within or below the HTML file directory, using relative paths) using <meta> tags within the HTML file...

    <meta name="application-dir" content="src"/>
    <meta name="application-dir" content="data"/>

That would allow XHR access to the ./src directory (and all sub-directories) and the ./data directory (and all sub-directories).

One way or another this issue needs to be resolved sooner rather than later; all it's doing is causing problems, not preventing them.
I filed  issue #42482  (Prompt user to allow file: URLs to read file: URLs) because I think users can make informed decisions in this area.

It makes sense to display a notification bar the first time that a local HTML file tries to access a local file in the same directory tree, and to entirely block access outside the same directory tree.  (Developers would continue to pass the command-line flag to allow unlimited file access.)  The "Downloads" directory would be guarded by the user's confirmation.

I'm also surprised I haven't heard anyone discussing IE's "Mark of the Web" solution.  IE automatically adds the "Mark of the Web" when saving HTML files to disk; it sandboxes the files in the same origin as their mark.  e.g. if I download a page from, the file would gain this HTML comment: <!-- saved from url=(0023) -->  The file would be forbidden access to local files, and would be granted access to URLs on, as if its origin were

More information on Mark of the Web:

Comment 30 by, Jul 12 2010

 Issue 42482  has been merged into this issue.

Comment 31 by, Jul 12 2010

Chrome and Firefox already use "Mark of the Web" to mark any file you download.  I personally think it could be extended to ONLY block such "marked" files from accessing anything, and allow unmarked files (that are on the local computer, files on a network would be treated the same as marked) a little more freedom such as the Firefox "directory tree" model allows.  That would be an acceptable solution to this problem, while still providing the same protection.

Comment 32 by, Jul 12 2010

Hmm sorry didn't read the linked page... I've heard "Mark of the Web" used to refer to downloaded files being marked as from the internet and so if the user tries to open them they get a warning dialog.  You can clear the "mark" from an item's properties with the Unblock button there.

However, your suggestion does not address the main problem: that LOCAL files cannot access other local files. It has nothing to do with content saved from the web. In my case, my company delivers our app on a CD.

Comment 33 by, Jul 12 2010

There is already a discussion of Zone.Identifier / MotW in this bug, along with several other options. Please do not file additional bugs for this until a particular conclusion is reached here.
@lcamtuf Zone.Identifier is different from MotW; MotW was not mentioned in this thread.  (Honest!  I read the thread!)

You'll also note that I filed  issue #42482  (Prompt user to allow file: URLs to read file: URLs) BEFORE this bug was opened, and I believe it's a separate issue from the current  issue #47416  (Allow a directory tree to be treated as a single origin); please unmerge it.

Regardless of what decision we make here for  issue #47416 , the user should be notified and given the opportunity to allow dangerous behavior.
Is there consensus around this solution?

* Allow a directory tree to be treated as a single origin, excluding the Downloads folder.
* Prompt the user with a notification bar to allow access to the Downloads folder.
@DanFabulich That sounds satisfactory

MotW solves a different problem: access of web data from local files. It also seems like a flimsy and easily subvertible implementation if it simply depends on a meta comment.

Comment 37 by, Jul 12 2010

@DanFabulich Good to me also, it's what I've been asking for. I dread to think how many apps around the world have been broken by this!

Comment 38 by, Jul 12 2010

@DanFabulich, sounds fine to me. The directory tree solution is enough for my needs.
Sounds like a good solution to me too. I work for a help authoring tool vendor burned by this issue as we read some .xml config files which are in the same directory - the solution would fix the problem for us and all our customers.
Thank you for your contributions, but I should point out that we are extremely unlikely to implement or accept a patch that does any of the following:

1) Allowing local access to any sensitive locations (eg. Documents root, Downloads, the Windows tree, and Volume roots on MS). Every OS has locations that are inherently dangerous to allow untrusted access to, and we are unlikely to loosen any restrictions on these locations.

2) Granting local access based on a warning message. Such warnings are poor UI for numerous reasons, not the least of which being that users routinely click through without any consideration.

3) Granting local access based on any directive contained in the HTML file. Such an approach completely defeats the point of additional local file security and amounts to implicitly trusting a potentially malicious third party.

4) Implementing Microsoft's "Mark of the Web" HTML comment. This method does nothing to address manipulation of non-HTML resources (via XHR, etc.), and frankly adds numerous additional security risks along the lines of #3.

Those 4 sound entirely reasonable to me, and we can totally buy into the idea of increased file security so long as there is *some way* to do local file XHR. Avoiding sensitive locations is a simple, understandable and practical thing to require. Having to re-write all local data using JSON (the only other "solution" I've seen for this problem other than a command line switch) is not. So we're all for any change that will open this up again, and would welcome sensible restrictions. It would still be good IMO, in the restricted scenarios, to give the user some kind of feedback that a security restriction has prevented them from viewing the content they opened - so that they at least know what is going on.

#2 is already used in Chrome for dangerous websites (the red screen of death).  If this is ok for these websites, why isn't it ok for local files?  I suppose the evil websites require an additional interaction to be dangerous.

What about a patch that looks like the popup blocker?
@JustinBMeyer - There are several reasons why this is not a good idea, but here are two big ones. First, the red interstitial is used for pages that most likely represent a serious risk to the user (bad SSL, SafeBrowsing alert, etc). For the average user, this should always be interpreted as a stop sign. As such, using the same kind of warning for local file URLs would be misrepresentative (particularly since people here are arguing for their legitimate uses), and would condition the user to take this warning less seriously.

The converse applies to the popup blocker warning. Popups are certainly a nuisance, but without exploiting some other vulnerability they do not represent a security risk. Local file access, on the other hand, can present a serious risk, and an end user should not be led to treat it as equivalent to a nuisance popup.

The second issue is a more general quirk of using prompts. The problem is that prompts become increasingly less effective the more often they're used. That's why the Chromium team tries to use prompts very sparingly, and invests a lot of effort in avoiding or reducing prompts whenever possible.

@jschuh OK, that means that the consensus solution is:

Allow a directory tree to be treated as a single origin, excluding a blacklist of directories.  The blacklist would exclude, at a minimum, the default Downloads folder.

Warning messages aren't a black and white issue, so we should table that discussion (and unmerge  issue #42482 ) and implement the consensus solution.

Similarly, we can disagree about what should be on the blacklist without disagreeing that there should be a blacklist.

Does anyone disagree with this statement of consensus?  Based on my reading of the thread, I think it has unanimous support.
If XHR only allows read-only access to files, and it prevents data from being uploaded/downloaded if the HTML page is running under the file: protocol, I don't see what security issues could arise from that. Even if someone does manage to access a sensitive file they won't be able to send the file data anywhere, and they won't be able to retrieve locally stored data (local storage) from an external location.

Allow XHR to access the directory tree in which the HTML file resides, and forbid everything except read-only file access, end of problem.
 Issue 30834  has been merged into this issue.
 Issue 30834  seems unrelated. "a LOCAL file *SHOULD* be allowed to fetch data from external sites like in an extension" isn't the same issue as a local file reading other local files, and the proposed solutions (except the flimsy mark of the web) don't deal with that action, just local reads. Interactions with external sites are a whole 'nother can of worms.

Comment 48 by, Jul 13 2010

@DanFabulich - your comment 44 - Absolutely agree - 100%

Comment 49 Deleted

Comment 50 Deleted

I work for a company that creates a help authoring tool that generates WebHelp for both local and online viewing. Despite their failings, frames are still the least expensive solution for displaying a linked table of contents and the documents it links to when there are potentially thousands of documents involved. This issue is currently making Chrome unusable for WebHelp generated by many help authoring tools, not just ours. 

It may look as though there is not much response on this issue but that is only because the very large number of end users really affected by this will never visit a bug list like this -- they don't even know what a bug list is. They will simply see that Chrome doesn't work and they will switch to a different browser. 

No solution calling for MOTW (Mark of the Web) or starting the browser with a command line option is viable. In Internet Explorer, MOTW disables links to files and causes other problems; if it is active, WebHelp is not functional, so it can never be used in production WebHelp. Command line options are not an option, because this is not something that you can reasonably expect normal users to implement. They will not understand it and they will not do it.

In the current version of Chrome the problem appears to have changed but not really improved: Links in one frame no longer open a file targeting the second frame in a new tab, but legitimate cross-frame calls are still blocked, making WebHelp dysfunctional. Here is a simple jQuery test for the issue:
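A check along these lines would exercise the behavior described (this is a reconstruction, with illustrative names; with jQuery the lookup would be `$('frameset', top.document).attr('cols')`):

```javascript
// Reconstruction: from inside a frame, try to read the parent frameset's
// "cols" attribute. Under Chrome's local-file restrictions, touching
// top.document fails, so the call yields undefined instead of the value.
function framesetCols(topWindow) {
  try {
    var fs = topWindow.document.getElementsByTagName('frameset')[0];
    return fs ? fs.getAttribute('cols') : undefined;
  } catch (e) {
    // Cross-origin access to top.document throws in some browsers.
    return undefined;
  }
}
// In a frame: alert(framesetCols(top));
```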


If you place this in a file loaded into one of the frames it will return undefined in Chrome in a frameset loaded locally and the cols value in any other browser or in Chrome when the frameset is loaded from a web server. 

We don't want to do it, but if the current situation doesn't change we are going to be forced to test for the bug and display an automatic alert informing users that they need to switch to a different browser because a Chrome bug is preventing display of local WebHelp. Even though this may have been implemented as a security feature, the unwanted side-effects are clearly a bug.
@ DanFabulich: "Allow a directory tree to be treated as a single origin, excluding a blacklist of directories.  The blacklist would exclude, at a minimum, the default Downloads folder."

OK, but what exactly do you mean by "a directory tree" here, in relation to the frameset being opened? At least for WebHelp systems, the simplest solution would be to treat the folder containing the frameset, and any subfolders, as a single origin, but nothing else. That would be transparent to the user and would avoid jumping through a lot of unnecessary hoops:

It would automatically distinguish between online and local files without tests because any local files couldn't possibly be in the same folder. It would also *almost* eliminate the need for a blacklist -- the only exception I can think of offhand would be a frameset maliciously saved in /Program Files or /System32, but by the time that has happened you would actually already have serious problems anyway.

Comment 53 by, Jul 13 2010

@helpmanpro - exactly, I totally agree with all you say. It would be interesting to see Google's reaction if Microsoft changed their OS in a way that prevented Chrome from running!!

We have been blocking Chrome from running our CBT packages since version 5 was issued, because of this and also  issue 39767 . According to our stats for the last 6 months, Chrome only accounts for 4% of the hits on our sales website, so it may not be as bad as you think, assuming that your userbase has a similar profile to ours. Go ahead and block it; otherwise you will have disappointed customers, and that will not do your company's credibility any good.

The company I work for has written to Google HQ about this issue so hopefully we shall see some action soon instead of endless discussion. 

I'm sure your comment and mine will elicit another "please refrain from "me too" posts.........." comment ;-)

 Issue 44400  has been merged into this issue.
@si.nutrox @helpmanpro At this point, the Chromium team is just looking for consensus; we should table any discussion that detracts from consensus if we actually want this bug fixed.  If you even think my suggestion in comment 44 is an improvement, then let's start there.

Comment 56 by, Jul 13 2010

As per Justin's message, I actually do not believe we are simply looking for consensus:

"As I stated in the previous bugs, please refrain from "me too" posts and other comments that do not contribute to a resolution."

We are looking for two other things:

1) We want to see how many people care about this: "if you want to show interest in having this behavior changed please star the issue."

2) We want to solicit ideas for how to improve the mechanism, the methods discussed earlier aside; there is no value in repeating the already proposed points, however.

The options proposed earlier are:

1) Keeping the mechanism as-is. We always recognized that a small number of applications rely on the (dangerous) status quo, and we provided an override mechanism for these cases. We do realize that this mechanism has some usability downsides, explained in comment #2 - most notably, the need to completely close the browser to toggle between modes, and the inability to narrow the permissions down to a specific directory.

2) Changing the startup option to be something like --enable-local-js=/dir. This shares one of the downsides with option 1, but eliminates the other.

3) Adding a dot-file override mechanism at directory level. This is a very clean and easy-to-use solution; the only concern is that we need to prevent dot-files from being dropped through other browsers. Since Chrome already has a check, and other browsers generally do not auto-download, the risk here may be acceptable.

4) Allowing Firefox-style directory access everywhere except for certain blacklisted directories. Defining this set will be nearly impossible, especially when it comes to cross-browser interactions. In addition, normal non-security variations in user habits may interfere with this mechanism in unexpected ways. Therefore, I think this option is least favored. On Windows, it may be further improved by checking Zone.Identifier tags on the loaded document, but this is not portable.

Consequently, the following options are most likely *not* acceptable:

1) Getting rid of the mechanism; to a vast majority of users, the security benefits far outweigh the inconvenience in these cases. See comment #20.

2) MSIE-style security prompts. See comment #43. Most users click through all warnings, and to those who don't, I don't even see a way to explain the consequences of clicking "allow" in a concise manner.

3) Using Zone.Identifier or MotW tags to restrict file:/// access alone - as this is not portable, and not even supported by all browsers and other user agents that download files on a typical system. Also note that MotW tags should *never* be used to allow access to the originating domain, and are not used this way in MSIE.

4) Adding obscure and highly use-specific options to the browser configuration. The proliferation of obscure options led to the difficulty of managing the configuration for MSIE, and I think the overall philosophy for Chrome is to avoid such clutter.

None of the comments in this thread have changed this picture so far, so the discussion is a bit counterproductive; I suspect it actually reduces the likelihood of a speedy, desirable outcome.

I think a few people have agreed that #4 keeps the security without making it a PITA for users and web developers; it also supports most of the historic uses of the functionality. Yes, maintaining a blacklist is difficult, but not that difficult. Only the most obvious locations should be safeguarded (Downloads, Win32, the user's home directory). Your comment that "this option is least favored" is way off, at least from my perspective. Your acceptable options lean very heavily towards user-configured/enabled options. I'd like a solution that doesn't require me (and users in general) to actively anticipate the problem, particularly since the error message wasn't particularly articulate.

Take for example the Chrome user who has a CD with a documentation-browsing site using local iframes and XHR. Currently it breaks, and the fix is on the user or the developer (but in big companies, will that ever happen?). So the user has to find out 1) why it isn't working, 2) what a flag or dotfile is, and 3) how to add/create either.

With #4 it just works. And at the same time, a page that tries to read the user's disk is foiled.

As a stopgap we could implement single origin as directories and leave off the blacklist. It's a step back from the bolted-down security of the last patch, but a fair middle ground for usability and security (the previous model treated the entire local machine as a single origin, so this IS better). Then each entry added to the blacklist is a tightening of the bolts.
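To illustrate what option #4 / this stopgap might amount to, here is a sketch in Python. This is purely illustrative — the function names are made up, the blacklist is passed in because its real contents would be per-platform, and the actual browser logic would of course live in C++:

```python
import os

def _under(path, directory):
    """True if path equals directory or lies anywhere beneath it."""
    return path == directory or path.startswith(directory + os.sep)

def xhr_allowed(document_path, target_path, blacklist):
    """Sketch of option #4: a local document may read a target file
    only when the target is inside the document's own directory tree
    and neither end of the request is under a blacklisted directory
    (e.g. the default Downloads folder)."""
    doc_dir = os.path.dirname(os.path.abspath(document_path))
    target = os.path.abspath(target_path)
    banned_dirs = [os.path.abspath(b) for b in blacklist]
    if any(_under(doc_dir, b) or _under(target, b) for b in banned_dirs):
        return False
    return _under(target, doc_dir)
```

The point of the sketch: a documentation site on a CD can read its own subtree with no configuration, while a page that landed in a blacklisted directory (or that reaches outside its own tree) is denied.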
In that case I vote for #3 - Adding a dot-file override mechanism at directory level.

That sounds like the best solution if Firefox/Safari/IE style XHR directory access is going to continue to be prevented in Chrome.

Comment 59 by, Jul 13 2010

Again, this is *not* a vote. 

If you have any other options to add, or any new, important considerations for any of them, please update this thread accordingly.

Otherwise, we will probably pick between #1, #2, or #3, depending on the number of people who starred this issue or otherwise seem affected. We already know you care.

100% agree with @Airpowersurge in comment #57, specially:

"As a stopgap we could implement single origin as directories leave off the blacklist. It's a step back from the bolted down security of the last patch but a fair middle ground for usability and security (the previous model treated the entire local machine as single origin so this IS better). Then each entry added to the blacklist is a tightening of the bolts."

I wanted to make the point that you are making some assumptions about how many content scenarios and users this problem affects. Bear in mind that the user gets *no* warning or indication about the cause of their broken content, and I would bet that most users would simply switch to another browser to view the content (probably assuming that a Chrome bug is to blame). I doubt very much that many users would know how to bring up the JavaScript console, let alone how to make sense of the errors displayed there.

Personally, I've lost count of the number of times I've used local HTML content that may or may not have been affected by this issue. Solution #4 will fix the mass of content already out there that has been suddenly crippled by this issue with no warning or error, and it seems to me that it is a sensible balance between ultimate security and practicality for a feature that is widely used for a specific but important scenario.

If you don't fix this in a way that "just works", users will simply use another browser for that content, and content developers will eventually have to re-implement their solutions using JSON or some other mechanism for no good reason other than to accommodate Chrome. That would be a real shame IMO and would create unnecessary work for no perceivable end-user benefit. I guess this is an interesting test of how the Chrome dev team listens to feedback from web developers as well as "regular" end users, and is able to judge a balance between maximum security for every edge case and crippling existing content.

I also agree with @Airpowersurge in comment #57: "It's a step back from the bolted-down security of the last patch, but a fair middle ground for usability and security (the previous model treated the entire local machine as a single origin, so this IS better). Then each entry added to the blacklist is a tightening of the bolts."
Just to reiterate lcamtuf's point here, please make your opinions and proposals known. However repetitive commentary and outbursts just make it harder for us to get an accurate picture. So, ideally, you should just be starring the issue unless you're proposing a new alternative or providing information not yet considered.

Regardless what opinions are offered, this decision is not up for a vote. We want your input, but the fact is that the security of Chromium is our responsibility. And we have to make the best compromise based on our own expertise, which may not align with the interests of the people commenting on this bug. 

I know the issue isn't up for vote but as users affected by the change we are trying to make the case that there is a win-win scenario where you guys don't have to screw us in the name of security. I've starred enough issues to know what useless 'me too' comments look like. However there is a difference when options have been proposed and people are offering feedback on which options seem palatable or useful. Starring the issue indicates interest but not what resolution would help that user.

I think people are getting heated over this (on both sides) and it needs to cool down. Chromium is a great browser, so thank you, devs, for making it that way. I'll be sorry to switch if it comes to that, but you're right: not being involved in the project, it's not my call how you implement security.
Have you considered allowing just a restricted set of file extensions for file XHR, perhaps together with the "directory tree single origin" solution? In our scenario, and other similar content scenarios, it's just web files we need to access in the same directory tree - in our case it's .xml files, but I'm guessing other users would likely be accessing .html and .js files. In any case it would be a small subset of text-based file types. Seems to me that this would significantly pin down the "scanning files in the current directory" possibility while fixing the currently broken scenario for existing content.

Comment 65 by, Jul 13 2010

@jschuh, if the decision is not up for a vote, then please let us in on the decision-making process. Tell us who's in charge so we can bombard their inbox instead of seemingly annoying you.

I find it a little ironic that when the issue was first brought up, a dev commented that the volume of complaints was low. Besides the other reasons already mentioned for that, when people do start complaining, they get talked down to for making "me-too" comments.

There seems to be a bit of arrogance about this "critical" security issue, given that no one has even proposed a common attack-vector example. The one given in the blog post on the issue involved access to an outside server from a file:// page, not simply read-only access to other file:// URLs on the local machine.

It's silly that this is being treated like the next big security hole when major browsers are well known not to block this, and no one has made any attempt at an exploit.

Bragging rights?

Comment 66 Deleted

Labels: Restrict-AddIssueComment-Commit
Please star this issue if it's relevant to you. The discussion here was effectively lingering. As a general practice we are going to start using the Restrict-AddIssueComment-Commit label more to cap long threads. The intent is not to hamper discussion but to constrain it so that the content of the bugs stays actionable (once a bug gets over 50+ comments, it becomes very hard to parse out useful information). Many thanks for understanding.
Summary: Allow a directory tree to be treated as a single origin (loosen file: URL restrictions)
 Issue 45970  has been merged into this issue.
 Issue 39561  has been merged into this issue.
 Issue 38702  has been merged into this issue.
 Issue 39837  has been merged into this issue.
 Issue 42481  has been merged into this issue.
 Issue 46438  has been merged into this issue.
 Issue 41643  has been merged into this issue.
Another user running into this issue:

Description from 2010-07-27 16:38:10 PST:
On loading a local xml file (file://.../test.xml), a referenced xsl stylesheet cannot be loaded.

Error on JavaScript Console: "Unsafe attempt to load URL file:///media/Datengrab/test.xsl from frame with URL file:///media/Datengrab/test.xml. Domains, protocols and ports must match."

If it's loaded from a web server, the xsl file is loaded and the transformation takes place correctly.

Steps to Reproduce:
Download attached files test.xml and test.xsl and save them in the same directory.
Load test.xml in Chromium (file://.../test.xml)

Expected Results:
The referenced xsl file is loaded and the xml data is transformed to HTML.
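For anyone trying to reproduce this without the attachments, a minimal pair of files along these lines triggers the same error. This is an illustrative reconstruction, not the actual attached test.xml/test.xsl:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="test.xsl"?>
<note>Hello from XSLT</note>
```

And a matching test.xsl in the same directory:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/note">
    <html><body><p><xsl:value-of select="."/></p></body></html>
  </xsl:template>
</xsl:stylesheet>
```

Served over http:// the transform runs; opened via file:// in Chromium the stylesheet load is blocked with the same-origin error quoted above.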

 Issue 55900  has been merged into this issue.
From a UI team perspective, the dot file solution is not great.  Users already have no idea why we mark .html files as "dangerous".  Marking dot files as dangerous is even more cryptic since Windows users will look at them as some sort of weird nameless file with a strange extension.

The dangerous downloads prompt sucks as a "make an informed security decision" prompt.  The value it has is in preventing unexpected drive-by downloads -- which is why on other bugs we intend to change the prompting algorithm so it's focused more on whether the download was triggered directly by a user action or not.

On comment 56, I view option 2 as equally unsatisfying to most complaining users as option 1. I'm pretty strongly in favor of option 4. I don't think we should worry about solving every last edge case with it; simply blocking our default download directory seems "good enough" to me. After all, if we're worrying about cross-browser attacks involving Firefox, can't an attacker just use Firefox's more permissive model to begin with and be done with it? Perhaps I'm not fully understanding the tradeoff.

In any case, I wanted to speak strongly against the "dot file" solution.
 Issue 72714  has been merged into this issue.
 Issue 79799  has been merged into this issue.
 Issue 70345  has been merged into this issue.
 Issue 46236  has been merged into this issue.
See  issue 93865  for an additional comment

Comment 86 by, Sep 23 2011

 Issue 39561  has been merged into this issue.
Issue 101719 has been merged into this issue.

Comment 89 by, Jun 25 2012

 Issue 124809  has been merged into this issue.
 Issue 134914  has an additional comment.
 Issue 121406  has been merged into this issue.
Project Member

Comment 92 by, Mar 10 2013

Labels: -Area-Internals Cr-Internals

Comment 93 by, Mar 13 2013

Labels: -Restrict-AddIssueComment-Commit Restrict-AddIssueComment-EditIssue

Comment 94 by, Mar 20 2013

 Issue 93865  has been merged into this issue.
 Issue 287470  has been merged into this issue.
 Issue 355225  has been merged into this issue.
I just saw VMWare installing help files under Program Files and then checking the user agent to ask for another browser, pointing at this bug.

That's clearly sad.

From comment #56, how about doing something close to #4 but using a whitelist instead of fighting against a blacklist? Even if the only item on that list is Program Files, that will be an improvement with no security loss.

It doesn't cover all cases, but it's better than nothing. To be totally clear, what I mean is allowing access to the directory tree as long as it lives in a path that _should_ not be writable without admin access.
Labels: Cr-Security Cr-Blink
It shouldn't be very hard to fix this. I think the lack of activity is more that the ownership is very ambiguous and no one has considered it a big enough problem to jump in and provide a CL. Maybe @cevans (as the original author) feels like doing a public service and resolving this? Beyond that, I've altered the flags to keep this on more people's radar.
 Issue 115811  has been merged into this issue.
Labels: -Cr-Blink Cr-Blink-SecurityFeature
 Issue 333456  has been merged into this issue.
 Issue 645096  has been merged into this issue.
Note that this is currently the #2 top starred blink bug.  Any progress?
No progress update here, but I've filed hundreds of Chrome bugs and I would rather see this fixed than any of them – the current restrictions cause frustrating and seemingly arbitrary breakage for flat-file HTML development.

How feasible is a whitelist solution here?
Assuming whitelists would satisfy most use cases, is there a whitelist format we can piggyback on? Application cache manifest files? Chrome extension manifest files?
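For the sake of discussion, a piggybacked manifest entry might look something like the sketch below. To be clear, every key here is invented for illustration — no existing manifest format actually defines a `local_origin` section:

```json
{
  "name": "Local documentation bundle",
  "local_origin": {
    "scope": ".",
    "allowed_extensions": [".html", ".js", ".xml"]
  }
}
```

A format along these lines would combine the directory-tree-as-origin idea with the restricted-extension suggestion earlier in the thread.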
Status: WontFix (was: Available)
The original owner of this behavior is long gone, and the Web security landscape has fundamentally changed since Chrome started restricting file: origins (back in 2010). So I'm closing this out WontFix, since I don't foresee us prioritizing it, and leaving it open and ignored sends a confusing message.
 Issue 41024  has been merged into this issue.