Project: chromium
Issue 378566 Block sub-resource loads from the web to private networks and localhost
Starred by 129 users. Reported by a Project Member, May 28 2014
Status: Assigned
EstimatedDays: ----
NextAction: 2017-04-01
OS: All
Pri: 2
Type: Bug

Blocked on:
issue 590714

issue 458362

This is a hardening measure to block certain brute-force and XSRF attacks against internal network devices and processes listening on localhost. In conjunction with the change, we will need a command-line switch and enterprise policy to revert to the older behavior for testing and legacy applications.
Justin: Could you elaborate a bit more on the policy for determining which addresses to block? I only ask because the question of "what is an internal IP" has come up several times recently, with different answers.

That is, there's RFC1918 - 10/8, 172.16/12, 192.168/16 - or there's the entire IANA reserved space, as embodied by and as listed on /
Comment 2 by, May 29 2014
Comment 3 by, May 29 2014
I was assuming RFC 1918, but hadn't really thought it out any further.
Comment 4 by, May 29 2014
There are two interesting corner cases:

1) Proxies, in which case the browser may not even be doing the IP lookup. MSIE tries to infer stuff from PACs here, sometimes with dangerous results. We probably shouldn't.

2) Stuff in public address spaces to which you still have privileged access. It's already fairly common for IPv4, and not needing private addressing for your internal stuff is one of the goals for IPv6. Probably the best we could do is treating local network (as defined by interface IP & netmask) as, well, local. That won't be very exhaustive.

Is the plan to apply this to everything?

WebSocket has:
- the Origin header, so the server can check the origin of the connection,
- Sec-WebSocket-Key/Accept, so it won't connect to non-WebSocket servers,
- and, as required by the spec, when the handshake fails we don't tell the script what happened during the connection attempt, so as not to leak information to malicious scripts attempting port scanning.

Don't these address most of the concerns discussed here?
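For context, the Sec-WebSocket-Key/Accept step in that list works as defined in RFC 6455: the server hashes the client's nonce together with a fixed GUID, so a service that doesn't speak WebSocket can't complete the handshake by accident. A minimal sketch of the server-side computation:

```python
import base64
import hashlib

# Fixed GUID defined in RFC 6455.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for a client's key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Test vector from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This proves the peer is a WebSocket server, but it says nothing about whether that server actually checks the Origin header, which is the crux of the later discussion.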
Comment 6 by, Jun 2 2014
Opera Presto does this:
There are three levels: internet, intranet (as defined by IP ranges; both IPv4 and IPv6 have defined ranges for intranet/local space), and localhost. A non-user-initiated connection from an outer level to an inner level is blocked by default. The block results in an interstitial, where the user can choose to allow an override once, or allow an override for the target host permanently.
If the browser is using a proxy, the protection is turned off. The protection can also be turned off with a global setting.
The protection holds for all kinds of requests, including e.g. OCSP/CRL. Such background requests (as well as many inlines) will be blocked invisibly for the user, so to enable them to pass through, one would have to get the same redirection to happen in the main document, and then enable the override permanently from the interstitial.
There was some fallout; e.g. the BBC owns a large set of IPv4 public address space, which they use(d) as an intranet. When combined with a PAC proxy balancer, some internal servers switched back and forth between public address space and private address space. Some TV app developers also complained; they wanted to be able to redirect directly from their central server to the TV's internal address space.
Comment 7 by, Jun 6 2014
We could do a naive (e.g. no interstitial) implementation of this by extending MixedContentChecker to understand the private/public distinction, and by teaching SecurityOrigin the same.
Comment 8 by, Jul 29 2014
Hadn't seen the last update from mkwst@. Seems like a really good idea.
Comment 9 by, Jul 30 2014
Labels: Cr-Blink-SecurityFeature
Status: Assigned
I'll poke at this. Goes right along with the rest of the mixed content cleanup I'm doing.
#6: Do you allow internet to localhost? (It sounds like no.) I believe Spotify's web player works this way. Not sure if there are other cases like that. Overall this seems like a good tightening measure, though.
Comment 11 by, Jul 31 2014
Internet to localhost was blocked by default. Attempts would load an interstitial, which would include a link to permanently allow redirection to that host, so there would be an extra manual browser configuration step for Spotify users.
Labels: Cr-Blink-WebSockets Cr-Blink-XHR
Project Member Comment 13 by, Aug 7 2014
The following revision refers to this bug:

r179719 | | 2014-08-07T16:18:16.808969Z

Changed paths:

Add platform methods to check reserved IP addresses.

This patch adds two methods to Platform which will allow us to improve
mixed content checking by determining whether SecurityOrigins and URLs
point to IP addresses that are reserved (e.g.,


Review URL:
Project Member Comment 15 by, Aug 8 2014
The following revision refers to this bug:

commit f0503a1f9bf67adf9edbf2164322e9d5f0e104d6
Author: <>
Date: Fri Aug 08 11:04:37 2014

Wire 'blink::Platform::isHostnameReservedIPAddress()' to 'net::IsReservedIPAddress()'.

This is a fairly straightforward patch which wires up the Blink platform
API over to the network stack.


Review URL:

Cr-Commit-Position: refs/heads/master@{#288327}
git-svn-id: svn:// 0039d316-1c4b-4281-b951-d872f2087c98

I am uncomfortable with this approach. It will break existing legitimate apps, while malicious attackers can just set up DNS aliases or use existing ones like

To provide meaningful protection, the check needs to be applied after the DNS lookup, which implies associating internal/external flags with requests and responses.
Long term, I think you're right. I see this approach as a first pass[1] to help us evaluate what breaks. Chris' suggestion[2] to push things up to the network stack makes sense to me, but wasn't met with universal acclaim. *shrug*

I'd invite you to hop onto that net-dev thread to weigh in. I'll do more or less whatever folks who know things about networking tell me to do. :)

Project Member Comment 18 by, Sep 8 2014
The following revision refers to this bug:

r181557 | | 2014-09-08T13:42:41.661937Z

Changed paths:

Mixed Content: Start measuring the impact of blocking private IPs.

Before we know whether or not we can implement [1], let's find out how
hard it will hit the portions of the web for which we can gather UMA


Review URL:
Project Member Comment 19 by, Sep 16 2014
The following revision refers to this bug:

r182053 | | 2014-09-16T14:56:26.072709Z

Changed paths:

Measure resources from private IP addresses in public websites.

This patch gathers data in the hopes of measuring the impact of locking
down private-IP-address subresources in public-IP-address sites. It does
so after the response has been delivered, so it's not the way we'd want
to do checking if we wanted to block the request, but it's a good way of
getting numbers so we can make decisions.


Review URL:
Project Member Comment 20 by, Jan 13 2015
The following revision refers to this bug:

r188309 | | 2015-01-13T18:09:08.529138Z

Changed paths:

Tag SecurityContext objects as being hosted in reserved IP ranges. [1/2]

This will allow us to tag requests from those contexts in order to allow
the network stack to make intelligent decisions about whether or not it
should allow requests to reserved IP ranges to go through, or whether they
should be blocked in the presence of an as-yet-to-be-introduced flag.

This patch simply adds a boolean to SecurityContext and uses it to do
the measuring that we're currently doing, rather than recalculating the
Document's state for every request.

Patch 1 - Blink:    [THIS PATCH]
Patch 2 - Chromium:


Review URL:
How would this fix affect NaCl Chrome apps that have legitimate reasons to access localhost?
Project Member Comment 22 by, Jan 14 2015
The following revision refers to this bug:

commit 56679603e63fcce35ffbb794795f500ed842e7cc
Author: mkwst <>
Date: Wed Jan 14 14:06:23 2015

Tag SecurityContext objects as being hosted in reserved IP ranges. [2/2]

This patch updates the Chromium side of the Blink platform changes,
removing method variants we no longer use, and converting the whole
thing to work on hostnames rather than WebURL/WebSecurityOrigin.

Patch 1 - Blink:
Patch 2 - Chromium: [THIS PATCH]


Review URL:

Cr-Commit-Position: refs/heads/master@{#311471}


Project Member Comment 23 by, Jan 14 2015
The following revision refers to this bug:

r188387 | | 2015-01-14T15:24:32.453986Z

Changed paths:

Tag ResourceRequest as originating from reserved IP ranges.

This patch tags ResourceRequest objects (and therefore WebURLRequest
objects) as originating from reserved IP ranges if the document from
which they're fired is tagged as such. This will eventually allow the
network stack to make intelligent decisions about whether or not the
request should go through after DNS resolution, in the presence of an
as-yet-to-be-introduced flag.


Review URL:
If the connection to the local device is over HTTPS, should the user agent still mark the connection as mixed content?  I don't see why it should.
Can somebody verify that this won't break NaCl-based Chrome apps that redirect back to localhost?
Depends on your setup.

If you're using to talk to a NaCl app that the user is running, and it's listening on localhost, then it does have the _potential_ to break such apps, yes. Information is still being gathered as to the correct approach and what legitimate use cases might exist for this communication, weighed against the risk of widespread and targeted attacks that users face from sites like attempting (and succeeding) in XSRFing their home router or mapping out their corporate network.
Blocking: chromium:458362
Labels: Cr-Enterprise
Adding in an Enterprise label, since this and bug 458362 will require some degree of coordination to make sure they update their policies before we ship any blocking behavior to stable.
There should be no mixed content warning when connecting to a secure websocket server using a dns alias which resolves to a local address.

e.g. Dropbox uses a secure websocket server on localhost running with a TLS certificate for, which resolves to The web app connects directly to the local daemon over secure websocket, and uses this connection to implement their "Open local files in native applications" feature.
That is one of the many cases we're aware of that may seem legitimate, but also leads to real user exploits. Setting the bar at "DNS alias" doesn't address the security concerns at all, because I could easily register and use it to compromise you. This is why such loads are also planned to be blocked.
Comment 31 by, Mar 16 2015
Chrome Message Passing provides a more secure way for a web page to communicate with a local application, and is documented here: and

There won't be any permanent exceptions once the new policy goes into effect, and we don't like temporary exceptions because there is a tendency for people to use them as an excuse to put off fixing things.
@rsleevi My comment was not regarding DNS alias in general, but a very specific scenario:

1. DNS alias to localhost
2. Valid SSL certificate for FQDN
3. Secure websocket server on localhost

Where all three of the above are met, then this should be allowed and not blocked (as a permanent and not temporary exception). Applications such as Dropbox will obviously still need to further mutually authenticate the wss connection.

Or do you intend to block this specific scenario too? If you do, then there will be no way at all for valid applications to communicate with valid localhost daemons, and you will force them to run server proxies (which will only open up more security risks, e.g. trying to identify whether two connections originate from the same machine behind a NAT).

@ricea I don't think Chrome Message Passing is an acceptable solution at all really. It's proprietary and there is no independent implementation by any other browsers.
@ricea: How is this different from "ActiveX is just so much better, and you should use that instead"?

(In other words, I smell another IE4 brewing - a proprietary solution only working here, this time under the Chrome banner - ironic that it's Chrome, having started as the "let's-abandon-all-those-proprietary-extensions, HTML5 is the way forward," turning into One Browser To Rule Them All.)
Comment 34 by, Mar 16 2015
I request that this bug be closed with WONTFIX. It has no validity from a security aspect and will only break existing workflows.
Comment 35 by, Mar 16 2015
I have the same scenario as "jonadir...", and even though native messaging could be a way for me to communicate with the native daemon, it would require an additional extension to be installed/developed, and it would only cover one browser. It introduces extra complexity for the development team and the end user, and also doesn't give me the ability to cover every browser, as not every browser has a "native messaging"-like API.
Would this break my comment system (which uses an iframe to provide comments from a local service)?


Comment 37 by, Mar 16 2015
I wonder how this affects proxy implementations? 

Many times over the years I have set a socks/http proxy in the browser and then navigated to localhost for some intranet reason. You can argue about the best practice of this or not but it happens.
Comment 38 by, Mar 16 2015
arne.babenhauserheide: if it always requires the user to be running a freenet node on their private network, then yes. Sorry.
I'm also concerned about this breaking legitimate use-cases. This seems like a case where an Access-Control-Allow-Origin allowing the requesting site should be respected, at least.
If this were implemented, would there be any cross-browser way for a webapp to communicate with a locally running daemon? I'd accept an "Allow this page to talk with local services?" prompt, but even that feels somewhat intrusive given how common a use case this is.
Re: comment 32: Even if we set those as the conditions, they would be impossible to meet. CAs are not allowed to issue such certificates.

re: comment 34: The significant security risks are well documented, so no, we won't be closing this as WontFix. A relatively accessible explanation of one such attack is, but there are many like it.

re: comment 37: This doesn't prevent you from navigating to localhost. Only from a website embedding resources from localhost.

re: comment 39: Yes, we are exploring a "secure by default" mode that may block all requests without some explicit opt-in of the device. ACAO is one way, but that presumes the device can't be compromised by a CORS preflight. It also requires sending a preflight for simple requests (such as GET), due to the fact that many of these routers have botched HTTP and failed to respect the idempotency of GET requests. There is still interest in allowing *some* communication.

re: comment 40: See comment 39. As noted elsewhere in this thread, Firefox is also attempting to block such communication, and they have not made any commitments or comments on allowing limited access.
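One way to read the "explicit opt-in" idea floated in the reply to comment 39: the localhost daemon answers a CORS preflight only for origins it trusts, and the browser sends the real request (even a simple GET) only after a permissive answer. The origin list and function shape below are purely a hypothetical sketch, not a proposed API:

```python
from typing import Optional

# Hypothetical list a local daemon might ship with.
ALLOWED_ORIGINS = {"https://example-vendor.test"}

def preflight_response(origin: str, requested_method: str) -> Optional[dict]:
    """Return CORS headers permitting the request, or None to deny it."""
    if origin not in ALLOWED_ORIGINS:
        # The browser never sends the real request, so the daemon's state
        # stays untouched even if it mishandles GET idempotency.
        return None
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": requested_method,
    }

print(preflight_response("https://example-vendor.test", "GET"))
print(preflight_response("https://evil.test", "GET"))  # None
```

Preflighting even simple GETs is the key difference from ordinary CORS, since the routers in question are exactly the servers that botch GET idempotency.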
Comment 42 by, Mar 16 2015
Will this affect/block XHR requests to localhost where the response includes a permitting Access-Control-Allow-Origin header? So many legitimate use cases with that :(
FYI, this change would affect us: we make XHR requests to, which resolves to That port is backed by a web server running inside the GitHub for Mac application. We use it to detect the capabilities of the installed version of the application (if any), and based on that present different UI to the user. For example, if you have a new enough version installed, we present an "open this file in GitHub for Mac" button on certain pages, which when clicked uses a custom URL scheme to tell the application to clone the repository containing the file and then open the file in the default application for that file type.
@rsleevi "Even if we set those as the conditions, they would be impossible to meet."

Dropbox actually have a certificate for issued by Digicert. If the browsers accepted self-signed certificates for localhost servers (without users modifying their trust store) then this would not be necessary. But until browsers add support for that, this is the only way to go about it. Admittedly, it's not pleasant but it works.

I don't think routers are running TLS, or WS for that matter, which has been the only security issue raised thus far.
Re comment 44: That a CA issued the certificate is unquestionable (and ditto for What is questionable is whether they could do so at all while complying with the root program requirements.

This is not the bug to get into that discussion, but it is simple to say that I don't believe CAs are permitted to issue such a certificate under the current Baseline Requirements, and, if they do, that such certificates are required to be revoked.

This is why it is not and will not be an acceptable solution.
Comment 46 by, Mar 16 2015
This will break embedded applications on the Google Fiber network.
As far as I understand, those requirements regarding reserved IP addresses and internal server names specifically concern the common name or SAN, but not the IP they resolve to, so that certificates for and should be perfectly valid?
Re: Comment 47, no, that's not correct, but this is not the best bug for that discussion, so I'll refrain from explaining why.
@ricea: It will at least require users to be running a local Freenet if they want to write comments.

The service already runs in an iframe to give isolation against the site embedding it.

I understand why you would disallow the local network (many legacy devices with dubious security which cannot be fixed by the user), but not why you do it against localhost. Localhost is not an outdated router. It’s the computer which runs the browser, so if it runs an up to date browser, the services running on it should be similarly up to date.

Is there a way for a service to explicitly tell the browser that it is OK to access it? “I know what I do, treat me like a webserver, please”?
Basically, what this means is this:

You've decided to remove the one thing left for developers to write an application that was natively able to talk to browser content: NPAPI. And the immediate next step is to remove the last resort that was cross browser: WebSockets for this type of communication.

Now we're really screwed and have to start writing a solution for Chrome as a separate thing, just to be able to talk to a local app that was installed a priori, before a user even visits our website. I'm really frustrated by how this keeps forcing us to rewrite stuff on our end... :/ You should at least come up with a cross-browser solution.
re: comment 49: See the initial description and comments 10 and 11. Allowing (arbitrary) communication to localhost can absolutely lead to machine compromise/exploit, and you shouldn't have to worry about your browser allowing to root your machine.

I realize "localhost" may be a special case from routers, but there's still very real security risk from local daemons that are expecting that localhost connections are, well, local - and allowing any website to talk to them can cause real harm.

I think we're still exploring what the UX would be for such blocking. It may be that a special "insecure" command-line flag is allowed. Alternatively, it may require interstitials. Or we may require the localhost daemon to explicitly opt in to the remote communication (the Origin header is simply insufficient, since it's impossible for a client to know whether a server is checking it; explicitly making the server opt in resolves this concern).

Regarding blocking, this work is actively being done with cooperation and consideration from other user agents (aka browsers). Collectively, browsers recognize the security risks of the legacy feature. Rather than "don't block localhost", it's important to think about what considerations are needed and what steps can be taken to "make communication to localhost sufficiently secure". That's a far more productive conversation, and likely belongs in the W3C (in the context of the Mixed Content spec [1], and particularly [2]). You can find out where and how to send feedback at those links.

Is this ticket the best place to be discussing the UX around the localhost service whitelisting, or is that conversation happening somewhere else?
Probably Issue 412058 or Issue 362214, both of which are about having the code reflect our notion of "secure origin".
 Issue 362214  is only tangentially related to this issue, and I'm getting a 403 error on Issue 412058, so for now, I'm going to post my thoughts here. Sorry if it seems like I'm making a mountain out of a molehill, but implementing this feature poorly could result in me having to write heinous workarounds for what I believe to be a relatively common use case.

The following are a mix of new and existing ideas about how to implement whitelisting for webapps that need localhost access. I'll be using Dropbox as my go-to example of an app which requires this kind of functionality.

1) Do nothing, forcing the communication to happen through a remote server or a browser extension.

This should be a no-go for all the obvious reasons.

2) Require a command-line flag

This would work for enterprises, but it would be unreasonable to ask of your average Dropbox user.

3) Prompt the user with an interstitial

This is a pretty good option, but users who don't understand the warning might click "Okay" to everything. Also, it might scare Dropbox users into thinking the app is doing something malicious.

4) Require a special handshake from browser-interoperable services

While this solves the problem of whitelisting and ignorant/scared users, it puts a pretty heavy burden on app developers who now need to support a new protocol, especially businesses who might not even have access to the source code for some of their running services.

5) Require a whitelist file that the browser checks for localhost queries

This is currently my favorite option. It solves the problem of whitelisting and ignorant/scared users, as well as requiring very little intervention from app developers. Developers would just have to add a bit of setup logic to add a file telling browsers which ports/domains to whitelist, and enterprises who don't have access to the source code of their running services would just have their sysadmin add the file manually. It'd require a bit of collaboration to do this in a cross-browser compatible way, but even having to add one of these files per-browser is a fairly low overhead compared to the other alternatives.
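To sketch what option 5 might look like in practice: a small file, writable only by a local installer, listing which origins may talk to which local ports. The JSON format, origin, and port below are entirely hypothetical examples; no browser implements anything like this.

```python
import json

def is_allowed(whitelist: dict, origin: str, port: int) -> bool:
    """Check a (hypothetical) on-disk whitelist for an origin/port pair."""
    for entry in whitelist.get("entries", []):
        if origin in entry.get("origins", []) and port in entry.get("ports", []):
            return True
    return False

example = """
{
  "entries": [
    {"origins": ["https://www.example-app.test"], "ports": [17600]}
  ]
}
"""

wl = json.loads(example)
print(is_allowed(wl, "https://www.example-app.test", 17600))  # True
print(is_allowed(wl, "https://evil.test", 17600))             # False
```

Because only code running on the machine can write the file, its mere presence serves as the local service's opt-in, without any new wire protocol.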
A local whitelist file solves the problem for localhost services (albeit in an ugly manner), but having some sort of HTTP-based handshake seems reasonable here. CORS was designed for this same sort of concern, and extending it to include preflighting for GET requests on reserved addresses (where GETs are known to sometimes be insecure [freaking routers, man]) seems reasonable, as would some generic Flash-esque "crossdomain.xml" file that a single request could be made to. This would also allow cases where the server is on the local network, but not actually on localhost (and would therefore be unable to install a whitelist file). For example, the Plex webapp can connect to servers on the local network.
Re: mlacorte's proposals:

An additional option:

6) We stop conflating {localhost, routers, legacy apps} with {secure websocket server, localhost, valid certificate, public FQDN}, and this ticket applies only to the former, not the latter.

@rsleevi: "Re: Comment 47, no, that's not correct, but this is not the best bug for that discussion, so I'll refrain from explaining why."
I have confirmed with Digicert that the Baseline Requirements are being met by Dropbox. Since you claim that the Baseline Requirements are a blocker to the Dropbox secure websockets on localhost method, your explanation would be helpful here.

@rsleevi "the Origin header is simply insufficient, since it's impossible for a client to know whether a server is checking it"

It's also impossible for a browser to know whether a public websocket server is checking the Origin header. Does that mean Chrome should show interstitials or block all public HTTPS traffic? Websockets were designed with exactly this in mind and contrary to what you claim, the Origin header is sufficient (provided the client is a browser, which in this case it is). It would be better and more positively received to continue educating and evangelising developers on how to use Websockets properly, than to start blocking Websockets completely.

If you are still not satisfied with this line of reasoning, then I propose that this ticket be blocked until a reasonable working cross-browser solution is on the table. It makes no sense to sink the ship without first providing another.
re comment 56: Again, this is not the bug for that discussion. I'd like to kindly ask to keep it focused on the particular issue at hand (Blocking sub-resource loads from the web to private networks), without unduly conflating it with discussions of certificate policy.

Regarding "Doing nothing", I've already addressed why we feel the security considerations of this are unacceptable. I've also described why the Origin method is, from a security perspective, unacceptable.

We're absolutely committed to finding working solutions. I think it's an unfair, incorrect, and misrepresentative assumption to suggest we're taking unilateral action to block this without recourse. We're absolutely working to understand the scope and impact. This is something that previous browsers have implemented (as seen in comments #10/#11), and other browsers are also exploring. That said, the security of our users and the Internet is first and foremost among our concerns, and we will work to prioritize those needs first.

At this point, I don't believe any new information is being shared, so it may be best to refrain from commenting further. We certainly hear and understand these concerns, and have long been aware of them, and are working to balance compatibility with other concerns, such as security, as we would any and all features. This bug is a bug to track efforts towards that goal, of which "doing nothing" is not on the table.
Shall we have a public version of ? Is there any good thread to reply to at public-webappsec regarding this point? Then, we should move the place for discussion to there.
tyoshino: Right, we should create a public Issue 412058 that doesn't have the Google-specific services affected. That should be no big deal.

re: MIX - The initial WD formalized this behaviour ( ), but that was removed from the ED ( ) because it was not yet implemented by UAs, and the goal was first to make MIX describe the world as it is (interoperably) before it tried to change it.

The thread in which this was discussed was - and yes, Dropbox did participate in that thread as well (e.g. )
Comment 60 by, Mar 17 2015
My thoughts on this:
- Including WebSockets in the IP blocking overshoots your goal. The Linksys password hack was done with simple GET requests, not by utilizing WebSocket (Upgrade: websocket) requests.
- It can be assumed that an HTTP server supporting WebSockets checks the origin (especially for password-protected systems). When dealing with WebSocket upgrades, there are much more complex things to deal with than a simple origin check.
- You will destroy any existing P2P techniques using a browser. Especially for IoT, P2P should be promoted, not eliminated.
My suggestion: leave WebSockets completely out of this IP filtering. It was the intention of WebSocket to support this scenario.
@rsleevi,ricea: I think solution 4 (in a style like “click to flash” and 2-click-privacy¹) would be easy for users, because they already know it from other technology. It would not work for getting small images or such from localhost, but for larger images or an iframe which shows the local service interface it would provide security.

Also, most attacks would be thwarted because every new access would require new user interaction, so brute forcing would no longer be an option.

So I would not object to this change if it were done via option 4 for regular users (and it would patch a security concern which keeps us from deploying an interesting plugin for Freenet).

+1 for WONTFIX ... I think this is going to break far more legitimate use cases than it will fix security issues. Systems I am working on will be affected in breaking ways.
At Spotify we use XHR to localhost [1] to provide nice integration between web properties and the local client (if installed).
We can see some security benefit here, but it seems to us that the majority of the security benefit comes from blocking the local subnet. Including a block for localhost breaks a lot of valid use cases (including us) while providing, as far as we can see, little security benefit.

Vote for WONTFIX. This will break so many services and only leads to ugly insecure workarounds.

There should be prior consensus across browsers on how to approach this before making changes. Other browsers made similar changes "for security" and then reverted them (with further bugs): 1) not adding any security, 2) breaking existing services, 3) making any localhost<->browser solutions unreliable.

Another service that would break is Akamai Download Manager (aka NetSession)
I also vote for WONTFIX; this has the potential to break our product, which is used by millions of students.
Comment 67 by, Mar 19 2015
For the record, Opera supports this change. Opera used to have this protection, but unfortunately had to let it go when we switched from Presto to Chromium. We want to make our users safe again, and this is how it can be done. There were some teething problems, but once those were ironed out, none of the fallout people worry about in this bug materialized. See comment #6.

For those of you concerned that things might break, you can always test an existing implementation. You can download Opera Presto from here: (Windows) (Mac and Unix)

If things don't work for you in Presto, please be specific about what applications and use cases break.
The title of the bug is "Block sub-resource loads from the web to private networks and localhost", but it seems most concerns are related to use cases that are the other way around, i.e. a web application hosted on the public web establishing sub-resource connections to the private web.

Can anyone clarify?

Is the proposal to block ALL cross-origin sub-resource loads where different origins are in different subnets (public internet, private intranet, localhost)?
Is the proposal to ONLY block applications on localhost and/or private intranet from making sub-resource loads from the public internet (as the title suggests)?
Presto's CNP system put (and still puts, thanks to embedded devices still using that engine) a significant amount of undue work on the Plex Web TV team. It is extremely difficult on that platform to access a server on the local network that is explicitly trying to allow such communication. Anything Chromium does in this area absolutely must not repeat those failures.
#67: Any embedded Spotify widget; e.g. the one here

Allows the user to control the music if the Spotify client is installed.
There may be some light in the tunnel :-)
Comment 72 by, Mar 21 2015
Could someone please clarify whether this suggested change is for web pages only or if it will also affect extensions? 

I understand extensions can use the Chrome Message Passing API within the global page, but it's unclear to me if this issue will force extensions to use it or if it's still optional. 
Comment 73 by, Mar 23 2015
Additional things that might break: public websites that show information overlays or extra developer tools backed by internal endpoints when accessed from the private network (at the very least, a "click here to report a bug" function). I am actually surprised Google doesn't have anything like that. I really hope Blink gives applications some way to do this before deprecating existing solutions.

Netgear use pointing to in their getting-started documentation to help customers set up their router.
Comment 75 by Deleted ...@, Mar 23 2015
The Dutch company 'Topicus Zorg' produces health-care software that uses WebSockets to communicate with localhost in order to connect with external authentication products and health-care-specific software/hardware.

We use an OAuth-like solution to authenticate the Origin (SaaS server) with the server on localhost. 

We need to operate cross-browser and preferably with open standards. Therefore we cannot accept Chrome's proprietary plugin architecture.

What kind of alternatives does the Chromium team offer to provide this kind of functionality?
Is this still valid, or is this yet another thing Google will suddenly drop?

Labels: Restrict-AddIssueComment-EditIssue
re: comment 71: Simply stated, I disagree.
re: comment 76: This has already been answered in previous comments.

I'm adding Restrict-AddIssueComment to this for now, not because we do not wish public feedback (we do! we really do!), but because much of the feedback misinterprets the nature of this bug or its goals.

Simply stated:
This bug is a tracking bug in our efforts to protect users from hostile webpages being able to compromise their local network devices or their local machine. We're aware of many attacks targeting such devices and users, and this has been true for many years. Other browsers are also aware of this and have been making similar efforts.

No, we will not unilaterally remove support, because we're aware of legitimate use cases and needs for such communication. However, these use cases currently rely on mechanisms that are impossible to distinguish from attacks and that cannot ensure users are properly secured, and are thus not a viable long-term path.

Discussion about what a viable long-term path means will happen in the W3C. The most likely place for this discussion is where it was first considered - public-webappsec@. However, public-webappsec@ is presently not chartered for this discussion. That is OK, though, because we will not be taking action to block these loads until there is a viable, standards-track (though perhaps not yet standardized) solution that enhances security and preserves utility.

If you're interested in this topic (and I certainly understand if you are), star this bug. When there is more happening - including an appropriate place to discuss, gather use cases, and consider alternatives - we'll update this bug and direct you to it.

In the meantime, we'll continue to gather metrics and telemetry to inform our own thoughts on what the policy might look like or how it might best be implemented. We're committed to the security of our users and plan to take action in this space, but we're also cognizant of the use cases and plan to ensure viable paths.

If you currently rely on such communication, you MUST be prepared to update your products at a point in the future to whatever is developed to mitigate these issues. The current solution IS NOT viable long-term.
An example of why 'localhost' legacy is insecure:

If and when we have such a way for the app to opt in, we have a better chance of informing the user (e.g. wants to talk to this service), making it easier for the service to implement security (e.g. not botching the URI checking), better ensuring the origin is authenticated (e.g. HTTPS), and overall enabling more informed and secure decision making.

I realize it may seem a bit unfair to comment here, but I did want to share a bit more about the real-world concerns underlying this, and why we are committed to finding a long-term solution: one that is viable for valid use cases and raises the bar for security back to what users expect of their web browser, while also closing off trivial attacks that shouldn't be happening ;(
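One of the pitfalls mentioned above ("botching the URI checking") is easy to illustrate: a localhost service that validates the Origin header with a substring match can be bypassed by a lookalike domain. A minimal sketch in Python (the allowlist and function names are hypothetical, not from any real product):

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of web origins permitted to talk to this
# localhost service.
ALLOWED_ORIGINS = {"https://app.example.com"}

def origin_allowed_naive(origin: str) -> bool:
    # Broken: substring matching lets an attacker-controlled origin like
    # "https://app.example.com.evil.test" through.
    return "app.example.com" in origin

def origin_allowed(origin: str) -> bool:
    # Safer: compare the full (scheme, host, port) origin exactly
    # against the allowlist.
    parts = urlsplit(origin)
    normalized = f"{parts.scheme}://{parts.netloc}"
    return normalized in ALLOWED_ORIGINS
```

The naive check accepts "https://app.example.com.evil.test", while the exact-match check rejects it; an agreed-upon opt-in mechanism would take this kind of error-prone validation out of each service's hands.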
Comment 79 by, May 29 2015
Comment 80 by, Nov 11 2015
Comment 81 by, Nov 26 2015
Labels: Cr-Blink-Network-WebSockets
Comment 82 by, Nov 27 2015
Labels: -Cr-Blink-WebSockets
Comment 83 by, Nov 27 2015
Labels: -Cr-Blink-XHR Cr-Blink-Network-XHR
Labels: Cr-Blink-Network-FetchAPI
Re: #85 - Not sure if this is Fetch specific? If something like this went in I'd assume that it's generic for any network requests including ones triggered from things like image subresources.
Comment 88 by, Feb 29 2016
Blockedon: 590714
Issue 594825 has been merged into this issue.
Issue 603629 has been merged into this issue.
I think perhaps we should bump the priority of this. mkwst, thoughts or feelings?
mkwst@, what's the current status/plan of this issue? Predictability program has set an OKR to gain traction on the top 50 starred bugs this quarter: either by closing them, stating what milestone the fix will ship, or setting a NextAction date so that we know when to check back in.
mkwst@, ping to see what the plan is on this one. Please note this is the #18 most starred bug in Blink currently.
NextAction: 2017-04-01
It's on my roadmap for this year, but it's not going to get done this quarter. I understand that it's desired (also by me!), but it's also a ridiculous amount of work. Let's circle back in April to see where we are.
Thanks! Glad to hear it's on the roadmap. :)
Issue 704664 has been merged into this issue.
Issue 714988 has been merged into this issue.