
Issue 722108 link

Starred by 1 user

Issue metadata

Status: Assigned
Owner:
Buried. Ping if important.
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Feature




Consider adding SMB to port blacklist

Reported by zack.wei...@gmail.com, May 14 2017

Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:53.0) Gecko/20100101 Firefox/53.0

Steps to reproduce the problem:
Please consider adding the ports used for Microsoft's SMB and NetBIOS protocols (137, 445, possibly others; 139 is already blacklisted) to the blacklist so that JavaScript cannot initiate connections to them. This is prompted by the MS17-010 exploit, but there's quite a long history of remotely exploitable SMB bugs. This will, at least, make it harder to scan for exploitable systems from a malicious website.

What is the expected behavior?

What went wrong?
see above

Did this work before? No 

Chrome version: 58.0.3029.110 (Official Build) (64-bit)  Channel: stable
OS Version: OS X 10.11
Flash Version: 

Originally reported as a Fetch spec issue at https://github.com/whatwg/fetch/issues/544
 
Components: Internals>Network
Components: Blink>Network>FetchAPI
Generally, we try to update kRestrictedPorts to match spec updates or, failing that, an Intent-to-Remove. 

Comment 3 by aarya@google.com, May 15 2017

Labels: -Type-Bug-Security -OS-Mac Type-Feature
Owner: mkwst@chromium.org
Status: Assigned (was: Unconfirmed)
This feels more like a security feature than a vulnerability, so I'm taking it off the security queue. Mike, thoughts?

Comment 4 by mkwst@chromium.org, May 15 2017

Cc: rkaplow@chromium.org isherman@chromium.org
I do think it's worth adding some metrics here to see how flexible we can be with regard to potentially dangerous ports.

So, isherman@, rkaplow@, etc: How sad would the metrics team be if I added a histogram with 65k buckets to measure the prevalence of each port's use when loading files? *cough* Two of them, really, to distinguish top-level navigation from subresource loading? :) Can I assume that there might be a better way to get this kind of data?
We have sparse histograms. For a given client, how many different values will they be usually emitting (within a 30 minute period, say)? I assume very few - a single client will only be using a few ports?

And also - across the entire client base - will all ports end up being used, or will it mostly be the same subset? Is this platform dependent?

Comment 6 by mkwst@chromium.org, May 15 2017

Components: -Blink>Network>FetchAPI -Internals>Network Blink>Loader Internals>Network>HTTP Blink>SecurityFeature
Labels: -Restrict-View-SecurityTeam OS-All
(Actually opening up the bug)

rkaplow@: Thanks! For all clients, I'd expect the vast, vast majority of values to be either 80 or 443.

Particular clients would likely use a particular set of obscure ports for their own internal applications, etc, but I imagine that each individual user would have a small number of ports recorded.

My intuition is that across the entire client base, we'd see clusters around particular ports (like, Chromium developers might be likely to run Blink's layout tests, which use the ports 8000, 8001, 8080, 8081, 8443, and 8444), but I'd be pretty surprised if we saw every port used.

What are the tradeoffs for sparse histograms that I need to be aware of? It sounds like they might be a reasonable fit, but the overhead worries me a bit from a performance perspective. Naively, we'd be poking this counter once per request, and requests happen a lot.
Sparse histograms are backed by a map, and use locks to guarantee thread-safety.  So, the performance concerns would be primarily around those locks -- if you're not worried about the overhead from locking, then it should be ok.  (I'm actually not sure how much overhead there is from locking...)  You could probably implement some local optimizations to work around this if it's too much overhead -- say, by saving off requests into some temporary data structure, and then periodically flushing that data structure to the histogram.  But, naïvely, I'd hope the overhead is low enough to where you shouldn't need to pull such machinations =)

From the metrics team's perspective, we tend to have two main concerns:
  (1) On a per-client basis, are we going to burn through too much memory to back the histogram?
  (2) Server-side, across all clients, will the histogram be too expensive to work with?

It sounds like we're probably fine for (1), as most clients will encounter only a handful of ports.  We might be okay for (2) as well, though it's hard to be sure ahead of time -- maybe there are clients that run some sort of test setup where they repeatedly connect to random ports.  Then we'd see the full spectrum of ports server-side every day, which would be too much.  You could still report the histogram; we just wouldn't process it in our pipelines, and you'd need to run manual Dremel queries instead.
Is WebSocket also affected?

Comment 9 by mkwst@chromium.org, May 16 2017

> maybe there are clients that run some sort of test setup where they repeatedly connect to random ports.  Then we'd see the full spectrum of ports server-side every day, which would be too much

Thinking about this again: we will absolutely see every port used. Maybe every day, albeit in low volumes. Because folks use WebSockets to do port scanning, which is the reason we want to lock down access to specific ports in the first place. :)

Why don't I send you a patch, and we can discuss the implications on the pipelines, etc there in more detail.

> Is WebSocket also affected?

If we add a port to the bad ports list, yes.

Comment 10 by ricea@chromium.org, May 23 2017

Components: Blink>Network>WebSockets
I am in favour of blocking the ports.

There will be connections to these ports, and some people will be broken if we block them.

What would the connections-per-user ratio have to be to change our mind about whether or not to block?

I'm adding WebSocket to the components because it's relevant and so that I don't need to triage this issue again. This is not an offer to implement it :-)
Labels: Hotlist-EnamelAndFriendsFixIt
Labels: -Hotlist-EnamelAndFriendsFixIt
Components: Internals>Network
Components: -Internals>Network>HTTP
Given that metrics for ports don't seem workable, is it worth just trying out blocking 137, 138, and 445 and monitoring the fallout as appropriate? Seems slightly preferable to inaction to me.
Cc: annevank...@gmail.com
Cc: zentaro@chromium.org
We use 137 for NetBIOS discovery of network-attached storage devices on ChromeOS. Would this change have any impact on our ability to use NetBIOS?
I suspect that none of the ports in https://fetch.spec.whatwg.org/#port-blocking are blocked for privileged APIs. At least from the standard's perspective, it only matters what happens with web content.
Cc: baileyberro@chromium.org
