Starred by 66 users
Status: WontFix
Owner: ----
Closed: May 2010
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 2
Type: Feature

Blocking:
issue 19177

Restricted
  • Only users with Commit permission may comment.



Match Firefox's per-host connection limit of 15
Project Member Reported by eroman@chromium.org, May 15 2009
Chrome limits the number of connections per "group" to 6.

This is particularly bad when using a proxy server, since there can be at 
most 6 open connections across all tabs. This gets pretty ugly when you are 
using sites like Gmail, since they leave connections open (long-running 
xmlhttp), and pretty soon you are running on empty and nothing loads.

Browsing through the Mozilla documentation, it appears that Firefox 3 has 
boosted the connections per host to 15 (which is more than twice our 
limit):

http://kb.mozillazine.org/Network.http.max-connections-per-server

We should at the very least consider this increase for proxy servers, and 
preferably across the board.

Note that this user experience was reported by a Gmail engineer, since they 
keep multiple Gmail tabs open and run into this problem frequently.
 
Comment 1 by eroman@chromium.org, May 15 2009
Adding some more cc's of people that have investigated this in the past.
Labels: Mstone-3.0
Status: Assigned
Labels: mstone4
Labels: -mstone-3.0 mstone-4
Labels: -mstone4
Here are the default values in Firefox (3.5):
network.http.max-connections 30
network.http.max-connections-per-server 15
network.http.max-persistent-connections-per-proxy 8
network.http.max-persistent-connections-per-server 6
network.http.pipelining false
network.http.pipelining.maxrequests 4
network.http.proxy.pipelining false

What about the default values for Chrome?

And what about pipelining and proxy pipelining in Chrome?
And per-user configuration of the values (a.k.a. Firefox's "about:config")?
What if the proxy itself limits the number of persistent connections per user?
If you increase max-persistent-connections-per-proxy to a high value, that will be
detrimental to the user in such a scenario. The best solution, in my opinion, would
be to keep a high default value but give the user the option to configure his own
values. 

See related - http://code.google.com/p/chromium/issues/detail?id=19052
Personally, I would like the ability to reduce that figure as well. This is something 
that Firefox allows in the about:config dialog, and something that's very helpful in my 
own situation (trying to run a web-based PHP script that breaks easily with higher 
values of network.http.max-persistent-connections-per-server) - on my own machines, I've 
set it down to 3 connections per server.
Comment 9 by jam@chromium.org, Sep 28 2009
 Issue 9939  has been merged into this issue.
Labels: -mstone-4 Mstone-5
Comment 11 by oritm@chromium.org, Dec 17 2009
Labels: -Area-BrowserBackend Area-Internals
Replacing labels:
   Area-BrowserBackend by Area-Internals

Status: Available
Comment 13 by Deleted ...@, Feb 11 2010
As a Chrome user, I need to change the max connections to something high. Please add an 
option in the network config in the Chrome settings dialog.
Labels: -Mstone-5 Mstone-6
I'm doubtful that we will ever have a UI to change these settings, but perhaps there is 
an alternate solution in the back of people's minds keeping this bug alive.
Just to summarize the thread in more concrete terms:

    *** Chrome allows 6 active connections per host, whereas Firefox allows 15 ***

Firefox's connection limit policy is more fine-grained, in that it applies a separate limit for persistent (keep-alive) 
connections. So whereas both browsers limit to 6 persistent connections, Firefox can go up to 15 total. This gives 
website authors more flexibility, since they can choose to send a "Connection: close" header on a long-lived 
HTTP/1.1 response to avoid it counting against the persistent-connection limit for that domain.
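
To make the header trick concrete, here is a minimal Node/TypeScript sketch of a long-lived endpoint that opts out of keep-alive accounting this way; the route and timings are invented for illustration:

  import * as http from "http";

  // Hypothetical long-poll endpoint.  Sending "Connection: close" tells the
  // client this socket will not be reused, so in Firefox it counts against the
  // overall per-host budget (15) rather than the persistent (keep-alive) one (6).
  const server = http.createServer((req, res) => {
    if (req.url === "/notifications") {
      res.writeHead(200, {
        "Content-Type": "text/plain",
        "Connection": "close",          // opt this response out of keep-alive
      });
      // Hold the response open until there is something to push.
      setTimeout(() => res.end("ping\n"), 30_000);
    } else {
      res.writeHead(404);
      res.end();
    }
  });

  server.listen(8080);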
 Issue 38088  has been merged into this issue.
Summary: Match Firefox's per-host connection limit of 15 (was: NULL)
Labels: -Pri-2 -Mstone-6 Pri-1 Mstone-5
This is a 5.0 blocking issue (confirmed with Darin)
Labels: -Type-Bug Type-Feature
Is this causing some specific, important problem?  Is it the issue reported by the gmail engineer?  Supporting the 
functionality eroman describes in #15 is non-trivial.  Doing a simple bump of 6 to 15 is trivial.  Are we proposing 
finishing eroman's described feature by Mstone-5?
@willchan: I believe it was prioritized due to http://b/issue?id=2509291
In the network team meeting it was suggested that we should just remove arbitrary 
limits; however, I think we should have a per-host connection limit for two reasons: 
to avoid overwhelming a host with too many requests (also the issue raised in comment 7), 
and, as Will pointed out, because bandwidth can be an issue if the number of concurrent 
connections is too high.  The ideal number of concurrent connections is probably a 
function of the server's load and of the user's bandwidth.

Raising the limit to fifteen from six doesn't seem to actually solve the issue; it 
would just push the problem out further.  But if a server sends a Connection: Close 
header, it is saying that it understands the resource issues and we shouldn't worry 
about overburdening the server on that connection.

Aside from doing experiments, I think a reasonable solution would be to have 
Connection: Close'd connections not count toward the per host limit (and allow an 
unlimited number of them).

Ideally, we'd monitor the user's bandwidth and the perceived health of the server and 
modulate the per host connection limits based on that, but I'm not sure the 
effort/reward ratio is high enough.

First, it's useful to note that the real solution for the specific web app seeing this problem 
is to switch to using Web Sockets.

Second, it's useful to note that raising the limit from six to 15 doesn't actually solve the 
problem.  It just pushes the problem out further.  But the qualitative difference between 6 
and 15 is still significant for certain web apps.

I'm a bit concerned about disabling limits for Connection: Close.  If any HTTP 1.1 proxies are 
currently sending Connection: Close to clients, then this might result in overloading proxies.  
On the other hand, 15 for non-persistent connections is pretty arbitrary.  It seems broken for 
web apps to abuse this different connection limit to get more connections, since they're 
probably just reopening the connection after it closes.  It seems better to me for them to 
implement hostname sharding, because then they get persistent connections too.  It's wasteful 
for them to keep opening and closing connections.  If they really understand their resource 
issues, then go ahead and use another hostname.  Really though, they should switch to Web 
Sockets.

I think we shouldn't have a separate connection limit for Connection: Close.  We should run 
experiments to see the performance impact of different connection limits.  I'm not sure if this 
needs to block Mstone-5 since I think the web app can work around it with domain sharding.
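
For concreteness, hostname sharding just means spreading requests across several DNS aliases of the same backend so that each alias gets its own per-host budget. A minimal client-side sketch, with the shard hostnames made up:

  // Hypothetical shard hostnames, all resolving to the same backend.
  const SHARDS = ["s1.example.com", "s2.example.com", "s3.example.com"];

  // Deterministically map a resource path to one shard so the same URL is
  // always fetched from the same hostname (keeps caches and connections warm).
  function shardFor(path: string): string {
    let hash = 0;
    for (const ch of path) {
      hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
    }
    return SHARDS[hash % SHARDS.length];
  }

  // With 3 shards and a 6-connections-per-host limit, a page can have up to
  // 18 connections in flight instead of 6 -- and they all stay keep-alive.
  function fetchSharded(path: string): Promise<Response> {
    return fetch(`https://${shardFor(path)}${path}`);
  }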
I agree with Will.

I find the differentiation of persistent and non-persistent connection limits 
strange.  Where in the HTTP specification does it say these types of connections 
should be treated differently?

I realize browsers aren't obeying the HTTP/1.1 spec regarding connection limits 
anyway, but here is what the spec does say:

"Clients that use persistent connections SHOULD limit the number of simultaneous 
connections that they maintain to a given server. A single-user client SHOULD NOT 
maintain more than 2 connections with any server or proxy."

There is nothing in the HTTP spec which says that if a server uses the Connection: 
close header, it is okay for the client to barrage it with tons of connections.

Comment 24 by darin@chromium.org, Mar 17 2010
I agree that we should not allow an unlimited number of connections to an origin server or proxy server.

If you consider back when HTTP/1.0 without keep-alives was the norm, then more parallel connections were needed to get decent 
throughput.  The advent of keep-alive meant that it was actually advantageous to reduce the number of parallel connections to exploit 
more reuse of the already opened connections.  I don't know what the stats are these days on servers that use keep-alives and those that 
do not, but about a decade ago it was fairly mixed.  So, it made sense to have two tiers.  Back then, the limits for Netscape (and later 
Firefox) were 8 total connections per server with a max of 2 keep-alive.  IE had similar limits in my testing.

Today, I think it is still the case that if confronted with an HTTP/1.0 server that does not support keep-alives, we would want to open 
more connections in parallel.  We also still want to reuse existing connections aggressively, but we are less concerned with reducing the 
number of parallel keep-alive connections.  Everyone has bumped the limits on keep-alive connections up considerably (2 -> 6).

There's not much difference between 6 connections per server and 8 total connections per server.  This is why I originally coded our 
connection pool to not implement a distinction between keep-alive and non-keep-alive connections.  But, Firefox bumped their total limit 
of connections per server way up, and now there is a big difference between 6 and 15 :-(

I'm somewhat supportive of implementing two tiers (like Firefox) because there are non-keep-alive servers out there, and it is a useful way 
for sophisticated servers to implement hanging GET w/o losing a keep-alive connection.  That said, it would be more useful if combined 
with a shared worker so that an origin server can restrict the number of hanging GETs it issues.  Just use a single hanging GET from a 
shared worker!
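
To make the shared-worker idea concrete, here is a rough sketch of one hanging GET owned by a SharedWorker and fanned out to every tab; the file name, endpoint, and message shape are invented, and it assumes the "webworker" TypeScript lib:

  // shared-notifications.ts -- loaded from each page as
  //   new SharedWorker("shared-notifications.js")
  const scope = self as unknown as SharedWorkerGlobalScope;
  const ports: MessagePort[] = [];

  scope.onconnect = (e: MessageEvent) => {
    const port = e.ports[0];   // one port per connected tab
    ports.push(port);
    port.start();
  };

  // Exactly one hanging GET for the whole browser, re-issued after each reply.
  async function poll(): Promise<void> {
    for (;;) {
      try {
        const res = await fetch("/notifications?hanging=1");
        const update = await res.text();
        for (const port of ports) port.postMessage(update);  // fan out to tabs
      } catch {
        await new Promise((r) => setTimeout(r, 5000));        // back off on error
      }
    }
  }

  void poll();

Each tab then listens on worker.port for updates instead of opening its own hanging GET, so the origin only ever spends one connection slot on notifications.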
Darin - if a site needs faster perf, they should be using HTTP/1.1.  I don't believe 
there is a relevant segment of servers which are both using HTTP/1.0 and also wanting 
more concurrent connections such that it would warrant the complexity in our code and 
disregard for RFC recommended connection limits.  If there are any such sites, why 
can these sites not employ:
   a) use of multiple subdomains
   b) use of HTTP/1.1
?

I think we need some data on any server that needs this functionality and where the 
above two workarounds aren't better anyway.
Comment 26 by jar@chromium.org, Mar 17 2010
A coworker pointed out that some users' firewalls attempt to prevent out-bound 
internet attacks by blocking "excessive" simultaneous connection attempts.  The thing 
to google for is "syn flood to host".  

This would-be defensive measure (a.k.a., firewall/router bug??), which probably dates 
back to when syn-flood attacks were more plausible, watches for "too many" 
unacknowledged syn packets.  When the user exceeds the pending quota, it blocks the 
outbound connection attempts, with the expectation that a worm/virus on an intranet 
is attempting to attack a host.  In the extreme, the router can disconnect a user 
from the internet, and not just tear down or break a few connections :-(.

As a consequence, we should probably be VERY careful about raising this limit. For 
some users, raising the connection limit may help a lot... but for other users, it 
may (for now) induce a show-stopper bug for Chromium.
Comment 27 by darin@chromium.org, Mar 17 2010
Labels: Internals-Network
Comment 28 by darin@chromium.org, Mar 17 2010
@mbelshe: your suggestion assumes that there are active developers who would actually 
change the site or would even care about performance.  To make web browsing better for 
our users we might be encouraged to open more connections in parallel to servers that 
do not support keep-alives.
I would just like to remind everyone that some of us use proxies, and that this limit 
is (from what I understand) being enforced as a global connection limit for users of 
proxies.

I tend to keep many tabs open.  As a simple test, I just closed and reopened Chrome 
and all tabs were restored within a matter of seconds.  Then, I edited my settings to 
enable my proxy, closed Chrome, and reopened it.  It took some 3-4 minutes to restore 
all tabs.  Although I consider this slowness annoying, this is actually NOT the 
biggest issue.

The big issue is that now my browser is nearly unusable.  I open up a new tab and go 
to "www.cnn.com" and it will take 5+ minutes to load the page.

Why is this?  I believe that it is because my max connection limit of 6 is tied up by 
the various web pages I have open that are holding connections open effectively 
forever.  Occasionally one of these connections closes and my tab trying to load cnn 
gets in a few images before the other page gets ahold of that connection slot to 
reopen another long lived connection.  I'm not talking about active downloads.  I'm 
talking about connections for things like IM and mail notification where the 
connection is established and held open (mostly idle) to receive notifications.

It has now been about 15 minutes, and cnn.com is only about 50% loaded.

A quick glance at my proxy logs shows connections like these:

1268810005.312 240189 10.0.0.254 TCP_MISS/200 335 CONNECT www.google.com:443 - DIRECT/209.85.225.147 -
1268810009.713 258744 10.0.0.254 TCP_MISS/200 117103 CONNECT mail.google.com:443 - DIRECT/209.85.225.17 -
1268810041.171 315898 10.0.0.254 TCP_MISS/200 4115 CONNECT clients4.google.com:443 - DIRECT/209.85.225.102 -
1268810367.998 602684 10.0.0.254 TCP_MISS/200 278452 CONNECT mail.google.com:443 - DIRECT/209.85.225.18 -

So, analyzing the above connections to convert the first 2 columns into start and 
stop times we get:

1268809765 1268810005 10.0.0.254 TCP_MISS/200 335 CONNECT www.google.com:443
1268809751 1268810009 10.0.0.254 TCP_MISS/200 117103 CONNECT mail.google.com:443
1268809726 1268810041 10.0.0.254 TCP_MISS/200 4115 CONNECT clients4.google.com:443
1268809765 1268810367 10.0.0.254 TCP_MISS/200 278452 CONNECT mail.google.com:443

Or, even simpler, I'll convert to offsets from the lowest timestamp of 1268809726:

39-279 10.0.0.254 TCP_MISS/200 335 CONNECT www.google.com:443
25-283 10.0.0.254 TCP_MISS/200 117103 CONNECT mail.google.com:443
 0-315 10.0.0.254 TCP_MISS/200 4115 CONNECT clients4.google.com:443
39-641 10.0.0.254 TCP_MISS/200 278452 CONNECT mail.google.com:443
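
For reference, the conversion above is just end time minus elapsed time: in this squid-style access log the first field is the completion timestamp in seconds and the second field is the request duration in milliseconds. A minimal sketch of that conversion, assuming that log format:

  // Convert squid-style access-log lines into "start-end" offsets relative to
  // the earliest start, reproducing the table above.
  interface Entry { start: number; end: number; rest: string }

  function parse(line: string): Entry {
    const [endStr, elapsedMs, ...rest] = line.trim().split(/\s+/);
    const end = parseFloat(endStr);                      // completion time (s)
    const start = end - parseInt(elapsedMs, 10) / 1000;  // duration is in ms
    return { start, end, rest: rest.join(" ") };
  }

  function toOffsets(lines: string[]): string[] {
    const entries = lines.map(parse);
    const base = Math.min(...entries.map((e) => e.start));
    return entries.map(
      (e) => `${Math.round(e.start - base)}-${Math.round(e.end - base)} ${e.rest}`
    );
  }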

See my point now?  From timestamp 39-279 (4 minutes!) Google alone was holding onto 4 
of my 6 max global connections.  Throw a few other similar sites in other tabs and 
Chrome is pretty much unusable through the proxy.  I find proxies to only be usable 
in Chrome if I keep things down to only a few tabs open at a time.

Disclaimer:  I'm "just" a power user, and my assertions about how this is operating 
or the cause may be incorrect.  I do know that this is not a problem with IE or 
Firefox when using my proxy, and I do know that the problem completely goes away if I 
disable the proxy.  I can even fire up other browsers and access sites such as 
cnn.com through the proxy while Chrome continues to chug along loading 1 image every 
minute or two.

I'd certainly like to see this configurable, but in my opinion even simply bumping up 
the hard limits so that proxies have a max of 15 (and leaving normal direct hosts at 
6) would be a "good enough" fix.  Hope this helps.
@matt: You are talking about a different issue: proxy connection limits.  This is a known issue and it's sort of being worked on (although I can't find a bug thread about it, but maybe eroman knows).  Please 
don't clutter this thread with that discussion.  File a new bug if you want.

Back to the main issue: whether or not to implement different connection limits for HTTP/1.0 or HTTP/1.1 Connection: close.

Let me try to get things clear again since I feel that Darin may have started talking about a slightly different issue.  A certain Google app (see comment #20) is running into our connection per host limit.  
They are relying on Firefox's HTTP/1.1 Connection: close connections having a different limit than persistent connections.  Therefore, they are in essence able to have up to 15 connections.

This is, IMO, a hack.  They should use the more reliable hack of multiple subdomains.  Hopefully in the future they will be able to use Web Sockets or SPDY or something else.

In #24, Darin states: "I'm somewhat supportive of implementing two tiers (like Firefox) because there are non-keep-alive servers out there, and it is a useful way for sophisticated servers to implement 
hanging GET w/o losing a keep-alive connection.  That said, it would be more useful if combined with a shared worker so that an origin server can restrict the number of hanging GETs it issues.  Just use a 
single hanging GET from a shared worker!"  Thus, he notes that there are two reasons to support two tiers: (1) non-keep-alive servers exist (2) It is useful for sophisticated servers to implement hanging 
GETs.

I believe Mike (#25) and I (#22) already addressed (2).  Just use multiple hostnames.  Well, Mike also said to use HTTP/1.1.  Any "sophisticated" server should be doing that.  Darin in #28 says these 
counterarguments only address (2).  So, I think (2) has indeed been addressed and don't think we need to belabor this point (please correct me if I've missed something).

That brings us back to (1), what to do with non-sophisticated non-keep-alive servers.  Darin drives this point in #28 by saying: "to make web browsing better for our users we might be encouraged to open 
more connections in parallel to servers that do not support keep-alives."  There are a few things I'd like to note about this.  This statement does not seem to in any way justify sophisticated Google web 
applications abusing non-keep-alive connections to achieve higher connection limits, and we've already noted the alternatives.  So, we shouldn't have different limits just because of these sophisticated 
applications.  Given that, I also don't see why this needs to be Mstone-5.

So, there doesn't seem to be any immediate issue with Chrome.  But moving forward, there is a deeper question: should we have a higher limit for non-keep-alive connections?  First, I'd like to distinguish 
between HTTP/1.1 Connection: close and HTTP/1.0's default non-keep-alive.  In the first case, keep-alive is the default for HTTP/1.1, so Connection: close indicates that the server either (a) does not 
support keep-alive for whatever reason [I suspect it's due to resource constraints] or (b) is trying to abuse Firefox's different connection limits for keep-alive and non-keep-alive connections.  (b) is a hack and 
we've already covered the appropriate workarounds.  (a) probably indicates resource constraints / congestion / etc., since the server is explicitly adding this header, so having higher limits for these cases 
does not seem to make sense to me.  Therefore, for HTTP/1.1 Connection: close, I do not see a good reason to have different connection limits.  For HTTP/1.0 non-keep-alive (the default in 1.0) 
connections, I can see the argument that these servers simply suck, and we might possibly improve our end-user experience by having a higher connection-per-host limit, since these non-keep-alive 
connections will have worse performance: they'll have more TCP connection setup/teardown overhead and spend more time warming up cwnd, and thus won't necessarily be able to use the pipe's full 
bandwidth.

Summary:
I think that for HTTP/1.1 servers, there is no point to having a different connection per host limit between non-keep-alive and keep-alive connections.  If you have a sophisticated server and want more 
connections, use hostname sharding for now.
I think that for HTTP/1.0 servers using the default non-keep-alive, there _might_ be a point to having a different connection per host limit for those non-keep-alive connections.  I would advocate 
gathering more data (run A/B experiments).  This should not be Mstone-5 blocking.
Comment 31 by darin@chromium.org, Mar 17 2010
I agree that this does not need to block M5.

By the way, I have lighttpd installed on my ubuntu box, and it seems to use HTTP/1.0 w/ 
"connection: close" by default.  I wonder why.
Labels: -Pri-1 -Mstone-5 Pri-2
Comment 33 by mozo...@gmail.com, Mar 22 2010
What's the point of comparing Chrome to a single-threaded browser?! If I want 15 
connections or more, I'll just keep using Firefox. Just please make everything, and 
really everything, unlimited, and handle the concurrency! What's the point of a 
multiprocess browser that can't have more Gmails open? The whole point would be that 
if 1 works fine, then 100 works slower but fine.
Limiting the number of connections is like limiting the number of threads. Will 
a program or an Ajax application run faster then? No, because the amount of work that 
has to be done, and the amount of data that has to be transferred, stays the same.
And if the server wants fewer connections, let them write code that uses fewer. But 
let the opposite happen too!

Recently I've started using Chrome, because a web game (a heavy Ajax one) was pretty 
laggy under Firefox, and I couldn't even switch to another tab while it was loading. 
Now it runs fine under Chrome, but... one server of my 3 is always disconnected, so I 
have to keep pushing reload, and play the other one while it loads. Better, but I 
think it's still a shame. Not as big as freezing the UI with JS, though.

Thanks for building a cool browser for us!
The following revision refers to this bug:
    http://src.chromium.org/viewvc/chrome?view=rev&revision=45896 

------------------------------------------------------------------------
r45896 | willchan@chromium.org | 2010-04-28 18:04:06 -0700 (Wed, 28 Apr 2010) | 23 lines
Changed paths:
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/base/host_port_pair.h?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/http/http_network_session.cc?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/http/http_network_session.h?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/http/http_network_transaction.cc?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/http/http_network_transaction.h?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/http/http_network_transaction_unittest.cc?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/proxy/proxy_server.cc?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/proxy/proxy_server.h?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/socks_client_socket_pool.h?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/socks_client_socket_pool_unittest.cc?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/tcp_client_socket_pool.h?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/socket/tcp_client_socket_pool_unittest.cc?r1=45896&r2=45895
   M http://src.chromium.org/viewvc/chrome/trunk/src/net/spdy/spdy_network_transaction.cc?r1=45896&r2=45895

Implement a 15 connection per proxy server limit.

Previously, we had a limit of 6 connections per proxy server, which was horrifically low.
Connections to all endpoint hosts remains at 6.
Basically, we have a map of socket pools, keyed by the proxy server.
Each of these proxy socket pools has a socket limit of 15 and a connection limit per host of 6.
The main TCP socket pool (non-proxied) has the standard 256 socket limit, and connection limit per host of 6.
There are two maps currently, one for HTTP proxies and one for SOCKS proxies.
Note that SSL tunnels over HTTP CONNECTs are still located within the standard http proxy socket pools.
We depend on the fact that our code never returns a non-SSL connected socket to the pool for a |connection_group|
that is to a HTTPS endpoint.  A newly connected socket from the pool will only have the TCP connection done.  An
idle socket will have both the TCP and the HTTP CONNECT and the SSL handshake done for it.

TODO(willchan): Remove the extra constructor overload for the old deprecated parameters.  Switch to using HostPortPair everywhere.
TODO(willchan): Finish SSL socket pools.
TODO(willchan): Rip out the DoInitConnection() code so it can be shared by other caller (TCP pre-warming!).
TODO(willchan): Flush out the socket pool maps when the proxy configuration changes.
TODO(willchan): Fix up global socket limits.  They're slightly broken in this case, since each pool instance has its own "global" limit, so obviously different pools don't share the same "global" limit.  This is such a minor deal since the global limits are so small for proxy servers compared to the system global limits (256 vs 15), that it doesn't have a big effect.
TODO(willchan): Drink moar b33r.

BUG= 12066 

Review URL: http://codereview.chromium.org/1808001
------------------------------------------------------------------------
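
For readers skimming the change above: the core idea is a map of socket pools keyed by proxy server, where each proxy pool gets its own overall cap of 15 while the per-host cap stays at 6, and direct (non-proxied) traffic keeps the 256/6 limits. A rough TypeScript sketch of that shape; the names are illustrative, not the actual Chromium C++ classes:

  // Illustrative only -- the real implementation lives in C++ under src/net.
  class ClientSocketPool {
    constructor(
      readonly maxSockets: number,         // pool-wide cap
      readonly maxSocketsPerGroup: number  // per-host ("group") cap
    ) {}
  }

  class PoolManager {
    // Non-proxied traffic: one big pool, 256 sockets total, 6 per host.
    private readonly directPool = new ClientSocketPool(256, 6);
    // One pool per proxy server, created lazily: 15 sockets total, 6 per host.
    private readonly proxyPools = new Map<string, ClientSocketPool>();

    poolFor(proxyServer: string | null): ClientSocketPool {
      if (proxyServer === null) return this.directPool;
      let pool = this.proxyPools.get(proxyServer);
      if (!pool) {
        pool = new ClientSocketPool(15, 6);
        this.proxyPools.set(proxyServer, pool);
      }
      return pool;
    }
  }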

Status: WontFix
I'm going to close this thread as WontFix, since all the comments have been somewhat 
vague, and will summarize the tentative conclusions:
* Chrome's connection-per-host limit will stay at 6, pending more A/B tests (issue 44491)
* Servers which need more should use domain sharding for now, but should switch to 
websockets in the future.

Please reopen if this is inaccurate.
Someone please do something here.  Open Chrome, go to gmail, nothing else will load at all.  Even closing the tab doesn't help.  I need to restart Chrome every time I use gmail.  6.0.472.55 beta.  Same with release.

 TCP    WCN8S4K1:18090         iad04s01-in-f189.1e100.net:https  SYN_SENT
 TCP    WCN8S4K1:18091         iad04s01-in-f83.1e100.net:https  SYN_SENT
 TCP    WCN8S4K1:18092         iad04s01-in-f83.1e100.net:https  SYN_SENT
 TCP    WCN8S4K1:18093         iad04s01-in-f189.1e100.net:https  SYN_SENT
 TCP    WCN8S4K1:18094         iad04s01-in-f83.1e100.net:https  SYN_SENT
 TCP    WCN8S4K1:18095         iad04s01-in-f83.1e100.net:https  SYN_SENT
 TCP    WCN8S4K1:18096         iad04s01-in-f83.1e100.net:https  SYN_SENT
The connection per host limits should have nothing to do with other tabs that don't connect to the same host.  I suggest you file a new bug.  Please do not clutter this thread with comments irrelevant to the topic.  When you file the new bug, please include the output from about:net-internals (follow the instructions there to generate a dump to include in your bug report).
It seems there is no way to control the number of outgoing connections. As people have mentioned, it is perfectly possible to have an ISP or a corporate firewall which limits the number of outgoing connections per PC. The result is that 80% of the time, Chrome responds with "Access Denied" and also causes problems for all other network-based software on that PC.

Set whatever default you want, but please allow the speed-freaks and those limited to a small number of connections by 3rd parties to override it up *or* down. Saying that an ISP or corporate network manager *shouldn't* limit things doesn't really help.

The direct parallels, as mentioned above, are the Firefox settings "network.http.max-connections" and "network.http.max-connections-per-server".
Comment 39 by ithk...@gmail.com, Jun 23 2011
We are building an intranet application that uses multiple simultaneous tabs inside of iframes with comet communication with the server.  Because of this issue and issues with printing table headers etc., I may have to recommend that we move away from the Chrome/WebKit platform and recommend that all of our users install Firefox.
Labels: Restrict-AddIssueComment-Commit
I'm locking down comments since it does not appear that anyone is disagreeing with our existing defaults; rather, people want configurability.

To continue the conversation elsewhere, please refer to:
 Issue 87381  - Allow configuring of max connections and max connections per proxy
 Issue 85323  - Configurable connections-per-host (I have WontFix'd this one, but it's open for discussion still)
 Issue 63658  - Policy: Add policy to manage max number of concurrent connections per proxy server
Project Member Comment 41 by bugdroid1@chromium.org, Mar 10 2013
Blocking: -chromium:19177 chromium:19177
Labels: -Area-Internals -Internals-Network Cr-Internals Cr-Internals-Network