
Issue 103875

Starred by 22 users

Issue metadata

Status: Untriaged
Owner: ----
EstimatedDays: ----
NextAction: ----
OS: Mac
Pri: 2
Type: Bug


Need to test that caching is disabled with certificate errors.

Reported by, Nov 11 2011

Issue description

Chrome Version       : 15.0.874.106
OS Version: OS X 10.7.2
URLs (if applicable) :
Other browsers tested:
  Add OK or FAIL after other browsers where you have tested this issue:
     Safari 5:
  Firefox 4.x:
     IE 7/8/9:

What steps will reproduce the problem?
1. Start up a web server with a self-signed certificate and enable request logging
2. Load a page or image from said web-server
3. Refresh the page

What is the expected result?
Given proper response header configuration, the second page-load should not cause content to be re-downloaded.

What happens instead?
With self-signed SSL certificates, Chrome completely ignores all caching directives, and always reloads all content.

UserAgentString: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.106 Safari/535.2

Other users have noted this issue some time ago as captured in this (closed-for-comments-but-unanswered) thread:


Comment 1 by, Nov 11 2011

I'm seeing this bug too.  It appears that some change in Chrome 15 changed how this caching works, because the same headers cached fine in previous versions of Chrome.

Here are the response headers of the response that doesn't seem to be cached (subsequent requests don't send the If-Modified-Since header)...

HTTP/1.1 200 OK
Content-Type: application/x-shockwave-flash
Content-Length: 473584
Cache-Control: must-revalidate, public, max-age=0
Last-Modified: Wed, 09 Nov 2011 02:32:53 GMT
Server: Jetty(6.1.x)
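Given those headers (`must-revalidate, public, max-age=0` plus `Last-Modified`), the expected behaviour is that the cache stores the entry and revalidates it with a conditional request rather than re-downloading the body. A minimal Python sketch of that decision (hypothetical helper name, not Chromium code):

```python
# Hypothetical sketch: which conditional headers a cache would send when
# refreshing an entry stored with the response headers quoted above.
def revalidation_request_headers(stored_headers):
    """Build the conditional headers for revalidating a cached entry."""
    conditional = {}
    last_modified = stored_headers.get("Last-Modified")
    if last_modified:
        # Revalidate instead of re-downloading the full body.
        conditional["If-Modified-Since"] = last_modified
    etag = stored_headers.get("ETag")
    if etag:
        conditional["If-None-Match"] = etag
    return conditional

stored = {
    "Cache-Control": "must-revalidate, public, max-age=0",
    "Last-Modified": "Wed, 09 Nov 2011 02:32:53 GMT",
}
print(revalidation_request_headers(stored))
# {'If-Modified-Since': 'Wed, 09 Nov 2011 02:32:53 GMT'}
```

The reporter's observation is that this `If-Modified-Since` request never happens: the entry is not written to the cache at all.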

Labels: -Area-Undefined Area-Internals Internals-Network-Cache Internals-Network-SSL

Comment 3 by, Nov 14 2011

Thank you for the bug report.  Chrome does not cache a resource
if it has an SSL certificate error.  A self-signed SSL certificate
should cause a "certificate is not trusted" error, so it is
intentional for Chrome to not cache resources that use a self-signed certificate.

Comment 4 by, Nov 14 2011

Is this a new feature?  Is there a way to get it to cache anyway?

Isn't the point of a self-signed certificate to allow the user to be the one who decides whether or not it is trusted?  Why not cache it?  If the user went there before and accepted the certificate the first time, why not cache it the second time around?  Does not caching it solve some other problem?

Unfortunately, that answer is insufficient for companies that run private internal servers.  Many companies (including ours) ship application servers that customers choose to run over SSL.  VMWare is an example of one such company.  Because these servers are internal, and numerous, it is unrealistic for customers to pay for CA signed certificates.  On top of that, it is actually IMPOSSIBLE to get signed certificates for private IP infrastructure hosts (192.168, 172.16, 10.*).

If a company installs self-signed SSL certificates on their servers, and the end-users (also internal customers) click the "Proceed anyway" button, they are accepting the certificate and there is no reason not to cache the response.  This is the way every other browser works.  You did your duty, warned the user, they accepted the "risk" and proceeded with the connection.  There is no reason not to cache responses.

In our case, we have a 1.6MB Flash-based application (12MB+ if the customer is Japanese and chooses to use embedded fonts).  In previous releases of Chromium, and every other browser we've tested, once the user accepts the self-signed certificate, the application is cached.  Subsequent loads take less than one second.  Starting with Chromium 15, every time the user accesses the application -- because Chrome is no longer caching -- they wait anywhere from 10 to 30 seconds depending on their connection speed.

Comment 6 by, Nov 14 2011

Status: Available
Summary: Need to test that caching is disabled with certificate errors.
It's concerning that we may have been caching resources from sites with certificate errors up until M15, and we didn't notice. We should add a test for this.
Funny.  I'd still like an explanation of why self-signed SSL content, once accepted by the user, shouldn't be cached, and why every other browser has "got it wrong".

Chrome is basically penalizing every internally run SSL server.

Let me be clear, we like Chrome.  As developers it became our browser of choice.  We'd also like to support it for our customers.  But as developers, 30-second load times have killed our round-trip time, sending us back to Firefox.  As a commercial software provider, we won't tolerate customer complaints about load times, so we will be forced to drop support for Chrome.

Comment 8 by, Nov 15 2011

I apologise if my #6 appeared flippant; it wasn't supposed to be. It's just the result of hammering through bug triage in the morning.

We don't want to cache resources with certificate errors because it violates reasonable expectations of what bypassing a certificate error means. We know that our users are subject to attacks where sites are MITMed with a bogus certificate and we know, the vast majority of the time, users will click through a certificate warning.

We can hope that the big red interstitial might cause them to pause before, say, logging in or entering any personal information on what would otherwise be a trusted site. In fact, it would be understandable if users bypassed certificate warnings on their bank's site if they were only using the ATM locator or such.

However, when combined with caching, bypassing the certificate warning compromises every connection to that site until the disk cache is flushed.  The user might believe that they were savvy and didn't login because of the warning, but the MITM site may have caused malicious Javascript to be cached which will compromise them forever more.

I understand that getting certificates for internal sites can be a pain. We don't support trust-on-first-use for HTTPS because we don't believe that asking the user to evaluate certificate trust questions is reasonable (we are forced to allow interstitial bypass as a matter of legacy). However, installing a custom root CA (preferably with name constraints) certainly works when you control the clients. As an experiment we also support DNSSEC-signed certificates, although that's rather esoteric.
We already have a unit test to cover this case: HttpCache.SimpleGET_SSLError (it uses net::CERT_STATUS_REVOKED). This hasn't changed in years.

The relevant code in HttpCache::Transaction::WriteResponseInfoToEntry():
  if ((cache_->mode() != RECORD &&
       response_.headers->HasHeaderValue("cache-control", "no-store")) ||
      net::IsCertStatusError(response_.ssl_info.cert_status)) {
    return OK;
  }
Comment 10 by, Nov 29 2011

Does all this mean that y'all think the fact that it stopped caching for us is unrelated to the certificate?
I don't understand how Google's developers reached this conclusion.

1. A user gets to a website (or web application) that does not have a valid certificate and has been warned about it.
2. The user accepts that the site with the invalid certificate is what they actually want to access, for whatever reason (there are many).
3. Now the browser decides that their user experience should be crippled by not caching static resources (not even in memory).

Is this a punishment for the users for deciding to use sites with (for example) self-signed certificates?

Are you saying that you need to buy a certificate for every internal hostname that provides a secure website in a corporation (for example)?

I would like to know the rationale for this browser behaviour.

My company provides such applications, which are normally deployed internally. We now recommend that customers not use Google Chrome v15 and higher for our application, because the difference in performance is significant.
Sorry, I just read comment 8 in detail. It explains the problem of caching potentially dangerous scripts or resources from authentic malicious sites.

This should not prevent Chrome from using at least a memory-only cache that lives while the browser is running and until the "session" expires.

Currently, JavaScript libraries like jQuery, YUI, and ExtJS just keep growing in size, and web applications use more and more resources (web fonts, etc). Loading all these libraries each time you navigate to a new page is quite onerous, mainly on slow connections through a VPN.

Comment 13 by, Jan 10 2012

For internal applications it'd be way more sensible to create your own CA certificate, deploy it to all internal computers and sign certificates for internal applications using it. It isn't all that hard, doesn't cost anything, avoids browser warnings (+issues like this one) and is also way more secure since you won't be teaching your users to ignore certificate errors.
There are two problems with your suggestions for deployers of self-signed certificates when used with internal servers.

1. SSL certificates are tied to hostnames, not IP addresses, and require the host server to be in DNS.  I can't speak for other users, but I know that customers that deploy our application server often (maybe mostly) access it by IP address, because they either do not use internal DNS, or cannot or will not jump through the hoops in their organization's bureaucracy to put our servers into their DNS.

2. Because the number of end-users is potentially large (and not necessarily known a priori), "deploy it to all internal computers" when there are possibly hundreds of end points is not feasible.

I'd like to propose a compromise.  Can Chrome enable caching if two conditions are met:

1. The user accepts the warning and proceeds anyway, and
2. The certificate in question is issued for a private infrastructure IP (10.*, 172.16.*, 192.168.*)

This guarantees that with respect to the internet and obvious concerns about banks, phishing attacks, etc. the behavior remains the same.  But it allows caching with self-signed certificates where they are used most, in private infrastructure environments -- and the user still has to explicitly go through the gate of the warning interstitial.
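The private-range test in point 2 is straightforward to express. A sketch using Python's `ipaddress` module and the RFC 1918 ranges (illustrative only; Chromium did not adopt this compromise):

```python
import ipaddress

# The three RFC 1918 private address blocks mentioned in the proposal.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private_infrastructure(host_ip):
    """True if the server address falls in private infrastructure space."""
    addr = ipaddress.ip_address(host_ip)
    return any(addr in net for net in PRIVATE_NETS)

print(is_private_infrastructure("192.168.1.10"))  # True
print(is_private_infrastructure("8.8.8.8"))       # False
```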

Point 1 is not correct. Certificates can be associated with IP addresses. Chrome supports the IP appearing in the CN field (same as the hostname), as well as in the iPAddress nametype of the subjectAltName extension. So you can still have a private CA issuing certificates for IPs rather than hostnames.

That said, it seems very uncommon to see organizations require new servers be deployed and accessed only via IP. While I can understand that for some products this may make sense, I would think that for most customers the DNS ops (who provision names), infra (who deploy the boxes), and net ops (who create the CAs and manage the devices) all tend to be the same group.

2) For enterprise customers, this is very common and easily facilitated by existing (built-in) systems management tools. Windows supports it out of the box via domain policies, and OS X has supported exporting/importing SSL trust policies since 10.6 (although OS X is nowhere near the corporate deployment share). Additionally, organizations may also use the MIME type application/x-x509-ca-cert to push out CA root certificates to users, particularly mobile, which offers a significantly better user experience (trust once to trust always).

Speaking as an individual, I'm not a fan of the compromise. As others have mentioned, this weakens some of the security posture of the browser, and I don't think that's a path we want to go down. Given the near zero cost of entry for certificates (either free via StartSSL and the like, or free via an enterprise CA), supporting self-signed certs is a trust model to be discouraged, not facilitated.
This is a terrible "feature" (a bug from my POV) which makes Chrome unusable for my products. There are a dozen scenarios where self-signed certificates are commonly used.
The relation (crippling, as was already mentioned in comment 11) between certificates and caching is probably a kind of joke.

BTW, Chrome is the only browser with this behaviour, so it probably isn't such a big security problem if the other 90% of browsers behave differently with their caches, don't you think?
While I agree that for the common user a MITM is a possible issue, perhaps a feature such as "enable cache regardless of SSL certificate validation" could be exposed in the developer settings, so that web developers developing their site with a self-signed certificate would not waste 3 hours or more finding this article only to conclude that their HTTP_IF_NONE_MATCH code is working fine.

Chrome should be built for both end-users and developers!
Can't you somehow associate the cache to the certificate? When the user goes back to their "real" non-MITMed bank site, it won't try to serve the cached content associated with the bogus cert.

That way, under HTTPS, no one can pollute someone else's cache (which seems to be the purpose of this "feature")
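The idea here — scoping cache entries to the serving certificate — could be sketched as follows (hypothetical `CertScopedCache`; comment 24 below explains why Chromium considered this impractical, since the certificate is only known after the TLS handshake):

```python
import hashlib

# Sketch of comment 18's idea: key cache entries by (URL, certificate),
# so content cached under a bogus MITM cert never shadows the real site.
class CertScopedCache:
    def __init__(self):
        self._entries = {}

    @staticmethod
    def _key(url, cert_der):
        # Fingerprint the certificate so each cert gets its own namespace.
        return (url, hashlib.sha256(cert_der).hexdigest())

    def put(self, url, cert_der, body):
        self._entries[self._key(url, cert_der)] = body

    def get(self, url, cert_der):
        return self._entries.get(self._key(url, cert_der))

cache = CertScopedCache()
cache.put("https://bank.example/app.js", b"real-cert-der", b"legit script")
cache.put("https://bank.example/app.js", b"mitm-cert-der", b"evil script")
# The real site's entry is untouched by the MITM's cached copy.
print(cache.get("https://bank.example/app.js", b"real-cert-der"))
# b'legit script'
```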
Our application's performance is greatly impacted by this issue. Is there an API we can call to tell if the certificate is not valid? That way we could explicitly warn users about the performance implications of not running with a valid certificate.
mariakaval: No, there is no programmatic exposure of certificate trust information. If you suspect clients will reject the certificate, you may find it easier to install a CA that chains to a well known root. Some CAs, such as StartSSL, offer such certificates for free, and thus may be a viable alternative.
How feasible is it to associate the cache with the cert as mentioned in comment #18? It sounds like it may be a solution to get rid of the side effects caused by the current implementation while still serving the purpose of protecting real content from being polluted by bogus/dangerous content.
philippe.laflamme: This bug is really meant to address writing additional tests, not so much requests for a change in functionality.

As mentioned in comment #8, this behaviour is working as intended, so such changes are unlikely. Adam mentioned one possibility (DNSSEC) in comment #8, and startssl represents another possibility. Alternatively, you can install the signing certificate (which may be self-signed) locally via the OS-appropriate mechanism, as a root CA, so that no warnings are generated for the cert.

Comment 23 by, Jun 4 2012

The original summary of this bug was essentially "pages aren't cached". The bug as submitted had nothing to do with tests.  So I would argue this is absolutely the correct place to ask for improvement to the functionality.  Just because a behavior was intentional doesn't mean it can't be improved upon.

Is there a downside to the suggestion from comment #18?

Comment 24 by, Jun 4 2012

It would be nice to have a separate cache for MITM vs "normal" HTTPS. However, it makes a complete mess of the network stack and would slow everything down.

Currently the HTTP cache is layered over the network stack: a cache hit can be satisfied immediately. However, with a split cache we would have to establish an SSL connection in order to decide which "part" of the cache should be used. Therefore a cache hit might result in a full DNS + TCP + SSL delay while we figure out whether it's an acceptable result.

For a corner case like this I'm afraid that the additional complexity and latency means that it's not viable to split the cache.
I don't think that it does, or at least, that depends on how it's implemented. 

If the assumption is that a self-signed certificate is less trustworthy than a signed certificate, then you can layer your caches the same way: trust some caches more than others. Any file present in a "signed cache" would be served in priority over a "self-signed cache". This way, bogus certs won't be able to overwrite real certs' cached content.

If a site has a single cert (signed or not), then caching works as expected, all the time.
If a site has multiple certs, caching still works as expected, and self-signed content can't be served "in favour" of content from a signed-cert cache. So you're still "protecting" your users from cache pollution.

It may slow things down a bit since there's some decision process, but a cache hit can still be served immediately without involving the network stack.
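A toy version of that two-tier lookup (hypothetical class; it deliberately ignores the content-rewriting objection raised in comment 27):

```python
# Sketch of the two-tier idea: consult a "signed" cache before a
# "self-signed" cache, so content stored after an interstitial bypass
# can never shadow content cached over a properly validated connection.
class TwoTierCache:
    def __init__(self):
        self.signed = {}       # entries stored over validated HTTPS
        self.self_signed = {}  # entries stored after a cert-warning bypass

    def put(self, url, body, cert_ok):
        tier = self.signed if cert_ok else self.self_signed
        tier[url] = body

    def get(self, url):
        # Signed-tier entries always win over self-signed ones.
        if url in self.signed:
            return self.signed[url]
        return self.self_signed.get(url)

c = TwoTierCache()
c.put("https://intranet/app.swf", b"bypass copy", cert_ok=False)
c.put("https://intranet/app.swf", b"signed copy", cert_ok=True)
print(c.get("https://intranet/app.swf"))  # b'signed copy'
```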
+1 for the combination of #18, #23, #25 ... specifically for the idea posed by Philippe Laflamme of a two-tier cache.  The "ordinary" cache (i.e. non-HTTPS and signed-cert HTTPS), and a second tier for self-signed but otherwise valid HTTPS content.

Comment 27 by, Jun 4 2012

#25: you're assuming that we can simply serve signed content from the cache without checking, but I don't believe that's the case in general for MITM boxes which might be altering content on the fly. (The original motivation for splitting the cache was to give users an indication when they were behind a MITM proxy and to enforce a clean separation between the true site and the MITM version.)

In any case, the complexity of doing it was too much for that case too.
Does this same caching behavior exist in Safari?  I believe I'm seeing this same behavior in Safari 5.x (desktop) as well as Safari on iPad, but am not completely sure.
Thanks, Maria
kenorb: I'm sorry, those are not related issues. It sounds like you meant to file a new bug.
Project Member

Comment 32 by, Mar 10 2013

Labels: -Area-Internals -Internals-Network-Cache -Internals-Network-SSL Cr-Internals-Network-SSL Cr-Internals Cr-Internals-Network-Cache
You can enable caching on sites with certificate errors with a tiny patch, see here:
Sure, but Chromium explicitly does not want to do that, and we'd strongly discourage Chromium-based applications from doing it.

Comment 35 by, Jun 18 2015

I'm not sure why this bug is still active: It works as intended. We filed bugs in IE to make it behave the same way, although I'm not sure of the status of those issues.

For the enterprise/developer, the right way to do this is to do the PKI correctly: You create a self-signed root certificate and deploy that to your clients. You then issue the server certificates off that root certificate and your clients will trust them and cache the content. This is how every organization with a clue about security handles this scenario.

Comment 36 by, Jan 4 2016

It should be noted that resources (JS, CSS, ...) can still be cached in the memory of the rendering process if the Vary HTTP header has not been set in the server reply. In that case (and if other headers do not prevent caching), they will not be downloaded from the server for each request associated with the same tab.

The impact of the Vary HTTP header in that case is due to ResourceFetcher::determineRevalidationPolicy returning Reload if existingResource->hasVaryHeader().

The Vary HTTP header is often added for compression (Accept-Encoding), so disabling compression for Chrome user agents may mitigate this problem if there is a possible issue with the certificate.

Ref:  Issue 573319 
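The memory-cache rule described in this comment can be condensed into a sketch (hypothetical function mirroring the described `determineRevalidationPolicy` behaviour, where a stored `Vary` header forces a reload):

```python
# Sketch: the renderer's in-memory cache reuses a stored response only
# if it carried no Vary header; otherwise the policy is Reload,
# mirroring determineRevalidationPolicy() returning Reload when
# existingResource->hasVaryHeader().
def memory_cache_policy(stored_headers):
    if "Vary" in stored_headers:
        return "Reload"  # re-fetch from the server
    return "Use"         # reuse the in-memory copy for this tab

print(memory_cache_policy({"Content-Type": "text/css"}))  # Use
print(memory_cache_policy({"Vary": "Accept-Encoding"}))   # Reload
```

This is why serving uncompressed responses (no `Vary: Accept-Encoding`) lets the renderer-side memory cache absorb some of the cost of the disabled disk cache.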
Project Member

Comment 37 by, Jan 3 2017

Labels: Hotlist-Recharge-Cold
Status: Untriaged (was: Available)
This issue has been available for more than 365 days, and should be re-evaluated. Please re-triage this issue.
The Hotlist-Recharge-Cold label is applied for tracking purposes, and should not be removed after re-triaging the issue.

For more details visit - Your friendly Sheriffbot
This issue has been present in Chrome for more than 7 years, but we still haven't gotten a fix. Even Firefox and IE work fine.
There are no plans to enable the disk cache for certificate errors.

This bug is about adding tests to make sure we never do.
As mentioned, the disk cache will not be enabled for self-signed certificates, so we are using the memory cache. Will that have any impact?

Is there any workaround to use the disk cache when using a self-signed certificate?

Components: -Internals
