
Issue 86175

Starred by 40 users

Issue metadata

Status: Fixed
Closed: Jul 2013
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug


Prerendering does not have any distinguishing HTTP headers

Reported by, Jun 15 2011

Issue description

Chrome Version       : Google Chrome 13.0.782.20 (Official Build 88803) dev-m
URLs (if applicable) :
Other browsers tested:
Add OK or FAIL after other browsers where you have tested this issue:
     Safari 5: n/a
  Firefox 4.x: OK (Uses X-moz: prefetch)
       IE 7/8/9: n/a

What steps will reproduce the problem?
1. Open a program to record HTTP Traffic
2. Navigate to
3. Wait until prerendering enabled message is shown

What is the expected result?
In the HTTP traffic, the prefetch/prerendering request carries a custom HTTP header telling the server that the page is being prefetched or prerendered rather than navigated to by the user.

What happens instead?
No HTTP header is sent, and prerender requests are identical to navigation requests.

Please provide any additional information below. Attach a screenshot if possible.
An HTTP header is needed to allow servers to distinguish between the two types of request. Since prerendering has no cross-origin restrictions, servers cannot rely on a query string to detect prefetch requests, and they can't use the Page Visibility API, as it is client-side only.

Furthermore, there needs to be a custom HTTP header / status code that a server can send back to deny prerendering access for sites where prerendering should not occur, and instruct Chrome to reload the page on navigation rather than showing a prerendered error message.
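The kind of server-side check the reporter is asking for might look like the following sketch. To be clear about the assumptions: the "Purpose: prefetch/prerender" header and the 409 denial status are invented for illustration here; as the rest of this thread establishes, Chrome sends no such header.

```python
# Sketch only: "Purpose" header and 409 denial are hypothetical, not
# anything Chrome actually sends or honors.

def classify_request(headers):
    """Return 'prefetch' if the (hypothetical) hint header is present,
    'navigation' otherwise."""
    purpose = headers.get("Purpose", "").lower()
    if purpose in ("prefetch", "prerender"):
        return "prefetch"
    return "navigation"

def handle(headers):
    """Deny speculative requests with a (hypothetical, arbitrarily chosen)
    4xx status so the browser could fall back to a normal fetch on real
    navigation."""
    if classify_request(headers) == "prefetch":
        return 409, "prerender denied"
    return 200, "page body"
```

With such a check in place, a site with limited capacity could answer speculative requests cheaply and serve the full page only on real navigations.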
Labels: -Area-Undefined Area-Internals Feature-Preload

Comment 2 by, Jun 23 2011

Labels: -Pri-2 Pri-1
Status: Available
I thought we were sending a Purpose header for this? When did that get dropped? This definitely seems undesirable that a site can't deny speculative requests if it has limited capacity...
Labels: -Pri-1 Pri-2
At this point we're still gathering developer feedback about prerendering (including this need) and haven't made a decision.
This header is definitely needed.

Better yet, sites should also have the ability to "reply" to prerendering requests (marked with such a special header) with another special HTTP response header that disables all subsequent prerendering requests to the site for a specified amount of time. This would let websites signal to the browser that they generally do not want any prerender traffic and that the browser should not bother sending prerendering requests to them, because doing so only wastes the site's and the user's bandwidth and will be refused by the server anyway.

Of course, the amount of time the browser honors this reply should be limited, probably to a few hours or days, and the site should be able to override this preference (i.e. the amount of time it does not wish to receive any prerender requests) from normal (non-prerender) requests as well, by returning the appropriate extra header when serving a page to a user actually visiting the site.
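The browser-side bookkeeping this comment proposes could be sketched roughly as follows. The "Prerender-Disable: max-age=N" header name and the one-week cap are invented for illustration; nothing like this exists in Chrome.

```python
# Sketch of a hypothetical time-limited prerender opt-out, as proposed
# above. Header name and cap are assumptions, not real browser behavior.
import time

MAX_OPTOUT_SECONDS = 7 * 24 * 3600  # cap how long the opt-out is honored

class PrerenderOptOuts:
    def __init__(self):
        self._until = {}  # origin -> unix time until which prerender is off

    def record(self, origin, max_age, now=None):
        """Called when a response carries the hypothetical
        'Prerender-Disable: max-age=N' header."""
        now = time.time() if now is None else now
        self._until[origin] = now + min(int(max_age), MAX_OPTOUT_SECONDS)

    def allowed(self, origin, now=None):
        now = time.time() if now is None else now
        return now >= self._until.get(origin, 0)
```

A normal (non-prerender) response carrying the same header would simply call `record` again, which covers the override case described above.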

Comment 5 by, Jan 29 2013

The header is a must.

Example: A campaign site may ask for a unique code on the frontpage to identify the user or a ticket code but at the same time reset any running sessions to avoid overlapping session variables.
In this example, prerendering will break any attempt by the user to enter another base address (like going from /mydashboard/ to /myuserinfo/), since typing / and pausing for a second before continuing will reset their session, and subsequent typing will always result in "access denied" since they are no longer logged in.
Without a prefetch header to identify actual pages, no site can really know if the user actually requested the page and act accordingly.

Please add the header as soon as possible. I have a site as explained that currently breaks in Chrome 24 if users try to alter the URL.
#5: Page Visibility API is the suggested approach to discovering this:

Or do you need something at the HTTP layer instead?

Comment 7 Deleted

Comment 8 by, Jan 29 2013

I need it at the HTTP layer. The session-reset happens on the server when URL is requested through PHP.
A JS API is nice, but it does not solve this issue.
GETs are not supposed to change the state of the server in a user-relevant way. I would imagine that this application design would break in other cases as well (e.g. some forms of caching, some proxies, web crawlers, etc.).

Comment 10 by, Jan 29 2013

I am well aware of how GET/POST are supposed to be used. The functionality would not break for proxies or crawlers as there is nothing to break (they cannot log in).
However, it's not safe for a browser to rely on all sites having no side effects on GET requests. There is a difference between a /user/ making a request (because he specifically wants that page) and the browser making the request behind the scenes. User intent and side effects naturally go hand in hand.

I've tried having prerendering enabled in older versions of Chrome for all URLs but I eventually disabled it, as it caused massive issues with various sites.
I like the idea of prerendering, but doing it silently, as if the user intended to visit the page, is too aggressive. Servers should have a chance to disable or detect it, so they can act accordingly.
Thanks for your feedback on Chrome Prerendering.

I don't fully understand the scenario you are describing in your post #5. It sounds like the "reset" when visiting "/" is done to reset session state from a content perspective, and not from a security perspective (i.e. not log out the user and forget session state). Assuming this is your motivation, rather than erasing the previous session information, you could store it in some backup session cookie (and/or corresponding server side "backup" session information). If you then detect, in JS, that a prerender is happening, you could revert the backup cookie to the session cookie, and/or, issue an Ajax call to perform the same operation on the server.

Please let me know if this works (or if I am misunderstanding how exactly your system operates).


Comment 12 by, Jan 29 2013

What you are describing is more like a hack/workaround due to the inability to detect a prerender beforehand. I have considered it, but I believe Chrome should supply this information in due time, instead of developers having to hack around it on the client-side.
It's really a simple matter of detecting what is /about/ to happen, instead of trying to fix what has already happened.
Thanks for your feedback.

I think the specific website behavior that you are describing is pretty rare: since we launched prerendering 1.5 years ago, this is the first time that I have heard of such a scenario in the context of prerendering.

We appreciate your suggestion though, and I agree the API you are proposing would have its benefits.

Given that there is a workaround and given how rare this scenario is, this is not a high-priority work item for us at this time, but we will certainly consider it for the medium-term.


Comment 14 by, Jan 29 2013

I will agree with others on this issue that there are server-side performance and server logs to consider as well. I can only speak from the issues I have encountered as both a user and a developer.
The issues with the prefetching behaviour are as obvious as the benefits, but they could at least be worked around in a sane manner using general prefetching rules or a header, as adaman originally suggested. The current implementation does not allow for that at all, which is odd for such an aggressive behaviour. A JS API is still nice, but it emphasizes the point of having to know about this behaviour, which is not solved server-side.

Thank you for responding so quickly. I hope you'll reconsider the current implementation versus Gecko's.

Comment 15 Deleted

I'm curious why there is so much pushback regarding just adding a header specifying that the request is executed via prerender. Firefox and Safari both provide this information, and it seems that this would be a trivial feature addition to Chrome as well.

That said, this is a headache for anyone developing custom tracking software because it leaves you no choice but to adopt the Page Visibility API and precludes any filtering at the server level.

There are numerous use cases where a website may want to filter prerendered requests at the web server level or flat-out deny them. Servers with limited resources have already been mentioned, but I repeat it here because it's a good example. If I'm limited on bandwidth, I don't want to serve content to clients that may never look at it.

It seems that the current policy of refusing to provide a header is a strategy for forcing this feature upon the developer community. Please give us the flexibility to decide for ourselves how to handle these requests.
Travis, thanks for your feedback.

However, I can't follow your arguments.
Re correct tracking: If the goal is accurate tracking, simply excluding hits based on a header would not do the job -- you would still need JavaScript and the visibility API to detect when the page is shown, and report that back via Ajax. So since a visibility API is needed in any case, why not just have a visibility API, since that alone is sufficient?
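The counting trade-off being argued about here can be made concrete with a small sketch (strategy names and the event model are invented for illustration). Counting raw GETs overcounts, because a prerender may never be shown; dropping hint-flagged GETs undercounts, because a swapped-in prerender produces no second request; counting visibility-triggered client beacons counts each real view once.

```python
# Sketch of three counting strategies for prerender-affected traffic.
# `requests` is a list of (kind, is_prerender) tuples; `beacons` is the
# number of client-side pings fired when the page actually became visible.
def count_views(requests, beacons, strategy):
    if strategy == "all_gets":
        # Overcounts: includes prerenders that were never shown.
        return len(requests)
    if strategy == "drop_hinted_gets":
        # Undercounts: a swapped-in prerender never makes a second GET.
        return sum(1 for _, is_pre in requests if not is_pre)
    if strategy == "visibility_beacons":
        # One beacon per page actually shown to the user.
        return beacons
    raise ValueError(strategy)
```

With one prerender that was swapped in and viewed plus one that was discarded, the true view count is 1: "all_gets" reports 2, "drop_hinted_gets" reports 0, and only the beacon strategy reports 1.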

Re denying prerendering in high load cases: When people make this argument, it is usually a very theoretical one, and has no relation to any real practical situations.  Prerendering constitutes a tiny amount of HTTP traffic, and given growth of Internet traffic, sites should plan ahead when they provision for load. Even if there are some extreme one time flash crowd situations, users would still be impacted, prerendering would just be a drop in the bucket. Therefore, this would not really justify introducing additional complexity in the API, in my opinion. If you have *actual* real world examples of where prerendering causes any issues, we'd love to hear those, and we would gladly reconsider our view once we see specifics of such cases.

Re other cases of opt-out: Can you give good examples other than bandwidth where someone would like to opt out of prerendering? Earlier in this bug, someone mentions a very contrived situation, which may still occur. I mentioned a workaround that can be used in such situations. So in that sense, there already is a way to opt-out (even though it is only a workaround), if this breaks your site.


Comment 18 by Deleted ...@, Feb 21 2013


Given all the talking points that have already been covered in this thread, I will admit that adding the header is more a matter of convenience than necessity. That said, it seems to be such a trivial addition that I really don't understand why there is so much reluctance to add it.

Referencing the Wikipedia article on HTTP that was previously mentioned:

Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, caching, the serving of banner advertisements or incrementing a web counter.

So yes, from an academic perspective, a GET request shouldn't cause any state changes; however, the article itself mentions that there are edge cases where state changes do occur.

There are tons of websites running all kinds of legacy analytics tools that will suffer from this behavior, and there are many instances where an administrator could add a condition checking for the header in a matter of minutes, but it might require substantially more effort to integrate the visibility API.

Again, this is convenience, not necessity. My organization has already made the decision to integrate the visibility API as it makes sense for our software package, but I'm certain there are many situations where people would prefer to have options.

I could provide a bunch of contrived scenarios for opt-out, but all that seems to be missing the point that this is convenient for website administrators, and it seems at least on the surface to be an extremely simple addition to the browser, so why not just add it?
TinyMountain, you just answered your own question: "I could provide a bunch of CONTRIVED scenarios". The reason why you have to say "contrived" is exactly why we are hesitant to do it at this point. One can come up with a ton of theoretical scenarios where it might make sense - sure. But to date, we haven't seen any actual use cases that were significant enough to really warrant adding a header.

As for analytics, just not counting it based on the header doesn't solve the problem one bit, because rather than overreporting, it would now underreport (because views will not be counted).

So while we will keep this bug open and definitely revisit it over time (and re-evaluate), at this time, we do not see any actual reason why this should be implemented.

TinyMountain, sorry, my answer did not address all your points, I just re-read your original post.

Let me respond to the side effect: I agree with everything you said, however, side effects are still not relevant, for the following reason:
For pages prerendered from inside the domain itself, the webmaster can simply not issue prerender instructions for unsafe URLs.
As for prerenders initiated from outside the domain, such as Google Web Search or the Omnibox or third-party sites: URLs prerendered that way are typically not URLs with side effects, but often the root page of a domain. You might now start describing malicious attack scenarios where someone tries to take advantage of a user logged on to an account and, from a third-party site, perform malicious actions using prerendering, and argue that we should protect against these. Notice that such an attacker could accomplish the same thing without prerendering using iframes, so protecting prerendering against such malicious attacks is not relevant.

Does this make sense? If you can think of scenarios I haven't addressed, please do let me know!

Again: we are not refusing to add a header at all costs, but we want to be convinced that there are realistic scenarios causing problems that would be fixed by such a header. To date, we haven't seen an actual use case for it.

I understand both sides of the argument here - headers may be useless for real-world scenarios and just increase page-load times for something that is supposed to speed up load times. On the other hand, Chrome is putting server administrators on the back foot, making them do any detection client-side, after the bandwidth/processing has already been spent (wasted?) on it.

So while I was thinking of a real-world scenario, I got to this question, which is unrelated to the problem at hand but related to prerendering from third-party sites in general. With prerendering, for example from Google Web Search, does Chrome send the Referer header to the page being prerendered? While this is nothing new for any link being clicked, it does raise the question that any search query performed on Google that results in a prerender would get sent to the first result (the prerendered page), which is typically third-party, without the user even clicking on the link. Again, I know there's some Referer magic where https is involved, and Google also uses multiple redirects, possibly to stop this, but isn't this a theoretical privacy problem with prerendering third-party pages with full headers?

Project Member

Comment 22 by, Mar 10 2013

Labels: -Area-Internals -Feature-Preload Cr-Internals Cr-Conent-Preload

Comment 23 by, Mar 20 2013

Labels: -Cr-Conent-Preload Cr-Content-Preload
Project Member

Comment 24 by, Apr 6 2013

Labels: Cr-Blink
Project Member

Comment 25 by, Apr 6 2013

Labels: -Cr-Content-Preload Cr-Internals-Preload
I've recently come across a real-world situation where a header is needed.

I run a website where users must queue to reach a shared resource.  To prevent users from queuing, leaving their desk, moving to the resource, then tying up the resource with inactivity, we employ a 30 second grace period where a user must click "Start" before accessing the resource.  If they fail to click "Start" we move to the next user.

Currently, without any prerendering hints, Chrome is detecting this "Start" button and prerendering the page allowing the inactive users access to the resource.

Since all the queue control happens on the backend, the JS API is useless, whereas a header could be checked before rendering the page.

Please consider this evidence for a header update.
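For the queueing scenario above, the backend fix a hint header would enable might look like this sketch. The "Purpose" header is again an assumption, not something Chrome sends; the point is that only an explicit, non-speculative request should consume the 30-second grace period.

```python
# Sketch: only a real click on "Start" should claim the shared resource.
# The "Purpose" hint header is hypothetical; queue is a mutable list whose
# head is the current user's state.
def handle_start(headers, queue):
    if headers.get("Purpose", "").lower() in ("prefetch", "prerender"):
        return "ignored"          # a prerender must not claim the resource
    if queue and queue[0] == "grace":
        queue[0] = "active"       # real click: user takes the resource
        return "started"
    return "denied"               # grace period already expired or consumed
```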

Comment 27 by, Jul 22 2013

That's a good illustration of how people would use such a header. 

The thing is, GET is safe, meaning that the server has to accept that it doesn't have any such side effects -- this allows things like search engine spiders, caches, retrying clients, etc. without breaking the Web.

By associating side effects with the GET (reserving resources for the user), you're breaking this contract that's written into HTTP. Adding a header to distinguish pre-render is just a band-aid being slapped upon this problem; many other things are going to break your server, because it's a fundamentally bad design, both from a Web standpoint, as well as from a more general distributed systems view.
Status: WontFix
I'm marking this as WontFix.

The engineering work required to support this is fairly minimal, as others have discussed. However, we do not have plans to support an additional prerender header and prefer the visibility API.

At this point with IE11 supporting prerender we would also want any additional web-facing changes (such as a header) to go through a standards body process - likely the W3C Web Performance Group - before being implemented widely in Chrome.
Issue 264884 has been merged into this issue.
How do you use the Page Visibility API when the resource being prerendered isn't an HTML/XHTML document, and you don't control the site linking to the resource?

Comment 32 by, Jul 30 2013

Anthon, you can't.
Is there any chance to make it so that if the user presses Enter immediately after pasting a link, Chrome waits for the prerender (if one is in flight) to finish and uses that result, instead of triggering a second request?

Comment 34 by, Oct 18 2013

I don't understand why Chromium developers would think that there is no realistic scenario for adding a header.

I'm developing a CMS extension which learns the most likely "next" page and adds a header to prefetch it. It's based on statistics, or - like Google likes to call it - "machine learning".

Gathering statistics could be a trivial matter - if a prefetch header is present, include JS to bump the counter using Visibility API and XHR (or disregard the request entirely, for simplicity). Otherwise just bump the counter server-side.

Without the header, it's complicated. I can't just count all events server-side; I'm doomed to always use AJAX for my tracking. Therefore, I have to implement some kind of AJAX CSRF protection to prevent malicious users from messing with my stats. A trivial task just turned into a nightmare.

So, effectively, I just implemented 2 mutually exclusive modes: learning and prefetching. I don't want to add unnecessary complexity, add extra JS code which increases loading time, just because you thought that it's not a realistic use case.
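The "learning" half of the extension described above can be sketched as a simple transition counter; the class shape and the emitted `<link rel="prerender">` tag format are illustrative, not taken from any particular CMS.

```python
# Sketch of the learning mode described above: count observed page
# transitions server-side and emit a prerender hint for the most likely
# next page. Structure invented for illustration.
from collections import Counter, defaultdict

class NextPageModel:
    def __init__(self):
        self._transitions = defaultdict(Counter)

    def observe(self, page, next_page):
        """Record one observed navigation from `page` to `next_page`."""
        self._transitions[page][next_page] += 1

    def hint(self, page):
        """Return a <link rel="prerender"> tag for the most common
        successor of `page`, or None if nothing has been learned yet."""
        counts = self._transitions.get(page)
        if not counts:
            return None
        url, _ = counts.most_common(1)[0]
        return '<link rel="prerender" href="%s">' % url
```

The complaint in this comment is precisely that `observe` cannot be driven by plain server-side request logs, because prefetch hits are indistinguishable from real navigations without a header.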
I just realized that even the sites that I have in my bookmarks bar get prerendered in Chrome from time to time without me actually loading them.

You're really hurting server-side tracking with this. I'm the author of, and even Google Analytics' own Measurement Protocol API can't be used reliably to track actual user requests anymore because of this.
Attached are two files that can replicate a scenario that's kicking my arse with regards to prefetching. test.php sets up a session and sticks some variables in it. test2.php grabs that session and adds to it. Because it has just done a privilege escalation, it regenerates the session id (as per current OWASP guidelines).

The prerendering causes a new session id to be generated, which then appears to be promptly ignored by Chrome: when the user hits enter it sends the old session id, and they get an entirely new session without any variables set, because PHP rejects the old one. In this case you get a broken message from test2.php.

I appreciate this will never be 'fixed', since apparently I shouldn't be changing a user's state like this. But seeing as I can't do a POST from a link on a page, my workaround is likely to be complex.
Two attachments (920 bytes and 471 bytes).
Could you please file a separate bug and attach a net-internals dump? I'm having a hard time understanding the situation from just the PHP files. (I wonder if it's a side effect of the recent cookie logic.)

(Adding an HTTP header is not likely to behave well anyway; Chrome does not re-fetch anything when a prerender is swapped in. So if the server doesn't set a cookie based on a prerender header, it will misbehave in the other direction.)

Comment 39 Deleted

Has this been added yet? If not, it really should be.

However, as previously mentioned: if Chrome doesn't re-fetch anything when a prerender is swapped in, then a header wouldn't be of much use UNLESS there was a way to communicate back to Chrome.

For instance, if I detected a pre-render header and I returned "prerender: no" in the page itself, could Chrome not detect this and then re-fetch data on a non-prerendered load? Not too sure if that's an overly complicated solution for what could be a simple workaround though.
Just add this "X-Purpose" header back, FFS!!!

Comment 42 by Deleted ...@, Mar 18 2015

Please add the "X-Purpose" header finally
It must either behave as a regular request (i.e. apply Set-Cookie headers) or as a non-regular request (send a specific X-Purpose header), but it cannot remain in between!

Comment 44 by, Mar 18 2015

Oh, my.

You cannot make preload requests at arbitrary times without indicating it to the server. This completely ruins the determinism of a user's interaction with a dynamic web service, with no workaround. It doesn't matter that GET requests don't change any state on the server: they _can_ result in, say, error messages being logged for requests to bad URLs, and they _will_ result in database queries, and potentially other operations that do not mutate anything but take time and resources to run. For a browser to do that when nobody has asked it to, and then refuse to even acknowledge the fact, is simply ludicrous. The Page Visibility API is completely irrelevant here, so I don't see how you can "prefer" it as a solution when the actual solution is (in your own words) "fairly minimal" in terms of engineering work.

Please add that header back in. I want to be able to return an HTTP denial to such requests, and I want Chrome to then stop making them.
The reluctance to do this is just insane. Why not have both APIs? 
The header solution won't work. When a prerender is swapped in, no additional HTTP requests are made. The invisible tab is just made visible. If the server, in response to the header, decides not to update some server-side state, that state will not be updated when the user actually swaps in the prerender. Instead of counting the visit twice, you'll be counting it zero times.

(Changing the response to somehow disable the prerender also won't work as things are today. In some cases, the prerender may be swapped in before that response is processed. The user will see a normal load with the broken response.)

Comment 47 by Deleted ...@, Apr 9 2015

503 server response should disable prerender (IMHO), makes anti-DOS rate limiting very hard to implement otherwise.  Don't want to 503 a real user page load, but can't tell the difference between prerenders, stuff the user actually clicks on, and a DOS attack.
Please reinstate the "X-Purpose" header. I need to be able to distinguish between requests made by my human users and requests made by the machines which supposedly work for those humans.
I'm struggling to figure out how to fix this. We're running into issues with users manually changing the URL (which is fine), but if they type `/l` and Chrome autocompletes it to `/logout`, the prefetch GET /logout actually logs the user out, because to our server it looks like a normal GET logout request.

Comment 50 by, Apr 22 2015

@ #49 This _is_ a bug but your example is a poor one. Logout should *definitely* not be a GET action!! Make it POST-only.
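The POST-only fix suggested here could look like the following sketch; the handler shape is invented for illustration, but the principle is straight from the HTTP safe-methods contract cited earlier in this thread.

```python
# Sketch of the fix suggested above: state-changing actions such as logout
# should only respond to POST, so a speculative GET (omnibox autocomplete,
# prefetch, a crawler, etc.) cannot end the session. Handler shape invented.
def logout_handler(method, session):
    if method != "POST":
        # A GET must stay side-effect free; return 405 (or render a
        # confirmation page with a POST form) and leave the session alone.
        return 405, session
    return 200, None  # session destroyed only on an explicit POST
```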

I am linking another issue that I have raised because of Chrome's prefetching.

Steps to reproduce the problem:
1. Enter a URL which responds with an updated cookie and allow it to prefetch. Wait a few seconds
2. Scroll down to other urls shown in the auto fill
3. Scroll back to the actual URL and hit enter

What is the expected behavior?
The expected behavior at step 3 is to hit the URL with the updated cookie from step 1. Instead, it hits the server with the same old cookie. If that is not possible, at least the second request should not be triggered.

What went wrong?
The second request does not go with the updated cookie

Our scenario where prefetching causes problems is this: At, once you have logged in, you are roaming as a virtual person in an MMO world. As such a person, you can't join in two windows -- like two different locations, which are URL-based -- at the same time, so we pause the old session when that happens to prioritize the new one. We are not 100% sure if this is what happens, but it looks like Chrome preview thumbnails, bookmarks and such may do prefetching, which means the person is paused in their currently open session...

Hope such header gets added!
I can't believe the arrogance of some Chromium developers. We have a developer clearly stating that it's not a lot of work yet for some reason they refuse to do anything despite so many people complaining about it. How difficult is it to add one damn header and make our lives easier?! I'd rather have a header to ignore than have nothing to tell whether it's a prerender request or not.

Why not send a HEAD request to the URL that you want to prefetch so that we can reply to it saying "Don't prefetch me"? I'm sure you've got some clever people amongst you, come up with a solution that works for everyone and don't just close tickets as "won't fix, can't be bothered to".

I have the same problem as a couple people on this thread with trying to track visits. Having placed a breakpoint in my code I'm getting a "hit" simply while typing the URL I want to visit. Best thing is - sometimes I don't even want to visit it because if I've got two links e.g. /aaaa and /aabb and I start typing "aa" it prerenders the "aaaa" when I actually want to type "aabb".

Looks like I'll have to use the visibility API and hope that people don't disable JS in their browsers like they used to...

Comment 54 by Deleted ...@, Aug 27 2015

Yes, to quote a recent post I didn't write:  "one damn header".  And clearly a mountain of something else that has to be overcome.  What's the word?  ahh yes I've heard that one.  I've been hearing it a lot lately, and that is sad.

Look, the example given in #49 appears quite COMMON and quite reasonable justification, AS IF extreme justification were needed for the simple act of telling people what you're doing.  #53 also highlights the silliness of the spurious requests that are coming in.  That is a case that is similar to what brought me here.  I was not able to understand what was happening until I saw things hitting my log as I was trying to type the target URL.

I understand the response in #50, but it does not fit every real-world case.  Just one example, MANY major websites (many google ones included I believe) use an "email verification" response URL that activates an account after starting it or after some change.  It is just one example of a real world need to have a GET (a simple URL) actually DO SOMETHING.  Not something major, not part of the general design of an entire site, but something.  It is QUITE clear that there are many real world and reasonable use cases for sending an informational header along with your request.  It's like arguing that you don't need to send the User Agent.  In a perfect world the web should work for any device and which device it is should not matter, so we shouldn't send it.  We don't care if people might want to log it, that's not a real reason.  Oh wait, it isn't a perfect world.  And even if it was, maybe we care simply because we care.  Maybe we just want to know, not that it affects anything, we just want to know.

I read most of this thread, and every other comment about being able to track REAL requests, about server load issues, etc. is completely valid.  If you are only thinking about prefetch as "we're just moving faster" and not thinking of it as "we're doing stuff that may not actually be real", I can see a justified resistance.  But that is not reality.  It is pretty clear that all of these arguments are valid. Perhaps you should take off the rose-colored glasses and read again.

The comment that you are normally just going to swap in the prefetched page without another request makes sense, why fetch the content again right?  And I get how that could make usage stats a bit hard to interpret (was prefetched data actually used at the client? We don't know!).  But it is still another data point and it could be useful, you could correlate it with other data and probably filter out most of it into a separate data bucket.  Those points make sense.  But the argument that this means we're stuck and the only thing the server can do is send valid content for a prefetch request is bogus.  It makes every bit of sense for the server to send a response code which means "don't prefetch this".  In that case, very little data would be sent, and you'd know to send a REAL request once the user has confirmed it.  Response codes of 204, 205, or 206 could all work IMO, but someone else would be better suited to pick one.  There are plenty of reasonable choices available.  You could also even require both a response code and a "reason" header that would specify "prefetch unavailable" so you were sure that the server actually just denied the prefetch.  With that response you would know FOR SURE to fetch the content (without the prefetch header) before swapping context.
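The deny-prefetch handshake sketched in that paragraph might look like this. Everything here is an assumption: the 204 "prefetch unavailable" status is one of the codes the commenter floats, the fetch shape is invented, and an earlier comment in this thread points out a real obstacle (the prerender may be swapped in before the denial response is processed).

```python
# Sketch of the proposed handshake: the prefetch request carried a hint
# header; a server that opts out answers with a "no content" style status,
# and the browser then issues a real, un-hinted request on actual
# navigation. Status code and flow are hypothetical.
PREFETCH_DENIED = 204  # one of the codes suggested above

def fetch_for_display(url, fetch, prefetched_status):
    """If the prefetch was denied, refetch without the hint header before
    swapping the page in; otherwise reuse the prefetched response."""
    if prefetched_status == PREFETCH_DENIED:
        return fetch(url, headers={})   # real request, no hint header
    return "use-prefetched"
```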

Prefetching seems to work quite well and probably enhances user experience for many cases.  But not in all cases.  In order for the idea to reach full potential you need to consider doing a very small thing such as adding a header and checking the response to the preload request without a wad of javascript that is definitely NOT a general standard and is very limited in applicability (and is not to mention TOTAL OVERKILL for this simple of an issue).  At present the idea is 50% brilliant and 50% unfinished.

When it comes down to it, look at the real-world way people would use the header:

Most sites would be perfectly fine with chrome prefetching 99.99% of their content.  Very few people are going to BOTHER with looking at the header or denying any requests.  You could argue that means it isn't worth doing, but that's just the majority. There are a LOT of minority cases, and minorities matter too, especially when 2% is in the millions.  I don't think it would hurt your agenda one bit to just include the header.  And the chrome team has probably already spent more time reading posts on this bug report than they would have ever spent just adding the header.  Why waste your life like that?  The occasional website problem could be EASILY solved with a little bit of information, if you could possibly recognize that.  Just be honest and forthright with information about what you're doing and leave the rest to others.  So basically, all you'd be doing is BEING NICE and it wouldn't hurt a thing and it would help a lot of people.  Helping people, good?

The comment about the standards body (W3C or whatever) is fine, but apparently nearly everyone else is already sending a header for this case. Fall in line with the group, and yes, let's see if it can be included in a spec so nobody is confused about it anymore. It is NEEDED.
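The deny-with-a-status-code idea above can be sketched in a few lines. This is a sketch only: the "Purpose: prefetch" request header is an assumption (Chrome wasn't sending any such header when this was filed, which is the whole point of this bug), the "X-Prefetch-Denied" reason header is invented, and 204 is just one of the codes suggested above.

```javascript
// Sketch of the "deny this prefetch" flow. Assumes a speculative
// request carries a "Purpose: prefetch" header; that header and the
// "X-Prefetch-Denied" reason header are assumptions for illustration.

function prefetchResponse(headers, path, denyList) {
  // Node lower-cases incoming header names.
  const speculative = headers['purpose'] === 'prefetch';
  if (speculative && denyList.includes(path)) {
    // 204 No Content plus a "reason" header: very little data is sent,
    // and the client knows FOR SURE to issue a real request (without
    // the prefetch header) once the user actually navigates.
    return {
      status: 204,
      headers: { 'X-Prefetch-Denied': 'prefetch unavailable' }, // hypothetical
      body: '',
    };
  }
  // Normal request, or a path the site is happy to have prefetched.
  return { status: 200, headers: {}, body: 'full page content' };
}
```

A real navigation to the same path, or a prefetch of an allowed path, falls through to the normal 200 response, so sites that don't care are unaffected.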
We are getting a large number of requests from one user with a Gmail account to our email unsubscribe confirmation page. The URL of the unsubscribe confirmation page is included in email notifications to all our users, so for a Gmail user the link renders on-page in their browser every time they read an email from our site.

This adds a large amount of unexpected cruft to our logs, and since there's no way to determine server-side whether the request was intended by a human, we can't filter out prefetch requests from the logs.

Even if we implement a solution using the Visibility API, that doesn't help with users who turn off JavaScript, and it doesn't help with users who leave the page before the JavaScript executes. Relying entirely on a client-side solution to determine whether a request is human-originated or a prefetch/prerender is absurd.

If you won't add a header to prefetch requests, then what about this alternate solution:

1. Whenever the client first considers prefetching a URL on a given domain, request a predictable URL on the server asking whether it handles prefetch.
    a. If the server gives no recognizable answer, it has no opinion (or can't express it properly), so assume what you will.
    b. If the server responds "yes, you can prefetch anything safely", great! Prefetch away.
    c. If the server responds "yes, you can prefetch these resources safely", great! Prefetch those resources and nothing else. (Wildcards or regexes should work here.)
    d. If the server responds "yes, you can prefetch anything *except* these resources", great! Prefetch anything other than those items. (c. and d. could probably be combined.)
    e. If the server responds "no", great! Don't prefetch anything from that domain.
2. For every possible prefetch request to that domain (including the first), refer back to the response the server gave.
3. Periodically re-check with the server, as often or as rarely as you like, to keep the domain's prefetch policy up to date.

This would take more to implement in Chromium's code, but it gives webmasters the means to directly address prefetching and preloading. It would solve the problems that prefetching causes for so many of us, while letting everyone not harmed by it continue without changing anything.
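The per-domain policy steps above might look something like this in practice. Everything here is hypothetical: the policy shape, the regex-based matching, and the idea of publishing the policy at a well-known URL (say, /prefetch-policy.json) are all inventions for illustration, not anything a browser or spec supports.

```javascript
// Sketch of the proposed per-domain prefetch policy check.
// Cases a-e from the proposal above map onto the branches below.

function mayPrefetch(policy, path) {
  if (!policy) return true;                  // case a: no answer, assume allowed
  if (policy.allow === 'none') return false; // case e: blanket "no"
  const matches = (patterns) =>
    (patterns || []).some((p) => new RegExp(p).test(path));
  if (matches(policy.deny)) return false;    // case d: explicit deny list
  if (policy.allow === 'all') return true;   // case b: blanket "yes"
  return matches(policy.allow);              // case c: explicit allow list
}

// Example of a policy a server might publish (shape is invented):
const examplePolicy = {
  allow: ['^/articles/', '^/static/'],
  deny: ['^/unsubscribe', '^/logout'],
};
```

The unsubscribe-page problem described above falls out naturally: that path goes on the deny list once, and the browser never speculatively hits it again.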

The problem I see with waiting for a standards body to make a decision on prefetching standards is that Chromium implements prefetching RIGHT NOW, while the standards are undefined and won't be explicitly defined for a year or more. You have real developers right here saying "this feature is causing us real, non-trivial problems, can you please fix it?" The best you've offered, the Visibility API, is only a workaround, and it relies on executing code in an environment entirely outside our control.
I took a look at W3C's documentation on prefetch (and related), at

Section 1 says:

>    The decision to initiate one or more of the above optimizations is 
>    typically based on heuristic rules based on document markup and 
>    structure, navigation history, and context of the user - e.g., type 
>    of device, available compute and memory resources, network connectivity, 
>    user preferences, and so on. These techniques have proven to be 
>    successful, but can be further improved by leveraging the knowledge a 
>    developer has about the front-end and back-end generation and delivery 
>    of the resources of a web application.

From this, there is precedent for a user-agent (read: Chromium) deciding what to preload and when, and the spec names document markup and structure as the first indicators of what to preload.

Section 3.2 goes on to specify when a user-agent should make a request provided by a resource hint:

>    The appropriate times to connect to a host, or obtain the
>    specified resource, are:
>    - When a user agent that supports [RFC5988] processes a resource
>      hint link specified via the Link HTTP header.
>    - When the resource hint link's link element is inserted into a
>      document.
>    - When the resource hint link is created on a link element that is
>      already in a Document.
>    - When the href attribute of the link element of a resource hint
>      link that is already in a Document is changed.

Clearly a site should be able to provide hints to the user-agent about what it can and should preload.

Unfortunately, there's no specified means for a site to tell the user-agent what *not* to preload. There's also no mechanism to provide a user-agent with resource hints for cross-domain navigations until the first response is sent. And lastly, the specification says nothing one way or the other about the user-agent indicating (e.g. by a request header) that a request is being made as part of a preload optimization strategy.

In other words, there's no official standard either for or against Chromium's current prefetch behavior.

Prefetching, by design, quickly loads data from a local *cache*. This isn't always desirable, and sites must be able to disable it. For example, with stock quotes, displayed statistics, even message lists (hangouts, email, IM), a cached view is often stale before it's used.

In these situations, and more, we need a way to tell prefetching browsers that we don't allow prefetching, whether that's accomplished by a new verb for resource hints, by a controlled server HTTP response code (a 303 seems appropriate) sent when the server is notified (via a request header) that the request is a prefetch, or by some other means.

Comment 58 Deleted

It absolutely *does* have to do with local caching. It fetches the content from the server before it's requested and CACHES it locally on the client (often rendering it as well) so it can load faster. This is the very definition of a cache.

Comment 60 Deleted

I'm running into this issue as well.
I'm new to web development and, as a result, have tied a lot of things to GET requests, which, judging from previous comments and the referenced links, appears to be a terrible idea.

However, please continue to read and consider this:

window.addEventListener('DOMContentLoaded', function() {
  socket = io();
  socket.emit('new-client', {
    roomName : getRoomName(),
  });
});

window.onbeforeunload = function() {
  socket.emit('client-exit', {
    roomName : getRoomName(),
  });
};

The problem I'm running into is that the 'DOMContentLoaded' event fires when the user begins to type my website's URL into the address bar, but onbeforeunload is never fired when the user navigates elsewhere or closes the tab.

My question is: agreed, there is no standard for or against Chromium's implementation, but why does it trigger onload/DOMContentLoaded and not onbeforeunload? Surely everything that is loaded must be unloaded on exit.
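For what it's worth, the asymmetry in the snippet above can at least be worked around on the client with the Page Visibility API: during a prerender, document.visibilityState is "prerender", so the socket handshake can be deferred until the page is actually visible. This is a sketch of the idea, not a fix for the missing beforeunload; the connect callback stands in for the socket setup above.

```javascript
// Defer work until the page is actually visible, so a prerendered
// page doesn't emit 'new-client' before the user ever sees it.

function shouldConnectNow(visibilityState) {
  // Only open the socket for a page the user can actually see.
  return visibilityState === 'visible';
}

function initWhenVisible(doc, connect) {
  if (shouldConnectNow(doc.visibilityState)) {
    connect();
    return;
  }
  // Prerendered (or hidden) page: wait for it to become visible.
  doc.addEventListener('visibilitychange', function onChange() {
    if (shouldConnectNow(doc.visibilityState)) {
      doc.removeEventListener('visibilitychange', onChange);
      connect();
    }
  });
}

// In the browser this would be called as:
//   initWhenVisible(document, function() { socket = io(); ... });
```

This is exactly the client-side-only workaround criticized earlier in the thread, so it shares all of those limitations; it just keeps the prerender from registering a phantom client.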

Comment 62 by, Aug 16 2016

re #61. Issue 304932 addresses prerendering not always handling beforeunload. Nobody is currently working on it. Feel free to star it.

Comment 63 Deleted

I know this is old, and probably won't get fixed now, but really: I have not seen any good arguments presented for *not* adding the header. The only arguments presented at all are "it shouldn't theoretically be needed" (which I think is adequately answered by the User-Agent analogy) and #46.

I don't see the logic in #46 holding up. What it's saying is that it would then be possible for people to break stuff by returning only parts of the page on prefetch. Yes, so what? There are a lot of ways to break stuff if you really want to. The point is that most people won't pay any attention to the header, and those who do are doing so deliberately and must think it through. That's true of any development we do!

GET not changing anything is doable for an API, but less so in practice for pages in a web app. The logout example is a good one.
Status: Fixed (was: WontFix)
I have a tiny bit of good news.

As of M69 (in Stable now) we send Purpose:prefetch headers with all nostate-prefetch requests, just as we do with <link rel=prefetch>. (Nostate-prefetch is what replaced prerender: a lighter-weight alternative that's easier to support as web platform capabilities grow.)

More details:
