Refreshing a page does not bypass socket pools
Reported by s...@wpmanagedsecure.com, Feb 15 2017
Issue description

UserAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Example URL: wpmanagedsecure.com

Steps to reproduce the problem:
1. Visit a website hosted on web server A.
2. Change the local hosts file so the domain points to a cloned version of the site on server B.
3. Reload the page in Chrome -- it still pulls from server A.
4. Clear Chrome's internal DNS cache.
5. Reload and hard-reload the site -- it still pulls from server A.

What is the expected behavior?
I'd expect an update to the hosts file to update Chrome's DNS cache, and after clearing the DNS cache I'd expect Chrome to look up a new IP address.

What went wrong?
Chrome's DNS cache seems broken. It does not check the local hosts file or even public DNS servers for changes, and when I clear the DNS cache it seems to hold on to old information instead of retrieving new information.

Did this work before? Yes. Not sure exactly when, but this wasn't a problem about 3-4 updates ago and it seemed to get worse with each update.

Chrome version: 56.0.2924.87  Channel: stable
OS Version: 10.0
Flash Version: Shockwave Flash 24.0 r0

I'm not sure this is related, but I've also had issues with the page cache not reloading. I don't set cache-avoiding headers when I do WordPress development, and there aren't any headers that set cache times either. I'll make changes to HTML or CSS, and there are times when Chrome doesn't pull from the server. Even "clear cache and hard reload" from developer tools doesn't seem to pull from the server. It's a similar situation to the DNS issue. And I've not only noticed this on my computer: a client called on a Mac where developer tools showed old HTML, while every other computer I used showed the new links. I tried clearing all caches and reloading, but the old information still showed. I told her to wait and eventually it would correct itself.
,
Feb 16 2017
After seeing this list, it sounds like the open connections might be the issue. I'm not sure yet, but I can tell you that other browsers behave as I expect when making these changes. In Edge and Firefox, I can make a change in the hosts file, reload in those browsers, and the effect is immediate. With Chrome, the effect is delayed, meaning I have to close tabs, wait 5 to 10 minutes to try again, or completely close the browser. Since these issues started, they have slowed down my work considerably. I will try clearing the sockets the next time I have a problem. It never used to be this way: I could make a change, hit reload, and the latest info would be available, DNS or otherwise.
,
Feb 16 2017
In the attached log, what is the hostname that you remapped, and when did you do the remapping?
,
Feb 16 2017
I loaded wpmanagedsecure.com, then visited the about page. I then went to the hosts file and enabled the entry mapping wpmanagedsecure.com to 23.254.151.146. I went back to the tab, reloaded the site, and clicked on a couple of links.
,
Feb 16 2017
OK, so I confirmed that your issue here is the connection pools (an HTTP/2 session, in this case).
(1) The HTTP cache was correctly bypassed by refreshing the page -- seen here in the "BYPASS_CACHE" load flag:
t=325819 [st= 0] +REQUEST_ALIVE [dt=42]
t=325819 [st= 0] URL_REQUEST_DELEGATE [dt=1]
t=325820 [st= 1] +URL_REQUEST_START_JOB [dt=41]
--> load_flags = 258 (BYPASS_CACHE | VERIFY_EV_CERT)
--> method = "GET"
--> priority = "MEDIUM"
--> url = "https://wpmanagedsecure.com/wp-content/uploads/2016/03/real_time_cloud_backups-1.png"
t=325820 [st= 1] URL_REQUEST_DELEGATE [dt=0]
Next we can see that no proxy was used (good); however, the HTTP2_SESSION_POOL_FOUND_EXISTING_SESSION event indicates that the previously opened HTTP/2 session (to the old IP) was re-used:
....
t=325820 [st=0] +HTTP_STREAM_JOB [dt=0]
--> alternative_service = "unknown :0"
--> original_url = "https://wpmanagedsecure.com/"
--> priority = "MEDIUM"
--> source_dependency = 917963 (URL_REQUEST)
--> url = "https://wpmanagedsecure.com/"
t=325820 [st=0] +PROXY_SERVICE [dt=0]
t=325820 [st=0] PROXY_SERVICE_RESOLVED_PROXY_LIST
--> pac_string = "DIRECT"
t=325820 [st=0] -PROXY_SERVICE
t=325820 [st=0] HTTP2_SESSION_POOL_FOUND_EXISTING_SESSION
--> source_dependency = 915954 (HTTP2_SESSION)
t=325820 [st=0] HTTP_STREAM_JOB_BOUND_TO_REQUEST
--> source_dependency = 917963 (URL_REQUEST)
t=325820 [st=0] -HTTP_STREAM_JOB
I suppose Chrome could bypass the socket pools too when refreshing, but I'm not convinced that would be a good general policy.
@bnc, @mmenke WDYT?
As a workaround, I suggest flushing the socket pools via chrome://net-internals/ when you need this.
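The pooled-connection behavior described above can be sketched outside the browser. The following stdlib-Python sketch (hypothetical hostname "wpmanagedsecure.test", with a plain dict standing in for the hosts file/DNS layer) shows why remapping a name has no effect on requests that reuse an already-open keep-alive socket:

```python
import http.client
import http.server
import threading

def start_server(body):
    """Start a throwaway local HTTP/1.1 server that always returns `body`."""
    class Handler(http.server.BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # keep-alive, like a pooled browser socket

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass

    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

old_srv, old_port = start_server(b"old server")
new_srv, new_port = start_server(b"new server")

# Stand-in for the hosts file / DNS cache: hostname -> address (a port here).
dns = {"wpmanagedsecure.test": old_port}

# Connect while the name still points at the old server; resolution
# effectively happens once, at connection setup.
conn = http.client.HTTPConnection("127.0.0.1", dns["wpmanagedsecure.test"])
conn.request("GET", "/")
first = conn.getresponse().read()   # b"old server"

# Repoint the name. The already-open keep-alive socket is unaffected,
# analogous to the pooled HTTP2_SESSION in the net-log above.
dns["wpmanagedsecure.test"] = new_port
conn.request("GET", "/")
second = conn.getresponse().read()  # still b"old server"

# Only a *fresh* connection (i.e. a flushed socket pool) sees the new mapping.
fresh = http.client.HTTPConnection("127.0.0.1", dns["wpmanagedsecure.test"])
fresh.request("GET", "/")
third = fresh.getresponse().read()  # b"new server"
```

This is only an analogy: Chrome's socket pools are shared across tabs and keyed on more than host/port, but the core point -- name resolution is consulted at connect time, not per request -- is the same.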
,
Feb 16 2017
I'm sorry to hear that this slows down your work. I do not know your workflow, but could you enter the IP in the omnibox instead of the hostname to work around this issue? Also, I'm interested to hear whether clearing the socket pool cache as suggested in comment #5 solves your issue. Otherwise, this behavior really sounds like working as intended to me. Why wait for DNS resolution if there is a connection already open to the IP address that the host previously resolved to? Honestly, I am surprised that other browsers behave differently.
,
Feb 16 2017
If this were an application running on its own node, sure. But this is a shared hosting service, so typing in an IP address isn't an option. This really comes down to migrating sites, which is what I do. I move sites from one server to another: I'll grab the data, update the IP in the hosts file, and upload the site data to the new server to make sure everything works locally. When I'm done, I update public DNS with the new IP address and the migration is smooth. The problem is that once I make that change, I have to wait for Chrome, delaying my work. I'll have to set up a test to confirm that clearing the socket pools works.

There is more to this issue. If the end users of the websites I migrate previously had an open socket, they won't get the new IP address until that socket times out -- after the migration they will still be talking to the old server, unaware. It's one thing for this issue to affect me, but it also affects my clients and especially their end users, to whom I can't easily explain it. And what about people using round-robin or HA DNS where the SSL decryption is performed by the nodes? How is it useful for Chrome to hold on to an old IP address for some undetermined amount of time when the developer intended a new IP address to be used?

So to answer your question -- why wait for DNS resolution if there is a connection already open? -- because the connection that is already open is wrong. It's not the IP address I intend to serve from; I need Chrome to check DNS to make sure it's getting the latest information. No disrespect, but my mind is kind of blown that you don't see the problem this creates. I just had an issue with a client where I migrated their site but the old one was served instead: Chrome was pulling from the old server and not showing the changes I had made on the new server. It was frustrating, to say the least.
,
Feb 16 2017
To make another point: if checking DNS on reload is such a waste of time for HTTP/2, why does Chrome bother checking DNS for plain HTTP? What is the point of hitting the reload button? Why is there even a difference?
,
Feb 17 2017
RE comment #10: Not sure HTTP2 is the right label, as this applies to HTTP/1 as well.
,
Feb 17 2017
Labels aside, do you understand what my concern is? Do you see the problem, or do you consider this a non-issue? If so, why?
,
Feb 17 2017
@sean: I understand your problem and am sympathetic to it. The problem here is around the semantics of what Ctrl-R should do and what the common cases are (your use case is more niche, from a developer perspective). +kenjibaheaux, who has been looking at the refresh functionality and how it relates to cache bypass: this is an example where user expectations do not match the implementation. Although the disk cache is bypassed, the socket pools are not (so an existing session may be re-used). Although not tied to the refresh feature, a similar problem came up in issue 128617 (where internally we want the ability to truly reload a URL).
,
Feb 17 2017
I can understand the complexity around the issue. My only input to add, as someone who administrates websites and supports end users rather than an app developer: the browser logic shouldn't exclude the latest IP when refreshing. It should take its orders from a load balancer, DNS, or whatever people like me decide, because as efficient as it is to reuse a session, in the wild there are endless reasons why the underlying IP might need to change. Since it's not practical to put pop-ups on every website and app instructing end users to go to net-internals and remove sessions whenever those changes occur, the unsophisticated end user needs a practical way to make things right when they notice a website or app isn't doing what they expect, and the people supporting those users need a simple, practical solution. Traditionally, that has been hitting the reload button to get the freshest version of the site.
,
Feb 20 2017
We should not make the current Reload any less efficient than it currently is. A hard reload seems a better fit for this:
- Is this also an issue with a hard reload (i.e. Shift+F5, Ctrl+Shift+R)?
- If it is, are there any concerns about fixing the hard reload feature to also cover this use case?
+toyoshim@, who did the actual implementation work and might have other suggestions.
,
Feb 20 2017
The use case in comment #8 looks too specific to the reporter's particular workflow. IMO, the reporter should solve this workflow-specific problem within the workflow, like manually flushing the DNS cache. E.g., the old server can close all idle sessions once the new server is ready, or you can simply shut down the old one manually. Or the "Flush socket pools" button at chrome://net-internals/#sockets would work. I do not think such a workflow-specific hack for developers should drive the design of user interfaces that all users use. If we really need this, perhaps we should have a "flush socket pools" button in DevTools?
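The "old server closes its idle sessions" suggestion can be illustrated with a minimal, hypothetical sketch (plain HTTP/1.1 with `Connection: close` here, rather than an HTTP/2 GOAWAY frame): after a migration, the old server can refuse connection reuse, so returning clients must open a fresh connection -- and, once their DNS caches expire, resolve the new address.

```python
import http.server

class DrainingHandler(http.server.BaseHTTPRequestHandler):
    """Old-server handler that refuses keep-alive after a migration."""
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = b"site has moved; please reconnect"
        self.send_response(200)
        # Forbid keep-alive: the client must reconnect for its next request,
        # which gives an expired DNS cache a chance to return the new address.
        self.send_header("Connection", "close")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass
```

Under HTTP/2 the analogous mechanism is the GOAWAY frame that graceful server shutdown paths emit; whether a given hosting stack exposes any control over that is, as the thread notes, another matter.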
,
Feb 20 2017
Re comment #16: I had this same issue when doing a hard reload, so it would be great if you could fix the hard reload to cover this. Re comment #17: I don't always have access to the old server when I'm migrating sites; that's not always in my control. I'm also still concerned that after a migration, frequent users of the website will still be sent to the old server, unaware. And for the users who do contact me when they notice something is wrong, it would be much easier to instruct them to do a hard reload than to walk them through flushing sockets.
,
Feb 20 2017
I still feel this is not a client-side problem, but something server-side developers should account for in the migration. As I already noted, DNS propagation is already somewhat out of your control. Also, in terms of HTTP/2, sending a GOAWAY frame on the HTTP/2 session looks like the right approach to prompt active users to find the new server. If you do not have permission to access the old server... well, that might mean you are not the right person to control it -- the same reasoning applies to a malicious attacker trying to hijack a session. Anyway, this is a feature request for web developers, and not for the majority, but for a very limited set of developers doing something very special. Even for such developers this isn't daily use, and workarounds exist. I feel this is "WontFix".
,
Feb 20 2017
Is my experience being compared to what a malicious attacker does? An example of why I wouldn't have access to a server: I take on a client who is leaving a non-responsive service provider. I have admin access to WP-Admin but not to the underlying server. I'm able to transfer their site with that information and change DNS to point to the new server.

As far as DNS propagation is concerned, yes, technically it can be out of my control. In practice, most DNS changes I make via GoDaddy or Cloudflare take effect immediately, with most of my clients on the west coast. There is no waiting multiple hours for updates to push through. I'm sure there are some relic DNS servers out there, or ISPs with slow policies, but for the most part there isn't much waiting with DNS these days.

I feel your perspective assumes that every developer or sysadmin has the same technical proficiency you do. For example, how exactly would one send a GOAWAY frame in HTTP/2 from WordPress? Or from a Plesk or cPanel shared environment?

As much as you're deciding whether to fix this issue by narrowing it down to my particular use case, I feel it is possible there are many other use cases and situations this issue applies to that aren't being reported or considered. Lastly, if I were to record this event and compare it to how Edge and Firefox handle the same situation, would that persuade any decision-making among the developers in this group? My memory might be fuzzy at this point, but I do remember that the last time this happened, I tested a simple reload in Firefox and Edge and they worked as I expected. Maybe they implemented this correctly because they, too, see issues beyond my own with this functionality.
,
Feb 20 2017
Anyway, I do not think reload should flush the socket pools. Reload is a tab-oriented operation, but socket pools are shared across all open tabs. That means if reload flushed the socket pools, it could affect the behavior of other pages, which should not happen. It might help some developers, but it would be harmful for billions of users.
,
Feb 20 2017
Also, I feel this is not the right place to discuss this. You should probably raise this potential issue at the IETF or W3C. I do not want to implement reload features that the spec does not require.
,
Feb 26 2017
I just had an issue arise where I again needed to bypass a Cloudflare proxy with a local hosts-file entry to hit the server directly. I cleared DNS locally and in Chrome, then cleared all sockets and tried every kind of reload possible, and I was still seeing a Cloudflare error instead of anything related to hitting the server directly.
,
Feb 27 2017
I think you should clear all DNS caches between your client end and server end, including any intermediates, such as the proxy. Please investigate the details and clarify what is wrong or different from the spec.
,
Feb 27 2017
It is technically impossible for us to clear the DNS cache on a proxy server. The HTTP request itself can ask for a fresh resource from the server, but there is no way to specify that the DNS lookup used to reach that server be fresh. (Also, if I'm reading correctly, DNS does not have a "bypass caches" flag.) The only feasible answer here is to use a short TTL when you expect a hostname to change, and wait it out. We could make it easier to clear the browser's cache, but that won't fix the caches in the OS, your router, your ISP, any proxies, etc.
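The asymmetry this comment describes -- HTTP can request freshness, DNS cannot -- is visible even in a few lines of stdlib Python (a hypothetical illustration; `localhost` is used only so it resolves anywhere):

```python
import socket

host = "localhost"  # stands in for a real hostname

# Name resolution happens here, before any HTTP is spoken. The resolver
# chain (OS cache, router, ISP...) may answer from cache, and there is
# no per-lookup "bypass caches" flag to pass.
addr = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)[0][4][0]

# The HTTP layer, by contrast, *can* demand a fresh copy of the resource:
request_headers = {
    "Host": host,
    "Cache-Control": "no-cache",  # tells HTTP caches to revalidate
    "Pragma": "no-cache",         # HTTP/1.0 equivalent
}
# A request sent with these headers bypasses HTTP caches, yet it still
# travels to whatever address the (possibly stale) lookup above returned.
```

Freshness of the address itself is governed only by the record's TTL, which is why the short-TTL-and-wait advice above is the practical answer.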
Comment 1 by eroman@chromium.org, Feb 16 2017