Windows: ERR_CACHE_MISS on subresources causing page load failures
Issue description

Version: 53.0.2756.0 dev-m (64-bit)
OS: Win 7

What steps will reproduce the problem?
(1) Log in to americanexpress.com on an account with two cards
(2) Attempt to change the active card

What is the expected output?
(3) The active card is changed

What do you see instead?
(3) JS fails to execute
(4) Examine dev tools
(5) See that a JS file failed to load, due to ERR_CACHE_MISS
(6) Right-click the URL in dev tools and open it in a new tab
(7) Close the tab
(8) Reload the Amex page
(9) Change cards, to great success
Jun 7 2016
Jun 10 2016
Jun 20 2016
Are the cache misses on POSTs?
Jun 20 2016
Nope. They're on GETs for short-lived resources. Gavin, care to share any details of your investigation into this bug? :)
Jul 6 2016
Moving this nonessential bug to the next milestone. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Jul 9 2016
I occasionally get this error on artist/album/song pages on https://bandcamp.com/, using Chrome 53.0.2785.8 dev-m (64-bit) on Windows 10. I attached a screenshot and a network log. The offending script is https://bandcamp.com/cart/contents.js with a long query string. The console says it's cross-origin (when you're on <artistname>.bandcamp.com) and loaded using document.write; could that be related?
Jul 19 2016
Gavin, do you have an update on this? Is this related to a Finch experiment? I ask because this issue is, once again, occurring daily, at least within Dev (although due to a recent bad Finch push, I was forced to reset Finch seeds) The current variations for me in Chrome 53.0.275.8 are 6a89113b-e4b26eb4 16e0dd70-3f4a17df 31101bd6-f23d1dea b3888d8d-afba0f91 da89714-4ad60575 4a449931-f23d1dea 6345b824-3f4a17df 7c1bc906-f55a7974 ba3f87da-a6404135 f049a919-3f4a17df 31362330-3f4a17df 290c251-3f4a17df f15c1c09-ca7d8d80 dd4da2fc-ca7d8d80 43d0dd1e-3f4a17df 93731dca-3f4a17df 2e109477-bcf405c8 6d340565-ca7d8d80 9e243dd-3f4a17df 9e5c75f1-78bc645e 6488ba84-3f4a17df c3ad0f6b-ca7d8d80 6b121ae7-f23d1dea f5dd6118-2f5721af f79cb77b-3f4a17df b7786474-64e7d9a 23a898eb-4c21fc8d 74df3f1-3f4a17df 868bda90-f23d1dea 4ea303a6-8467a4eb d5b671a5-f23d1dea 7aa46da5-669a04e0 fe9bec35-80f9a33e 9736de91-ca7d8d80 dbffab5d-ca7d8d80 de03e059-f23d1dea 30e679f-3f4a17df ad6d27cc-c6d02f41 ca314179-ca7d8d80 69bf80fa-f23d1dea 867c4c68-3d47f4f4 6844d8aa-669a04e0 3ac60855-486e2a9c f296190c-9eabb163 4442aae2-6bdfffe7 ed1d377-e1cc0f14 75f0f0a0-a5822863 e2b18481-e1cc0f14 e7e71889-e1cc0f14 fe05be5f-97e7f871 61b920c1-e2a9a88d 46567c16-3f4a17df cd9fec2f-cd9fec2f 828a5926-3d47f4f4 b0dc61a1-3f4a17df Could you ack this bug with the update (see Comment #5) so that we can properly assess the risk of breakage?
Jul 25 2016
@gavinp: Gentle Ping. Hey, would you mind providing an update on the above issue? I really appreciate your help. Thank you!
Jul 25 2016
I was out, I'm back now, will update shortly
Jul 27 2016
This is definitely related to a Finch experiment, and it's intriguingly Windows-only. The cause appears to be related to a filesystem distinction; I'm trying to narrow down which operations can cause that (weird) error. Should we turn down the Finch experiment on Windows to make the error go away in the meantime?
Jul 27 2016
Looks like it's a truncation. It could be a doom failure with the novel Windows doom mechanism, which is not 100% successful.
Jul 27 2016
OK, this is interesting: simple cache is not involved. human@'s bandcamp example is from a system using the blockfile cache. Further, the request is coming in with LOAD_ONLY_FROM_CACHE, probably from a bandwidth estimator:

929: URL_REQUEST https://bandcamp.com/cart/contents.js?client_id=CFF971279D82B79ED857B894AD916969D9ED9F01ED53B2CFDA08F76BC34B7A2A&bust=1468027144039&mm=desktop
Start Time: 2016-07-08 21:19:04.041

t=709 [st= 0] +REQUEST_ALIVE  [dt=22]
t=709 [st= 0]    URL_REQUEST_DELEGATE  [dt=0]
t=709 [st= 0]   +URL_REQUEST_START_JOB  [dt=22]
                 --> load_flags = 33032 (MAYBE_USER_GESTURE | ONLY_FROM_CACHE | VERIFY_EV_CERT)
                 --> method = "GET"
                 --> priority = "MEDIUM"
                 --> url = "https://bandcamp.com/cart/contents.js?client_id=CFF971279D82B79ED857B894AD916969D9ED9F01ED53B2CFDA08F76BC34B7A2A&bust=1468027144039&mm=desktop"
t=710 [st= 1]    URL_REQUEST_DELEGATE  [dt=0]
t=710 [st= 1]    HTTP_CACHE_GET_BACKEND  [dt=0]
t=710 [st= 1]    HTTP_CACHE_OPEN_ENTRY  [dt=21]
                 --> net_error = -2 (ERR_FAILED)
t=731 [st=22]   -URL_REQUEST_START_JOB
                 --> net_error = -400 (ERR_CACHE_MISS)
t=731 [st=22]    URL_REQUEST_DELEGATE  [dt=0]
t=731 [st=22]   -REQUEST_ALIVE
                 --> net_error = -400 (ERR_CACHE_MISS)

So this is a cache-busted URL, and an only-from-cache request for it is essentially guaranteed to fail in this situation. I think this should be reassigned.
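To make the failure mode concrete, here is a minimal sketch (not bandcamp's actual code) of the cache-busting pattern visible in the log above: the bust parameter appears to be a timestamp that changes on every page view, so a later ONLY_FROM_CACHE request for the freshly generated URL can never match an existing cache entry.

```js
// Illustrative sketch only: the parameter names come from the URL in the log,
// but this generating code is an assumption, not the site's actual source.
var clientId = 'CFF97127...'; // hypothetical placeholder; the real value is per-user
var url = 'https://bandcamp.com/cart/contents.js' +
          '?client_id=' + clientId +
          '&bust=' + Date.now() +   // changes on every page view, defeating cache reuse
          '&mm=desktop';
document.write('<script src="' + url + '"><\/script>');
```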
Jul 27 2016
Shivani, over to you.
Jul 27 2016
rsleevi: Looking at your log, it could be the same thing. The relevant field trial is DisallowFetchForDocWrittenScriptsInMainFrame. DevTools should have shown you a message like the one human@ reported, but that message is misleading; for many desktop users the intervention is simply applied at random, not because of a slow connection.
Jul 27 2016
CC+ jkarlin and bmccquade for more context.
Jul 27 2016
I can definitely confirm that no such message appears in DevTools.
Jul 27 2016
Thanks Gavin for investigating. The console warning starting with "A parser-blocking cross-origin script..." should display regardless of whether or not you are in the field trial, so if Ryan doesn't see it, it suggests that the issue on the amex site is due to something different. Ryan, can you try to repro on the Amex site and verify that you don't see any console warning? If you don't see the warning, then it sounds like we may be dealing with two separate issues here.
Jul 27 2016
The DevTools warning message was added after 53.0.2756.0. That explains why it didn't appear in the initially reported use case.
Jul 27 2016
Ah, I see Ryan's message indicating that the message doesn't show up for him in devtools. Ryan, are you still seeing this issue on the amex site? And to confirm, you do not see the 'A parser-blocking cross-origin script...' message in devtools when you do encounter this issue on the amex site?
Jul 27 2016
When I pinged again in Comment #8, I did not see any such string in the console log. As the issue happens intermittently, I'll poke back, but it may be compounded by having reset all of my field trials due to a bad push to Dev a week or two ago.
Jul 27 2016
Thanks! You can force-enable the behavior by going to about:flags and enabling 'Block scripts loaded via document.write'. If you see the breakage again after enabling that behavior, it suggests that might be the cause. If you don't see the breakage with that flag enabled, it suggests something else is causing it on the amex site.
Jul 27 2016
Also, I checked that Ryan's initially reported version 53.0.2756.0 did not yet have the dev tools warning code. human@ saw the issue on 53.0.2785.8 which had the dev tools warning implemented.
Jul 28 2016
Ryan, can you try to repro this on M53 or later and see if you also see the console warning mentioned when you visit the page? You can force-enable the behavior in about:flags (see earlier comment for details). It's not clear if the issue you're seeing on the amex site is the same as the bandcamp issue also reported here. If you could include the URL of the script that fails to load on the amex site, as well as the URL of the page you were on, that'd also be helpful. Please reassign to shivani once you've repro'd. Gavin, could you also describe the process you used to repro this on the bandcamp site? I can't repro locally on my machine.
Jul 28 2016
Removing owner (so I don't get nagged to fix a bug I don't understand), setting Needs-Feedback (so I do get nagged if I don't get back to you :P)
Jul 28 2016
OK, I just tried in 53.0.2785.30 dev-m (64-bit), and this time I got the console log. The source of this is https://nexus.ensighten.com/amex/amexhead/Bootstrap.js (line 122), blocking access to https://service.maxymiser.net/cdn/americanexpress/js/mmcore.js

It also blocks https://www.aexp-static.com/nav/ngn/js/inav_cc_r2.js?v=0701_01, loaded at line 852 of the accounthome page (https://online.americanexpress.com/myca/accountsummary/us/accounthome?request_type=authreg_acctAccountSummary&Face=en_US&omnilogin=us_homepage_myca).

*However*, under chrome://flags, "Block scripts loaded via document.write" is not enabled (that is, it's greyed out and "Enable" is a link). So it seems there's no way I can disable this feature to test. Perhaps you didn't wire up the Finch/FieldTrial to the flag?
Jul 28 2016
Thanks Ryan! Here's a page where we should be able to repro this without having to log in to the amex site: https://www437.americanexpress.com/smartsearch/search/ssearch.jsp

On this page, Bootstrap.js writes the following scripts:

<script type="text/javascript" src="//service.maxymiser.net/cdn/americanexpress/js/mmcore.js"></script>
<script type="text/javascript" id="mmpack.0" src="//service.maxymiser.net/platform/us/api/mmpackage-1.8.js"></script>

I think I came across one other site where the maxymiser domain was involved in document.writes. It seems they're an A/B testing company. We should reach out to them to see if we can get them to migrate to async script loading.
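For illustration, a minimal sketch of what an async version of that load could look like, using standard DOM APIs; the URL is the mmcore.js one quoted above, and everything else is assumed rather than taken from Maxymiser's or Amex's actual code.

```js
// Hypothetical async replacement for the document.write()-injected tag above.
// A dynamically created script element is not parser-inserted, so it does not
// block parsing and is not subject to the doc.write intervention.
var s = document.createElement('script');
s.src = '//service.maxymiser.net/cdn/americanexpress/js/mmcore.js';
s.async = true;
document.head.appendChild(s);
```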
Jul 30 2016
Bryan, the net-internals log that Ryan provided privately showed the LOAD_ONLY_FROM_CACHE flag on a single JavaScript resource, and the client was opted in to the trial group for the doc.write blocking, which seemed quite determinative to me on this matter.
Jul 30 2016
Bryan, I didn't repro it on the bandcamp site. I reviewed the code, and found the combination of the console log in the one case, the two failures being opted in to this trial, the two failures using different caches, and the LOAD_ONLY_FROM_CACHE flag being present quite persuasive. There are very, very few code paths that can set that flag on a JS request. It would be quite the coincidence if this isn't the cause, so that was good enough for me.
Jul 30 2016
Thanks Gavin. I saw the net internals trace with a bandcamp URL and assumed you'd repro'd. Sounds like that came from the netlog attached to the bug; I just misunderstood where you got the net internals from.

After spending more time on bandcamp.com, I'm able to repro and confirm they are still using doc.write to load scripts. It turns out this is a very addressable case in our implementation. The site doc.writes a script on the origin https://bandcamp.com/: https://bandcamp.com/cart/contents.js?... This script is sometimes requested from subdomains of bandcamp.com, for example https://eviltwinrecords.bandcamp.com/album/hip-hop-instrumentals-vol-i

The doc.write script blocking intervention already has a provision for allowing same-origin scripts to load, since those are clearly first-party content. We'd debated whether we needed to generalize this heuristic to TLD+1 matching, and decided the complexity wasn't worth it until we had an example site where this would be beneficial. Looks like bandcamp is our site. I opened bug 632986 to address this particular case. Let's move the bandcamp case to that bug, and focus this bug on the amex case.
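As a rough illustration of the TLD+1 generalization being discussed (the real check lives in Chromium's C++ and would use the Public Suffix List), a naive page-level sketch might look like the following; the two-label comparison and the function name are made up for the example.

```js
// Naive sketch: treat a doc.written script as first party if its host shares
// the last two DNS labels with the page's host. A real eTLD+1 check must use
// the Public Suffix List (e.g. "foo.co.uk" needs three labels), so this is
// illustration only.
function sharesNaiveRegistrableDomain(pageHost, scriptHost) {
  var tail = function (host) { return host.split('.').slice(-2).join('.'); };
  return tail(pageHost) === tail(scriptHost);
}

sharesNaiveRegistrableDomain('eviltwinrecords.bandcamp.com', 'bandcamp.com'); // true
sharesNaiveRegistrableDomain('eviltwinrecords.bandcamp.com', 'example.com');  // false
```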
Aug 2 2016
I spent a little time learning about ensighten, whose script is performing the document.write of other scripts. Ensighten seems to provide an A/B testing and optimization solution. Here is a doc where they talk about sync vs async: https://www.ensighten.com/blog/deploying-optimization-through-tms-must-have-checklist/

"Optimization tools are a bit unique in that it is necessary to deploy their tags Synchronously to the page, in order to avoid the dreaded "flicker" effect in which the default content briefly appears before it is switched out with the alternate version from your test. ... Sure, you can load Ensighten asynchronously, but to fully avoid flicker (and enjoy other benefits), load Ensighten synchronously at the head, knowing that our clients trust us to speed up their site performance."

They're correct that loading async and then changing content can cause flicker, depending on what the loaded script actually does to the page. There are other, less performance-harmful techniques to mitigate the issue, however, such as hiding the specific page elements to be modified by the A/B test until the async script has loaded, while allowing the rest of the page to finish loading. This adds some complexity to the solution. Sync is certainly the simplest approach, but it has the drawback that it slows down page loads.

The bit at the end, "knowing that our clients trust us to speed up their site performance", is just wrong, however. Loading sync can't speed up site performance. I'm assuming this is a marketing line to try to address clients who are raising legitimate concerns about sync loading slowing down their pages.

Here's a side-by-side page load for a page that uses ensighten, on a simulated DSL connection. On the left, we see the page as-is with ensighten enabled. On the right, we load the page while blocking ensighten. We can see that the page clearly gets faster as a result of not loading ensighten. This is because the ensighten script is loaded synchronously, and then performs a series of document.writes to load additional scripts synchronously, which are particularly harmful to loading performance because their loads are serialized. It's worth noting that this is the performance observed on a DSL connection; on a slower connection, such as mobile, the impact would be even more significant.

https://www.youtube.com/watch?v=PcUdgmuKFXU&feature=youtu.be

And here's the WebPagetest result A/B waterfall: http://www.webpagetest.org/video/compare.php?tests=160802_KD_1355-r:7-c:0,160802_RE_135D-r:3-c:0

It's worth noting further that by loading these scripts synchronously, both the ensighten and maxymiser domains become 'single points of failure' for these page loads. If either domain goes offline, these pages will hang for 20 seconds, until the script load times out. More on scripts as single points of failure here: https://www.stevesouders.com/blog/2010/06/01/frontend-spof/. This is yet another reason for third-party script providers to avoid synchronous loading.

All that said, I think it's clear that we could not roll this intervention out to all users, as it breaks some pages, as we knew would happen. Even though breakage is rare, the tradeoff is clearly not worthwhile for users on faster connections.
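To make the "hide the elements under test" mitigation mentioned above concrete, here is a minimal sketch. It assumes a hypothetical .ab-test-pending CSS class that hides only the elements the test will modify; the script URL is the Maxymiser one discussed in this bug, and the timeout value is an arbitrary choice for the example. The rest of the page continues to parse and render normally while the script loads asynchronously.

```js
// Hide the A/B-test targets (via a CSS class assumed to set them invisible)
// until the async script has loaded, with a timeout fail-safe so the content
// never stays hidden if the third party is slow or unreachable.
document.documentElement.classList.add('ab-test-pending');

function reveal() {
  document.documentElement.classList.remove('ab-test-pending');
}

var s = document.createElement('script');
s.src = 'https://service.maxymiser.net/cdn/americanexpress/js/mmcore.js';
s.async = true;
s.onload = reveal;
s.onerror = reveal;
document.head.appendChild(s);

setTimeout(reveal, 1000); // arbitrary 1s cap on how long content may stay hidden
```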
Aug 2 2016
Removing ReleaseBlock-Stable, since this issue is caused by a field trial on dev channel. I'm going to turn down the field trial on dev, since we have collected enough data from it.
Aug 3 2016
Aug 7 2016
More examples of broken sites (and I'm on a 100 Mbps down / 10 Mbps up plan): http://wifi.boingo.com/wireless-internet-hotspots/ is blocking http://ecn.dev.virtualearth.net/mapcontrol/v7.0/7.0.20160525132934.57/js/en-us/veapicore.js from mapcontrol.ashx?v=7.0:1

This intervention definitely seems to have... issues. It would be good if you could prioritize rolling it back now that you've collected data :)
Aug 7 2016
Thanks Ryan. I confirm that this page is broken by the intervention. I appreciate your attention to detail here. This has been running on dev for over 2 months and we've received only a handful of reports of brokenness, which is in line with what we expected, but we aren't going to learn anything more by keeping it running, so I agree we should put it on pause. jkarlin, WDYT about turning this intervention down on dev and canary, at least until we have new changes to evaluate? BTW Ryan the console warning will continue to show up whether or not the intervention is actually active - the goal for the console warning is to get developers' attention to encourage awareness of the performance issues associated with loading scripts via document.write. We need to update that warning to link to a page that provides more information so devs know how to fix the issue - that's something we're planning on addressing soon.
Aug 9 2016
For future interventions like this, could we:

1) Make sure that the flag (chrome://flags) is properly configured so that a user can opt out of Finch, if appropriate? The way it's implemented is a Boolean (Enabled/Disabled), which prevents me from being able to actively opt out when it's set via Finch. If you compare with many of the other flags, they have an Enabled/Disabled/Default state, which still allows experimentation but also allows user control.

2) Provide better surfacing of this at the onset. Even as a user running Dev (at home; Canary at work), finding out when I'm running into this issue can be subtle, and the behaviour it causes for various sites just borders on "not working". While I can appreciate "a handful of reports" coming in, realize that most users aren't going to report when it just "doesn't work", certainly not to the point of opening dev tools. If it weren't because it was part of banking, and because of the SimpleCache experiments, I would have just chalked it up as a transient server issue.

3) Better communicate the state. The chrome://flags entry makes no mention of the bandwidth issues, but the console.log() does. Worse, on a home connection, the console.log() seems misleading at best, because I'm not under spotty network connectivity. It's unclear if you're blocking after a timer or unconditionally, but it would be good to make sure your user-facing messaging is consistent with the implementation.

4) Better proactive testing. For example, could you have used RAPPOR to monitor the top N domains for which a user would have experienced such a block? That would have allowed you greater insight into the possible (negative) user impact before you deployed the experiment for the (positive) user impact.

5) Better communication of success/failure metrics. We traced this down to the experiment almost 2 weeks ago, and at least a week ago you concluded you have enough data. At what point do we disable this?

I'm concerned that, while a good intervention, it hits a very rough edge of "things don't work consistently or reliably and fail subtly and significantly", and that seems to be a high bar for experimenting on large populations of users. I think this has the potential to offer good insight into possible interventions and sharp edges, but I'm also concerned that this might show we're a bit too cavalier or busy to manage the interventions when they break.

At this point, I'm running into the issue on some site on a daily basis, and I've stopped updating this bug, because it doesn't seem that one-offs are valuable here, and you've already collected your data. It would be great not to have to switch off Dev and reset my profile.
Aug 9 2016
1) Filed https://bugs.chromium.org/p/chromium/issues/detail?id=635743

2) Previous discussions of surfacing UX for interventions have been met with skepticism (ideally, the breakage should be minimal and easily fixable through existing user gestures, such as reload). Maybe there is something to be said about making it easier for the dev/canary/beta(?) crowd to discover these and help us catch regressions.

3) Agree that the messaging can be improved. One concrete thing that we have planned: have the warning message point to the intervention spec.

4) I'm skeptical of RAPPOR for this particular context, but I agree that we should use this as an opportunity to think about other proactive testing/discovery means (couple of ideas here).

5) Besides the metrics side, the experiment on dev was intended to run long enough to discover remaining problems. Ideally, a strong solution for #4 would have been better. Ultimately, the team and the leads made a decision based on the assumption that the breakage would be minimal (e.g. lab tests), easily fixable (a reload away = a natural user gesture to fix broken pages) and reasonable given how dev/canary are positioned QA-wise [1].

FWIW, we will probably take a similar approach for a few other upcoming interventions, especially when the upside far outweighs a remaining risk of breakage (despite proactive efforts). Example: Scroll Anchoring.

Thanks for your feedback.

[1] https://www.chromium.org/getting-involved/dev-channel#TOC-How-do-I-choose-which-channel-to-use-
Aug 9 2016
There may be a bug in the reload code; I tried again tonight and it didn't make the Amex problem go away. I also leave Chrome running non-stop, and notice the problem returns sporadically; that is, within one browsing session, the problem manifests, resolves, then manifests again. That's part of what I meant by non-determinism. It's unclear whether it's by server action or not, but it contributes to the feeling of "not working" (that is, buggy), compared to being an active intervention.

I would also note that I don't think [1] is reasonable when it's intermittently breaking a site. It would be reasonable if you intentionally plan a change in behaviour: you WontFix the bug and move on. Or, if it is a bug, you prioritize fixing the bug so you don't regress Dev/Beta. Either serves to clearly communicate expectations: it won't happen, or it's being fixed. However, having functional behaviour differences (code running in Dev but disabled in Beta) seems to be at odds with the Finch guidelines, because it means your Dev users aren't testing the code Beta users will run. This is why PMO-led experiments (like SyzygyAsan or DCHECK or friends) run as time-limited trials, so that any breakage is temporary (O(days)). That's certainly what Dev/Canary advertise: you suffer the breakage because the fix is always a day or two away. Weeks is less than ideal. For sure.
Aug 9 2016
I looked into it a bit. I think the main reason behind the intermittent breakage and the inability to fix it by reloading the page is that a few of the document.write scripts have a query string that depends on what the user has been doing or is doing. In other words, we never have the right copy of these in the cache :(

Hmm, strawman: for V1, don't intervene for scripts with potentially unstable bits in their URL (query parameters, URL fragment). Add a UMA histogram to find out how often this happens and whether it's worth finding a way out. Bryan, Josh and Shivani, wdyt?
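As a rough sketch of that strawman (the real check would live in Chromium's C++, so this page-level predicate and its name are purely illustrative), the idea is simply to leave any doc.written script alone if its URL carries a query string or fragment:

```js
// Only consider a doc.written script URL "stable" (and thus eligible for the
// intervention) if it has no query string and no fragment; anything with
// per-navigation parameters would otherwise keep missing the cache on reload.
function isStableScriptUrl(urlString) {
  var url = new URL(urlString, document.baseURI);
  return url.search === '' && url.hash === '';
}

isStableScriptUrl('https://bandcamp.com/cart/contents.js?bust=1468027144039'); // false
isStableScriptUrl('https://example.com/static/app.js');                        // true (hypothetical URL)
```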
Aug 9 2016
Thanks again Ryan. On reload, we disable the intervention entirely. This is pretty well tested and I've confirmed it working in many contexts myself. Disable on reload is unaffected by query strings etc. The fact that the amex site remained broken after reload suggests that the Amex breakage may not actually be caused by the doc.write blocking. We never confirmed that was the cause of the breakage. We should do that. Many scripts get blocked by this intervention; most of the time it doesn't actually break the site. The fact that it remains broken on reload suggests to me that something else is going on here, and the blocked script may be unrelated. I'd like to debug this further to know for certain. I won't be able to investigate this myself as I am OOO.
Aug 9 2016
Sorry for the confusion. Reload does fix the current page, but because the doc.write script URLs morph (different query params) as the user navigates or submits forms, the website keeps breaking. Reloading after each navigation or action isn't practical :/
Aug 9 2016
Quick final note from me:
* we plan to proceed with this intervention for 2G and 2G-like connections, given the clear improvement in both load performance and successful page loads
* no plans to deploy this intervention beyond those connection types, since we also know that it breaks pages without delivering the same magnitude of improvement in page load performance or page load success rate
* let's stay focused on deploying this on 2G; we can consider things like heuristic disabling in certain contexts but I don't want us to get distracted
* we might spend additional time in the future to figure out how to deploy to additional connection types if we improve heuristics etc., but let's not spend time on that right now
Aug 9 2016
Final note: let's turn this off on dev channel as soon as possible. We aren't getting useful data and, as we know, it breaks pages, so there's no reason to continue running it on dev. Josh or someone else, can you disable it on dev?
Aug 9 2016
Yep, we'll turn down on dev channel asap. I was away on vacation until this morning. Still catching up on email.
Aug 9 2016
I will disable it on dev channel today. (Was out of office yesterday)
Aug 9 2016
Just saw Josh's CL on disabling it: https://critique.corp.google.com/#review/129744980
Aug 9 2016
Given these issues, who is the decision maker for the launch to 2G? Without wanting to appear disrespectful, I am concerned with this, and would like to escalate. I really appreciate Kenji's help in understanding the sharp edges, but subjectively, it feels like you're dismissing those concerns, Bryan, rather than trying to make sure our users have a reliable AND fast user experience. I would like to suggest there are still issues to work out before we push this to all users.
Aug 9 2016
I can play the decision-maker role. Will look into this some more.
Aug 9 2016
Just a side note, which may or may not be relevant. In looking at the top errors reported by the Net.ErrorCodesForMainFrame3 histogram, I noticed that "CACHE_MISS" (value 400) is among the top ones. Ryan and Charles suggested that this intervention might be the reason. If so, then I'd suggest that this is affecting way more users than you may have previously been aware of. For example, the CACHE_MISS error accounts for 0.33% of all pageloads on Android in India yesterday: https://uma.googleplex.com/p/chrome/histograms/?endDate=08-08-2016&dayCount=1&histograms=Net.ErrorCodesForMainFrame3&fixupData=true&showMax=true&filters=platform%2Ceq%2CA%2Ccountry%2Ceq%2CIN%2Cisofficial%2Ceq%2CTrue&implicitFilters=isofficial
Aug 9 2016
Main-frame misses would more likely have a different cause than this (this intervention should only affect subresource loads).
Aug 9 2016
I also think it's worth digging into, but is unlikely to be this specific issue.
Aug 10 2016
I spent a bit more time with amex. The repeatedly failing script (apparently some sort of payload/feedback for Maxymiser's A/B framework) does not appear to be crucial to the user experience. It's repeatedly failing during the user's journey on the website because its URL has a breadcrumb parameter. So, I withdrew the suggestion that this might be a V1-worthy issue. We are nevertheless looking into if/how we can improve the reload failsafe mechanism.

Ryan, I could not find any aspect of the website that was utterly broken and not fixable via a simple reload action. That being said, I don't have an account nor two credit cards to test your user journey. If you don't mind, could you try the following:
1. open devtools
2. proceed to the step where the functionality was broken
3. copy-paste the console warning and errors, including the full URLs of the failing scripts. Note: you might want to share them privately or do the diff yourself (see next steps).
4. reload the page, copy-paste the warning and remaining errors
5. confirm whether or not the functionality works again
6. continue on with your user journey up to a point where something else is broken
7. repeat step 3

This way we would know if the problem you are seeing is:
a. attributable to the intervention
b. and falls into the shortcomings of the current reload failsafe mechanism

Best,
Aug 10 2016
Kenji: Did you mean to delete your comment? It makes the account management interface completely unworkable. You cannot access any of the sub-page navigations from the account.
Aug 10 2016
Sorry for the deletion. The comment is roughly the same, save for a spelling mistake and the addition of a note that we are thinking about a followup to address the shortcoming of the reload failsafe. It would still help us if you could share the info asked in my previous comment.
Aug 10 2016
1) The issue is intermittent (presumably depending on the object cache parameters). The Maxymiser block is not the one that blocks the full functionality of the page. That's https://www.aexp-static.com/nav/ngn/js/inav_cc_r2.js?v=0701_01 , which presumably has a different cache lifetime. If this is blocked, the entire functionality of account management is broken (can't change pages, can't open chat moles).

2) When blocking Maxymiser's A/B framework, the URL varies with every different page you're on (the referring page). As a result, every time you click a new link, a new block happens, and a new refresh is needed.

Even though #2 is less serious than #1, I think it's grossly overstating the impact to suggest that simply reloading fixes it, when every new navigation (e.g. changing cards, viewing account summary) requires an additional click of Reload to get things functioning. Add in the account timers, and it seems that if you're logged out for inactivity, the process restarts (that is, reload on every page transition).