Issue metadata
operating in china behind a VPN @ 6 to 20k / sec on HTTPS pages is completely intolerable
Reported by luke.lei...@gmail.com, Dec 18 2016
Issue description

UserAgent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36

Steps to reproduce the problem:
1. go to china (anywhere will do)
2. use a VPN in the UK
3. set up an HTTP proxy on the VPN server
4. configure chromium to use the proxy
5. try connecting to a page. any page

What is the expected behavior?
pages to load... extremely slowly, but i expect them to load.

What went wrong?
where do i *begin* to describe what goes wrong here? there's so much it's almost unreal:
* firstly, there are a vast number of SSL connection timeouts
* the next layer is SSL *handshake* timeouts
* the layer after that is an interminable number of "RELOAD" page errors
* the layer after that is "connection reset" errors
* if the page *does* load, it often doesn't have the CSS included because that *also* failed to load (and wasn't retried properly)
* the "aw, snap" error occurs about 5% of the time
* HTTP POSTs cannot be completed (and will not work on refresh)
* once in every 200 page loads the *ENTIRE* browser hangs up and has to be terminated with "kill -9". no CPU usage is shown (the process is idle)
* on a regular and monotonous basis the page cache becomes corrupted, to the point where it simply cannot be trusted and has to be cleared regularly (every few days), thus completely defeating the object of *having* a cache in the first place. basically, due to the interruptions and failures, the "partially complete" page is stored in the cache and is replayed *INSTEAD* of the (correct) complete page.

basically, every single possible connection error that could possibly occur, and has previously been experienced at a tolerably low level, is magnified beyond acceptable levels and into complete insanity. connecting to accounts.google.com two days ago took ***TWENTY*** retry attempts, each one timing out after three to five MINUTES. after two hours it was finally possible to select the required menu option (set up a project so that i could register for oauth2 and use offlineimap, and stop using gmail... which is completely intolerable and unreliable... EVEN WITH BASIC HTML).

Did this work before? N/A
Chrome version: 53.0.2785.143
Channel: stable
OS Version: Debian / Testing
Flash Version: Shockwave Flash 22.0 r0

reproducing this *without* going to china should be quite easy: set up a network filter which delays packets randomly by anywhere between 350ms and 10000ms, throttles the connection to between 6 and 25k / second, and intermittently drops between 20 and 80% of packets. the 80% packet loss conditions will simulate the scenario where the internet is being used by state-sponsored hackers (pick a country of your choice attacking the country of your choice) using various brain-damaged IoT devices with default credentials "admin, admin", saturating the links between china and the rest of the world in the process.

it would be *really* handy to have the underlying HTTP and HTTPS sockets operate on top of a much more robust layer which does *not* assume that the connectivity is of the same quality as that experienced under the "ideal conditions" which are assumed to exist absolutely everywhere and taken for granted in the west.
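[Editor's sketch, not part of the original report: one way to approximate the network filter described above is tc/netem on a Linux test box, driven from Python. The interface name and the delay/jitter/loss/rate figures are assumptions to adjust for the actual test setup; netem's "rate" option needs a reasonably recent kernel, and the commands need root.]

```python
# Rough sketch only: approximate the conditions described above (multi-second
# random delay, heavy random loss, ~10 kbytes/sec ceiling) using tc/netem.
# "eth0" and all numbers are illustrative assumptions.
import subprocess

IFACE = "eth0"  # assumed interface name on the test machine

def degrade_network(delay_ms=3000, jitter_ms=2500, loss_pct=50, rate_kbit=100):
    """Add random delay (delay +/- jitter), random packet loss and a rate cap."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%",
         "rate", f"{rate_kbit}kbit"],
        check=True,
    )

def restore_network():
    """Remove the netem qdisc, returning the interface to normal."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    # roughly 0.5-5.5s per-packet delay, 50% loss, ~12 kbytes/sec ceiling:
    # in the same ballpark as the conditions reported in this issue
    degrade_network()
```

Running the browser against an interface degraded like this should make the timeouts, handshake failures and cache corruption described above far easier to reproduce than under normal lab conditions.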
Dec 18 2016
hilariously, on submitting this bugreport:
* hitting "POST" resulted in "bugs.chromium.org took too long to respond", RELOAD
* reload resulted in a *second* error which gave a "retry" message...
* ... that resulted in "whoops this page does not exist"
* dodgy use of the "back" button recovered the report
* HTTP POST / submit finally worked...
* ... only to have to be forced to REFRESH the page FOUR TIMES to see my own report!
* the really annoying "your email is visible" AJAX popup (the one that takes away keyboard focus...) simply WOULD NOT GO AWAY until the page was finally loaded...
* ... which took nearly five minutes from start to finish.

and this is just to submit one bugreport! luckily (this time) chromium didn't lock up or report "snap". that usually happens on page-refresh.
Dec 19 2016
Dec 19 2016
Thank you.

On 19 Dec 2016, 7:09 AM, "a… via monorail" <monorail+v2.463707639@chromium.org> wrote:
Dec 19 2016
ha! equally hilarious: the double-post wasn't deliberate. ordinarily i'd delete a double-post-submit, but i'm leaving it in to demonstrate that yes, these circumstances (operating out of china) pretty much hit every single reliability and connectivity-related bug that could possibly be encountered.

y'know... it's almost actually worthwhile sending chromium developers to china for a month or two, just so that (a) they can experience this for themselves, (b) they go "oh my god this is intolerable, i *have* to fix it", and (c) they go "whoopee! all those unreproducible unreliability issues that people report and i am forced to ignore because they're very low priority under normal conditions - i get to experience them reproducibly all the friggin time, yay!"

ok, doing "the usual"... taking a copy of the contents of the dialog box *before* hitting "submit"...
Dec 19 2016
Adding the proper bug label to help triage this better.
Feb 10 2017
hi, i appreciate you're adding a component "network vpn", but that's misleading. you really need to create a special category "network ridiculously-ultra-slow-and-ultra-unreliable". the proposed new category is one where the new goal of working well in, say, india or other emerging markets will feature heavily.

honestly, i really meant it: get visas for the core chromium team, tell them they're going to china for 30 days, lock them in a hotel in shenzhen or zhuhai and let them get on with it. the cheaper but less effective way is to create a simulated network environment as recommended above.

btw this networking problem is *ALSO* throwing up (helping to trigger) a GPU-related bug... because you forgot to put in some mutexes and there's a race condition between the networking code and the gpu-handling code. debian bug no 848895
Feb 11 2017
https://news.slashdot.org/comments.pl?sid=10230505&cid=53839091

this would be a really *really* critical omission on the part of the chromium development team if chrome does not have this capability!
Apr 21 2017
Perhaps this is related to this issue (https://bugs.chromium.org/p/chromium/issues/detail?id=380995) which was merged into (https://bugs.chromium.org/p/chromium/issues/detail?id=384149).
Apr 21 2017
first one is visible, second one says "you do not have permission to view this bugreport". not really very impressed with that: merging a bug and then absolutely preventing and prohibiting people from accessing it.

yes, the first one is pretty similar circumstances... but *not* identical. in the setup that i have, it was the *proxy* that was being accessed over the VPN. so, as i explained in the repro advice: writing a network driver that simulates large delays, bandwidth throttling and massive amounts of packet loss should be sufficient to replicate this. it's *NOT* specifically related to VPNs: comment #6 is totally misleading and this bug has been completely miscategorised.
Jul 12 2017
https://tech.slashdot.org/comments.pl?sid=10845337&cid=54785169

this is a very useful extra piece of information that demonstrates what i am talking about. basically this is a person *IN THE UNITED STATES* who happens to have *VERY OCCASIONAL* problems with HTTPS/SSL web sites. as a highly technical person (a long-time reader of slashdot) he gives a detailed report of these [very occasional] problems.

but when i was in china and was experiencing these difficulties (WHICH HAVE NOTHING TO DO WITH A VPN, AS THIS BUG HAS MISTAKENLY BEEN MARKED, AND THERE HAS BEEN NO RESPONSE OR ACTION TAKEN TO CORRECT THAT), the absolutely truly dreadful connectivity *100% reliably* triggered this otherwise "hard-to-reproduce" problem and many, many others.

it *really is* important that you, the chromium team, begin to understand and appreciate how crucial it is that you add packet-throttling, high latency and deliberate random packet dropping to the chromium test suite, *particularly* on HTTPS traffic.
Jul 13 2017
Changing the bug component to the more generic Internals>Network component and cc'ing mmenke@ and yzshen@ (https://cs.chromium.org/chromium/src/content/network/OWNERS) for help in debugging this further.
Jul 13 2017
magic... thank you, and so sorry for having to raise a bugreport about a bugreport :)
Jul 13 2017
[+cbentzel, +lassey]

Dealing with bad connections is definitely an important area where we have a lot of room for improvement. Unfortunately, dealing with bad network conditions well, in a way that doesn't cause problems in other cases (like some resources being blackholed, or high-bandwidth cases), is not easy. It's non-trivial to determine whether a site is down or you're just on a bad network, and in the latter case, what we should do about it. It's one thing to block a page load by retrying a synchronous script load for a script vital to basic page functionality, and another to block an entire page load while retrying a synchronous ad script.

The data use team is looking at some improvements that may help mitigate the problem (fewer requests at once while on bad networks). This will hopefully help in at least some cases - for example, in the bufferbloat case, where the upstream server gets a ton of data destined for us and starts dropping packets from a queue just for the user's machine.
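[Editor's note: as a loose illustration of the "fewer requests at once" idea mentioned above (this is not Chromium code, and the limits and timeouts are made up), capping the number of in-flight fetches keeps a slow, lossy link from being saturated by the client's own parallelism.]

```python
# Minimal sketch, not Chromium's implementation: cap the number of requests
# in flight so a slow, lossy link isn't saturated by our own parallelism.
# max_in_flight plays the role of a per-host connection limit.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url, timeout=120):
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return url, len(resp.read())

def fetch_all(urls, max_in_flight=2):
    """Fetch urls with at most max_in_flight requests outstanding at a time."""
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        return list(pool.map(fetch, urls))

# e.g. fetch_all(subresource_urls, max_in_flight=1) on a ~10 kbyte/sec link
```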
Jul 13 2017
well... would you agree that some sort of userspace networking proxy (similar to BSD bandwidth-throttling kernel modules), which had two modes - one for "raw" traffic and another for fronting as an HTTP/HTTPS proxy - would be a good starting place for people to be able to 100% reliably reproduce these kinds of network conditions?

also, i would hesitate to recommend going to the lengths of trying to *automatically* identify bad conditions; instead, have an option: "no attempt to deal with bad networks / automatically detect and deal with bad networks / always assume bad network conditions".

the thoughts that i had were: people really *don't care* to hear about page timeouts (loading straight CSS, HTML, images and javascript) - they just want it loaded. just... retry already! and only report an actual error if it fails after a set number of times.

HTTP POSTs and AJAX calls? now that's a different matter entirely. HTTP POST calls... my feeling is that... well... as demonstrated by comments 1 and 2 (that really and truly genuinely was not deliberate!), users will just... have to put up with that. *but*, if the underlying socket hasn't even been established, that *really* should be retried transparently until it succeeds (up to a certain number of retries).

AJAX calls? again: establishing the connection should be retried until it succeeds. but the actual data sending? again, not really your problem. if the developer hasn't coded properly for timeouts, that's *not* your problem: you can't fix the fact that they haven't bothered to take into account network failures.

gmail is one of the very few programs that's properly resilient in this way... but it's badly let down by the HTTP and HTTPS timeouts, socket errors and so on. basically, when using gmail, the "basic HTML" version was so fricking unreliable that i actually had to go back to using the AJAX version! the hilarious thing was that i could actually use gmail for about a week (as long as i went absolutely nowhere near a page-refresh) before something went tits-up. so i would be in e.g. Hong Kong, s2disk my laptop, take it to china, and i would be able to use gmail for about a week: the AJAX calls would continue to retry, retry, retry and eventually succeed. *BUT*... the moment something somewhere in the browser cache expired... the moment the gmail team decided that some sort of "update" was required... game over :)

in summary, then, about AJAX: i believe it's perfectly reasonable to expect the web developer to code robustly, making it basically *not* your problem. but the network layer - the bit where all the problems are, with SSL timeouts, socket timeouts, TLS timeouts, dropped connections etc. - that's where i believe you *really can* make a huge difference, just by doing automatic / transparent retries and not worrying the user about how many times that has to be done. they *really* do not need to know.

the only other thing i can think of: if possible, it would be really, really handy for HTTP/1.1 "continuation" to be activateable (again: not-at-all / automatic / always), particularly on static HTML, CSS, javascript, images and text. i don't know how that would work as far as cache consistency is concerned, or if it's even a good idea given that some sites *may* serve a different page on the "continuation". i don't know if there's checksumming provided in HTTP headers which would allow guaranteed correct identification of a genuinely static (or otherwise identical) page?

anyway, lots to think about, but... augh... something's *got* to be done.

i wasn't kidding about how bad things were. seriously - you should look at getting visas to go visit china on holiday for a couple of months. *don't* tell anyone, in case the china government notices that you *intend* to use a VPN; if you do that, make sure to take a specially-compiled version of openvpn that XORs a random sequence over *all* packets (there's a modified version available on github somewhere). this trick stops the deep packet inspection of the "Great Firewall of China" dead in its tracks, leaving you free to "experience" the wonders^Whorrors of 80% packet loss, 30-second round-trip packet latency and 6-15k/sec network speeds for yourself :)
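[Editor's sketch of the kind of userspace degrading proxy suggested at the top of this comment, assuming Python's asyncio and made-up listen/upstream addresses. It only implements the "raw relay" mode (point it at an existing HTTP proxy); the HTTP/HTTPS-fronting mode is not implemented, and the latency/loss/rate figures are illustrative.]

```python
# Sketch only: a TCP relay that forwards to a fixed upstream while injecting
# random latency, a crude bandwidth cap and occasional mid-stream drops.
# Addresses and numbers are made-up examples; error handling is minimal.
import asyncio
import random

LISTEN = ("127.0.0.1", 8888)       # point the browser's proxy setting here
UPSTREAM = ("192.0.2.10", 8080)    # assumed: a real HTTP proxy, e.g. on the VPN box
RATE_BPS = 10_000                  # ~10 kbytes/sec
DROP_PROB = 0.01                   # per-chunk chance the stream simply dies

async def degraded_pipe(reader, writer):
    """Copy bytes one direction, adding delay, throttling and random drops."""
    try:
        while data := await reader.read(1024):
            if random.random() < DROP_PROB:
                break                                        # simulate a dropped connection
            await asyncio.sleep(random.uniform(0.35, 10.0))  # random latency per chunk
            await asyncio.sleep(len(data) / RATE_BPS)        # crude rate cap
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    upstream_reader, upstream_writer = await asyncio.open_connection(*UPSTREAM)
    await asyncio.gather(degraded_pipe(client_reader, upstream_writer),
                         degraded_pipe(upstream_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle, *LISTEN)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```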
Jul 13 2017
We do automatically reload when the main frame completely fails to load (except for POSTs). The problem is everything else. With XHRs, it's not remotely clear what we should be doing. Perhaps it's a real-time game or something that would really like to know when it's disconnected, or perhaps it can fall back to another server, or back off on its network requests.

There's a pretty high bar for additional settings in Chrome, and I'm not sure a "retry really hard" would meet that bar. That having been said, retrying is definitely worth thinking about - we could probably at least retry out-of-band non-blocking resources (async JS, images, subframes, etc.), without breaking pages too badly (though scripts that are built around onload events could get funky).

Another case where we run into similar sorts of issues is people who are on 2G (which is still common in a fair number of countries).
Jul 13 2017
Improving the loading performance of Chrome on slow networks is pretty important, and we have a couple of experiments in flight, and a few more (including connection retries) that we plan to work on this quarter.
Jul 14 2017
> With XHRs, it's not remotely clear what we should be doing.

it seems pretty clear to me (as outlined in comment 15): the actual connection should be retried until there's a reliable socket, and the actual XHR should trust the developer [and assume that they've been competent enough to add and properly use the "timeout" feature/function].

> There's a pretty high bar for additional settings in Chrome,
> and I'm not sure a "retry really hard" would meet that bar.

it's never a good idea to throw "automatic second-guessing" at users, and this is pretty damn important. if you *don't* consider it to be worthwhile adding as a setting, you're effectively telling users on unreliable connections, 2G, dial-up modems and satellite that they are 2nd-rate internet citizens who can take a running jump. and that's... a lot of people, world-wide, who *won't be able to report bugs telling you that they're completely left out*. why will they not be able to report bugs? because bugs.chromium.org is accidentally designed *EXCLUSIVELY* for the "1st-world, 1st-rate internet connection community"! no person on a 2G/dialup internet connection is *ever* going to tolerate 5-*MINUTE* page load times [PER ATTEMPT]... just to report a bug in a web browser!

> we could probably at least retry out-of-band non-blocking resources
> (async JS, images, subframes, etc.), without breaking pages too badly

anything that says <script language=js....> and so on, yes, i'd agree. anyone who has designed a website so that the javascript changes on a per-load basis, particularly given that HTTP is a stateless protocol, is *really* asking for trouble even on a fast, reliable connection.

> (though scripts that are built around onload events could get funky)

i don't see that as being a problem, but i may not understand completely. could you elaborate? the way i see it, onload would not be triggered until the page was actually loaded. now, if a website is poorly designed such that it has terrible race conditions that rely on 1st-world network speeds to "mask" their existence, *then* there might be artefacts or other issues... but that's *not your problem*: that's the website developer's problem.

what _could_ be a problem is if developers used the trick of extending the DOM to add <script language=js... tags, using XHR to load text that was tacked onto the page. ... and yes, that is a legitimate technique that i pioneered with pyjamas to emulate python "import {module}" functionality. it was really tricky to get js variables into the same DOM namespace, but it was successful. but thinking about it: even this, i would say, is again "not your problem". the timeout function of XHR should be properly used as a JS-level "retry" mechanism. if it's not? tough luck... but it's not your problem.

honestly, i believe this is much, much simpler than it first appears: retry socket/SSL connections as a first (independent) priority, then split into "if it's static, do transparent retries" and "if it's POST or XHR, never do retries". that seems to be pretty straightforward.
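[Editor's sketch making the proposed split concrete: only connection establishment (TCP connect plus TLS handshake) is retried transparently, while the request body itself is sent exactly once. This is not Chromium's network stack; the retry counts, timeouts and backoff values are invented.]

```python
# Sketch of "retry the connection, not the request": keep retrying TCP connect
# and the TLS handshake with capped exponential backoff, and only hand back a
# socket once it is actually established. Numbers are illustrative.
import socket
import ssl
import time

def connect_with_retry(host, port=443, attempts=8, timeout=30.0):
    ctx = ssl.create_default_context()
    delay = 1.0
    last_err = None
    for _ in range(attempts):
        try:
            raw = socket.create_connection((host, port), timeout=timeout)
            return ctx.wrap_socket(raw, server_hostname=host)   # TLS handshake here
        except OSError as err:       # covers timeouts, resets and ssl.SSLError
            last_err = err
            time.sleep(delay)
            delay = min(delay * 2, 60.0)
    raise last_err

# A POST or XHR body would then be written to the returned socket exactly once;
# whether to retry *that* is left to the site, as argued above.
```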
Jul 14 2017
> Improving the loading performance of Chrome on slow networks
> is pretty important, and we have a couple of experiments in flight,
> and a few more (including connection retries) that we plan to
> work on this quarter.

great to hear: i take it those are #704339 and #737614 at the time of writing?

#704339 - yes, the proxy timeout should definitely, definitely be configurable.

#737614 - yes, absolutely, the max number of connections should definitely be reduced considerably: otherwise you just end up increasing latency due to overhead and bufferbloat, and that pushes you easily and instantly into "connection timeout" territory, which results in cascading failures. this is where extra settings such as "max connections" (if they don't already exist) could be critical: if i'd known it was possible, i know enough that i would have wanted to set the max connections to ONE when i was in china.

but there is a lot more that really needs to be done, tb: socket-level / SSL-level transparent retries would massively increase usability. *2 hours and 20 refresh attempts* just to be able to enable oauth2 on accounts.google.com, tb! that's *one* page!! :)
Jul 14 2017
If a page relies on the onload event to run scripts, and is functional without images, then it's probably best not to delay the onload event, even with broken images. So I'm not sure we want to delay onload for broken images (particularly as we have no idea if they'll ever load). So it's probably better to trigger the site's onload event and retry images in the background - which could also have consequences for the JS. The problem is there are conditions where diametrically opposed behaviors are the "right" thing to do for the user.
Jul 14 2017
> So I'm not sure we want to delay onload for broken images
> (particularly as we have no idea if they'll ever load).
> So it's probably better to trigger the site's onload event,
> and retry images in the background

that sounds like you're saying that the existing load behaviour of chromium wrt images should not change - although i don't precisely know exactly what that behaviour is, in order to be able to say.

https://www.w3schools.com/jsref/event_onload.asp

by "onload" i *assume* you mean "the main document's onload handler", btw, just to be absolutely clear. so the "onload" event of the document is only fired when absolutely everything has completed loading. if that's the case, why would you want to make any dramatic changes to the behaviour of chromium just because images happened to take a bit long?? that sounds like a really dangerous and drastic, drastic change in the behaviour of chromium, with a lot of unintended side-effects. bottom line: there's no way that the behaviour of onload should be modified just because images, CSS or static javascript happen to have a transparent "retry" occurring at the network socket and the SSL / TLS level.

now, for *image* onload events, the same logic applies... but this time there *is* a link. but... just because the image loading happens to take longer (because of transparent retries), the existing behaviour should *not* be modified just because it takes longer to get the socket properly established. i.e. *when* and *only* when the image has been fully loaded, then and *only* then should the onload event be fired.

i trust that you can see that it would be an extremely bad idea to change the behaviour of any web browser wrt *any* network-related event handling. XHR errors and timeouts, onload - all of it *must* stay the same. this *really is* really quite a simple and straightforward proposition: make both socket opening and SSL/TLS establishment transparently retry. simple as that [at the same time as drastically reducing the number of parallel connections a la #737614].
Jul 14 2017
The issue is we don't know if the images will take "a bit long", "incredibly long", or "will never load before the heat death of the universe", or if the page would be usable without the images if we just ran OnLoaded earlier (admittedly, such sites should probably be using OnDomReady instead).
Jul 14 2017
> The issue is we don't know if the images will take "a bit long",
> "incredibly long", or "will never load before the heat death of the universe",

really: not your problem.

> or if the page would be usable without the images

again: not your problem... because any web site should be designed to have img "alt" tags. what you're describing could (and does) happen on a regular basis with any web site. images _could_ be slow anyway... and web developers are supposed to design the site so that it copes with asynchronous loads (of images in this case) that could complete in any arbitrary order.

now, in the cases where the image load - even if there is transparent retrying occurring - *really* doesn't complete, then... well... tough. at least the network layer / SSL layer made *some* effort to get the damn image loaded. and that would be a hell of a lot better than the current situation. basically, the problem of "it might not complete or might take longer" is genuinely "out of scope". you can't solve the issue entirely.

okay... if you had a caching proxy that collaborated with the client, and that caching proxy had a means to indicate that "yes, it *really really* had managed to completely load (in its entirety) all images, all CSS, all static JS and all static HTML, and everything's ready for you to grab, just please keep retrying if you get stuck or disconnected half way through downloading, we'll get there in the end"...

...then under *those* circumstances, *yes*, you could retry getting the actual data... but it would require stateful collaboration and cooperation between the web browser and the proxy in order to complete the transfers reliably. and my feeling is that that's just a bit too complicated to go into right now. maybe someone could do it as a proxy-proxy cache, where users would install one end of the proxy on localhost and the other end on some server, and the *proxies* would do continuous retries (and a host of other tricks) to make sure that they reliably obtained the files... that would be cool.

but even if you were to use this type of stateful cooperation with a proxy, there is still *no way* that the "onload" behaviour of the actual web browser engine should be modified, nor should there be *any* expectation that, just because files happen to take longer (or a long time) to download, the web browser behaviour should be changed *in any way* to cater for that. web developers and web sites *really should* be designed *NEVER* to rely on the length of time that any given asynchronously-downloaded file takes to complete. EVER.
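[Editor's note on the "continuation" and checksumming question raised earlier in the thread: HTTP/1.1 already provides Range requests plus the ETag / Last-Modified validators (via If-Range), so a client or proxy can resume a partial static download and fall back to a full re-fetch if the resource changed. The sketch below assumes the server actually supports byte ranges, which it is not required to; counts and timeouts are illustrative.]

```python
# Sketch only: resume a partially downloaded static resource with Range +
# If-Range, retrying until the whole body arrives. If the server ignores the
# Range header or the resource changed, start again from scratch.
import http.client
import time
import urllib.request

def fetch_resumable(url, attempts=20, chunk=16384):
    data = bytearray()
    validator = None                     # ETag or Last-Modified of what we have so far
    for _ in range(attempts):
        req = urllib.request.Request(url)
        if data and validator:
            req.add_header("Range", f"bytes={len(data)}-")
            req.add_header("If-Range", validator)       # only resume if unchanged
        try:
            with urllib.request.urlopen(req, timeout=60) as resp:
                if resp.status == 200 and data:          # Range ignored or resource changed
                    data.clear()
                validator = resp.headers.get("ETag") or resp.headers.get("Last-Modified")
                while block := resp.read(chunk):
                    data.extend(block)
                return bytes(data)                       # finished cleanly
        except (OSError, http.client.HTTPException):
            time.sleep(2)                                # dropped mid-body: retry / resume
    raise TimeoutError(f"could not finish fetching {url}")
```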
Jul 14 2017
It actually is our problem - if a site works with some broken resources, we're potentially breaking it by spending a potentially indefinite amount of time trying to load them successfully, after they first failed to load.
Jul 14 2017
> It actually is our problem - if a site works with some broken
> resources, we're potentially breaking it by spending a potentially
> indefinite amount of time trying to load them successfully,
> after they first failed to load.

you and i may have different ideas of what the consequences of an "indefinite amount of time" actually are. let's take an example. i'm evaluating a web site right now that has been designed with the f***-wittery known as "wordpress". it's... s**t. according to the network console it's a whopping SIX megabytes of crud, taking an amazing TWENTY FIVE seconds to load on a quad-core i7 system with 16 GIGABYTES of 2400mhz DDR4 RAM and an incredible 2500mbytes/sec NVMe SSD. that's the front page... on a taiwan broadband connection that is easily capable of 6 mbytes/sec download speed. so it *should*, in theory, be a 1-second download time, right? wrong - it's anywhere between 10 and 25 seconds.

now, let's reduce the bandwidth to 10k/sec and work out the load time. that's six HUNDRED seconds. 600 seconds!!! that's a staggering ten MINUTES load time!!!

now. should i, as the user, on a connection that is only 10k/sec, expect this to be fast?? NO. should i, as the user, expect the behaviour of the website to CHANGE?? *NO*. should i wait for the full 10 minutes for the site to load? *YES*. why should i wait for the full 10 minutes for the site to load? *because i don't have any choice in the matter*.

does this illustrate to you why it is *not* a good idea to change the web site behaviour "just because something's slow"? the user operating on a ridiculously-slow internet connection should have the expectation that, *regardless* of the length of time, the site *should* eventually load. and if you start fricking about trying to second-guess that, just because *you* have different expectations of what's slow (intolerable / interminable / undefined), that's *not* okay!! basically the user - despite the slowness - *has* to be able to trust that, at some excruciatingly-long time in the future, the page *will* eventually load.

the difference is illustrated clearly by these two examples:

(1) *ACTUAL REAL WORLD SCENARIO* - as described in this bugreport. 20 retry attempts over TWO hours. that meant that i had to sit there, twiddling my thumbs, for the ENTIRE TWO HOURS, constantly, constantly and utterly insanely patiently checking the f****g site, and if it didn't work, hitting refresh.

(2) hypothetical scenario where network retry occurs:
(a) start page loading
(b) come back in 10 minutes
(c) is it loaded? yes/no. if no, go to step (b)
(d) celebrate.

scenario (2) is what i would *like* to have been able to do... because i could TRUST THE DAMN SOCKET LAYER TO FRICKING WELL DO ITS F*****G JOB. if i can't trust the f*****g network layer to f*****g well do its f*****g job, then i am forced into the utterly insanely intolerable scenario (1), where it becomes absolutely necessary to hit retry, retry, retry, retry *MANUALLY*... for 2 f*****g hours.

long load times *are* actually tolerable... if you *know* that they're going to complete. i don't know how old you are, but i used to use 33kbaud modems... and was delighted when they were upgraded to 56k. so i know what to expect, and i know the difference between "reliable but slow" and "slow and utterly f*****d". chromium is, right now, falling into the latter category.

summary: it's about trust. you have to be able to reassure the user that they can trust that, no matter how long it takes, the page *will* load... eventually. even if getting one 5mb image takes 8 minutes. as long as you don't change the behaviour of the website so that it's going to fail (because onload event behaviour was changed), they'll just leave it downloading.

does that make it clear, now, that it *really* is not your problem how long any one given resource takes to load?
Jul 14 2017
#25: Please review Chromium's code of conduct. https://www.chromium.org/conduct Earlier comments like #19 and #21 seemed fine, but #25 crossed over a line. Please focus on channeling your desire for improvements in more constructive ways. Thanks.
Jul 14 2017
> #25: Please review Chromium's code of conduct. https://www.chromium.org/conduct

sorry: i don't agree with "codes of conduct". they're well understood to be dangerous and toxic documents. a "code of honour", on the other hand, that i *can* agree to. a good example would be the Titanian's Code of Honour. it summarises as "we always do good. we never do harm. the code applies 100% of the time; everyone knows the code". there's a very specific definition of "good" and "bad" which goes with that Code of Honour, but i won't go into detail.

> Earlier comments like #19 and #21 seemed fine,

... yet failed to get across to mmenke the importance of being able to trust that a browser will reliably load a page (even if it takes 10, 15, 20 minutes).

> but #25 crossed over a line.

no... it did not. i refrained in the original post from expressing what actually happened, which was that trying to gain access to my email using a tool - chromium - that is clearly not designed to cope under such conditions *really did* cause me a huge amount of distress, during an extremely difficult and challenging time with a *lot* of risk. only when mmenke demonstrated that he really wasn't getting it did the *real* feelings of frustration and anguish caused by that time really start to come through. and, far from telling me that i have "crossed a line", you should be *grateful*, cbentzel, for being made aware just quite how intolerable a user experience chromium really is under slow network conditions, and how much harm and distress it really caused me.

now, the reason why i didn't initially express truly how much harm it caused me is because i am a long-time internet and software libre user and advocate who understands that this *is* software libre: you *don't* get to order people about, you *don't* get to tell them what to do, or think, or feel. a certain level of civility *is* expected... *but*... if people don't listen, or if it's taking a long time to work out the informational differences in an exchange, particularly in a scenario that has caused deep distress through the total and complete failure of a piece of software to work as normal, you really cannot expect there to be no emphasis which does not, in some way, illustrate *really* how someone feels about that software and the harm that it caused, can you? i tried my best in the original report to mask the harm it caused: now you know.

> Please focus on channeling your desire for improvements in more
> constructive ways. Thanks.

you've never tried to use chromium to do your job - never had your livelihood threatened - by an unexpected and unanticipated change from first-world connectivity over to slow internet connectivity, where you have to BREAK THE LAW and risk being jailed or imprisoned (in a communist-run country), just to try to read email and get access to the internet, have you? you've never had the police come round to your hotel to inspect your passport because they're monitoring your phone's SMS messages and the city's CCTV cameras to track your position, have you? when you have, i believe you might be in a better position to empathise with this situation that i encountered.

you are extremely lucky that i am honest enough to make you aware of the true level of harm and frustration that was caused. far from trying to dictate that i should read a toxic document, you should be really, *really* grateful that i've been so honest and forthright. it was *really* scary being in china, cbentzel. it was necessary that i take my family with me: my partner and my eight-year-old daughter. if i and my partner had been arrested and jailed for using a VPN, i cannot begin to imagine what would have happened to her.

so please - *don't* try to tell me i'm out of line, and that i should read some totally inappropriate list of "proscribed behaviours". i have absolutely no desire to have my mind poisoned by such lists. instead, please try to understand that you've been granted a rare opportunity to witness something extremely unusual: circumstances in which, contrary to popular belief, the software that you're developing can actually cause significant harm and distress to users - and, more than that, a technical end-user who is also a software libre developer and advocate who is willing to help you to properly identify and fix the problem.
Jul 14 2017
Swearing rarely convinces anyone of anything, other than that it's not worth their time to listen to you, let alone try and help you.
Jul 14 2017
> Swearing rarely convinces anyone of anything, other than that it's not worth
> their time to listen to you, let alone try and help you.

what makes you think that *i* need help? what makes you think it's about "me"? you're the ones being paid by google (i'm not) - you're the ones that are failing a huge proportion of the world's internet users. i'm just the person that's giving you some rare insight as to how to help them.

now that i have a handle on the problem, as a technical end-user and software developer *i can help myself*. i can write a proxy (which i will be happy to release as software libre) which deals with the problem. *i* don't *need* you. *at all*.

please try not to misunderstand this. take it as the wake-up call that it is, and learn from the experience - that *you* crossed a line by making me have to repeat myself to the point where it wore out my patience, on a situation that had already caused me significant distress that i did my best to hide from you. understand?
Jul 14 2017
@luke.leighton - I am sorry that you were in the situation you were, and that you had difficulty using Chromium. However, I believe you've gotten your message across as well as you're likely to do given your most recent posts, and you're not helping this bug move forward at this point. Please either try to return to focusing on the technical aspects of the discussion or take your comments elsewhere.
Aug 18 2017
dpranke, your understanding and empathy are appreciated: and yes, i agree, part of the point has been made.

if we look closely at mmenke's responses, you can see clearly that he's not really listening in a respectful way, which i didn't notice consciously enough to say anything about, and was getting quite frustrated and angry with. his behaviour is highly unlikely to be covered by the toxic document.

regarding your request: the damage has already been done by mmenke's disrespectful manner. if you don't understand, read it again carefully: do you see any acknowledgement of my points, like you gave, dpranke? a respectful and constructive contribution involves acknowledging the other person's contribution, usually by giving some form of summary that confirms that you went to the time and trouble of reading what they wrote. you did it, i did it... so why can't he?? and cbentzel tries to tell me i should blindly obey some toxic document instead of taking mmenke to task. mm?

so i really don't feel like helping you - any of you paid google employees who make money from treating people badly - out any further: the damage is already done.
Sep 12 2017
Jul 1
https://bugs.chromium.org/p/chromium/issues/detail?id=313280 is not a duplicate of this issue. this issue specifically requires a test network that throttles packets to around 6-15k per second and randomly drops up to 80% of all TCP/IP packets. has this issue included a test network with such characteristics?