SharedMemoryFootprint is high at 99th percentile, potentially due to outstanding network requests.
Issue description

SharedMemoryFootprint at the 99th percentile:
macOS: 189MB
Windows: 109MB
Android: 31MB

macOS: https://uma.googleplex.com/p/chrome/histograms/?endDate=20180206&dayCount=1&histograms=Memory.Browser.PrivateMemoryFootprint%2CMemory.Browser.SharedMemoryFootprint%2CNet.ResourceDispatcherHost.OutstandingRequests.Total&fixupData=true&showMax=true&analysis=0.25%200.5%200.75%200.95%200.99%200.995&filters=platform%2Ceq%2CM%2Cchannel%2Ceq%2C1%2Cisofficial%2Ceq%2CTrue&implicitFilters=isofficial
Android: https://uma.googleplex.com/p/chrome/histograms/?endDate=20180206&dayCount=1&histograms=Memory.Browser.PrivateMemoryFootprint%2CMemory.Browser.SharedMemoryFootprint%2CNet.ResourceDispatcherHost.OutstandingRequests.Total&fixupData=true&showMax=true&filters=platform%2Ceq%2CA%2Cchannel%2Ceq%2C1%2Cisofficial%2Ceq%2CTrue&implicitFilters=isofficial
Windows: https://uma.googleplex.com/p/chrome/histograms/?endDate=20180206&dayCount=1&histograms=Memory.Browser.PrivateMemoryFootprint%2CMemory.Browser.SharedMemoryFootprint%2CNet.ResourceDispatcherHost.OutstandingRequests.Total&fixupData=true&showMax=true&filters=platform%2Ceq%2CW%2Cchannel%2Ceq%2C1%2Cisofficial%2Ceq%2CTrue&implicitFilters=isofficial

Note that SharedMemoryFootprint only measures resident shared memory. IIRC, one of the main sources is network request buffers; see https://bugs.chromium.org/p/chromium/issues/detail?id=713775#c6

The 99th percentile of Net.ResourceDispatcherHost.OutstandingRequests.Total also appears quite high:
macOS: 273
Windows: 335
Android: 111

Seems like a potential source of memory bloat.
Mar 9 2018
More details can be found in the merged bug 622363 . This is a known issue. We decided not to fix/optimize the 512k buffer of shared memory used in AsyncResourceHandler, because it's going away very soon. In r526687, I tried to break down the types of outstanding requests using network traffic annotation, but the UMA histograms didn't give us anything useful. Most of the requests are coming from the renderer. There are also things like Issue 719498, but there's very little we can do.
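As a back-of-the-envelope check (arithmetic on the numbers already in this thread, not a measurement, and treating 99th-percentile values from independent histograms as if they co-occurred, which is only a rough heuristic): if each outstanding request pinned one 512 KB shared-memory buffer of the kind described above, the reported request counts would land in the same order of magnitude as the observed footprints.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model: one 512 KB shared-memory buffer (as formerly used by
// AsyncResourceHandler) pinned per outstanding request.
constexpr size_t kBufferBytes = 512 * 1024;

// Expected resident shared memory, in whole MB, for a given request count.
constexpr size_t ExpectedSharedMB(size_t requests) {
  return requests * kBufferBytes / (1024 * 1024);
}

// 99th-percentile OutstandingRequests.Total from the issue description:
static_assert(ExpectedSharedMB(273) == 136, "macOS: observed SMF was 189 MB");
static_assert(ExpectedSharedMB(335) == 167, "Windows: observed SMF was 109 MB");
static_assert(ExpectedSharedMB(111) == 55, "Android: observed SMF was 31 MB");
```

The per-platform match is loose (Windows over-predicts, macOS under-predicts), but the scale is right, which is consistent with network request buffers being a major contributor.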
Mar 9 2018
Note that AsyncResourceHandler has already gone away.
Mar 9 2018
I think there's another problem here, which is that there are hundreds of outstanding network requests. Is that an intentionally supported edge case?
Mar 9 2018
It's difficult to impose hard-fail limits on what websites can and cannot do when no such limits are specified. We limit requests per-RenderFrame based on initial memory size requirements (mostly aggregate header size) in RDH, but that's about it.
Mar 9 2018
What are those limits per renderer? How can we verify that those limits are being adhered to (e.g. misbehaving websites) vs. Chrome doing something wrong (misbehaving Chrome)?
Mar 9 2018
They're per RenderFrame, not per renderer. Looks like they're 25 MB - https://cs.chromium.org/chromium/src/content/browser/loader/resource_dispatcher_host_impl.cc?q=ResourceDispatcherHostimpl&sq=package:chromium&l=148 They aren't meant as something to break the average over-aggressive website; they're there more as a last-ditch effort to avoid OOMing the browser process in exceptional cases, I believe. ResourceScheduler also has some logic that throttles the number of requests that can actually be started, also per-RenderFrame.
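A byte-cost cap like the one described can be sketched as follows. This is a simplified stand-alone model, not the actual ResourceDispatcherHostImpl code; the class and method names here are made up for illustration.

```cpp
#include <cassert>
#include <cstddef>

// Simplified model of an outstanding-request cost cap. Each request carries a
// "cost" that approximates its header size; admitting a request that would
// push the running total over the cap fails instead of allocating.
class CostTracker {
 public:
  explicit CostTracker(size_t max_cost) : max_cost_(max_cost) {}

  // Returns false (request rejected) if admitting it would exceed the cap.
  bool TryAdd(size_t request_cost) {
    if (outstanding_ + request_cost > max_cost_) return false;
    outstanding_ += request_cost;
    return true;
  }

  // Called when a request completes, releasing its accounted cost.
  void Remove(size_t request_cost) { outstanding_ -= request_cost; }

  size_t outstanding() const { return outstanding_; }

 private:
  const size_t max_cost_;
  size_t outstanding_ = 0;
};
```

The key design point is that the cap bounds accounted bytes, not request count, so thousands of cheap (header-only) requests fit under it.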
Mar 9 2018
Can you please clarify the limits from the comment?

"""
// Maximum byte "cost" of all the outstanding requests for a renderer.
// See declaration of |max_outstanding_requests_cost_per_process_| for details.
// This bound is 25MB, which allows for around 6000 outstanding requests.
const int kMaxOutstandingRequestsCostPerProcess = 26214400;
"""

I'm assuming everything that says renderer/process should be replaced with RenderFrame. The comment indicates 6000 outstanding requests for 25MB, but we're observing ~100-200MB for 100-300 requests.
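Spelling out the mismatch in that comment with plain arithmetic (the 25 MB and ~6000-request figures come from the quoted comment; the footprint and request numbers come from the issue description, and pairing their 99th percentiles is only a rough heuristic):

```cpp
#include <cassert>
#include <cstddef>

// From the quoted comment: a 25 MB cap said to allow ~6000 requests.
constexpr size_t kMaxCostPerProcess = 26214400;

// Implied average "cost" per request: ~4.3 KB, i.e. roughly header-sized.
static_assert(kMaxCostPerProcess / 6000 == 4369, "~4.3 KB per request");

// Observed bytes per outstanding request at the 99th percentile.
constexpr size_t BytesPerRequest(size_t footprint_mb, size_t requests) {
  return footprint_mb * 1024 * 1024 / requests;
}

// Windows: 109 MB of shared memory over 335 requests is ~333 KB per request,
// nearly two orders of magnitude above the header-sized cost being limited.
static_assert(BytesPerRequest(109, 335) / 1024 == 333, "~333 KB per request");
```

So the cap is internally consistent (25 MB / 6000 ≈ 4.3 KB of headers), but it says nothing about the per-request payload buffers that appear to dominate the observed footprint.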
Mar 9 2018
That comment is probably 8 years old. And as I said, it mostly just measures the size of the headers, and pre-dates the site isolation stuff.
Mar 9 2018
Chatted with mmenke, xunjieli. Going to undupe this bug. It seems like we should:
1) Confirm whether it's outstanding network requests causing this bloat in shared memory.
2) If so, something odd is happening. There are up to 4 buffers used in a network request, but the only one that uses shared memory is the mojo buffer used to transfer the data to the renderer, which shouldn't live that long.
Mar 9 2018
Maks pointed to SimpleCache.Http.GlobalOpenEntryCount, which should have an entry for most network requests [except those skipping cache]. It can also count things other than HTTP requests [cache storage].

99th percentile:
macOS: 166
Windows: 130
Android: 66

https://uma.googleplex.com/p/chrome/histograms/?endDate=20180307&dayCount=1&histograms=SimpleCache.Http.GlobalOpenEntryCount&fixupData=true&showMax=true&analysis=0.99&filters=platform%2Ceq%2CA%2Cchannel%2Ceq%2C1%2Cisofficial%2Ceq%2CTrue&implicitFilters=isofficial

About half the size of Net.ResourceDispatcherHost.OutstandingRequests.Total.
Mar 9 2018
I think the next follow-up step should be to expand the net MDP (memory dump provider) to count the size of resident memory used by various net buffers [especially the shared memory used by the Mojo buffer]. This will let us confirm whether the net stack has anything to do with this bloat in SMF. ssid, xunjieli: is this a task either of you would be interested in picking up?
Mar 9 2018
Sorry, looks like I was wrong - it is per process. ResourceScheduler limits requests per RenderFrame, but kMaxOutstandingRequestsCostPerProcess is, in fact, per renderer process.
Mar 9 2018
Re #12: the //net MDP already reports the various network buffers that //net uses: https://cs.chromium.org/search/?q=estimateMemoryUsage+buffer+file:net/&sq=package:chromium&type=cs

Note that there is no shared memory usage within //net. I am not familiar enough with the new mojo loading path to know how its shared memory is different from that of AsyncResourceHandler. Examples are:
1. spdy_session.cc's read buffer: https://cs.chromium.org/chromium/src/net/spdy/chromium/spdy_session.cc?rcl=e11d27a95740506d96c8779c01a40cf887a4813e&l=1432
2. socket read/write buffer: https://cs.chromium.org/chromium/src/net/socket/ssl_client_socket_impl.cc?rcl=e11d27a95740506d96c8779c01a40cf887a4813e&l=756
3. quic buffer: https://cs.chromium.org/chromium/src/net/quic/chromium/quic_chromium_client_session.cc?rcl=e11d27a95740506d96c8779c01a40cf887a4813e&l=2789
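The accounting pattern behind those estimateMemoryUsage call sites can be sketched as follows. This is a simplified stand-alone model: the real code uses base::trace_event helpers and the actual //net buffer classes, while the struct and function names here are invented for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for per-object memory accounting in //net: each
// component reports the capacity of the buffers it owns, and a dump-provider
// style aggregator sums those reports into one number per allocator/category.
struct SessionBuffers {
  std::vector<char> read_buffer;   // e.g. a session's socket read buffer
  std::vector<char> write_buffer;  // e.g. a pending write buffer

  // Reports allocated capacity, not just bytes currently in use, since
  // capacity is what actually occupies memory.
  size_t EstimateMemoryUsage() const {
    return read_buffer.capacity() + write_buffer.capacity();
  }
};

// Aggregates the per-session estimates, as a memory dump provider would.
size_t TotalBufferUsage(const std::vector<SessionBuffers>& sessions) {
  size_t total = 0;
  for (const auto& s : sessions)
    total += s.EstimateMemoryUsage();
  return total;
}
```

Note this pattern counts heap-allocated private memory; per #12, confirming the SMF bloat would additionally require instrumenting the shared-memory (Mojo) buffers, which this model does not cover.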
Mar 14 2018
Network stack team is just going to sit on this one for now.
Comment 1 by ckrasic@chromium.org, Mar 9 2018