memory_pressure_monitor_win.cc needs to account for commit limit
Reported by chris....@gmail.com, Dec 13 2017
Issue description

UserAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.97 Safari/537.36 Vivaldi/1.94.1008.36

Steps to reproduce the problem:
1. Limit the Windows page file size (e.g. 256-2048).
2. Start Chrome and use a few windows with many tabs.
3. Start another program that uses a single process and around 25% of memory.

What is the expected behavior?
Chrome should detect the high memory pressure and start discarding tabs.

What went wrong?
Chrome continues happily hogging memory because more than ~1 GB of physical memory is still available. Meanwhile, the commit limit is almost entirely consumed, so Windows complains and suggests killing the single-process program as the highest memory user (closely followed by the main browser process).

Did this work before? N/A

Chrome version: 62.0.3202.94  Channel: stable
OS Version: 7
Flash Version: 28.0.0.126

I didn't check whether this ever worked before, but I suspect it never has. I would suggest that CalculateCurrentPressureLevel on Windows should set moderate pressure when the commit charge reaches ~65% of the commit limit and critical pressure at ~90%. Looking at physical memory usage and checking for a particular number of bytes available doesn't seem to make any sense, but if there's some reason to keep that behavior, it should happen after the commit charge (level) / limit comparisons, in my opinion.
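For context, here is a minimal sketch (not the attached patch) of how the commit charge could be read as a percentage of the commit limit, assuming the monitor keeps using GlobalMemoryStatusEx() as it does for the physical-memory check; the helper name is made up for illustration.

#include <windows.h>

// Sketch only, not the attached patch: reads the system commit charge as a
// percentage of the commit limit via GlobalMemoryStatusEx().
int GetCommitChargePercent() {
  MEMORYSTATUSEX status = {};
  status.dwLength = sizeof(status);
  if (!::GlobalMemoryStatusEx(&status) || status.ullTotalPageFile == 0)
    return 0;
  // ullTotalPageFile is the current commit limit and ullAvailPageFile is the
  // commit still available, so their difference is the current commit charge.
  const ULONGLONG charge = status.ullTotalPageFile - status.ullAvailPageFile;
  return static_cast<int>((charge * 100) / status.ullTotalPageFile);
}

The proposed mapping would then be: >= 65% -> moderate pressure, >= 90% -> critical pressure.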
Dec 17 2017
This patch probably isn't perfect, but it covers my suggested changes.
Dec 29 2017
This is out of TE scope for triage. Tentatively adding the Blink>MemoryAllocator label. Could someone from the team help triage this further? Thanks!
Jan 19 2018
It's been a few weeks... Who can we ask to triage this to get some forward progress?
Feb 4 2018
cc-ing some people who've touched/reviewed these files for thoughts on the proposal.
Feb 5 2018
I have mixed feelings about this. Limiting the Windows page file size is dangerous and should only be done if you can guarantee that the system will never run out. That is, doesn't it only make sense on machines which have plenty of memory? That said, if Chrome's memory pressure handling works well enough, then adding extra conditions for triggering the memory pressure signals seems reasonable. Some questions:

1) Does the memory pressure handling work well enough to make us willing to trigger it in some new situations?
2) Is 65% the right commit-charge percentage for triggering moderate pressure? That seems low to me, but I'm not familiar with the memory pressure handling.
3) Will having two possible signals for memory pressure complicate the calculation of when memory pressure has been relieved?
Feb 5 2018
The page file is always limited (eventually); it's just a matter of when Chrome will reach that limit right now. I set the limit lower in the reproduction steps to make the problem easier to spot.

> System managed paging files will increase up to three times physical memory or 4 GB, whichever is larger.

The problem is that the current memory pressure code isn't being triggered before the commit charge reaches the commit limit. Even on a VM with 4 GB of RAM and my lowered page file limit, opening enough tabs shows the system keeping > 400 MB of physical memory available.

> // These are the default thresholds used for systems with >= ~2GB of physical
> // memory. Such systems have been observed to always maintain ~300MB of
> // available memory, paging until that is the case.
> const int MemoryPressureMonitor::kLargeMemoryDefaultModerateThresholdMb = 1000;
> const int MemoryPressureMonitor::kLargeMemoryDefaultCriticalThresholdMb = 400;

I chose 65% as a reasonable buffer before Windows starts to take action with the page file.

> If the "\Memory\% Committed Bytes In Use" performance counter is over 75%, then the system is close to running out of memory (both RAM and all page files).

As for complicating the calculations, the only place the level is calculated (as far as I know) is the CalculateCurrentPressureLevel function being modified by these changes. The level is either NONE, MODERATE, or CRITICAL. The way to get out of the critical state would be to lower the commit percentage below 90%, with commit free > 400 MB and physical free > 400 MB. Similarly, for the moderate state it's > 1000 MB free for both commit and physical memory, with the commit percentage below 65%, to reach the no-pressure state. I'm of the opinion that removing either or both 400 MB conditions would be fine; checking available physical memory on Windows doesn't make very much sense to me.
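To make those transitions concrete, here's an illustrative sketch of the combined calculation (again, not the actual patch); the function and parameter names are made up, and the thresholds mirror the existing 400 MB / 1000 MB constants plus the proposed commit percentages.

enum class PressureLevel { kNone, kModerate, kCritical };

PressureLevel CalculatePressure(int phys_free_mb,
                                int commit_free_mb,
                                int commit_percent) {
  // Critical while the commit charge is at ~90% of the limit or either free
  // figure is under the existing 400 MB critical threshold; leaving critical
  // requires all three conditions to clear, as described above.
  if (commit_percent >= 90 || commit_free_mb < 400 || phys_free_mb < 400)
    return PressureLevel::kCritical;
  // Moderate while the commit charge is at ~65% or either free figure is
  // under the 1000 MB moderate threshold.
  if (commit_percent >= 65 || commit_free_mb < 1000 || phys_free_mb < 1000)
    return PressureLevel::kModerate;
  return PressureLevel::kNone;
}

Dropping the commit_free_mb / phys_free_mb checks, as suggested above, would leave only the commit-percentage comparisons.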
Feb 5 2018
More useful information on the commit limit is at: https://blogs.technet.microsoft.com/markrussinovich/2008/11/17/pushing-the-limits-of-windows-virtual-memory/
Feb 6 2018
> opening enough tabs shows the system keeping > 400 MB of physical memory available.

Yeah, it's a good point that using available memory to detect memory pressure is a bit crazy, because Windows always tries to ensure that there is "enough" physical RAM available.

Here's a counter-example: imagine that Chrome is running on a system which also has another program running that has a memory leak. This program leaks a MB every few minutes, or ~500 MB a day. With Chrome ignoring commit levels, the result is that Windows will occasionally increase the size of the page file to compensate, and life is basically okay. If Chrome starts discarding tabs because the commit level of the system is high, then the page file will stay the same size until Chrome has discarded every tab that it can.

It may be that the risk of not discarding enough tabs is higher than the risk of discarding too many - I don't know. I also don't know how often users actually get the "Your system is low on virtual memory" dialog. I really wish there was a sane way to solve this; I just can't think of one.
Feb 6 2018
In your leaking-program example, doesn't the fact that the leaking program uses up the commit limit cause problems in Chrome when it tries to get more memory?

I think the trade-off here is when Chrome should discard the memory it can. Should that happen while the system is still (relatively) healthy, or should it be deferred until a much later time, if it happens at all? My vote goes to trying to discard memory before the page file changes. It may not be a sane solution, but in my experience as a Chrome user I'd prefer being able to open a new active tab to keeping an old background tab. I typically don't have runaway leaking programs, but I do have plenty of chrome.exe processes, and I want the tab I just switched focus to to win when competing with one that was in focus yesterday.
Feb 6 2018
+wez@ and haraken@ FYI.

I've worked on trying to replace the memory pressure signal in Chrome because we know that the current approach (looking at the available physical memory) doesn't work well in practice. I've been looking at measuring Chrome's hard page-fault rate instead, and during my experiments I realized that we rarely reached the critical memory pressure state even when Chrome was under heavy memory pressure (i.e. swap-thrashing a lot). This work is tracked in crbug.com/771478.

This work is currently on hold because there's no "cheap" way to measure the hard page-fault rate (on Windows at least); the only real way to get this is via the "NtQuerySystemInformation" function, but it's quite an expensive call. The next step for my work (once I get back to it) is to run different test cases on some low-end hardware and look at a bunch of memory-related signals; there are probably some indirect signals we could use to detect that the system is under memory pressure, but it's not clear what these are yet.
Feb 6 2018
> In your leaking program example, doesn't the fact that the leaking
> program uses up the commit limit cause problems in Chrome when it
> tries to get more memory?

In normal circumstances I think that Windows will expand the page file if a program slowly leaks commit. That is, in the same way that Windows tries to make sure that there is always available RAM, Windows tries to make sure that there is always available commit.

Ultimately what we really want is for Windows to tell us when it is taking measures to increase available RAM (paging out private data) or available commit (increasing the size of the pagefile) so that we can react. It is unfortunate that this seems to be difficult or impossible.
Feb 6 2018
> Ultimately what we really want is for Windows to tell us when it is taking measures
> to increase available RAM (paging out private data) or available commit
> (increasing the size of the pagefile) so that we can react.

I know that getting the paging data has been deemed too expensive, but storing the commit limit at browser startup and looking for changes in that value should be cheap and easy.
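For what it's worth, a minimal sketch of that idea, assuming the monitor already polls GlobalMemoryStatusEx(); the class name is hypothetical.

#include <windows.h>

// Remembers the commit limit observed at startup; a later increase means
// Windows has already had to grow the page file to satisfy commit requests.
class CommitLimitWatcher {
 public:
  CommitLimitWatcher() : initial_limit_(QueryCommitLimit()) {}

  // Returns true if the commit limit has grown since startup.
  bool CommitLimitHasGrown() const {
    return QueryCommitLimit() > initial_limit_;
  }

 private:
  static ULONGLONG QueryCommitLimit() {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (!::GlobalMemoryStatusEx(&status))
      return 0;
    return status.ullTotalPageFile;  // current commit limit, in bytes
  }

  const ULONGLONG initial_limit_;
};

Checking this once per polling interval would be cheap, and a growth event could be treated as an additional trigger for the moderate or critical pressure signal.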