
Issue 689363 link

Starred by 9 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 2
Type: Bug




Enable tab discarding with memory coordinator

Project Member Reported by bashi@chromium.org, Feb 7 2017

Issue description

Currently tab discarding is triggered by MemoryPressureListener, but it is going to be replaced with MemoryCoordinatorClient. We need to make tab discarding workable without MemoryPressureListener.
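For context, the shape of the change might look roughly like this. This is a simplified sketch; the class and method names below are illustrative assumptions, not the exact Chromium interfaces:

```cpp
#include <cassert>

// Old mechanism: MemoryPressureListener-style callback with OS-defined levels.
enum class MemoryPressureLevel { kNone, kModerate, kCritical };

// New mechanism: MemoryCoordinator-style client with Chrome-defined conditions.
enum class MemoryCondition { kNormal, kWarning, kCritical };

class TabDiscarder {
 public:
  // Old entry point: driven directly by OS pressure signals.
  void OnMemoryPressure(MemoryPressureLevel level) {
    if (level == MemoryPressureLevel::kCritical)
      DiscardOneTab();
  }

  // New entry point: driven by the memory coordinator's own model,
  // which can act earlier (e.g. on kWarning) for proactive discarding.
  void OnMemoryConditionChange(MemoryCondition condition) {
    if (condition != MemoryCondition::kNormal)
      DiscardOneTab();
  }

  int discarded_count() const { return discarded_count_; }

 private:
  void DiscardOneTab() { ++discarded_count_; }

  int discarded_count_ = 0;
};
```

The point of the bug is the second entry point: tab discarding must keep working when the coordinator, not the pressure listener, is the source of signals.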


 

Comment 1 by bugdroid1@chromium.org, Feb 24 2017

Comment 2 by ctengc...@gmail.com, Feb 24 2017

A small question: TabManager::DiscardTab() should remember the tab's URL, or even more serializable state, so that if the user later switches back to the tab, the UI state can still be recovered.

Regarding comment #2:

The URL and scroll position/history/etc. of the page are already remembered. Discarding and reselecting the tab is roughly equivalent to hitting "refresh" on the tab. Some local state may be lost, but the discarding mechanism avoids discarding pages with important local state (form entry, etc.).

Preserving more state, while a noble goal, is definitely beyond the scope of this CL. This is simply plumbing an existing and enabled discarding mechanism into a new "front-end" for it. 
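The "avoid discarding important local state" policy described above can be sketched as a simple eligibility check. The struct fields and rules here are illustrative assumptions, not the actual TabManager logic:

```cpp
#include <cassert>

// Illustrative tab state; the real TabManager tracks more signals than this.
struct TabInfo {
  bool is_active = false;       // Currently selected in its window.
  bool has_form_entry = false;  // User typed into a form.
  bool is_playing_audio = false;
  bool is_pinned = false;
};

// Sketch of the eligibility policy: skip tabs whose local state the user
// would notice losing. Not the actual Chromium implementation.
bool CanDiscardTab(const TabInfo& tab) {
  if (tab.is_active) return false;         // Never discard the visible tab.
  if (tab.has_form_entry) return false;    // Would lose user input.
  if (tab.is_playing_audio) return false;  // Audible to the user.
  if (tab.is_pinned) return false;         // User marked it important.
  return true;
}
```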

Comment 5 by willg...@gmail.com, Mar 14 2017

Glad I spotted this, I was just going to open an issue about nonfunctional tab discarding on my Chromebook.
Cc: cylee@chromium.org puneetster@chromium.org bccheng@chromium.org sonnyrao@chromium.org
Thank you Will G.

Just wondering, does this affect Chrome OS and was it tested on it?

Comment 7 by cylee@chromium.org, Mar 27 2017

Cc: georgesak@chromium.org
Hi,
Could somebody elaborate on the change? We're especially wondering whether it would break the functionality of MemoryPressureMonitor on Chrome OS:
https://cs.chromium.org/chromium/src/base/memory/memory_pressure_monitor_chromeos.cc
It listens to a specific system interface, /dev/chromeos-low-mem.



Comment 8 by bashi@chromium.org, Mar 28 2017

Sorry for the late response (I was OOO yesterday). I've been testing this on my Chrome OS device and trying to stabilize it, but I'm not finished yet.

As for listening to /dev/chromeos-low-mem: we didn't realize we should check it (thanks for the info). I wrote a WIP CL to check it. Basically, MemoryConditionObserver + MemoryMonitor is a replacement for MemoryPressureMonitor.

https://codereview.chromium.org/2783613002

Does it make sense?

Comment 9 by cylee@chromium.org, May 26 2017

Hi bashi,
  I've briefly read the CL. It looks reasonable, but:
  1) IIRC the polling interval is DefaultMonitoringIntervalSeconds = 5 seconds. That would be way too long on Chrome OS (currently MemoryPressureMonitor defaults to 1 second). It seems to be a configurable parameter; on Chrome OS we'll definitely need a shorter interval.
  2) We have proposed changing the busy polling in MemoryPressureMonitor to a blocking wait (i.e., select() on /dev/chromeos-low-mem until it returns) so memory pressure is reflected in a more timely manner. That doesn't look like it fits into the design of MemoryConditionObserver. What do you think?
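The blocking-wait idea in 2) can be sketched with poll(), a close cousin of select(). The kernel marks /dev/chromeos-low-mem readable when available memory crosses the configured margin, so the monitor can sleep in the syscall instead of waking every second. The helper below is an illustrative sketch over a generic fd, not the actual MemoryPressureMonitor code:

```cpp
#include <poll.h>
#include <unistd.h>
#include <cassert>

// Sketch of the proposed blocking-wait approach: instead of polling on a
// timer, block in poll() until the fd becomes readable (for
// /dev/chromeos-low-mem, readable means the low-memory margin was crossed).
// Illustrative helper, not the actual monitor code.
bool WaitForMemoryPressure(int fd, int timeout_ms) {
  struct pollfd pfd = {fd, POLLIN, 0};
  int ready = poll(&pfd, 1, timeout_ms);
  // True only when the fd actually signaled pressure (became readable).
  return ready > 0 && (pfd.revents & POLLIN);
}
```

A real monitor would loop on this call from a dedicated thread and dispatch the pressure signal back to the main thread, but the core timing difference from the 1-second poll is captured here.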

On the other hand, to make sure we're on the same page, is my understanding correct?
1. MemoryCoordinator is ready for production but disabled by default.
2. There's a Finch experiment that controls the enabled group. All Chrome OS devices should be excluded from the experiment until (at least) the CL is checked in.
But from https://bugs.chromium.org/p/chromium/issues/detail?id=705185#c10 you said you only decreased the percentage of the enabled group. If so, how would those enabled machines behave? I don't think that is a valid configuration.

Comment 10 by bashi@chromium.org, May 29 2017

Hi cylee@,

- DefaultMonitoringIntervalSeconds was 5 seconds, but now it's 1 second.
- We are going to change how memory pressure levels are calculated[1]. MC needs some design changes, so I wouldn't call it production-ready yet.
- The push-based approach sounds reasonable, but the signal may not be enough for us to do proactive memory management. For example, some people are proposing aggressive purging of background tabs before the system enters the moderate/critical state. MC may define a custom calculation that enables us to do such proactive memory management.
- The Finch experiments are only for Canary/Dev. Can't we run experiments there until a feature is production-ready?

[1] https://docs.google.com/document/d/1EhZgA8I4jjOnvY8Kpty291H0B4IGVVkOrFO84SRBbic/edit

There are a lot of discussions on how to define/handle memory pressure on the chrome-memory@ internal mailing list (g/chrome-memory). My current understanding is that the semantics of OS pressure signals differ across platforms, and just exposing them to other Chrome components wouldn't be a good idea. In the MC world we are going to define our own pressure model.
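That "own pressure model" idea could look like each platform translating its native signal into one Chrome-defined condition. Everything below (names, thresholds) is invented for illustration; it is not the actual MemoryCoordinator code:

```cpp
#include <cassert>
#include <cstdint>

// Chrome-defined condition shared across platforms (illustrative).
enum class MemoryCondition { kNormal, kWarning, kCritical };

// Chrome OS sketch: trust /dev/chromeos-low-mem. The device becoming
// readable means the tuned, per-device margin was crossed, so map it
// straight to the critical condition.
MemoryCondition ConditionFromChromeOS(bool low_mem_device_signaled) {
  return low_mem_device_signaled ? MemoryCondition::kCritical
                                 : MemoryCondition::kNormal;
}

// Generic sketch for platforms without a reliable OS signal: derive the
// condition from free memory. The thresholds here are made up.
MemoryCondition ConditionFromFreeMemory(uint64_t available_mb) {
  if (available_mb < 200) return MemoryCondition::kCritical;
  if (available_mb < 600) return MemoryCondition::kWarning;
  return MemoryCondition::kNormal;
}
```

Other Chrome components would then see only MemoryCondition, never the raw platform signal, which is the abstraction bashi@ describes.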

Comment 11 by cylee@chromium.org, May 31 2017

So my question would be: on Chrome OS, the browser polls /dev/chromeos-low-mem to decide memory pressure, and the logic of /dev/chromeos-low-mem is different from your "generic calculation". Users in the experiment group would not respect /dev/chromeos-low-mem before the CL lands. Am I right? As I noted in some other document, currently the tab discarder only respects /dev/chromeos-low-mem, not the calculation logic in MemoryPressureMonitor.

There has been quite some effort in the past months to tune the threshold of /dev/chromeos-low-mem, and it's even device-specific (different thresholds on different Chrome OS machines). IMHO we'd better not complicate things further by introducing an experiment on Chrome OS at this time. For example, when somebody reports that the tab discarder is not working again, we won't be sure whether they're in this experiment or not.
Thank you for the feedback on the doc. That's really helpful.

As for /dev/chromeos-low-mem, yes, we are going to check it.

I'm sorry about the bad behavior of the experiment I'm running, but IMHO running an experiment on Canary/Dev with a small group is what we should do to get data from the wild and stabilize the implementation. I agree that we shouldn't complicate things further, but on the other hand we want a consistent memory pressure model across platforms, as the current mechanism is not ideal, as described in [1]. It would be unfortunate if we couldn't run an experiment on Canary/Dev to stabilize and iterate on the implementation.

[1] https://docs.google.com/document/d/1dkUXXmpJk7xBUeQM-olBpTHJ2MXamDgY_kjNrl9JXMs/edit#heading=h.8dfv8if22w6k
I was reading the document when I suddenly lost permission to view it. I'll read it through later. But I doubt a generic memory pressure calculator would work for different Chrome OS models. For example, different devices may have different swap rates, and users may experience different levels of sluggishness under the same system load. Recent efforts move more memory-related parameters (which control /dev/chromeos-low-mem) to be tunable per device. That's the opposite of what a generic calculator is trying to do.

On the other hand, I'd like to at least be able to identify whether a machine is in your Finch experiment. For example, do you know how I can tell whether one is in the experiment from feedback like the one in https://bugs.chromium.org/p/chromium/issues/detail?id=705185#c13 ?
If Chrome OS were the only platform we support, we wouldn't have to generalize memory pressure. The problem is that tab discarding and other actions like purging are platform-agnostic features, but the concept of memory pressure varies from platform to platform. For example, the OS-provided signal doesn't seem to be working on macOS[1]. I think we need an abstraction layer that enables us to have a consistent set of policies on how/when to discard tabs, etc. That's why I want to have custom definitions of memory pressure. Given the feedback you provided, we will use chromeos-low-mem first to calculate our custom memory pressure level (maybe just map it to the CRITICAL condition).

As for investigating regressions, I think we can use an internal tool. I will send it later.

[1] crbug.com/713463

Comment 15 by bashi@chromium.org, Jun 19 2017

JFYI, I suspended the experiment because the design of the memory coordinator has changed multiple times, and keeping the experiment running doesn't make sense for now. Sorry for the trouble.

Comment 16 by willg...@gmail.com, Jun 19 2017

So what's the status for CrOS? Will tabs crash when zram is exhausted, or will they be discarded?

re #16: we will continue to try to discard, but at some point, yes, you'll get a kernel OOM kill. I've seen both things happen when zram is exhausted.
