API to access live quota usage information from DevTools
Issue description

DevTools has numerous feature requests from developers trying to use storage and better understand quota. At this time we found no way to obtain the following information, which is important to our users:

1. Overall quota size. https://groups.google.com/a/chromium.org/d/topic/blink-dev/frMdM1H8jJ8/discussion may be able to help with this.

2. Real-time usage and remaining quota per storage type. Users need a way to understand how quota is used and how much more data can fit into storage. Our experiments with storage showed that the current APIs do not provide up-to-date information (delays of up to 1 min were observed) and also do not provide user-actionable information: storage was allocated in chunks, so writing new data may not be reflected in the usage unless a new chunk has been allocated, and removing data from the database does not result in freed quota.

3. Users need a way to simulate quota pressure for debugging how their site handles it.
May 22 2017
See QuotaManager.settings() for the overall pool size and the amount an individual site can consume. DevTools could also change those settings (see SetQuotaSettings), although that would affect storage for all sites and could cause lots of eviction (depending). QuotaManager.GetUsageAndQuotaForWebApps() provides the total usage of an origin across all types, but there is no way to break that down into how much is IDB vs. CacheStorage. We could add that info to the return value of GetUsageAndQuotaForWebApps. About 3: did you have anything in particular in mind? Maybe a function like SetQuotaForOrigin(origin, quota), so developers could artificially decrease the amount allotted to the site they're working on.
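To make the surface discussed here concrete, below is a minimal, purely hypothetical sketch (not Chromium code; all names and numbers are illustrative) of a quota manager with a global pool size, a per-origin quota, and a SetQuotaForOrigin-style override that DevTools could use to simulate quota pressure:

```python
# Hypothetical model of the discussed QuotaManager surface. Not Chromium's
# API; class and method names are stand-ins for illustration only.

class FakeQuotaManager:
    def __init__(self, pool_size, per_origin_quota):
        self.pool_size = pool_size            # total bytes shared by all origins
        self.per_origin_quota = per_origin_quota
        self.usage = {}                       # origin -> bytes used
        self.overrides = {}                   # origin -> artificially lowered quota

    def settings(self):
        # analogous to the overall pool size / per-site limit mentioned above
        return {"pool_size": self.pool_size,
                "per_origin_quota": self.per_origin_quota}

    def set_quota_for_origin(self, origin, quota):
        # DevTools-style override to simulate a low-storage device
        self.overrides[origin] = quota

    def get_usage_and_quota(self, origin):
        quota = self.overrides.get(origin, self.per_origin_quota)
        return self.usage.get(origin, 0), quota

    def write(self, origin, nbytes):
        used, quota = self.get_usage_and_quota(origin)
        if used + nbytes > quota:
            raise RuntimeError("QuotaExceededError")
        self.usage[origin] = used + nbytes

qm = FakeQuotaManager(pool_size=10 * 2**30, per_origin_quota=2**30)
qm.write("https://example.com", 500)
qm.set_quota_for_origin("https://example.com", 600)  # simulate pressure
# a subsequent large write would now fail, exercising the site's error path
```

The point of the override is that a developer can exercise their QuotaExceededError handling without actually filling the disk.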
May 22 2017
> This could be due to lazy operations (e.g. background compaction), or a bug - do you have a repro?

I believe it was most obvious with Local Storage, and it did not seem like a bug, just a lazy operation or something. Can DevTools somehow "flush" all such pending writes?

> Correct... that's how storage works in Chrome. If the desire is for Chrome not to behave that way, that's a different feature request than an API to expose live quota usage.

Exposing this to web developers does not help them. When a one-byte write adds 4k (or 8k, I don't remember) of usage and a subsequent 2k write does not move the needle, they will only start questioning the sanity of DevTools. Removing 2.1kb afterwards will not be reflected in DevTools either, which would only further confuse users. Is there a way to expose how much free space remains in the chunk? This FR is not about changing Chrome's behavior but about providing accurate and actionable information to web developers.
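The "doesn't move the needle" complaint above can be sketched with a toy model of chunk-granular accounting (the 4096-byte chunk size is an assumption for illustration; the real granularity depends on the backend):

```python
# Toy model of chunk-granular usage reporting: a 1-byte write allocates a
# whole chunk, further small writes fit in the slack and leave reported
# usage unchanged, and only crossing a chunk boundary allocates more.

CHUNK = 4096  # assumed allocation granularity, illustrative only

def reported_usage(payload_bytes):
    # usage is reported in whole chunks, rounded up (ceiling division)
    chunks = -(-payload_bytes // CHUNK)
    return chunks * CHUNK

print(reported_usage(1))         # 4096: one byte still costs a full chunk
print(reported_usage(1 + 2048))  # 4096: the 2k write "doesn't move the needle"
print(reported_usage(4097))      # 8192: crossing the boundary adds a chunk
```

This is exactly why a developer watching the raw number can conclude the tool is broken unless the chunking is surfaced or explained.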
May 22 2017
> About 3, did you have anything in particular in mind? Maybe a function like SetQuotaForOrigin(origin, quota) so developers could artificially decrease the amount allotted to the site they're working on.

That should be enough. Developers need to know how their web app will fare on devices that have a limited amount of storage.
May 22 2017
> Can DevTools somehow "flush" all such pending writes?

See DOMStorageContextWrapper::Flush().
May 24 2017
> (Usage per storage type is a valid concept, of course, and something we should calculate and expose for devtools)

This would solve most use cases. Can this be done so it returns the data in a user-consumable way? Up-to-date values that reflect the web application's (and not the browser's) view of the data. E.g. an empty database should display 0 and not the size of the empty chunk.
May 26 2017
About precision in usage tracking: the quota system doesn't depend on high accuracy in its usage tracking. There's a laziness in its eventual cleanup, and the more important trigger for eviction (low disk space) doesn't rely on precise usage tracking. Chromium's content code is one step removed from the raw files on disk. We use various backends that manage the files and the layout of data in them. The reported usage values are (usually) based on the size of the resulting files. This is a good reflection of actual usage on disk. We don't have any plans to tease the size of application data out of database file formatting.
May 30 2017
Since quota management is not trivial/transparent (1 byte of user data does not map to 1 byte of quota use), for DevTools to be helpful we need to surface both numbers: the actual user payload size as well as the quota size as the quota manager sees it. Could you expose both? We would then figure out the UX to expose it to the user.
May 30 2017
My intuition is that we should do what the issue title suggests -- expose the information used by the quota system to developers in DevTools, as it is. I think there's value in giving developers an intuition for how the underlying stores work -- for example, if you delete a value, you're not guaranteed to get your quota back immediately. So, I don't think we should pretend quota works in a way in which it doesn't.

There may be an argument that showing a total database size can help find & fix bugs in applications. If that's the case, I think that argument is separate from the quota system, and should be discussed as such. My thoughts on it are below.

Exposing user payload size would either require tracking it, or traversing the entire store to read it. The former is a non-trivial code and perf tax that all users would have to pay. The latter might be prohibitively expensive. For example, getting the user payload for IndexedDB would require a global lock (like a versionchange transaction) and, depending on how precise you'd want to be, shuttling and de-serializing every database value from the browser to the renderer.

DOMStorage (local / session storage) isn't supposed to be used for storing large amounts of data, so the full-traversal solution might be more acceptable. OTOH, I'm not sure that having more accurate quota for localStorage / sessionStorage gives developers the correct incentives -- developers should use IndexedDB, because DOMStorage is a synchronous API for doing I/O, which is terrible.

Regarding "flushing" pending writes on DOMStorage, we'd still not get accurate usage information. At the very least, SQLite needs a vacuum to release unused pages. I'm mentioning this to convey the idea that making quota behave more intuitively would be a highly non-trivial eng effort.
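The tracking-vs-traversal trade-off described above can be sketched as follows (a toy key-value store, not Chromium's backends; the point is only that traversal is O(total data) to read while a counter taxes every write):

```python
# Two hypothetical ways to obtain "payload size" for a key-value store.

def payload_by_traversal(store):
    # (a) full traversal: accurate, but walks every entry; for a large
    # IndexedDB this would mean deserializing every value under a lock.
    return sum(len(k.encode("utf-8")) + len(v.encode("utf-8"))
               for k, v in store.items())

class CountingStore:
    # (b) maintained counter: O(1) to read, but every mutation pays to
    # keep the running total consistent (a potential hot spot).
    def __init__(self):
        self._data = {}
        self.payload = 0

    def put(self, key, value):
        old = self._data.get(key)
        if old is not None:
            self.payload -= len(key.encode("utf-8")) + len(old.encode("utf-8"))
        self._data[key] = value
        self.payload += len(key.encode("utf-8")) + len(value.encode("utf-8"))

store = CountingStore()
store.put("user", "alice")
store.put("user", "bob")  # overwrite: counter must subtract the old entry
assert store.payload == payload_by_traversal(store._data)
```

Either way the number still wouldn't match disk usage, which is the quota system's actual input.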
May 31 2017
> Could you expose both?
We're not tracking the payload size as a separate piece, and doing so would be a significant amount of work. A 'sizeOnDisk' vs. 'logicalSize' feature request makes sense, but I think we could take that on separately. For this round of work I think we should focus on exposing the info that we do track.
We don't yet expose the breakdown by storage API, but we are tracking that. I think we could expose it with a change to the QuotaManager API along these lines:
typedef base::Callback<
    void(QuotaStatusCode,
         int64_t /* usage */,
         int64_t /* quota */,
         std::vector<std::tuple<QuotaClient::ID, int64_t>> /* per-client usage */)>
    UsageAndQuotaCallback;
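For illustration, the data such a callback would hand to DevTools might look like the following (a hypothetical sketch; the status/client names here are stand-ins, not the actual QuotaClient::ID enumerators, and the byte counts are made up):

```python
# Hypothetical shape of a per-client usage breakdown: a total plus
# (client, bytes) pairs that sum to it, so DevTools can show how much of
# an origin's usage is IDB vs. CacheStorage, etc.

usage_breakdown = {
    "status": "kOk",              # stand-in for QuotaStatusCode
    "usage": 6_291_456,           # total bytes used by the origin
    "quota": 2**30,               # bytes the origin may use
    "breakdown": [                # stand-ins for (QuotaClient::ID, int64_t)
        ("indexed_db", 4_194_304),
        ("cache_storage", 2_097_152),
    ],
}

# invariant DevTools could rely on: the per-client parts sum to the total
assert sum(b for _, b in usage_breakdown["breakdown"]) == usage_breakdown["usage"]
```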
May 31 2017
>> For this round of work I think we should focus on exposing the info that we do track.

We can start with that, but it will end up causing confusion and questions from developers: "I added this much, but the tool is showing a different number", etc. So for clarity we should focus on both right away. Until that is done, we would need to surface a big disclaimer explaining the origin of the number we surface. Or maybe even involve DevRel to have a one-pager explaining it online.
May 31 2017
I'm afraid I wasn't clear earlier. As far as the quota system is concerned, the user payload is irrelevant. The backing stores that we use (LevelDB and SQLite) have blocks / pages that user data is routed into. Disk usage is based on the number of blocks / pages used by the backing store. We want to avoid filling up the user's disk, so quota decisions are made based on the number of blocks / pages that your backing store ends up using.

More concretely, if your application stores 64KB of data in a backing store that ends up creating a 2MB file, you'll be charged 2MB of quota. What you store is irrelevant; what matters is that you're using 2MB of disk space. If you have a 2MB quota, we won't allow you to write any more data.

Based on what I've said above, I think that "payload size" would not be a very actionable metric for developers. Like I said above, there's little solace in knowing that you've only written 64KB of data. What you care about is that you're using 2MB of disk space. AFAIK, the quota APIs were carefully designed to deal with this issue.

On top of that, "payload size" has the following technical issues:

1) Our backing stores don't expose a payload size, so we'd have to track it ourselves. This requires a non-trivial amount of engineering effort, and I can't think of an implementation where the payload size counters don't become a hot spot. This would be hard to justify, as we'd still want to base quota decisions on disk usage, for the reasons outlined above.

2) Defining "payload size" is not straightforward. For the simple example of DOMStorage -- would we use key size + value size? Is there no overhead for empty keys / values? Do we use UTF-8 or JavaScript (~UTF-16/UCS-2) sizes? IndexedDB is more complex, as it supports a fair number of key types [1], and all values handled by the structured clone algorithm [2].
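The encoding ambiguity in point 2 is easy to make concrete: the same DOMStorage value has a different byte size depending on whether you count UTF-8 bytes or JavaScript's UTF-16 code units.

```python
# The same string, two defensible "payload sizes": UTF-8 bytes vs. the
# UTF-16 bytes a JavaScript engine would hold in memory.

value = "café"
utf8_size = len(value.encode("utf-8"))       # "é" takes 2 bytes in UTF-8
utf16_size = len(value.encode("utf-16-le"))  # every code unit takes 2 bytes

print(utf8_size, utf16_size)  # 5 8
```

Neither number is wrong; they just answer different questions, which is the definitional problem being raised.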
So, I think we'll need to do the DevRel work of explaining that while quota usage is generally related to how much data we're storing for an application, databases need to make some compromises to stay efficient, such as deferred garbage collection / compaction.

OTOH, what we can promise is:

1) Quota usage will not go up unless an application writes new data, though it is possible that writes don't end up using more quota.

2) Quota usage may go down when an application deletes data. It is possible that the quota usage decrease happens much later than the delete operation, and some deletes may end up not modifying the quota.

I hope this helps you understand the quota situation.

[1] https://w3c.github.io/IndexedDB/#key-construct
[2] https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm
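The two promises above can be modeled with a toy store that uses tombstones and lazy compaction (a simplified stand-in for what LevelDB or SQLite actually do, not their real mechanics):

```python
# Toy model of deferred space reclamation: writes are charged immediately,
# deletes leave tombstones, and usage only drops when a later compaction
# runs -- mirroring promises 1) and 2) above.

class LazyStore:
    def __init__(self):
        self._live = {}
        self._dead_bytes = 0
        self.usage = 0  # what the quota system would report

    def put(self, key, value):
        self._live[key] = value
        self.usage += len(value)  # promise 1: usage rises only on writes

    def delete(self, key):
        value = self._live.pop(key)
        self._dead_bytes += len(value)  # tombstone: usage unchanged for now

    def compact(self):
        # promise 2: the decrease may happen much later than the delete
        self.usage -= self._dead_bytes
        self._dead_bytes = 0

s = LazyStore()
s.put("a", b"x" * 1000)
s.delete("a")
print(s.usage)  # 1000: the delete is not yet reflected
s.compact()
print(s.usage)  # 0: quota freed only after compaction
```

This is the intuition DevTools would need to convey so that a non-decreasing number after a delete doesn't look like a bug.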
Comment 1 by jsb...@chromium.org, May 22 2017