Leveldb: Shared block cache retains memory after closing database
Issue description
If a website spams the IndexedDB database with large data transactions, the shared block cache can grow very large; crash/0526cf0f66ecf71f#6 shows 50MB of block cache. Because the block cache is shared and other websites keep their local- or session-storage databases open, the cache is never actually cleared. This memory stays allocated for the rest of the browsing session, even after the website closes or deletes its IndexedDB database.
Feb 20 2018
https://cs.chromium.org/chromium/src/third_party/leveldatabase/src/db/db_impl.cc?l=172 Yup, we don't do anything to release things from a shared cache. I'll see how hard it'd be to fix that.
Feb 20 2018
We do prune the cache when we get a memory pressure event. https://cs.chromium.org/chromium/src/third_party/leveldatabase/leveldb_chrome.cc?sq=package:chromium&dr&l=59
Feb 21 2018
I think the easiest way to achieve pruning would be to add a destructor to leveldb::Table::Rep that iterates over the block cache and deletes the entries belonging to that table. Details: Table instances are owned by a leveldb::TableCache whose values are leveldb::TableAndFile structs. The real Table data is in leveldb::Table::Rep, which holds a cache_id and options.block_cache. So we'd have to add a Cache operation that deletes all keys with a given prefix, then use the Fixed64-encoded version of the table's cache_id as that prefix. The change doesn't strike me as terribly difficult to implement, but it would regress leveldb performance: the whole iteration has to happen under a mutex and has to walk memory proportional to the block cache size. That seems like a non-trivial price for other leveldb users who are fine with cache usage hovering at some level even after they close databases.
Nov 7
This issue has been open for half a year. It's not very actionable, and the regression has been in all Chrome users' hands for months.
Comment 1 by jsb...@chromium.org, Feb 20 2018