Memory spike ~1.5MB in Memory.Browser.Large2 on Android Canary: data reduction proxy database
Issue description

https://uma.googleplex.com/timeline_v2?sid=79dfbc60781135f3830d6f3e253e5d1e

Looks like the spike started at version 59.0.3065.0 and doesn't show in 59.0.3063.0. The Dev channel is still at 3062, so the spike doesn't show there yet, but we may be able to confirm it there after this week's build goes out. A sustained upward trajectory without the same trajectory in event counts makes me think this might be a real regression. I haven't looked at the perf waterfall yet to see if there's a matching rise there.
Apr 12 2017
components/data_reduction_proxy/core/browser allocates up to 2 MB of memory in the new version of Chrome (59.0.3065.0), even if Data Saver is disabled. This did not happen in 59.0.3062.0. The related changes in data reduction proxy are:

commit 5e5d1c1764d4ebbad4f51ab37f1d9341e6d26efb
Author: megjablon <megjablon@chromium.org>
Date: Thu Apr 6 16:55:11 2017 -0700

    Create feature and enable data collection for Data Saver site breakdown

    Add a feature to enable the new settings page with site breakdown. Also, add support so that when Data Saver is enabled on Android, compression stats collects site information. Don't collect site info when Data Saver is disabled, and when data usage reporting is disabled (when the proxy is turned off) keep the stats. The stats can be manually deleted using a button on the settings page.

    BUG=615560
    Review-Url: https://codereview.chromium.org/2791563002
    Cr-Commit-Position: refs/heads/master@{#462691}

commit 46aa9d9db9c0581937a96ced13747039fffe943c
Author: dougarnett <dougarnett@chromium.org>
Date: Thu Apr 6 14:45:19 2017 -0700

    Removes the chrome-proxy-ect header from M59.

    This reverts sending the ECT header to the proxy server. Note: doing this with a fresh CL seems cleaner than backing out the original 3 CLs - especially because we intend to revert this CL to put back the ECT header after the M59 branch is cut. [This is an alternative to gating behind a Feature in cl 2802463002]

    BUG=707981
    Review-Url: https://codereview.chromium.org/2807453002
    Cr-Commit-Position: refs/heads/master@{#462632}
Apr 13 2017
Every time I close and open Chrome, the memory allocated by the data reduction proxy component increases by ~200 KB: first it was 200 KB, then 450 KB, then 700 KB, then 900 KB, then 1.2 MB. On release builds on Android we can only see which component allocated the memory: components/data_reduction_proxy/core/browser. Maybe I can try to reproduce on a local build if needed. Marking as a release blocker since this might cause huge regressions on the Dev channel, where the next update will be a week later.
Apr 13 2017
So, I just opened and closed Chrome 10 times. It uses 3.7 MB in the attached trace.
Apr 13 2017
@ssid, I'm still having trouble understanding these traces. After loading the attached one, where can I see the 3.7 MB being allocated by the DRP component?
Apr 13 2017
Ah, never mind. I found it under object type!
Apr 13 2017
There's an issue with the database code when we clear it that causes this. https://codereview.chromium.org/2810393002 is a temporary fix to cover users that don't have the new site breakdown feature enabled. However, I'm finding that calling DataUsageStore::DeleteHistoricalDataUsage() from the new data saver UI causes this exact same bug, so I'll need to dig into this more. @ssid do you want to try this patch and verify that it works? Otherwise, I can just land it and we can check the UMA.
Apr 13 2017
Yup, something is going on with the LevelDB log file. I closed/opened Chrome several times and the only thing that grew is the log file:

src$ adb shell ls -l /data/data/com.google.android.apps.chrome/app_chrome/Default/data_reduction_proxy_leveldb
total 2648
-rw------- 1 u0_a191 u0_a191 2168147 2017-04-13 05:40 000004.log
-rw------- 1 u0_a191 u0_a191 522070 2017-04-12 11:35 000005.ldb
-rw------- 1 u0_a191 u0_a191 16 2017-04-11 11:25 CURRENT
-rw------- 1 u0_a191 u0_a191 0 2017-04-11 11:25 LOCK
-rw------- 1 u0_a191 u0_a191 353 2017-04-13 05:40 LOG
-rw------- 1 u0_a191 u0_a191 353 2017-04-13 02:15 LOG.old
-rw------- 1 u0_a191 u0_a191 123 2017-04-12 11:35 MANIFEST-000001

src$ adb shell ls -l /data/data/com.google.android.apps.chrome/app_chrome/Default/data_reduction_proxy_leveldb
total 2888
-rw------- 1 u0_a191 u0_a191 2414807 2017-04-13 10:26 000004.log
-rw------- 1 u0_a191 u0_a191 522070 2017-04-12 11:35 000005.ldb
-rw------- 1 u0_a191 u0_a191 16 2017-04-11 11:25 CURRENT
-rw------- 1 u0_a191 u0_a191 0 2017-04-11 11:25 LOCK
-rw------- 1 u0_a191 u0_a191 350 2017-04-13 10:26 LOG
-rw------- 1 u0_a191 u0_a191 353 2017-04-13 05:40 LOG.old
-rw------- 1 u0_a191 u0_a191 123 2017-04-12 11:35 MANIFEST-000001

src$ adb shell ls -l /data/data/com.google.android.apps.chrome/app_chrome/Default/data_reduction_proxy_leveldb
total 3128
-rw------- 1 u0_a191 u0_a191 2661473 2017-04-13 10:27 000004.log
-rw------- 1 u0_a191 u0_a191 522070 2017-04-12 11:35 000005.ldb
-rw------- 1 u0_a191 u0_a191 16 2017-04-11 11:25 CURRENT
-rw------- 1 u0_a191 u0_a191 0 2017-04-11 11:25 LOCK
-rw------- 1 u0_a191 u0_a191 350 2017-04-13 10:27 LOG
-rw------- 1 u0_a191 u0_a191 350 2017-04-13 10:26 LOG.old
-rw------- 1 u0_a191 u0_a191 123 2017-04-12 11:35 MANIFEST-000001

The log file directly contributes to the memory usage, as the native heap profiler shows:

3,580.2 KiB ↳ƒdata_reduction_proxy::DataStoreImpl::InitializeOnDBThread()
3,579.4 KiB ↳ƒdata_reduction_proxy::DataStoreImpl::OpenDB()
3,579.0 KiB ↳ƒleveldb::DB::Open(leveldb::Options const&, std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char> > const&, leveldb::DB**)
3,572.4 KiB ↳ƒleveldb::DBImpl::Recover(leveldb::VersionEdit*, bool*)
3,572.0 KiB ↳ƒleveldb::DBImpl::RecoverLogFile(unsigned long long, bool, bool*, leveldb::VersionEdit*, unsigned long long*)
3,568.0 KiB ↳ƒleveldb::WriteBatchInternal::InsertInto(leveldb::WriteBatch const*, leveldb::MemTable*)
3,568.0 KiB ↳ƒleveldb::WriteBatch::Iterate(leveldb::WriteBatch::Handler*) const
3,568.0 KiB ↳ƒDelete
3,568.0 KiB ↳ƒleveldb::MemTable::Add(unsigned long long, leveldb::ValueType, leveldb::Slice const&, leveldb::Slice const&)
2,652.0 KiB ↳ƒleveldb::Arena::AllocateFallback(unsigned int)
2,652.0 KiB ↳ƒleveldb::Arena::AllocateNewBlock(unsigned int)
2,648.0 KiB ↳ƒoperator new[](unsigned int, std::nothrow_t const&)

I.e. on each open, LevelDB replays the log, which consists mostly of 'Delete' entries. I wonder if there is a way to 'compact' LevelDB so that it applies all the deletions and removes them from the log.
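The replay behavior can be illustrated with a toy Python model (this is not leveldb code; `ToyDB` and its fields are invented for the sketch): a Delete is appended to the write-ahead log whether or not the key exists, and every open replays the whole log into memory, so tombstones for never-inserted keys accumulate across sessions.

```python
# Toy model (not leveldb itself) of a write-ahead log that records Delete
# operations even for keys that were never inserted.

class ToyDB:
    def __init__(self, log):
        # On open, replay every log record into the in-memory table;
        # tombstones (value None) are stored just like real values.
        self.log = log
        self.memtable = {}
        for key, value in log:
            self.memtable[key] = value

    def delete(self, key):
        # A Delete is always appended to the log, whether or not the key
        # exists -- this is what bloats the data usage database.
        self.log.append((key, None))
        self.memtable[key] = None

log = []
for session in range(10):            # "close and open Chrome 10 times"
    db = ToyDB(log)                  # replay grows with every session
    for i in range(5760):            # the same keys are deleted every time
        db.delete("data_usage_bucket:%d" % i)
    log = db.log

# The log now holds 10 copies of the same 5760 tombstones.
print(len(log))                      # -> 57600
```

In real leveldb the replayed records land in a MemTable arena, which is what the leveldb::Arena::AllocateNewBlock frames in the heap profile account for.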
Apr 13 2017
Some more info: the culprit is options.reuse_logs = true. I modified ldb (https://github.com/0x00A/ldb) to delete the same 5760 "data_usage_bucket:%d" keys as DataUsageStore::DeleteHistoricalDataUsage() does. Calling that in a loop on an empty DB results in the following changes in the .log and .ldb files:

-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 493240 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 739859 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 986480 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 1233099 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 1479720 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 1726339 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 1972960 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 2219572 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 2466198 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 2712814 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 2959440 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 3206056 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 3452680 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 3526704 Apr 13 13:40 000003.log
-rw-r--r-- 1 dskiba eng 172569 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 895134 Apr 13 13:40 000006.ldb
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 419188 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 665809 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 912422 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 1159048 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 1405664 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 1652289 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 1898906 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 2145529 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 2392150 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 2638769 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 2885390 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 3132009 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 3378630 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 3527790 Apr 13 13:40 000005.log
-rw-r--r-- 1 dskiba eng 97442 Apr 13 13:40 000007.log
-rw-r--r-- 1 dskiba eng 541763 Apr 13 13:40 000008.ldb
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 344063 Apr 13 13:40 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 590684 Apr 13 13:40 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 837303 Apr 13 13:40 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 1083924 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 1330543 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 1577164 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 1823783 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 2070404 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 2317023 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 2563644 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 2810263 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 3056882 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 3303501 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 3527799 Apr 13 13:41 000007.log
-rw-r--r-- 1 dskiba eng 22317 Apr 13 13:41 000009.log
-rw-r--r-- 1 dskiba eng 90283 Apr 13 13:41 000010.ldb
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 986769 Apr 13 13:41 000009.ldb
-rw-r--r-- 1 dskiba eng 268938 Apr 13 13:41 000009.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 986769 Apr 13 13:41 000009.ldb
-rw-r--r-- 1 dskiba eng 515552 Apr 13 13:41 000009.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 986769 Apr 13 13:41 000009.ldb
-rw-r--r-- 1 dskiba eng 762178 Apr 13 13:41 000009.log
-rw-r--r-- 1 dskiba eng 987126 Apr 13 13:40 000005.ldb
-rw-r--r-- 1 dskiba eng 986802 Apr 13 13:40 000007.ldb
-rw-r--r-- 1 dskiba eng 986769 Apr 13 13:41 000009.ldb
-rw-r--r-- 1 dskiba eng 1008794 Apr 13 13:41 000009.log

I.e. the .log is eventually truncated, but then a new .ldb is created. This is ridiculous - the DB is empty, and it grows. If reuse_logs is false, then the .log is 246619 bytes and the .ldb files are 79183 bytes, 3 at most:

-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000005.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000007.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000008.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000007.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000010.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000011.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000007.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000010.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000013.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000014.log
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000017.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000019.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000020.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000019.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000022.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:46 000023.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000019.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:46 000022.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000025.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:47 000026.log
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:47 000029.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000031.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:47 000032.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000031.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000034.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:47 000035.log
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000031.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000034.ldb
-rw-r--r-- 1 dskiba eng 79183 Apr 13 13:47 000037.ldb
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:47 000038.log
-rw-r--r-- 1 dskiba eng 246619 Apr 13 13:47 000041.log

However, the memory usage for the max case (246619-byte .log + 79183*3 .ldb) still might be too high.
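The effect of reuse_logs can be approximated with a small model (constants taken from the listings in this comment; the flush threshold is leveldb's default write_buffer_size, and real leveldb flushes somewhat earlier because memtable entries carry in-memory overhead beyond the on-disk record size):

```python
# Toy model (not leveldb itself) of log file growth across open/delete/close
# cycles, with and without reuse_logs.

WRITE_BUFFER_SIZE = 4 * 1024 * 1024   # leveldb's default write_buffer_size
RUN_BYTES = 246619                    # observed log growth per run of 5760 deletes

def simulate(opens, reuse_logs):
    """Return the approximate log file size after `opens` cycles."""
    log_size = 0
    for _ in range(opens):
        if not reuse_logs:
            log_size = 0              # a fresh log is started on every open
        log_size += RUN_BYTES
        if log_size > WRITE_BUFFER_SIZE:
            log_size = 0              # memtable flushed to an .ldb, log reset
    return log_size

print(simulate(10, reuse_logs=True))   # -> 2466190: grows toward 4 MB
print(simulate(10, reuse_logs=False))  # -> 246619: bounded at one run
```

With reuse_logs=true the log is carried over and appended to on every open, so it keeps climbing until the write buffer fills; with reuse_logs=false it never exceeds one run's worth of tombstones.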
Apr 13 2017
pwnall@, cmumford@ - please comment on #14.
Apr 13 2017
I skimmed through the CLs and I'm having trouble figuring out what changed in 3065. Did it use leveldb before? Did the usage change? Do you suspect a leveldb change? (The last significant update there was on March 2nd, around the M58 branch point.)
Apr 13 2017
Shouldn't we not have any of the database files when the user does not enable Data Saver? I mean, DataStoreImpl::InitializeOnDBThread() should not do anything when Data Saver is off.
Apr 13 2017
The only code that changed was a line that calls delete on what should be an empty database, which we assumed would do nothing. Otherwise, all the DB code has been around for a while; it was landed by kundaji@, who is no longer on our team, for our desktop extension. Here's the fix I'm going to land: https://chromiumcodereview.appspot.com/2810393002/ We can revisit InitializeOnDBThread() separately if that works for you.
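The fix that eventually landed (see the commit later in this thread) guards the clear with a pref so that nothing is ever deleted from a never-populated store. A sketch of that guard, with entirely hypothetical names standing in for the real pref and store code:

```python
# Sketch (hypothetical names, not Chromium code) of the pref guard: remember
# whether data usage was ever written, and only issue deletes when there is
# actually something to clear.

prefs = {"data_usage_populated": False}
store = {}            # stands in for the on-disk database
deletes_issued = []   # tombstones that would otherwise hit the leveldb log

def record_data_usage(key, value):
    store[key] = value
    prefs["data_usage_populated"] = True

def delete_historical_data_usage():
    if not prefs["data_usage_populated"]:
        return        # never populated: issue no deletes at all
    for key in list(store):
        deletes_issued.append(key)
        del store[key]
    prefs["data_usage_populated"] = False

delete_historical_data_usage()      # no-op on a never-populated store
print(len(deletes_issued))          # -> 0

record_data_usage("data_usage_bucket:0", 123)
delete_historical_data_usage()      # now there is something to clear
print(len(deletes_issued))          # -> 1
```

The key property is that the common case (Data Saver users who never collected site data) writes zero tombstones to the log.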
Apr 13 2017
Right, this works around the problem, but the question remains - why does LevelDB grow when a bunch of keys are deleted from an empty DB? The same problem could pop up somewhere else in Chrome.
Apr 13 2017
Yeah, it's not the expected behavior. Do you want to take ownership of looking into this, or find someone to hand it off to? It's an interesting problem; I just don't have the cycles right now.
Apr 13 2017
I was hoping to hear something from the LevelDB folks (pwnall, cmumford). BTW, I wouldn't enable the kDataReductionSiteBreakdown experiment widely until LevelDB memory is confirmed to be under control.
Apr 13 2017
I think leveldb::Options::write_buffer_size should control the size of the log files. The default value is 4 MB, so I think the log files can grow up to 4 MB before getting cleared?
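As a rough sanity check on that theory, the numbers reported earlier in this thread line up (all constants are observed values from this bug; this is arithmetic only, not leveldb internals):

```python
# Back-of-the-envelope check of how the observed log growth relates to
# leveldb's default 4 MB write_buffer_size.

write_buffer_size = 4 * 1024 * 1024   # leveldb default
bytes_per_run = 246619                # observed log growth per 5760-key delete run
bytes_per_delete = bytes_per_run / 5760

runs_before_flush = write_buffer_size // bytes_per_run

print(round(bytes_per_delete, 1))     # -> 42.8 bytes per tombstone record
print(runs_before_flush)              # -> 17 runs fit under the 4 MB buffer
```

In the actual experiment the log topped out near 3.5 MB (about 14-15 runs), presumably because the in-memory memtable entries carry more overhead than the on-disk log records, so 17 runs is only an upper bound under these assumptions.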
Apr 13 2017
The following revision refers to this bug: https://chromium.googlesource.com/chromium/src.git/+/945a5edc332c030923556edde964cce5a4e24795

commit 945a5edc332c030923556edde964cce5a4e24795
Author: megjablon <megjablon@chromium.org>
Date: Thu Apr 13 23:17:21 2017

    Fix memory issues with the Data Reduction proxy LevelDB

    Delete the whole database on DeleteHistoricalDataUsage so that the log file isn't bloated with deletes to entries that don't exist. Also, don't reuse the log on open. Store if there's historical data in a pref and only clear the data usage stats if it's been populated before.

    BUG=710909
    Review-Url: https://codereview.chromium.org/2810393002
    Cr-Commit-Position: refs/heads/master@{#464600}

[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_reduction_proxy_test_utils.cc
[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_reduction_proxy_test_utils.h
[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_store.cc
[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_store.h
[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_store_impl.cc
[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_store_impl.h
[modify] https://crrev.com/945a5edc332c030923556edde964cce5a4e24795/components/data_reduction_proxy/core/browser/data_usage_store.cc
Apr 13 2017
So yeah, write_buffer_size is the thing, but it only seems to matter when reuse_logs is true. See the diff for ldb.cc, where I log approximate-memory-usage after Open(). Adding 5760 keys over and over to an empty database opened with reuse_logs=true yields this:

approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 389880
approximate-memory-usage after Open(): 775656
approximate-memory-usage after Open(): 1161432
approximate-memory-usage after Open(): 1547208
approximate-memory-usage after Open(): 1932984
approximate-memory-usage after Open(): 2318760
approximate-memory-usage after Open(): 2704536
approximate-memory-usage after Open(): 3090312
approximate-memory-usage after Open(): 3480192
approximate-memory-usage after Open(): 3865968
approximate-memory-usage after Open(): 57456
approximate-memory-usage after Open(): 443232
approximate-memory-usage after Open(): 829008
approximate-memory-usage after Open(): 1218888
approximate-memory-usage after Open(): 1604664
approximate-memory-usage after Open(): 1990440
approximate-memory-usage after Open(): 2376216
approximate-memory-usage after Open(): 2761992
approximate-memory-usage after Open(): 3147768
approximate-memory-usage after Open(): 3533544
approximate-memory-usage after Open(): 3919320
approximate-memory-usage after Open(): 110808

If reuse_logs=false, the situation is way more stable:

approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104
approximate-memory-usage after Open(): 4104

The fix changed reuse_logs to false, so the data reduction proxy code is fine, but other places just use leveldb_env::kDefaultLogReuseOptionValue, which is true everywhere except ChromeOS. I'll file a separate bug for that.

megjablon@ - I somehow missed the fact that you also set reuse_logs to false - with that, I think memory usage should be fine.
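As a cross-check, the reuse_logs=true sequence resets right about where a 4 MB write buffer predicts (constants are lifted from the numbers in this comment; this is arithmetic, not leveldb internals):

```python
# Check that the approximate-memory-usage counter with reuse_logs=true
# resets once it would cross leveldb's default 4 MB write buffer.

WRITE_BUFFER_SIZE = 4 * 1024 * 1024   # leveldb's default write_buffer_size
PER_OPEN = 389880 - 4104              # observed growth per open: 385776 bytes

usage = 4104                          # usage right after the first Open()
growing_opens = 0
while usage + PER_OPEN <= WRITE_BUFFER_SIZE:
    usage += PER_OPEN
    growing_opens += 1

print(growing_opens)  # -> 10 more opens fit before the memtable flushes
print(usage)          # -> 3861864
```

The predicted peak of 3,861,864 is close to the observed 3,865,968, after which the counter drops back to 57,456 - consistent with a flush triggered by the write buffer filling up.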
Apr 14 2017
Filed issue 711518 for general leveldb::ChromiumEnv Android tuning.
Apr 14 2017
Issue 709638 has been merged into this issue.
Apr 17 2017
UMA shows that the memory spike has dropped: https://uma.googleplex.com/timeline_v2?sid=a028bbf539f5c4323adcb61dfb23f783
Comment 1 by ssid@chromium.org, Apr 12 2017