No idea if this is something like files leaking into a memfs, or us leaking memory or some other resource directly.
I ran:
out/fuch/bin/run_content_unittests --gtest_filter=CacheStorageManagerTest* --gtest_repeat=-1 --gtest_break_on_failure
Eventually (note the 1918s timestamp, so after about half an hour), the OOM killer nuked 'boot', which, not surprisingly, brought things to an abrupt halt.
I think each repeat should be running in a subprocess, so there shouldn't be much that the tests themselves can leak. The test harness that launches the subprocesses that run the tests could be leaking, though, or it could just be accumulating files in /tmp (a quick check for that is sketched after the log below).
Anyway, it's clearly a real OOM, and there wasn't much else to kill, so this is generally fine and not immediately actionable, but I wanted to file a bug for tracking in case it comes up again.
[01917.908] 02424.02469> [112/115] CacheStorageManagerTests/CacheStorageManagerTestP.StorageMatch_IgnoreMethod/1 (19 ms)
[01917.908] 02424.02469> [113/115] CacheStorageManagerTests/CacheStorageManagerTestP.StorageMatch_IgnoreVary/0 (7 ms)
[01917.908] 02424.02469> [114/115] CacheStorageManagerTests/CacheStorageManagerTestP.StorageMatch_IgnoreVary/1 (17 ms)
[01917.908] 02424.02469> [115/115] CacheStorageManagerTests/CacheStorageManagerTestP.StorageMatchAll_IgnoreSearch/0 (7 ms)
[01917.909] 02424.02469> SUCCESS: all tests passed.
[01918.433] 02424.02469> PageFault:577: ERROR: failed to fault in or grab existing page
[01918.433] 02424.02469> PageFault:578: 0xffffff8071cf7968 vmo_offset 0x9143e29000, pf_flags 0x1b
[01918.433] 01119.01152> <== fatal exception: process ./content_unittests[2424] thread initial-thread[2469]
[01918.433] 01119.01152> <== fatal page fault, PC at 0x4e32f4f07a03
[01918.433] 01119.01152> CS: 0 RIP: 0x4e32f4f07a03 EFL: 0x10206 CR2: 0x268f8313d000
[01918.433] 01119.01152> RAX: 0x268f8313c000 RBX: 0x77d1e4be6f70 RCX: 0x1f81 RDX: 0x2f81
[01918.433] 01119.01152> RSI: 0x269d69139000 RDI: 0x268f8313d000 RBP: 0x4357b5db54e0 RSP: 0x4357b5db54a8
[01918.433] 01119.01152> R8: 0x268f831550e0 R9: 0x58 R10: 0x268f83155138 R11: 0x268f83155108
[01918.433] 01119.01152> R12: 0x78b09e61 R13: 0x77d1e4be6da0 R14: 0x4e32f4f545f8 R15: 0xc
[01918.433] 01119.01152> errc: 0x6
[01918.433] 01119.01152> bottom of user stack:
[01918.433] 01119.01152> 0x00004357b5db54a8: 0ce69764 00005ccb f4f35340 00004e32 |d....\..@S..2N..|
[01918.433] 01119.01152> 0x00004357b5db54b8: 8313c000 0000268f 831317f0 0000268f |.....&.......&..|
[01918.433] 01119.01152> 0x00004357b5db54c8: 00002f81 00000000 69138000 0000269d |./.........i.&..|
[01918.433] 01119.01152> 0x00004357b5db54d8: 8313c000 0000268f b5db5690 00004357 |.....&...V..WC..|
[01918.433] 01119.01152> 0x00004357b5db54e8: 0cf238ce 00005ccb 831317f0 0000268f |.8...\.......&..|
[01918.433] 01119.01152> 0x00004357b5db54f8: 00002f81 00000000 00002f91 00000000 |./......./......|
[01918.433] 01119.01152> 0x00004357b5db5508: 8313c000 0000268f 00002f90 00000000 |.....&.../......|
[01918.433] 01119.01152> 0x00004357b5db5518: 9d686948 00002650 00002f81 00000000 |Hih.P&.../......|
[01918.433] 01119.01152> 0x00004357b5db5528: 83155108 0000268f f4f56140 00010032 |.Q...&..@a..2...|
[01918.433] 01119.01152> 0x00004357b5db5538: 00002f8f 00000000 8313c000 0000268f |./...........&..|
[01918.433] 01119.01152> 0x00004357b5db5548: 00002f81 00000000 69138000 0000269d |./.........i.&..|
[01918.433] 01119.01152> 0x00004357b5db5558: 83155108 0000268f ffffffff ffffffff |.Q...&..........|
[01918.433] 01119.01152> 0x00004357b5db5568: 83155108 0000268f 83155108 0000268f |.Q...&...Q...&..|
[01918.433] 01119.01152> 0x00004357b5db5578: 83155108 0000268f 30b5e008 0000263f |.Q...&.....0?&..|
[01918.433] 01119.01152> 0x00004357b5db5588: 9d689e60 00002650 83155108 0000268f |`.h.P&...Q...&..|
[01918.433] 01119.01152> 0x00004357b5db5598: 83155108 0000268f 00000006 00000000 |.Q...&..........|
[01918.433] 01119.01152> arch: x86_64
[01918.434] 01119.01152> PageFault:577: ERROR: failed to fault in or grab existing page
[01918.434] 01119.01152> PageFault:578: 0xffffff803d3bca50 vmo_offset 0x4936984000, pf_flags 0x1b
[01918.434] 01119.01338> PageFault:577: ERROR: failed to fault in or grab existing page
[01918.434] 01119.01338> PageFault:578: 0xffffff80777a0870 vmo_offset 0x9c84e6c000, pf_flags 0x1b
[01918.893] 00000.00000> OOM: 0M free (-186.0M) / 2046.9M total
[01918.893] 00000.00000> OOM: oom_lowmem(shortfall_bytes=52428800) called
[01918.893] 00000.00000> OOM: Process mapped committed bytes:
[01918.895] 00000.00000> OOM: proc 1044 675M 'bin/devmgr'
[01918.895] 00000.00000> OOM: proc 2811 24M 'devhost:pci#4:8086:2922'
[01918.899] 00000.00000> OOM: proc 2356 128M 'netstack'
[01918.948] 00000.00000> OOM: proc 2424 747M './content_unittests'
[01918.948] 00000.00000> OOM: Finding a job to kill...
[01918.948] 00000.00000> OOM: (skip) job 2656 'tcp:22'
[01918.948] 00000.00000> OOM: *KILL* job 2210 'boot'
[01918.948] 00000.00000> OOM: + proc 2275 run 'sh'
[01918.948] 00000.00000> OOM: + proc 2313 run 'listen'
[01918.948] 00000.00000> OOM: + proc 2356 run 'netstack'
[01918.948] 00000.00000> OOM: + proc 2424 run './content_unittests'
[01918.948] 00000.00000> OOM: + job 2656 'tcp:22'
[01918.948] 00000.00000> OOM: = 4 running procs (4 total), 1 jobs
[01918.948] 00000.00000> OOM: (next) job 1671 'root'
[01918.948] 00000.00000> OOM: (next) job 1117 'fuchsia'
[01918.948] 00000.00000> OOM: (next) job 1116 'zircon-services'
[01918.965] 02756.03036> eth [netstack]: tx_thread: exit: 0
[01918.985] 01711.02011> [ERROR:garnet/bin/bootstrap/app.cc(127)] Singleton /system/bin/netstack died
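If this reproduces, one way to check the /tmp-accumulation hypothesis is to run the suite with a finite repeat count in a loop and snapshot the temp directory between iterations. The Python sketch below is purely illustrative: it assumes the test binary can be invoked directly from a shell (TEST_BIN) and uses a local /tmp as a stand-in for the target's memfs (TMP_DIR); on a real Fuchsia device the snapshot would have to be taken on the target instead. Neither assumption comes from the log above.

import os
import subprocess

# Hypothetical direct invocation; adjust to however the runner is actually launched.
TEST_BIN = "out/fuch/bin/run_content_unittests"
TEST_ARGS = ["--gtest_filter=CacheStorageManagerTest*", "--gtest_repeat=1"]
TMP_DIR = "/tmp"        # stand-in for the device's memfs-backed /tmp
ITERATIONS = 20

def snapshot(path):
    """Return (file_count, total_bytes) for everything under path."""
    count, total = 0, 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            count += 1
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished between listing and stat
    return count, total

baseline = snapshot(TMP_DIR)
for i in range(ITERATIONS):
    subprocess.run([TEST_BIN] + TEST_ARGS, check=False)
    count, total = snapshot(TMP_DIR)
    print("after run %2d: %+d files, %+d bytes vs. baseline"
          % (i + 1, count - baseline[0], total - baseline[1]))

If the file and byte deltas grow monotonically across runs, the harness (or the tests) is leaving temp files behind; if they stay flat while memory still climbs, the leak is more likely in-process.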
Comment 1 by w...@chromium.org, Oct 30 2017