
Issue 767336

Starred by 4 users

Media fuzzers accumulate massive amounts of histogram storage leading to OOM.

Project Member Reported by ClusterFuzz, Sep 21 2017

Issue description

Detailed report: https://clusterfuzz.com/testcase?key=5907467442847744

Fuzzer: libFuzzer_mediasource_MP3_pipeline_integration_fuzzer
Job Type: mac_libfuzzer_chrome_asan
Platform Id: mac

Crash Type: Out-of-memory (exceeds 2048 MB)
Crash Address: 
Crash State:
  mediasource_MP3_pipeline_integration_fuzzer
  
Sanitizer: address (ASAN)

Reproducer Testcase: https://clusterfuzz.com/download?testcase_id=5907467442847744

Issue filed automatically.

See https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/reproducing.md for more information.

Note: This crash might not be reproducible with the provided testcase. That said, for the past 14 days we've been seeing this crash frequently. If you are unable to reproduce this, please try a speculative fix based on the crash stacktrace in the report. The fix can be verified by looking at the crash statistics in the report, a day after the fix is deployed. We will auto-close the bug if the crash is not seen for 14 days.
 
Cc: msrchandra@chromium.org kkaluri@chromium.org
Components: Blink>Media
Labels: -Pri-1 M-63 CF-NeedsTriage Test-Predator-Wrong Pri-2
Redo Task has been performed for a regression range.

Thank You.
Comment 2 by ClusterFuzz (Project Member), Sep 21 2017

Labels: OS-Linux
Components: -Blink>Media Internals>Media
Cc: wolenetz@chromium.org
Owner: dalecur...@chromium.org
Status: Assigned (was: Untriaged)
Will take a look at this and see about duping into one of our existing OOM bugs.
Cc: mmoroz@chromium.org
Seems like all of the allocations are in histogram storage. +mmoroz: Should we have the fuzzers route histogram to /dev/null? 

Live Heap Allocations: 1080502750 bytes in 15496432 chunks; quarantined: 9470101 bytes in 35908 chunks; 41396 other chunks; total chunks: 15573736; showing top 95% (at most 8 unique contexts)
85952544 byte(s) (7%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x10ce0bc9c in __allocate third_party/llvm-build/Release+Asserts/include/c++/v1/new:226:10
#2 0x10ce0bc9c in allocate third_party/llvm-build/Release+Asserts/include/c++/v1/memory:1747
#3 0x10ce0bc9c in allocate third_party/llvm-build/Release+Asserts/include/c++/v1/memory:1502
#4 0x10ce0bc9c in std::__1::vector<int, std::__1::allocator<int> >::allocate(unsigned long) third_party/llvm-build/Release+Asserts/include/c++/v1/vector:926
#5 0x10d0be383 in std::__1::vector<int, std::__1::allocator<int> >::vector(unsigned long, int const&) third_party/llvm-build/Release+Asserts/include/c++/v1/vector:1098:9
#6 0x10d14fc24 in base::BucketRanges::BucketRanges(unsigned long) base/metrics/bucket_ranges.cc:107:7
#7 0x10d165bb0 in base::Histogram::Factory::CreateRanges() base/metrics/histogram.cc:131:32
#8 0x10d15b49e in base::Histogram::Factory::Build() base/metrics/histogram.cc:166:42
#9 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
#10 0x10d1bb807 in base::internal::SchedulerWorkerPoolImpl::SchedulerWorkerPoolImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::ThreadPriority, base::internal::TaskTracker*, base::internal::DelayedTaskManager*) base/task_scheduler/scheduler_worker_pool_impl.cc:175:42
#11 0x10d1ca482 in make_unique<base::internal::SchedulerWorkerPoolImpl, std::__1::basic_string<char>, const base::ThreadPriority &, base::internal::TaskTrackerPosix *, base::internal::DelayedTaskManager *> third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#12 0x10d1ca482 in base::internal::TaskSchedulerImpl::TaskSchedulerImpl(base::BasicStringPiece<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::unique_ptr<base::internal::TaskTrackerPosix, std::__1::default_delete<base::internal::TaskTrackerPosix> >) base/task_scheduler/task_scheduler_impl.cc:37
#13 0x10d261994 in make_unique<base::internal::TaskSchedulerImpl, char const (&)[22], std::__1::unique_ptr<base::test::ScopedTaskEnvironment::TestTaskTracker, std::__1::default_delete<base::test::ScopedTaskEnvironment::TestTaskTracker> > > third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#14 0x10d261994 in base::test::ScopedTaskEnvironment::ScopedTaskEnvironment(base::test::ScopedTaskEnvironment::MainThreadType, base::test::ScopedTaskEnvironment::ExecutionMode) base/test/scoped_task_environment.cc:110
#15 0x10cce6747 in media::PipelineIntegrationTestBase::PipelineIntegrationTestBase() media/test/pipeline_integration_test_base.cc:107:30
#16 0x10ccca992 in media::MediaSourcePipelineIntegrationFuzzerTest::MediaSourcePipelineIntegrationFuzzerTest() media/test/pipeline_integration_fuzzertest.cc:152:3
#17 0x10ccc60f1 in LLVMFuzzerTestOneInput media/test/pipeline_integration_fuzzertest.cc:221:53
#18 0x10cd46dfb in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) third_party/libFuzzer/src/FuzzerLoop.cpp:463:13
#19 0x10cd46453 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*) third_party/libFuzzer/src/FuzzerLoop.cpp:392:3
#20 0x10cd4a101 in fuzzer::Fuzzer::MutateAndTestOne() third_party/libFuzzer/src/FuzzerLoop.cpp:587:9
#21 0x10cd4b31e in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) third_party/libFuzzer/src/FuzzerLoop.cpp:699:5
#22 0x10cd2ef5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) third_party/libFuzzer/src/FuzzerDriver.cpp:738:6
#23 0x10cd5c55d in main third_party/libFuzzer/src/FuzzerMain.cpp:20:10
#24 0x7fff925dd5ac in start
85952544 byte(s) (7%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x10ce0bc9c in __allocate third_party/llvm-build/Release+Asserts/include/c++/v1/new:226:10
#2 0x10ce0bc9c in allocate third_party/llvm-build/Release+Asserts/include/c++/v1/memory:1747
#3 0x10ce0bc9c in allocate third_party/llvm-build/Release+Asserts/include/c++/v1/memory:1502
#4 0x10ce0bc9c in std::__1::vector<int, std::__1::allocator<int> >::allocate(unsigned long) third_party/llvm-build/Release+Asserts/include/c++/v1/vector:926
#5 0x10d0be383 in std::__1::vector<int, std::__1::allocator<int> >::vector(unsigned long, int const&) third_party/llvm-build/Release+Asserts/include/c++/v1/vector:1098:9
#6 0x10d14fc24 in base::BucketRanges::BucketRanges(unsigned long) base/metrics/bucket_ranges.cc:107:7
#7 0x10d165bb0 in base::Histogram::Factory::CreateRanges() base/metrics/histogram.cc:131:32
#8 0x10d15b49e in base::Histogram::Factory::Build() base/metrics/histogram.cc:166:42
#9 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
#10 0x10d1bb746 in base::internal::SchedulerWorkerPoolImpl::SchedulerWorkerPoolImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::ThreadPriority, base::internal::TaskTracker*, base::internal::DelayedTaskManager*) base/task_scheduler/scheduler_worker_pool_impl.cc:165:42
#11 0x10d1ca482 in make_unique<base::internal::SchedulerWorkerPoolImpl, std::__1::basic_string<char>, const base::ThreadPriority &, base::internal::TaskTrackerPosix *, base::internal::DelayedTaskManager *> third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#12 0x10d1ca482 in base::internal::TaskSchedulerImpl::TaskSchedulerImpl(base::BasicStringPiece<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::unique_ptr<base::internal::TaskTrackerPosix, std::__1::default_delete<base::internal::TaskTrackerPosix> >) base/task_scheduler/task_scheduler_impl.cc:37
#13 0x10d261994 in make_unique<base::internal::TaskSchedulerImpl, char const (&)[22], std::__1::unique_ptr<base::test::ScopedTaskEnvironment::TestTaskTracker, std::__1::default_delete<base::test::ScopedTaskEnvironment::TestTaskTracker> > > third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#14 0x10d261994 in base::test::ScopedTaskEnvironment::ScopedTaskEnvironment(base::test::ScopedTaskEnvironment::MainThreadType, base::test::ScopedTaskEnvironment::ExecutionMode) base/test/scoped_task_environment.cc:110
#15 0x10cce6747 in media::PipelineIntegrationTestBase::PipelineIntegrationTestBase() media/test/pipeline_integration_test_base.cc:107:30
#16 0x10ccca992 in media::MediaSourcePipelineIntegrationFuzzerTest::MediaSourcePipelineIntegrationFuzzerTest() media/test/pipeline_integration_fuzzertest.cc:152:3
#17 0x10ccc60f1 in LLVMFuzzerTestOneInput media/test/pipeline_integration_fuzzertest.cc:221:53
#18 0x10cd46dfb in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) third_party/libFuzzer/src/FuzzerLoop.cpp:463:13
#19 0x10cd46453 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*) third_party/libFuzzer/src/FuzzerLoop.cpp:392:3
#20 0x10cd4a101 in fuzzer::Fuzzer::MutateAndTestOne() third_party/libFuzzer/src/FuzzerLoop.cpp:587:9
#21 0x10cd4b31e in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) third_party/libFuzzer/src/FuzzerLoop.cpp:699:5
#22 0x10cd2ef5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) third_party/libFuzzer/src/FuzzerDriver.cpp:738:6
#23 0x10cd5c55d in main third_party/libFuzzer/src/FuzzerMain.cpp:20:10
#24 0x7fff925dd5ac in start
85952544 byte(s) (7%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x10ce0bc9c in __allocate third_party/llvm-build/Release+Asserts/include/c++/v1/new:226:10
#2 0x10ce0bc9c in allocate third_party/llvm-build/Release+Asserts/include/c++/v1/memory:1747
#3 0x10ce0bc9c in allocate third_party/llvm-build/Release+Asserts/include/c++/v1/memory:1502
#4 0x10ce0bc9c in std::__1::vector<int, std::__1::allocator<int> >::allocate(unsigned long) third_party/llvm-build/Release+Asserts/include/c++/v1/vector:926
#5 0x10d0be383 in std::__1::vector<int, std::__1::allocator<int> >::vector(unsigned long, int const&) third_party/llvm-build/Release+Asserts/include/c++/v1/vector:1098:9
#6 0x10d14fc24 in base::BucketRanges::BucketRanges(unsigned long) base/metrics/bucket_ranges.cc:107:7
#7 0x10d165bb0 in base::Histogram::Factory::CreateRanges() base/metrics/histogram.cc:131:32
#8 0x10d15b49e in base::Histogram::Factory::Build() base/metrics/histogram.cc:166:42
#9 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
#10 0x10d15cee1 in base::Histogram::FactoryTimeGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::TimeDelta, base::TimeDelta, unsigned int, int) base/metrics/histogram.cc:267:10
#11 0x10d1bb685 in base::internal::SchedulerWorkerPoolImpl::SchedulerWorkerPoolImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::ThreadPriority, base::internal::TaskTracker*, base::internal::DelayedTaskManager*) base/task_scheduler/scheduler_worker_pool_impl.cc:156:34
#12 0x10d1ca482 in make_unique<base::internal::SchedulerWorkerPoolImpl, std::__1::basic_string<char>, const base::ThreadPriority &, base::internal::TaskTrackerPosix *, base::internal::DelayedTaskManager *> third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#13 0x10d1ca482 in base::internal::TaskSchedulerImpl::TaskSchedulerImpl(base::BasicStringPiece<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::unique_ptr<base::internal::TaskTrackerPosix, std::__1::default_delete<base::internal::TaskTrackerPosix> >) base/task_scheduler/task_scheduler_impl.cc:37
#14 0x10d261994 in make_unique<base::internal::TaskSchedulerImpl, char const (&)[22], std::__1::unique_ptr<base::test::ScopedTaskEnvironment::TestTaskTracker, std::__1::default_delete<base::test::ScopedTaskEnvironment::TestTaskTracker> > > third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#15 0x10d261994 in base::test::ScopedTaskEnvironment::ScopedTaskEnvironment(base::test::ScopedTaskEnvironment::MainThreadType, base::test::ScopedTaskEnvironment::ExecutionMode) base/test/scoped_task_environment.cc:110
#16 0x10cce6747 in media::PipelineIntegrationTestBase::PipelineIntegrationTestBase() media/test/pipeline_integration_test_base.cc:107:30
#17 0x10ccca992 in media::MediaSourcePipelineIntegrationFuzzerTest::MediaSourcePipelineIntegrationFuzzerTest() media/test/pipeline_integration_fuzzertest.cc:152:3
#18 0x10ccc60f1 in LLVMFuzzerTestOneInput media/test/pipeline_integration_fuzzertest.cc:221:53
#19 0x10cd46dfb in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) third_party/libFuzzer/src/FuzzerLoop.cpp:463:13
#20 0x10cd46453 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*) third_party/libFuzzer/src/FuzzerLoop.cpp:392:3
#21 0x10cd4a101 in fuzzer::Fuzzer::MutateAndTestOne() third_party/libFuzzer/src/FuzzerLoop.cpp:587:9
#22 0x10cd4b31e in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) third_party/libFuzzer/src/FuzzerLoop.cpp:699:5
#23 0x10cd2ef5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) third_party/libFuzzer/src/FuzzerDriver.cpp:738:6
#24 0x10cd5c55d in main third_party/libFuzzer/src/FuzzerMain.cpp:20:10
#25 0x7fff925dd5ac in start
33706880 byte(s) (3%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x7fff8be22204  (/usr/lib/libc++.1.dylib:x86_64+0x3f204)
#1 0x10d1682fa in base::HistogramBase::HistogramBase(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) base/metrics/histogram_base.cc:67:7
#2 0x10d16143d in base::Histogram::Histogram(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, base::BucketRanges const*) base/metrics/histogram.cc:612:7
#3 0x10d165e10 in base::Histogram::Factory::HeapAlloc(base::BucketRanges const*) base/metrics/histogram.cc:140:27
#4 0x10d15b8e0 in base::Histogram::Factory::Build() base/metrics/histogram.cc:208:29
#5 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
#6 0x10d15cee1 in base::Histogram::FactoryTimeGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::TimeDelta, base::TimeDelta, unsigned int, int) base/metrics/histogram.cc:267:10
#7 0x10d1bb685 in base::internal::SchedulerWorkerPoolImpl::SchedulerWorkerPoolImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::ThreadPriority, base::internal::TaskTracker*, base::internal::DelayedTaskManager*) base/task_scheduler/scheduler_worker_pool_impl.cc:156:34
#8 0x10d1ca482 in make_unique<base::internal::SchedulerWorkerPoolImpl, std::__1::basic_string<char>, const base::ThreadPriority &, base::internal::TaskTrackerPosix *, base::internal::DelayedTaskManager *> third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#9 0x10d1ca482 in base::internal::TaskSchedulerImpl::TaskSchedulerImpl(base::BasicStringPiece<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::unique_ptr<base::internal::TaskTrackerPosix, std::__1::default_delete<base::internal::TaskTrackerPosix> >) base/task_scheduler/task_scheduler_impl.cc:37
#10 0x10d261994 in make_unique<base::internal::TaskSchedulerImpl, char const (&)[22], std::__1::unique_ptr<base::test::ScopedTaskEnvironment::TestTaskTracker, std::__1::default_delete<base::test::ScopedTaskEnvironment::TestTaskTracker> > > third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#11 0x10d261994 in base::test::ScopedTaskEnvironment::ScopedTaskEnvironment(base::test::ScopedTaskEnvironment::MainThreadType, base::test::ScopedTaskEnvironment::ExecutionMode) base/test/scoped_task_environment.cc:110
#12 0x10cce6747 in media::PipelineIntegrationTestBase::PipelineIntegrationTestBase() media/test/pipeline_integration_test_base.cc:107:30
#13 0x10ccca992 in media::MediaSourcePipelineIntegrationFuzzerTest::MediaSourcePipelineIntegrationFuzzerTest() media/test/pipeline_integration_fuzzertest.cc:152:3
#14 0x10ccc60f1 in LLVMFuzzerTestOneInput media/test/pipeline_integration_fuzzertest.cc:221:53
#15 0x10cd46dfb in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) third_party/libFuzzer/src/FuzzerLoop.cpp:463:13
#16 0x10cd46453 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*) third_party/libFuzzer/src/FuzzerLoop.cpp:392:3
#17 0x10cd4a101 in fuzzer::Fuzzer::MutateAndTestOne() third_party/libFuzzer/src/FuzzerLoop.cpp:587:9
#18 0x10cd4b31e in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) third_party/libFuzzer/src/FuzzerLoop.cpp:699:5
#19 0x10cd2ef5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) third_party/libFuzzer/src/FuzzerDriver.cpp:738:6
#20 0x10cd5c55d in main third_party/libFuzzer/src/FuzzerMain.cpp:20:10
#21 0x7fff925dd5ac in start
33706880 byte(s) (3%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x7fff8be22204  (/usr/lib/libc++.1.dylib:x86_64+0x3f204)
#1 0x10d1682fa in base::HistogramBase::HistogramBase(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) base/metrics/histogram_base.cc:67:7
#2 0x10d16143d in base::Histogram::Histogram(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, base::BucketRanges const*) base/metrics/histogram.cc:612:7
#3 0x10d165e10 in base::Histogram::Factory::HeapAlloc(base::BucketRanges const*) base/metrics/histogram.cc:140:27
#4 0x10d15b8e0 in base::Histogram::Factory::Build() base/metrics/histogram.cc:208:29
#5 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
#6 0x10d1bb746 in base::internal::SchedulerWorkerPoolImpl::SchedulerWorkerPoolImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::ThreadPriority, base::internal::TaskTracker*, base::internal::DelayedTaskManager*) base/task_scheduler/scheduler_worker_pool_impl.cc:165:42
#7 0x10d1ca482 in make_unique<base::internal::SchedulerWorkerPoolImpl, std::__1::basic_string<char>, const base::ThreadPriority &, base::internal::TaskTrackerPosix *, base::internal::DelayedTaskManager *> third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#8 0x10d1ca482 in base::internal::TaskSchedulerImpl::TaskSchedulerImpl(base::BasicStringPiece<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::unique_ptr<base::internal::TaskTrackerPosix, std::__1::default_delete<base::internal::TaskTrackerPosix> >) base/task_scheduler/task_scheduler_impl.cc:37
#9 0x10d261994 in make_unique<base::internal::TaskSchedulerImpl, char const (&)[22], std::__1::unique_ptr<base::test::ScopedTaskEnvironment::TestTaskTracker, std::__1::default_delete<base::test::ScopedTaskEnvironment::TestTaskTracker> > > third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#10 0x10d261994 in base::test::ScopedTaskEnvironment::ScopedTaskEnvironment(base::test::ScopedTaskEnvironment::MainThreadType, base::test::ScopedTaskEnvironment::ExecutionMode) base/test/scoped_task_environment.cc:110
#11 0x10cce6747 in media::PipelineIntegrationTestBase::PipelineIntegrationTestBase() media/test/pipeline_integration_test_base.cc:107:30
#12 0x10ccca992 in media::MediaSourcePipelineIntegrationFuzzerTest::MediaSourcePipelineIntegrationFuzzerTest() media/test/pipeline_integration_fuzzertest.cc:152:3
#13 0x10ccc60f1 in LLVMFuzzerTestOneInput media/test/pipeline_integration_fuzzertest.cc:221:53
#14 0x10cd46dfb in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) third_party/libFuzzer/src/FuzzerLoop.cpp:463:13
#15 0x10cd46453 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*) third_party/libFuzzer/src/FuzzerLoop.cpp:392:3
#16 0x10cd4a101 in fuzzer::Fuzzer::MutateAndTestOne() third_party/libFuzzer/src/FuzzerLoop.cpp:587:9
#17 0x10cd4b31e in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) third_party/libFuzzer/src/FuzzerLoop.cpp:699:5
#18 0x10cd2ef5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) third_party/libFuzzer/src/FuzzerDriver.cpp:738:6
#19 0x10cd5c55d in main third_party/libFuzzer/src/FuzzerMain.cpp:20:10
#20 0x7fff925dd5ac in start
33706880 byte(s) (3%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x7fff8be22204  (/usr/lib/libc++.1.dylib:x86_64+0x3f204)
#1 0x10d1682fa in base::HistogramBase::HistogramBase(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) base/metrics/histogram_base.cc:67:7
#2 0x10d16143d in base::Histogram::Histogram(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, base::BucketRanges const*) base/metrics/histogram.cc:612:7
#3 0x10d165e10 in base::Histogram::Factory::HeapAlloc(base::BucketRanges const*) base/metrics/histogram.cc:140:27
#4 0x10d15b8e0 in base::Histogram::Factory::Build() base/metrics/histogram.cc:208:29
#5 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
#6 0x10d1bb807 in base::internal::SchedulerWorkerPoolImpl::SchedulerWorkerPoolImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, base::ThreadPriority, base::internal::TaskTracker*, base::internal::DelayedTaskManager*) base/task_scheduler/scheduler_worker_pool_impl.cc:175:42
#7 0x10d1ca482 in make_unique<base::internal::SchedulerWorkerPoolImpl, std::__1::basic_string<char>, const base::ThreadPriority &, base::internal::TaskTrackerPosix *, base::internal::DelayedTaskManager *> third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#8 0x10d1ca482 in base::internal::TaskSchedulerImpl::TaskSchedulerImpl(base::BasicStringPiece<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::unique_ptr<base::internal::TaskTrackerPosix, std::__1::default_delete<base::internal::TaskTrackerPosix> >) base/task_scheduler/task_scheduler_impl.cc:37
#9 0x10d261994 in make_unique<base::internal::TaskSchedulerImpl, char const (&)[22], std::__1::unique_ptr<base::test::ScopedTaskEnvironment::TestTaskTracker, std::__1::default_delete<base::test::ScopedTaskEnvironment::TestTaskTracker> > > third_party/llvm-build/Release+Asserts/include/c++/v1/memory:3026:32
#10 0x10d261994 in base::test::ScopedTaskEnvironment::ScopedTaskEnvironment(base::test::ScopedTaskEnvironment::MainThreadType, base::test::ScopedTaskEnvironment::ExecutionMode) base/test/scoped_task_environment.cc:110
#11 0x10cce6747 in media::PipelineIntegrationTestBase::PipelineIntegrationTestBase() media/test/pipeline_integration_test_base.cc:107:30
#12 0x10ccca992 in media::MediaSourcePipelineIntegrationFuzzerTest::MediaSourcePipelineIntegrationFuzzerTest() media/test/pipeline_integration_fuzzertest.cc:152:3
#13 0x10ccc60f1 in LLVMFuzzerTestOneInput media/test/pipeline_integration_fuzzertest.cc:221:53
#14 0x10cd46dfb in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) third_party/libFuzzer/src/FuzzerLoop.cpp:463:13
#15 0x10cd46453 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*) third_party/libFuzzer/src/FuzzerLoop.cpp:392:3
#16 0x10cd4a101 in fuzzer::Fuzzer::MutateAndTestOne() third_party/libFuzzer/src/FuzzerLoop.cpp:587:9
#17 0x10cd4b31e in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) third_party/libFuzzer/src/FuzzerLoop.cpp:699:5
#18 0x10cd2ef5f in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) third_party/libFuzzer/src/FuzzerDriver.cpp:738:6
#19 0x10cd5c55d in main third_party/libFuzzer/src/FuzzerMain.cpp:20:10
#20 0x7fff925dd5ac in start
26965504 byte(s) (2%) in 421336 allocation(s)
#0 0x10f682702 in __sanitizer_finish_switch_fiber
#1 0x10d165da6 in base::Histogram::Factory::HeapAlloc(base::BucketRanges const*) base/metrics/histogram.cc:140:23
#2 0x10d15b8e0 in base::Histogram::Factory::Build() base/metrics/histogram.cc:208:29
#3 0x10d15c517 in base::Histogram::FactoryGet(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, unsigned int, int) base/metrics/histogram.cc:259:63
Components: Internals>Metrics
+metrics folks, perhaps the fuzzer should clear histograms between runs. Since they are static singletons, this builds up over the lifetime of fuzzing.

Comment 7 by mmoroz@chromium.org, Sep 30 2017

Yeah, it looks like we need to either redirect that histogram stuff or suppress it some other way.
Cc: -mmoroz@chromium.org tguilbert@chromium.org dalecur...@chromium.org
Owner: mmoroz@chromium.org
Summary: Media fuzzers accumulate massive amounts of histogram storage leading to OOM. (was: Out-of-memory in mediasource_MP3_pipeline_integration_fuzzer)
Renaming to reflect the issue here and reassigning. Max, let me know if you're the wrong person for this. If you're overloaded, I'll try to get to it in a couple of weeks. I'm going to start looking through our other OOMs and dupe them into this one as needed.
Metrics folks, do you have any suggestions for clearing histogram allocations per fuzzer run?
If fuzzing is triggering it, then it sounds like it's a problem that can affect users as well.

I think the solution is to figure out which histograms are affected and add additional validation to ensure they're not being logged with garbage data.

In particular, I could imagine two types of unbounded growth:
  1. New histograms being created as a result of passing new names to the FactoryGet() functions.
  2. Lots of unique values being logged to a sparse histogram, causing its std::map storage to keep growing.

For case 1, if we're dynamically generating names, this is a problem: the expectation is that all histograms are documented in histograms.xml, so if some codepath generates names beyond the finite set listed in the XML, that is a bug and should be fixed.

For case 2, the code doing this should be fixed to validate that the values it's logging are in the expected range. This validation may need to happen at an earlier stage than logging the histogram; that depends on the case in question.
Specifically, for the stack traces above, it looks like fuzzing is affecting the names being passed in to the SchedulerWorkerPoolImpl() ctor, which is a bug because the expectation is that there is a fixed set of names there. I suggest discussing with the owners of that code and those histograms.
Cc: gab@chromium.org fdoray@chromium.org
+scheduler folks since that seems to be the primary accumulation for c#12.
Cc: mmoroz@chromium.org
Owner: ----
Unfortunately I'm not familiar with these internals and am quite busy with the Fuzzathon going on.

Whoever fixes this issue, please don't forget to fill out the Fuzzathon submission form ;) It should be a huge improvement.
Status: Available (was: Assigned)
It might be related to the test environment setup here?

https://cs.chromium.org/chromium/src/media/test/pipeline_integration_test_base.h?dr=C&l=157
Names appear to be statically generated in my local runs, so it seems there's some other way this is causing massive accumulation. Any ideas, Alexei?

[1009/115832.804508:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool
[1009/115832.804575:ERROR:scheduler_worker_pool_impl.cc(182)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool
[1009/115832.804631:ERROR:scheduler_worker_pool_impl.cc(183)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool

Hmm, it appears each histogram is creating a new instance?

TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundPool : 0x607000001a60
TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundPool : 0x607000001bb0
TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundPool : 0x607000001d00

TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000002080
TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool : 0x6070000021d0
TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000002320

Would installing a statistics recorder for these tests help?
Whoops, that paste above was for the wrong thing:

Run 1:
[1009/120619.545673:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundPool : 0x607000001a60
[1009/120619.545833:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundPool : 0x607000001bb0
[1009/120619.545881:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundPool : 0x607000001d00
[1009/120619.546078:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000002080
[1009/120619.546121:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool : 0x6070000021d0
[1009/120619.546159:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000002320
[1009/120619.546339:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundPool : 0x6070000026a0
[1009/120619.546381:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundPool : 0x6070000027f0
[1009/120619.546415:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundPool : 0x607000002940
[1009/120619.546599:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000002cc0
[1009/120619.546637:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000002e10
[1009/120619.546678:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000002f60

Run 2:
[1009/120619.969035:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundPool : 0x6070000a85d0
[1009/120619.969101:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundPool : 0x6070000a8720
[1009/120619.969140:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundPool : 0x6070000a8870
[1009/120619.969283:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool : 0x6070000a8bf0
[1009/120619.969322:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool : 0x6070000a8d40
[1009/120619.969355:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool : 0x6070000a8e90
[1009/120619.969489:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundPool : 0x6070000a9210
[1009/120619.969527:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundPool : 0x6070000a9360
[1009/120619.969557:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundPool : 0x6070000a94b0
[1009/120619.969693:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000a9830
[1009/120619.969733:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000a9980
[1009/120619.969764:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000a9ad0

<different forevermore>

And with a base::HistogramTester:

Run 1:
[1009/120732.335569:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundPool : 0x607000001a60
[1009/120732.335720:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundPool : 0x607000001bb0
[1009/120732.335765:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundPool : 0x607000001d00
[1009/120732.335941:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000002080
[1009/120732.335980:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool : 0x6070000021d0
[1009/120732.336015:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000002320
[1009/120732.336176:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundPool : 0x6070000026a0
[1009/120732.336213:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundPool : 0x6070000027f0
[1009/120732.336246:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundPool : 0x607000002940
[1009/120732.336415:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000002cc0
[1009/120732.336451:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000002e10
[1009/120732.336487:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000002f60

Run 2:
[1009/120732.771607:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundPool : 0x607000055440
[1009/120732.771720:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundPool : 0x607000055590
[1009/120732.771796:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundPool : 0x6070000556e0
[1009/120732.772102:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000055a60
[1009/120732.772179:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000055bb0
[1009/120732.772238:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000055d00
[1009/120732.772552:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundPool : 0x607000056080
[1009/120732.772626:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundPool : 0x6070000561d0
[1009/120732.772687:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundPool : 0x607000056320
[1009/120732.773037:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000566a0
[1009/120732.773119:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000567f0
[1009/120732.773177:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000056940

Run 3:
[1009/120733.238078:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundPool : 0x607000055440
[1009/120733.238182:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundPool : 0x607000055590
[1009/120733.238244:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundPool : 0x6070000556e0
[1009/120733.238374:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000055a60
[1009/120733.238435:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000055bb0
[1009/120733.238490:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentBackgroundBlockingPool : 0x607000055d00
[1009/120733.238607:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundPool : 0x607000056080
[1009/120733.238664:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundPool : 0x6070000561d0
[1009/120733.238728:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundPool : 0x607000056320
[1009/120733.238848:ERROR:scheduler_worker_pool_impl.cc(181)] TaskScheduler.DetachDuration.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000566a0
[1009/120733.238911:ERROR:scheduler_worker_pool_impl.cc(184)] TaskScheduler.NumTasksBeforeDetach.ScopedTaskEnvironmentForegroundBlockingPool : 0x6070000567f0
[1009/120733.238961:ERROR:scheduler_worker_pool_impl.cc(187)] TaskScheduler.NumTasksBetweenWaits.ScopedTaskEnvironmentForegroundBlockingPool : 0x607000056940

<same forevermore>

I'm not quite sure why the histogram tester produces different addresses between runs 1 and 2 but identical ones from run 2 onward.
It seems calling base::StatisticsRecorder::Initialize() is the magic sauce.

@mmoroz: Is there a base file included in all fuzzers somewhere? run_all_unittests and testing/fuzzers/unittest_main don't seem to be included?
Doesn't look like there's a main include anywhere unfortunately.

Installing the StatisticsRecorder alongside the AtExitManager in LLVMFuzzerTestOneInput() works, but it kind of stinks that we'll need to go through and modify every fuzzer to add this.

I guess for now I'll just add it to the media one and any others that might be triggering OOMs? Max, is it just media suffering from OOMs on histograms?
Owner: dalecur...@chromium.org
Status: Started (was: Available)
Dale, re c#20: there is no such file as far as I know :(

re c#21: I wouldn't assume that only media fuzzers suffer from this, since we don't have a deep analysis of the other OOMs. Your plan SGTM. If it works, we may come up with a common base file for multiple fuzzers (we can disable logging in there as well).
Cc: pnangunoori@chromium.org
 Issue 772744  has been merged into this issue.
 Issue 770064  has been merged into this issue.
 Issue 770061  has been merged into this issue.
Cc: infe...@chromium.org
 Issue 769175  has been merged into this issue.
 Issue 766503  has been merged into this issue.
 Issue 770104  has been merged into this issue.
 Issue 770077  has been merged into this issue.
 Issue 770073  has been merged into this issue.
 Issue 770069  has been merged into this issue.
 Issue 770065  has been merged into this issue.
 Issue 770063  has been merged into this issue.
 Issue 769181  has been merged into this issue.
 Issue 769179  has been merged into this issue.
 Issue 769176  has been merged into this issue.
 Issue 768196  has been merged into this issue.
 Issue 767421  has been merged into this issue.
 Issue 767416  has been merged into this issue.
Cc: xhw...@chromium.org
 Issue 770588  has been merged into this issue.
Project Member

Comment 43 by bugdroid1@chromium.org, Oct 9 2017

The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/src.git/+/9b2dcdc1d4948657ab940a29d19f9cb457322fc7

commit 9b2dcdc1d4948657ab940a29d19f9cb457322fc7
Author: Dale Curtis <dalecurtis@chromium.org>
Date: Mon Oct 09 23:13:49 2017

Install StatisticsRecorder to avoid histogram leaks in fuzzing.

Without this any calls to HistogramBase::FactoryGet() will generate
a new histogram instance every time. This leads to massive accumulation
over long running fuzzers.

BUG= 767336 
TEST=manual, histogram pointers in scheduler are static.

Change-Id: Iea5d0ae51aa33e3304fc47c188ffe266c6bd3ade
Reviewed-on: https://chromium-review.googlesource.com/707475
Commit-Queue: Dale Curtis <dalecurtis@chromium.org>
Reviewed-by: Max Moroz <mmoroz@chromium.org>
Reviewed-by: Ilya Sherman <isherman@chromium.org>
Cr-Commit-Position: refs/heads/master@{#507525}
[modify] https://crrev.com/9b2dcdc1d4948657ab940a29d19f9cb457322fc7/media/test/pipeline_integration_fuzzertest.cc

From a brief search of crbug.com for "Out-of-memory fuzzer", media appears to be the only component currently suffering from this, AFAICT -- I've duped every bug I could find into this one, so hopefully this is now fixed...
Thanks a lot for the clean up! Looking forward to all of these going away :)
I've been trying to verify whether your change helped or not. One thing that has changed for sure is the number of malloc and free calls. For example:

1) new build:

$ out/asan/media_pipeline_integration_fuzzer -rss_limit_mb=2048 -print_final_stats=1 -runs=100 ./clusterfuzz-testcase-minimized-5005845627928576 
<...>
<...>/out/asan/media_pipeline_integration_fuzzer: Running 1 inputs 100 time(s) each.
Running: ./clusterfuzz-testcase-minimized-5005845627928576
Executed ./clusterfuzz-testcase-minimized-5005845627928576 in 35222 ms
<...>
stat::number_of_executed_units: 102
stat::average_exec_per_sec:     2
stat::new_units_added:          0
stat::slowest_unit_time_sec:    0
stat::peak_rss_mb:              329


2) old build:

$ out/asan/media_pipeline_integration_fuzzer -rss_limit_mb=2048 -print_final_stats=1 -runs=100 ./clusterfuzz-testcase-minimized-5005845627928576 
<...>
<...>/out/asan/media_pipeline_integration_fuzzer: Running 1 inputs 100 time(s) each.
Running: ./clusterfuzz-testcase-minimized-5005845627928576
Executed ./clusterfuzz-testcase-minimized-5005845627928576 in 66825 ms
<...>
stat::number_of_executed_units: 200
stat::average_exec_per_sec:     3
stat::new_units_added:          0
stat::slowest_unit_time_sec:    0
stat::peak_rss_mb:              326



When libFuzzer sees a mismatch between the number of malloc and free calls, it does an extra run to check whether there is a memory leak. That doesn't seem to happen after your change, which is definitely an improvement.

------------------------------------------------------------------------------

As for memory usage, it also seems to be a positive change. I've tried running the target using the corpus stored in the GCS bucket.

1) new build:

$ out/asan/media_pipeline_integration_fuzzer -rss_limit_mb=2048 -print_final_stats=1 -close_fd_mask=1 ./media_pipeline_integration_fuzzer/
INFO: Seed: 518680963
INFO: Loaded 2 modules   (345313 guards): 17174 [0x7f6ed2d267a0, 0x7f6ed2d373f8), 328139 [0x29543a8, 0x2a94ad4), 
INFO:    11158 files found in ./media_pipeline_integration_fuzzer/
INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 1048576 bytes
INFO: seed corpus: files: 11158 min: 1b max: 4389024b total: 104695641b rss: 79Mb
#16	pulse  cov: 7540 ft: 8333 corp: 13/41b exec/s: 8 rss: 118Mb
#32	pulse  cov: 10199 ft: 13282 corp: 28/124b exec/s: 8 rss: 134Mb
#64	pulse  cov: 10919 ft: 14972 corp: 59/384b exec/s: 8 rss: 168Mb
#128	pulse  cov: 11605 ft: 18813 corp: 116/934b exec/s: 8 rss: 237Mb
#256	pulse  cov: 12518 ft: 23969 corp: 215/2051b exec/s: 6 rss: 300Mb
#512	pulse  cov: 13080 ft: 26137 corp: 430/4996b exec/s: 6 rss: 371Mb
#1024	pulse  cov: 13953 ft: 32184 corp: 810/11621b exec/s: 6 rss: 412Mb
#2048	pulse  cov: 15163 ft: 39351 corp: 1484/29Kb exec/s: 5 rss: 462Mb
INFO: libFuzzer disabled leak detection after every mutation.
      Most likely the target function accumulates allocated
      memory in a global state w/o actually leaking it.
      You may try running this binary with -trace_malloc=[12]      to get a trace of mallocs and frees.
      If LeakSanitizer is enabled in this process it will still
      run on the process shutdown.
#4096	pulse  cov: 19974 ft: 57579 corp: 3018/117Kb exec/s: 5 rss: 797Mb
^C==51943== libFuzzer: run interrupted; exiting


2) old build:

#2048	pulse  cov: 14403 ft: 34659 corp: 1034/16Kb exec/s: 5 rss: 497Mb
#4096	pulse  cov: 19986 ft: 57875 corp: 3056/120Kb exec/s: 7 rss: 1089Mb
^C==100473== libFuzzer: run interrupted; exiting

Comparison of memory usage after 4096 testcases executed (797Mb vs 1089Mb) looks like a good signal as well!

Status: Fixed (was: Started)
Nice! The stats button for the CF in c#0 shows only 1 linux OOM in the past 24 hours vs 25-45 in days past. So I think we can consider this fixed?
It'd be interesting to get the OOM information for the instances still failing (there are a couple), since there may be another, non-histogram memory leak somewhere.
I can't seem to figure out how to go from the stats page to a stack trace of an instance though. Is that possible?
Dale, I'm afraid there is no way to do one-click-jump from fuzzer stats to crash stats, but the crash stats page itself should be very helpful: https://clusterfuzz.com/v2/crash-stats?block=day&days=7&end=418816&group=platform&number=count&q=pipeline_integration_fuzzer&sort=total_count

There are daily graphs for each crash type we observe from "*_pipeline_integration_fuzzer". It's easy to see that many of them went away yesterday, while others still persist.
That page is awesome, but even there I don't see how to get a new memory log from after my change, i.e. so I can analyze the reported largest allocations for the current failures. Is there a way to do that besides trying to repro locally?
Hmm, actually I guess the remaining ones on that page that still have repros are single-target ones, not failures after xxxxx runs. Will take a closer look at some of them.
Dale, yes, the problem is that some crashes are not reproducible. For those we have only the original stacktrace, which includes logs from fuzzing with an "old" revision :(

For reproducible ones, we have two stacktraces from running a single testcase:
1) Original Stacktrace from the revision when the crash happened for the first time
2) Last Tested Stacktrace from a recent revision

But I assume that would not help you, as these crashes are either fixed & auto-closed or still reproducible and need a closer look, as you figured out in c#52.

Project Member

Comment 54 by ClusterFuzz, Oct 17 2017

Labels: Needs-Feedback
ClusterFuzz testcase 5005845627928576 is still reproducing on tip-of-tree build (trunk).

Please re-test your fix against this testcase and if the fix was incorrect or incomplete, please re-open the bug. Otherwise, ignore this notification and add ClusterFuzz-Wrong label.
For the record I unduped  issue 770588  a while back -- that's currently assigned to hubbe@ for investigation.
