Out-of-memory in v4_get_hash_protocol_manager_fuzzer
Issue description

Detailed report: https://clusterfuzz.com/testcase?key=5918744310972416

Fuzzer: libFuzzer_v4_get_hash_protocol_manager_fuzzer
Job Type: libfuzzer_chrome_asan_debug
Platform Id: linux

Crash Type: Out-of-memory (exceeds 2048 MB)
Crash Address:
Crash State: v4_get_hash_protocol_manager_fuzzer

Sanitizer: address (ASAN)

Regressed: https://clusterfuzz.com/revisions?job=libfuzzer_chrome_asan_debug&range=583285:583294

Reproducer Testcase: https://clusterfuzz.com/download?testcase_id=5918744310972416

Issue filed automatically. See https://chromium.googlesource.com/chromium/src/+/master/testing/libfuzzer/reference.md for more information.
Oct 18
mmoroz@, this looks like a libFuzzer-related OOM; worth taking a look?
Oct 18
libFuzzer is just a fuzzing engine that generates inputs for a target and analyzes its coverage. To look into this particular OOM, we would need Safe Browsing OWNERS (CC'd).
Oct 18
Dan, do you want to take a look at this? I think this bug implies there's some response that Safe Browsing could send to Chrome that would cause it to OOM. It's probably very unlikely (it might require a perfect bug in SB, or even in the proto marshaling code), but the consequences would be high.
Oct 18
Yep, I can take a look.
Oct 19
I wrote most of that code (a while ago) so please feel free to pass it to me or to consult.
Oct 20
This does not appear to be related to the SafeBrowsing code: if you remove everything but the ParseFromString call, the target still runs out of memory.

That said, I did some further investigation, and I think this is all working as intended. If you print the RSS before each run of the target, it starts at about 1600 MB (this seems to all be overhead from libFuzzer, and lines up with the live allocations listed here: https://clusterfuzz.com/v2/testcase-detail/5918744310972416). The RSS rises each iteration, plateauing around 2200 MB. Since ASAN doesn't find any leaks, I'm inclined to say this is just overhead from allocators behind the scenes, but I don't know a ton about proto memory management.

Max, is there any way to increase the memory limit for this target, or to limit the size of the proto created by libFuzzer? Or do you have a better suggestion for preventing further bugs of this kind?
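One common way to limit the size of the inputs libFuzzer feeds to a target is to reject oversized inputs at the top of the fuzz entry point, so the engine stops steering the corpus toward allocation-heavy protos. A minimal sketch of that pattern is below; `kMaxInputSize` is a hypothetical cap chosen for illustration, not a value taken from the real v4_get_hash_protocol_manager_fuzzer, and the proto-parsing step is elided since it depends on the Safe Browsing protos.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical cap; a real value would be tuned to the largest
// FullHashes response the production code plausibly receives.
constexpr size_t kMaxInputSize = 64 * 1024;

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  // Returning 0 early tells libFuzzer the input is uninteresting,
  // so it does not keep growing inputs past the cap.
  if (size > kMaxInputSize)
    return 0;

  // ... ParseFromString on |data| and exercise the protocol manager here ...
  (void)data;
  return 0;
}
```

The trade-off is that the fuzzer can no longer explore behavior that only triggers on very large responses, so the cap should stay comfortably above realistic message sizes.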
Oct 23
Dan, thanks for the analysis. It's kind of weird that the target eats so much memory, but this is a debug build; the release one does not seem to be so hungry:

-runs=1 -> 178 MB
-runs=100 -> 570 MB
-runs=10000 -> 574 MB

I'm also seeing that this crash is not happening frequently, so let's just WontFix it. Unfortunately we cannot use a higher memory limit, as that would require us to use different VMs for fuzzing, which we cannot afford to do :)
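For reference, Chromium fuzz targets can ship a per-target options file (a `<target_name>.options` file next to the fuzzer) that ClusterFuzz passes to libFuzzer. A sketch of what capping input size could look like via that mechanism is below; the values are illustrative only, and whether this particular target should use them was not decided in this thread.

```
[libfuzzer]
# Illustrative values: max_len bounds the size of generated inputs;
# rss_limit_mb is libFuzzer's memory limit (2048 MB by default).
max_len = 65536
rss_limit_mb = 2048
```

Raising `rss_limit_mb` has the fleet-cost problem described above, which is why bounding `max_len` is usually the cheaper lever.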
Oct 30
ClusterFuzz testcase 5918744310972416 is still reproducing on the tip-of-tree build (trunk). If this testcase was not reproducible locally or is unworkable, ignore this notification and we will file another bug soon with, hopefully, a better and workable testcase. Otherwise, if this is not intended to be fixed (e.g. this is an intentional crash), please add the ClusterFuzz-Ignore label to prevent future bug filing with a similar crash stacktrace.
Comment 1 by kkaluri@chromium.org, Sep 7