Automatically assigning owner based on suspected regression changelist https://chromium.googlesource.com/chromium/src/+/e3140a8f27345d395ea75fe619d730951a438e89 (Run SQLite DBFuzz2 on ClusterFuzz to fuzz for data corruption).
If this is incorrect, please let us know why and apply the Test-Predator-Wrong-CLs label. If you aren't the correct owner for this issue, please unassign yourself as soon as possible so it can be re-triaged.
ClusterFuzz testcase 5899964812886016 is still reproducing on tip-of-tree build (trunk).
If this testcase was not reproducible locally or was unworkable, please ignore this notification; we will file another bug soon with, hopefully, a better and more workable testcase.
Otherwise, if this is not intended to be fixed (e.g. it is an intentional crash), please add the ClusterFuzz-Ignore label to prevent future bugs from being filed with a similar crash stacktrace.
Richard and Dan, could you please take a look at this test case?
There is no clear sign of a bug here, but I figured it's worth looking into such a small test case that leads to so much memory use. Under MSAN, this fails with -rss_limit_mb=2048 (2 GB), while under ASAN it fails with -rss_limit_mb=1024 (1 GB). If it doesn't expose any pathological behavior, please let me know and I'll close the bug.
The corruption in the database file indicates that the key of a row in one of the tables is 1876803328 bytes in size. When SQLite calls malloc() for space to hold that key, prior to attempting to read it, the allocation exceeds the memory limit when running under MSAN. Without MSAN, Linux performs a lazy allocation; the allocated memory never gets used, because the corruption is discovered shortly after the allocation, so no OOM occurs. Trouble only arises when you turn on MSAN.
We will continue to investigate ways for SQLite to detect this problem sooner, before the malloc() runs. But I think there is no need for you to patch anything.
Thank you very much for the quick investigation, Richard!
The one check I can think of is verifying that the read will not go past the end of the file, assuming you already track the file size for other reasons and that number is close at hand and easy to plumb through. I could understand not wanting to pay the complexity/performance cost for this, though.
While the check isn't perfect, it doesn't seem completely unreasonable to me that an app's worst-case memory consumption would be proportional to the size of the DB.
ClusterFuzz testcase 5899964812886016 is verified as fixed, so closing issue as verified.
If this is incorrect, please add the ClusterFuzz-Wrong label and re-open the issue.
Comment 1 by ClusterFuzz, Dec 13
Labels: ClusterFuzz-Auto-CC