Need a mechanism to prune memory traces that are too large.
Issue description (Comment 1 by erikc...@chromium.org, Mar 31 2017)
This happened to me all the time when trying to take memory dumps. It looks like users are also able to *somehow* record traces that are way too large (468 MB uncompressed): https://bugs.chromium.org/p/chromium/issues/detail?id=703075
Apr 3 2017
Did you take a look at third_party/catapult/tracing/bin/strip_memory_infra_trace? The module doc says """Filters a big trace keeping only the last memory-infra dumps.""" And the CL description that introduced it (https://codereview.chromium.org/2184783002):

Author: primiano <primiano@chromium.org>
Date: Wed Jul 27 13:38:12 2016

    Add tracing/bin/strip_memory_infra_trace

    We are frequently running into cases where traces w/ memory-infra are too
    big and fail to load, as their uncompressed size is > 256M. While this is
    going to be fixed by bit.ly/TracingV2, we need a short-term solution to
    open those traces, to deal with cases like crbug.com/623945.
    This script strips a trace with memory-infra, keeping only the last two
    global dumps. It still guarantees that all processes for a dump are seen,
    and tries to maintain the detailed dumps, if any.

Which is probably what you want?
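(For illustration only, not the actual catapult script: a minimal Python sketch of the same idea, under the assumption that the trace uses the standard {"traceEvents": [...]} JSON form, that memory dump events use phase 'v'/'V', and that events belonging to the same global dump share an "id" value.)

#!/usr/bin/env python
# Hedged sketch, NOT third_party/catapult/tracing/bin/strip_memory_infra_trace:
# keep only the last N memory-infra global dumps in a Chrome JSON trace.
# Assumptions: memory dump events use phase 'v'/'V', and events of one global
# dump share the same "id" value.
import json
import sys

def strip_memory_dumps(trace, keep_last=2):
    events = trace.get('traceEvents', trace) if isinstance(trace, dict) else trace
    # Collect global dump ids in the order they first appear.
    dump_ids = []
    for ev in events:
        if ev.get('ph') in ('v', 'V') and ev.get('id') is not None \
                and ev.get('id') not in dump_ids:
            dump_ids.append(ev.get('id'))
    keep_ids = set(dump_ids[-keep_last:])
    # Keep every non-dump event, plus all events of the retained dumps
    # (so every process of a kept global dump is still present).
    kept = [ev for ev in events
            if ev.get('ph') not in ('v', 'V') or ev.get('id') in keep_ids]
    return dict(trace, traceEvents=kept) if isinstance(trace, dict) else kept

if __name__ == '__main__':
    with open(sys.argv[1]) as f:
        trace = json.load(f)
    json.dump(strip_memory_dumps(trace), sys.stdout)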
Apr 3 2017
Also, FYI, even without TracingV2 (the proto stuff, which was punted), somebody is working on being able to import >256M traces by doing progressive parsing of the JSON: https://github.com/catapult-project/catapult/issues/2826
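(For illustration only: a minimal sketch of progressive parsing of a huge trace, using the third-party Python library ijson as a stand-in; the actual catapult work tracked in that issue may use a different mechanism. It assumes the {"traceEvents": [...]} JSON form.)

# Hedged sketch: stream events out of a multi-hundred-MB JSON trace without
# loading the whole file into memory. `ijson` is a stand-in here, not
# necessarily what catapult uses.
import ijson

def iter_trace_events(path):
    # Yield trace events one at a time from a {"traceEvents": [...]} file.
    with open(path, 'rb') as f:
        for event in ijson.items(f, 'traceEvents.item'):
            yield event

# Example: count memory dump events without ever holding the full trace.
def count_memory_dumps(path):
    return sum(1 for ev in iter_trace_events(path) if ev.get('ph') in ('v', 'V'))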