Issue 762076

Issue metadata

Status: Untriaged
Owner: ----
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Linux, Android, Windows, Chrome, Mac
Pri: 3
Type: Bug

idea for reducing cost of VMRegion (mmaps) in the memory instrumentation codebase

Project Member Reported by primiano@chromium.org, Sep 5 2017

Issue description

Background: see Issue 762032 and Issue 750696#c15.
TL;DR: sending the full mmaps for all processes requires ~1 MB and 10k heap allocations per snapshot.

Idea: most of those mmaps are actually quite similar, i.e. they are anonymous (mapped file = "") entries.
I don't think they bring any particular value individually, so we could GROUP_BY(allocation.size). Instead of having five entries like

start  end     size   private_dirty   shared_dirty
0x1000 0x2000  4096   10              10
0x2000 0x3000  4096   20              20
0x3000 0x4000  4096   30              30
0x4000 0x6000  8192   10              10
0x6000 0x8000  8192   20              20

we could have

start   end   size  count   private_dirty  shared_dirty
---     ---   4096  3       60             60
---     ---   8192  2       30             30
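
To make the aggregation concrete, here is a minimal Python sketch of the GROUP_BY(size) idea. The dict keys (mapped_file, size, private_dirty, shared_dirty) are hypothetical stand-ins for illustration, not the actual VMRegion fields:

from collections import defaultdict

def group_anonymous_regions(regions):
    # Keep named mappings as-is; fold anonymous ones into one entry per size.
    grouped = defaultdict(lambda: {"count": 0, "private_dirty": 0, "shared_dirty": 0})
    passthrough = []
    for r in regions:
        if r["mapped_file"]:
            passthrough.append(r)
            continue
        g = grouped[r["size"]]
        g["count"] += 1
        g["private_dirty"] += r["private_dirty"]
        g["shared_dirty"] += r["shared_dirty"]
    return passthrough, dict(grouped)

# Fed with the five anonymous entries from the first table above, this yields:
#   {4096: {"count": 3, "private_dirty": 60, "shared_dirty": 60},
#    8192: {"count": 2, "private_dirty": 30, "shared_dirty": 30}}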

It would be nice to try this first on the output of vmmap -v (or /proc/pid/smaps on Linux) and see how many entries we could reduce this way; a rough sketch of such an experiment is below.
I suspect that even for the heap profiler, we don't care about individual anonymous allocations.
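
As a quick way to run that experiment on Linux, something along these lines could be used to parse /proc/<pid>/smaps and estimate the reduction (a throwaway sketch assuming the standard smaps format, where anonymous regions have no pathname; not Chromium code):

import sys
from collections import Counter

def main(pid="self"):
    anon_sizes = Counter()  # region size -> number of anonymous regions of that size
    named = 0
    with open("/proc/%s/smaps" % pid) as f:
        for line in f:
            fields = line.split()
            # Region headers look like:
            #   7f2c4c021000-7f2c4c022000 rw-p 00000000 00:00 0 [optional pathname]
            # while the per-region attribute lines (Rss:, Private_Dirty:, ...) all
            # start with a keyword ending in ':'.
            if not fields or ":" in fields[0] or "-" not in fields[0]:
                continue
            start, _, end = fields[0].partition("-")
            try:
                size = int(end, 16) - int(start, 16)
            except ValueError:
                continue
            if len(fields) >= 6:   # has a pathname -> named mapping, kept as-is
                named += 1
            else:                  # no pathname -> anonymous, eligible for grouping
                anon_sizes[size] += 1
    anon = sum(anon_sizes.values())
    print("named regions:     %d" % named)
    print("anonymous regions: %d -> %d after GROUP_BY(size)" % (anon, len(anon_sizes)))

if __name__ == "__main__":
    main(*sys.argv[1:])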
 
