
Issue 801366

Starred by 2 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 2
Type: Bug




add UMA stat and autotest for pinned GEM buffer

Project Member Reported by semenzato@chromium.org, Jan 11 2018

Issue description

As described in issue 800237, we may be experiencing a large number of i915 pinned GEM buffers, at least on some kernels.  (That issue does not report the actual number, but I've heard 800MB in a meeting.)

Since this may impact memory management, it may be useful to monitor the total size of pinned buffers via UMA, and also, if possible, to guard against regressions with an autotest.

A few questions:

1. What is the expected average and maximum usage?  Is it feasible to set an upper bound, even as a function of other factors (number of visible tabs, screen resolution, etc.)?

2. I am postulating that /sys/kernel/debug/dri/0/i915_gem_pinned (a debugfs file) reports the number we want to monitor.  The last line gives the total size:

Total 11 objects, 901120 bytes, 901120 GTT size
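For reference, the total on that last line could be extracted with a small parser like the sketch below (the format is taken from the sample line above; the function and field names are my own, not an existing API):

```python
import re

# Matches the summary line at the end of i915_gem_pinned, e.g.:
#   Total 11 objects, 901120 bytes, 901120 GTT size
_TOTAL_RE = re.compile(r'Total (\d+) objects, (\d+) bytes, (\d+) GTT size')

def parse_pinned_total(debugfs_text):
    """Return (objects, size_bytes, gtt_bytes) from i915_gem_pinned output."""
    m = _TOTAL_RE.search(debugfs_text.strip().splitlines()[-1])
    if not m:
        raise ValueError('unexpected i915_gem_pinned format')
    return tuple(int(g) for g in m.groups())
```

Note that debugfs is normally readable only by root, so whatever samples this would have to run with sufficient privileges.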

How often should we sample this number to catch spikes?  If it changes too rapidly, should we change the kernel interface so that it tracks the maximum value (say, maintain the max over the last few seconds as a separate debugfs entry)?
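Absent kernel-side max tracking, a user-space poller could approximate it by sampling at a fixed interval and keeping the peak. A sketch (the sample count and interval are placeholders; read_total would normally read and parse the debugfs file):

```python
import time

def peak_pinned(read_total, samples=50, interval_s=0.1, sleep=time.sleep):
    """Call read_total() `samples` times, `interval_s` apart, and return the
    maximum value observed.  Spikes shorter than interval_s can still be
    missed, which is why tracking the max in the kernel would be more
    reliable."""
    peak = read_total()
    for _ in range(samples - 1):
        sleep(interval_s)
        peak = max(peak, read_total())
    return peak
```

Passing read_total and sleep as parameters keeps the polling logic testable without real hardware or real delays.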

3. Does it make sense to have an autotest, and what web site or app should we use in the test?  Or could a bug in the allocation/pinning of GEM buffers be triggered by so many different specific situations that covering them in an autotest would be impractical?
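If we do write one, the core check could be as simple as sampling the pinned total while exercising a workload and failing on a bound; a sketch (the 200 MB limit is purely a placeholder, to be replaced by whatever bound falls out of question 1):

```python
def check_pinned_bound(total_bytes, limit_bytes=200 * 1024 * 1024):
    """Fail if the pinned GEM total exceeds the chosen bound.

    limit_bytes is a placeholder; a real test would derive it from baseline
    measurements on known-good kernels."""
    if total_bytes > limit_bytes:
        raise AssertionError(
            'pinned GEM buffers: %d bytes exceeds limit of %d bytes'
            % (total_bytes, limit_bytes))
```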

Thanks!

 
Pinned buffers are buffers currently in use by the GPU. So the right question to ask is why we are allocating and using all these buffers. A UMA stat for i915_gem_pinned isn't going to tell us that, since it is much too low-level...
Status: Assigned (was: Untriaged)
