[Intel-gfx] [PATCH i-g-t 2/2] igt: Add VC4 purgeable BO tests

Eric Anholt eric at anholt.net
Wed Sep 27 18:05:51 UTC 2017


Boris Brezillon <boris.brezillon at free-electrons.com> writes:

> On Wed, 27 Sep 2017 13:50:30 +0100
> Chris Wilson <chris at chris-wilson.co.uk> wrote:
>
>> Quoting Boris Brezillon (2017-09-27 13:41:41)
>> > Hi Chris,
>> > 
>> > On Wed, 27 Sep 2017 13:07:28 +0100
>> > Chris Wilson <chris at chris-wilson.co.uk> wrote:
>> >   
>> > > Quoting Boris Brezillon (2017-09-27 12:51:18)  
>> > > > +static void igt_vc4_trigger_purge(int fd)
>> > > > +{    
>> > > 
>> > > May I suggest a /proc/sys/vm/drop_caches-esque interface?
>> > > For when you want to explicitly control reclaim.  
>> > 
>> > Eric suggested to add a debugfs entry to control the purge, I just
>> > thought I didn't really need it since I had a way to trigger this
>> > mechanism without adding yet another userspace -> kernel interface that
>> > will become part of the ABI and will have to be maintained forever.
>> > 
>> > If you think this is preferable, I'll go for the debugfs hook.  
>> 
>> I think you will find it useful in future. i915's drop-caches also has
>> options to make sure the GPU is idle, delayed frees are flushed, etc.
>> One thing we found useful is that through a debugfs interface, we can
>> pretend to be the shrinker/in-reclaim, setting
>> fs_reclaim_acquire(GFP_KERNEL) around the operation. That gives us
>> better lockdep coverage without having to trigger the shrinker.
>> 
>> Our experience says that you will make good use of a drop-caches
>> interface; it won't just be a one-test wonder. :)
>
> Just had a look at i915_drop_caches_fops [1] and it seems
> over-complicated given what I can do in the VC4 driver: flush memory of
> BOs that are marked purgeable.
>
> Right now there's no shrinker object registered with the MM layer to help
> the system release memory. The only thing that can trigger a purge is the
> VC4 BO allocator, when it fails to allocate CMA memory.
> Also note that all VC4 BOs are backed by CMA memory, so I'm not sure
> plugging the BO purge system into the MM shrinker logic makes a lot of
> sense (is the MM core expecting shrinkers to release memory coming from
> the CMA pool?).

Given that general page cache can live in CMA, freeing CMA memory from
your shrinker callback should be useful to MM.  So, yeah, it would be
great if (in a later patchset) the Mesa BO cache got purged when the
system is busy doing non-graphics tasks and wants the memory back.
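
Roughly, that would mean registering a shrinker at driver init, something
like the sketch below.  This is only an outline: the vc4_purgeable_bo_*()
helpers and the bo_shrinker field don't exist, they stand in for whatever
your purgeable-BO series ends up providing.

#include <linux/shrinker.h>

static unsigned long
vc4_bo_shrink_count(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct vc4_dev *vc4 = container_of(shrinker, struct vc4_dev,
					   bo_shrinker);

	/* Report how many pages we could free if asked to. */
	return vc4_purgeable_bo_count(vc4);
}

static unsigned long
vc4_bo_shrink_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct vc4_dev *vc4 = container_of(shrinker, struct vc4_dev,
					   bo_shrinker);

	/* Release the CMA backing of purgeable BOs, up to sc->nr_to_scan
	 * pages, and return how many pages were actually freed.
	 */
	return vc4_purgeable_bo_purge(vc4, sc->nr_to_scan);
}

/* At bind time: */
vc4->bo_shrinker.count_objects = vc4_bo_shrink_count;
vc4->bo_shrinker.scan_objects = vc4_bo_shrink_scan;
vc4->bo_shrinker.seeks = DEFAULT_SEEKS;
register_shrinker(&vc4->bo_shrinker);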

Also, I just landed the userspace side of BO labeling, so
/debug/dri/0/bo_stats will have a lot more useful information in it.  We
should probably have the mark-purgeable path in the kernel label the BO
as purgeable (erasing whatever previous label the BO had).  Then, maybe
after that, we can make it so that most allocations label their BOs, not
just debug Mesa builds.  We'll need to do some performance testing there.
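
For reference, the userspace side of labeling is just the new ioctl;
roughly the following (a sketch only, assuming the uapi header as it
landed, with error handling trimmed):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/vc4_drm.h>

/* Give a BO a readable name in /debug/dri/0/bo_stats. */
static void vc4_bo_label(int fd, uint32_t handle, const char *name)
{
	struct drm_vc4_label_bo label = {
		.handle = handle,
		.len = strlen(name),
		.name = (uintptr_t)name,
	};

	ioctl(fd, DRM_IOCTL_VC4_LABEL_BO, &label);
}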

> All this to say I'm not comfortable with designing a generic
> "drop_caches" debugfs hook that would take various options to delimit
> the scope of the cache-flush request. I'd prefer to have a simple
> "purge_purgeable_bos" file that does not care about the input value and
> flushes everything as soon as someone writes to it.
> But let's wait for Eric's feedback; maybe he has other plans and a
> better vision of what will be needed after the simple "purgeable-bo"
> implementation I'm about to post.

I thought your use of allocations to force purging was pretty elegant.
Also, it means that we'll be driving the purging code from the same
codepath as Mesa will be (BO allocation) rather than something slightly
different.

I think we'll want debugfs to test the shrinker path, since I don't know
of another good way for userspace to trigger it reliably without also
destabilizing the testing environment.
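
Something along the lines Chris describes above would do it: a write-only
debugfs file that wraps the purge in fs_reclaim_acquire()/
fs_reclaim_release(), so lockdep sees the same dependencies as real
reclaim.  A minimal sketch, where vc4_purgeable_bo_purge_all() is a
placeholder for whatever helper the series ends up with:

#include <linux/debugfs.h>
#include <linux/sched/mm.h>

static int vc4_purge_set(void *data, u64 val)
{
	struct vc4_dev *vc4 = data;

	/* Pretend to be in reclaim so lockdep sees the same dependencies
	 * as a real shrinker invocation.
	 */
	fs_reclaim_acquire(GFP_KERNEL);
	vc4_purgeable_bo_purge_all(vc4);
	fs_reclaim_release(GFP_KERNEL);

	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(vc4_purge_fops, NULL, vc4_purge_set, "%llu\n");

/* In the driver's debugfs init path ("minor" is the drm_minor): */
debugfs_create_file("purge_purgeable_bos", S_IWUSR, minor->debugfs_root,
		    vc4, &vc4_purge_fops);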