[Intel-gfx] [PATCH 03/10] drm/i915: Shrink the GEM kmem_caches upon idling
Chris Wilson
chris at chris-wilson.co.uk
Tue Jan 16 10:19:53 UTC 2018
Quoting Tvrtko Ursulin (2018-01-16 10:00:16)
>
> On 15/01/2018 21:24, Chris Wilson wrote:
> > When we finally decide the gpu is idle, that is a good time to shrink
> > our kmem_caches.
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > ---
> > drivers/gpu/drm/i915/i915_gem.c | 22 ++++++++++++++++++++++
> > 1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index a8840a514377..8547f5214599 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -4709,6 +4709,21 @@ i915_gem_retire_work_handler(struct work_struct *work)
> > }
> > }
> >
> > +static void shrink_caches(struct drm_i915_private *i915)
> > +{
> > + /*
> > + * kmem_cache_shrink() discards empty slabs and reorders partially
> > + * filled slabs to prioritise allocating from the mostly full slabs,
> > + * with the aim of reducing fragmentation.
> > + */
>
> This makes it sound like it would be a very good thing in general.
>
> > + kmem_cache_shrink(i915->priorities);
> > + kmem_cache_shrink(i915->dependencies);
> > + kmem_cache_shrink(i915->requests);
> > + kmem_cache_shrink(i915->luts);
> > + kmem_cache_shrink(i915->vmas);
> > + kmem_cache_shrink(i915->objects);
> > +}
> > +
> > static inline bool
> > new_requests_since_last_retire(const struct drm_i915_private *i915)
> > {
> > @@ -4796,6 +4811,13 @@ i915_gem_idle_work_handler(struct work_struct *work)
> > GEM_BUG_ON(!dev_priv->gt.awake);
> > i915_queue_hangcheck(dev_priv);
> > }
> > +
> > + rcu_barrier();
>
> Ugh, more sprinkled around complexity we add the more difficult it
> becomes to maintain the code base for mere mortals. At the very minimum
> a comment is needed here.
This one is because some of our kmem caches (e.g. requests) are special
and use SLAB_TYPESAFE_BY_RCU, which means we don't release the pages until
after an RCU grace period. This is just to ensure that we have a grace
period between each idle event. Though it looks sensible to tie a grace
period in around kmem_cache_shrink(), it only takes effect afterwards,
so this is just to ensure the pages we used last time are given back.
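To illustrate the ordering being argued for here, a minimal kernel-style sketch (names like example_request and example_idle are hypothetical, not from the patch): for a SLAB_TYPESAFE_BY_RCU cache, frees only return pages to the allocator after an RCU grace period, so the rcu_barrier() must precede the kmem_cache_shrink() for the shrink to find empty slabs.

```c
/* Hypothetical sketch, assuming a request-like object cache. */
static struct kmem_cache *request_cache;

static int __init example_init(void)
{
	/*
	 * With SLAB_TYPESAFE_BY_RCU, freed objects may be immediately
	 * reused, but the backing pages are only released back to the
	 * page allocator after an RCU grace period.
	 */
	request_cache = kmem_cache_create("example-requests",
					  sizeof(struct example_request),
					  0, SLAB_TYPESAFE_BY_RCU, NULL);
	return request_cache ? 0 : -ENOMEM;
}

static void example_idle(void)
{
	/*
	 * Wait for the outstanding RCU callbacks, so that frees from
	 * the last busy period have actually given their slabs back...
	 */
	rcu_barrier();

	/* ...and only then discard the now-empty slabs. */
	kmem_cache_shrink(request_cache);
}
```

Without the rcu_barrier(), the shrink would run before the grace period elapsed and find the slabs still pinned, deferring any reclaim to the next idle event.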
> What about activity other than requests? Like active mmap_gtt, that
> might at least create busyness on the vma and object caches and is not
> correlated to idle work handler firing.
We only have a single periodic ticker atm... Patches to add a similar
ticker for GTT mmaps, to coordinate with rpm etc., lack some love.
-Chris