[Intel-gfx] [PATCH v2] drm/i915: Set our shrinker->batch to 4096 (~16MiB)

Chris Wilson <chris@chris-wilson.co.uk>
Fri Aug 18 22:48:54 UTC 2017


Quoting Chris Wilson (2017-08-18 13:56:08)
> Quoting Chris Wilson (2017-08-16 15:23:06)
> > Prefer to defer activating our GEM shrinker until we have a few
> > megabytes to free; or we have accumulated sufficient mempressure by
> > deferring the reclaim to force a shrink. The intent is that because our
> > objects are typically large, we are too effective at shrinking: a
> > single scan frees far more pages than the batch asks for, and we get
> > no credit for the excess. Raising the batch will also defer the
> > initial shrink, hopefully putting it at a lower priority than, say,
> > the buffer cache (although it will balance out over a number of
> > reclaims, with GEM being the more bursty).
> > 
> > v2: Give it a feedback system to try and tune the batch size towards
> > an effective size for the available objects.
> > v3: Start keeping track of shrinker stats in debugfs
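
To make the above concrete, here is a rough, illustrative sketch -- not
the patch itself; the counters and helper names are made up -- of a
struct shrinker registered with ->batch preset to 4096 pages (~16MiB
with 4KiB pages), plus one way the v2-style feedback could look,
nudging the batch towards the mean object size seen at count time:

/* Sketch only: example_nr_objects/example_nr_pages stand in for the
 * driver's own bookkeeping of shrinkable objects and pages.
 */
#include <linux/shrinker.h>
#include <linux/atomic.h>
#include <linux/kernel.h>

static atomic_long_t example_nr_objects;
static atomic_long_t example_nr_pages;

static unsigned long example_count(struct shrinker *shrinker,
                                   struct shrink_control *sc)
{
        unsigned long objects = atomic_long_read(&example_nr_objects);
        unsigned long pages = atomic_long_read(&example_nr_pages);

        /* Feedback: steer the batch towards the mean object size so
         * one batch frees roughly one typical object, but never let
         * it drop below the 4096-page default.
         */
        if (objects)
                shrinker->batch =
                        max(4096ul,
                            (shrinker->batch + pages / objects) / 2);

        return pages;
}

static unsigned long example_scan(struct shrinker *shrinker,
                                  struct shrink_control *sc)
{
        /* A real driver reclaims up to sc->nr_to_scan pages here and
         * returns the number actually freed; this stub just declines.
         */
        return SHRINK_STOP;
}

static struct shrinker example_shrinker = {
        .count_objects = example_count,
        .scan_objects = example_scan,
        .seeks = DEFAULT_SEEKS,
        .batch = 4096, /* don't bother waking us for less than ~16MiB */
};

/* register_shrinker(&example_shrinker) from the driver's init path. */
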
> 
> I think this is helping a treat. We still get shrinker stalls, but
> (sadly only anecdotally) they do not feel as bad. Hmm, I wonder if
> latencytop would help, but I also need a consistent workload+environment
> to replay.
> 
> One task fills the buffercache (-> vmpressure, triggering
> reclaim/kswapd), while the other task does something simple like
> copying between a ring of buffers slightly too large for memory? Hmm,
> can wrap this up as a mode of gem_syslatency. Then we measure the
> latency of a third party responding to wakeup events? Or something
> engineered to hit the vm?
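
Something along these lines, perhaps -- a crude standalone sketch, not
gem_syslatency, and it skips the GEM ring-of-buffers copy entirely
(path, sizes and iteration counts are arbitrary): one thread streams
writes through the page cache to build up vmpressure while the "third
party" sleeps in 1ms periods and reports how late its wakeups arrive:

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Dirty the page cache as fast as we can to generate vmpressure. */
static void *fill_buffercache(void *arg)
{
        static char buf[1 << 20];
        int fd = open("/tmp/latency-filler",
                      O_CREAT | O_WRONLY | O_TRUNC, 0600);

        (void)arg;
        if (fd < 0)
                return NULL;

        memset(buf, 0x5a, sizeof(buf));
        for (;;) {
                if (write(fd, buf, sizeof(buf)) < 0)
                        lseek(fd, 0, SEEK_SET); /* wrap on ENOSPC */
        }
        return NULL;
}

int main(void)
{
        const long period_ns = 1000000; /* nominal 1ms sleep */
        uint64_t worst = 0;
        pthread_t filler;

        pthread_create(&filler, NULL, fill_buffercache, NULL);

        for (int i = 0; i < 10000; i++) {
                struct timespec req = { 0, period_ns };
                uint64_t start = now_ns();
                uint64_t late;

                nanosleep(&req, NULL);
                late = now_ns() - start - period_ns;
                if (late > worst)
                        worst = late;
        }

        printf("worst wakeup latency: %.3fms\n", worst / 1e6);
        return 0;
}

(Build with something like "cc -O2 -pthread latency.c" and run it next
to a GPU workload to see how badly reclaim perturbs the third party.)
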

Hmm, this didn't make as big a difference (to the buffercache vs i915) as
I hoped, but

[RFC] mm,drm/i915: Mark pinned shmemfs pages as unevictable
https://patchwork.freedesktop.org/patch/160075/

did!
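
For reference, the idea behind that RFC is to stop reclaim from
pointlessly scanning pages the GPU has pinned, which boils down to
flagging the shmemfs mapping as unevictable while its pages are pinned.
A very rough sketch -- not the RFC itself; it omits moving
already-resident pages between the LRU lists, and the hook names are
made up:

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Hypothetical pin/unpin hooks for a shmemfs-backed object. */
static void example_pin_backing_store(struct file *filp)
{
        /* Tell the VM not to bother scanning these pages... */
        mapping_set_unevictable(filp->f_mapping);
}

static void example_unpin_backing_store(struct file *filp)
{
        /* ...and expose them to reclaim again once unpinned. */
        mapping_clear_unevictable(filp->f_mapping);
}
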
-Chris

