[Intel-gfx] [PATCH v2 5/5] drm/i915: Start writeback from the shrinker
Chris Wilson
chris at chris-wilson.co.uk
Wed Jun 14 10:03:22 UTC 2017
Quoting Joonas Lahtinen (2017-06-13 15:07:04)
> On pe, 2017-06-09 at 12:03 +0100, Chris Wilson wrote:
> > When we are called to relieve mempressure via the shrinker, the only way
> > we can make progress is either by discarding unwanted pages (those
> > objects that userspace has marked MADV_DONTNEED) or by reclaiming the
> > dirty objects via swap. As we know that is the only way to make further
> > progress, we can initiate the writeback as we invalidate the objects.
> > This means the objects we put onto the inactive anon lru list are
> > already marked for reclaim+writeback and so will trigger a wait upon the
> > writeback inside direct reclaim, greatly improving the success rate of
> > direct reclaim on i915 objects.
> >
> > The corollary is that we may start a slow swap on opportunistic
> > mempressure from the likes of the compaction + migration kthreads. This
> > is limited by those threads only being allowed to shrink idle pages;
> > furthermore, if gpu activity reactivates the page before it is swapped
> > out, we only pay the cost of repinning the page. The cost is most
> > felt when an object is reused after mempressure, which hopefully
> > excludes the latency sensitive tasks (as we are just extending the
> > impact of swap thrashing to them).
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Cc: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> > Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > Cc: Matthew Auld <matthew.auld at intel.com>
> > Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> > Cc: Michal Hocko <mhocko at suse.com>
>
> <SNIP>
>
> > +static void __start_writeback(struct drm_i915_gem_object *obj)
> > +{
>
> <SNIP>
>
> > + /* Force any other users of this object to refault */
> > + mapping = obj->base.filp->f_mapping;
> > + unmap_mapping_range(mapping, 0, (loff_t)-1, 0);
> > +
> > + /* Begin writeback on each dirty page */
> > + for (i = 0; i < obj->base.size >> PAGE_SHIFT; i++) {
> > + struct page *page;
> > +
> > + page = find_lock_entry(mapping, i);
> > + if (!page || radix_tree_exceptional_entry(page))
> > + continue;
> > +
> > + if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
> > + int ret;
> > +
> > + SetPageReclaim(page);
> > + ret = mapping->a_ops->writepage(page, &wbc);
> > + if (!PageWriteback(page))
> > + ClearPageReclaim(page);
> > + if (!ret)
> > + goto put;
> > + }
> > + unlock_page(page);
> > +put:
> > + put_page(page);
> > + }
>
> Apart from this part (which should probably be a helper function
> outside of i915), the code is:
>
> Reviewed-by: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
Thanks for the review, I've pushed the fix plus simple patches, leaving
this one for more feedback.
-Chris
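
For reference, the page-level loop Joonas suggests hoisting out of i915 could look roughly like the following generic helper. This is only a sketch of what such an extraction might be, not code from the patch; the function name and placement are hypothetical, and it simply restates the quoted logic against a bare address_space:

```c
/*
 * Hypothetical helper (name and placement illustrative only): kick off
 * writeback on every dirty, unmapped page of a shmem-backed mapping so
 * that direct reclaim will later wait on the writeback instead of
 * skipping the page.
 */
static void shmem_start_writeback(struct address_space *mapping,
				  pgoff_t nr_pages)
{
	struct writeback_control wbc = {
		.sync_mode = WB_SYNC_NONE,
		.nr_to_write = SWAP_CLUSTER_MAX,
		.range_start = 0,
		.range_end = LLONG_MAX,
		.for_reclaim = 1,
	};
	pgoff_t i;

	/* Force any other users of the mapping to refault */
	unmap_mapping_range(mapping, 0, (loff_t)-1, 0);

	/* Begin writeback on each dirty page */
	for (i = 0; i < nr_pages; i++) {
		struct page *page;

		page = find_lock_entry(mapping, i);
		if (!page || radix_tree_exceptional_entry(page))
			continue;

		if (!page_mapped(page) && clear_page_dirty_for_io(page)) {
			int ret;

			/* Mark for immediate reclaim once written back */
			SetPageReclaim(page);
			ret = mapping->a_ops->writepage(page, &wbc);
			if (!PageWriteback(page))
				ClearPageReclaim(page);
			if (!ret)
				goto put; /* writepage unlocked the page */
		}
		unlock_page(page);
put:
		put_page(page);
	}
}
```

Note the asymmetric unlock: on a successful ->writepage() the page is unlocked by the filesystem, so only the failure and skip paths call unlock_page() themselves.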