[Intel-gfx] [PATCH 11/41] drm/i915: Introduce an internal allocator for disposable private objects

Chris Wilson chris at chris-wilson.co.uk
Mon Oct 17 09:55:47 UTC 2016


On Mon, Oct 17, 2016 at 10:47:09AM +0100, Tvrtko Ursulin wrote:
> 
> On 14/10/2016 15:42, Chris Wilson wrote:
> >On Fri, Oct 14, 2016 at 03:35:59PM +0100, Tvrtko Ursulin wrote:
> >>On 14/10/2016 14:53, Chris Wilson wrote:
> >>>>>We do pass NORETRY | NOWARN for the higher order allocations, so it
> >>>>>shouldn't be as bad as it seems?
> >>>>I don't know for sure without looking into the implementation
> >>>>details. But I assumed that even with NORETRY it does some extra
> >>>>work to try to free up space. And if it fails and we ask for it
> >>>>again, it is doing that extra work for nothing, because within a
> >>>>single allocation it seems unlikely that conditions would change so
> >>>>dramatically that it would start succeeding.
> >>>iirc, NORETRY means abort after failure. In effect, it does
> >>>2 attempts from the freelist, a direct reclaim, and may then repeat
> >>>if the task's allowed set of nodes was concurrently changed.
> >>Do you think it makes sense to do all that after it has started
> >>failing, within our single get_pages allocation?
> >I was thinking about skipping the DIRECT_RECLAIM for high order, but it
> >seems like that is beneficial for THP, so I'm presuming it should also
> >be for us. Trimming back max_order seems sensible, but I still
> >like the idea of taking advantage of contiguous pages where possible
> >(primarily these will be used for ringbuffers and shadow batches).
> 
> Can we agree then to try the larger orders first and fall back gradually?

I thought we had already agreed on that! I thought we were looking at
what else we might do. :)
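
For reference, a minimal sketch of that gradual fallback, purely as an
illustration (the helper name below is made up and is not from the actual
patch): try the highest order with NORETRY | NOWARN so a failed attempt
stays cheap and quiet, then step down an order at a time.

#include <linux/gfp.h>

/*
 * Illustrative sketch only: try the largest order first with cheap,
 * quiet attempts (__GFP_NORETRY | __GFP_NOWARN) and fall back
 * gradually to smaller orders when the higher ones fail.
 */
static struct page *
internal_alloc_pages_fallback(unsigned int max_order, unsigned int *order)
{
	struct page *page;

	for (*order = max_order; *order; (*order)--) {
		page = alloc_pages(GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN,
				   *order);
		if (page)
			return page;
	}

	/* Last resort: a single page with the default blocking behaviour. */
	*order = 0;
	return alloc_pages(GFP_KERNEL, 0);
}

In get_pages this would be called in a loop until the object is fully
populated, lowering max_order to the last order that succeeded so we do
not keep paying for higher-order attempts that have already failed once.
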
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

