[Intel-gfx] [PATCH 1/4] drm/i915: Unify execlist and legacy request life-cycles

Chris Wilson chris at chris-wilson.co.uk
Fri Oct 9 02:45:35 PDT 2015


On Fri, Oct 09, 2015 at 11:15:08AM +0200, Daniel Vetter wrote:
> My idea was to create a new request for 3. which gets signalled by the
> scheduler in intel_lrc_irq_handler. My idea was that we'd only create
> these when a ctx switch might occur to avoid overhead, but I guess if we
> just outright delay all requests a notch if needed, that might work too. But
> I'm really not sure on the implications of that (i.e. does the hardware
> really unload the ctx if it's idle?), and whether that would still fly with
> the scheduler.
>
> But figuring this one out here seems to be the cornerstone of this reorg.
> Without it we can't just throw contexts onto the active list.

(Let me see if I understand it correctly)

Basically the problem is that we can't trust the context object to be
synchronized until after the status interrupt. The way we handled that
for legacy is to track the currently bound context and keep the
vma->pin_count asserted until the request containing the switch away
from it has retired.
Doing the same for execlists would trivially fix the issue and if done
smartly allows us to share more code (been there, done that).
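To make that concrete, below is a minimal sketch of the scheme (the
sketch_* names are illustrative, not the driver's): the outgoing
context keeps its pin, standing in for vma->pin_count, until the
request containing the switch away retires; only the retire path
drops it.

/*
 * Sketch only, not the real i915 code. The outgoing context stays
 * pinned until the request that switched away from it has completed,
 * so the hardware context save can never land in an evicted object.
 */
struct sketch_context {
	int pin_count;			/* stands in for vma->pin_count */
};

struct sketch_request {
	struct sketch_context *switched_away_from;
};

struct sketch_engine {
	struct sketch_context *last_context;
};

static void sketch_switch_context(struct sketch_engine *engine,
				  struct sketch_request *req,
				  struct sketch_context *to)
{
	struct sketch_context *from = engine->last_context;

	to->pin_count++;	/* incoming context must stay resident */

	/* ... emit MI_SET_CONTEXT (or the execlists equivalent) here ... */

	if (from)
		req->switched_away_from = from;	/* unpin deferred to retire */

	engine->last_context = to;
}

static void sketch_request_retire(struct sketch_request *req)
{
	struct sketch_context *old = req->switched_away_from;

	if (old) {
		/* Hardware has finished saving into the old context. */
		old->pin_count--;
		req->switched_away_from = NULL;
	}
}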

That satisfies me for keeping requests as a basic fence in the GPU
timeline and should keep everyone happy that the context can't vanish
until after it is complete. The only caveat is that we cannot evict the
most recent context. For legacy, we do a switch back to the always
pinned default context. For execlists we don't, but it still means we
should only have one context which cannot be evicted (like legacy). But
it does leave us with the issue that i915_gpu_idle() returns early
(nothing ever switches away from the final context) and
i915_gem_context_fini() must keep the explicit GPU reset to be
absolutely sure that the pending context writes have completed before
the final context is unbound.
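
A sketch of that caveat (again, sketch_* names are illustrative, not
the real i915_gem_context_fini): since no request ever switches away
from the final context, waiting for idle is not sufficient on its own,
hence the explicit reset before unbinding.

struct sketch_device { int dummy; };

/* Stand-ins for the real idle/reset/unbind paths. */
static void sketch_gpu_idle(struct sketch_device *dev) { (void)dev; }
static void sketch_gpu_reset(struct sketch_device *dev) { (void)dev; }
static void sketch_unbind_last_context(struct sketch_device *dev) { (void)dev; }

static void sketch_context_fini(struct sketch_device *dev)
{
	/* Returns once requests retire, but the final context stays bound. */
	sketch_gpu_idle(dev);

	/*
	 * Force a reset so the hardware can no longer be writing back into
	 * the final context image before we unbind and free it.
	 */
	sketch_gpu_reset(dev);

	sketch_unbind_last_context(dev);
}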
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

