[Intel-gfx] [PATCH 31/43] drm/i915/bdw: Handle context switch events

Daniel Vetter daniel at ffwll.ch
Thu Aug 14 22:28:47 CEST 2014


On Thu, Jul 24, 2014 at 05:04:39PM +0100, Thomas Daniel wrote:
> Handle all context status events in the context status buffer on every
> context switch interrupt. We only remove work from the execlist queue
> after a context status buffer reports that it has completed and we only
> attempt to schedule new contexts on interrupt when a previously submitted
> context completes (unless no contexts are queued, which means the GPU is
> free).
> 
> We cannot call intel_runtime_pm_get() in an interrupt (or with a spinlock
> grabbed, FWIW), because it might sleep, which is not a nice thing to do.
> Instead, do the runtime_pm get/put together with the create/destroy request,
> and handle the forcewake get/put directly.
> 
> Signed-off-by: Thomas Daniel <thomas.daniel at intel.com>
> 
> v2: Unreferencing the context when we are freeing the request might free
> the backing bo, which requires the struct_mutex to be grabbed, so defer
> unreferencing and freeing to a bottom half.
> 
> v3:
> - Ack the interrupt immediately, before trying to handle it (fix for
> missing interrupts by Bob Beckett <robert.beckett at intel.com>).
> - Update the Context Status Buffer Read Pointer, just in case (spotted
> by Damien Lespiau).
> 
> v4: New namespace and multiple rebase changes.
> 
> v5: Squash with "drm/i915/bdw: Do not call intel_runtime_pm_get() in an
> interrupt", as suggested by Daniel.
> 
> Signed-off-by: Oscar Mateo <oscar.mateo at intel.com>

> +static void execlists_free_request_task(struct work_struct *work)
> +{
> +	struct intel_ctx_submit_request *req =
> +			container_of(work, struct intel_ctx_submit_request, work);
> +	struct drm_device *dev = req->ring->dev;
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +
> +	intel_runtime_pm_put(dev_priv);
> +
> +	mutex_lock(&dev->struct_mutex);
> +	i915_gem_context_unreference(req->ctx);
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	kfree(req);
> +}

Latching a work item simply to unreference the context looks very
fishy. The context really can't possibly disappear before the last request
on it has completed, since the request already holds a reference.

That you have this additional reference here makes it look a bit like the
relationship and lifetime rules for the execlist queue item are misshapen.

I'd have expected:
- A given execlist queue item is responsible for a list of requests (one
  or more)
- Each request already holds onto the context.
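
Roughly the shape I have in mind - purely a sketch, the struct and function
names below (intel_execlist_item, intel_execlist_item_retire, requests,
execlist_link) are made up and not what the series actually uses:

#include <linux/list.h>
#include <linux/slab.h>

struct intel_context;	/* i915 context, defined elsewhere in the driver */

struct intel_execlist_item {
	struct intel_context *ctx;	/* context this item submits */
	struct list_head requests;	/* one or more requests; each one
					 * already pins ctx via request->ctx */
	struct list_head execlist_link;	/* link in the per-ring queue */
};

static void intel_execlist_item_retire(struct intel_execlist_item *item)
{
	/*
	 * No extra unreference needed here: retiring the last request on
	 * item->requests drops the final context reference, so the queue
	 * item needs neither its own reference nor a deferred work item.
	 */
	list_del(&item->execlist_link);
	kfree(item);
}

With that ownership model the work item in execlists_free_request_task
wouldn't be needed at all.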

The other thing that's strange here is the runtime pm handling. We already
keep the device awake as long as it's busy, so I wonder why exactly we
need this here in addition.

In any case this kind of cleanup task is imo better done in the retire
request handler that we already have, something like the sketch below.
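
Again just a sketch - execlist_completed and execlist_link are invented
names here, and I'm assuming the retire path keeps running with
struct_mutex held as it does today:

static void execlists_retire_requests(struct intel_engine_cs *ring)
{
	struct intel_ctx_submit_request *req, *tmp;

	/* Walk the completed queue items and release them inline;
	 * struct_mutex is already held on the retire path, so no
	 * work item or extra locking is needed. */
	list_for_each_entry_safe(req, tmp, &ring->execlist_completed,
				 execlist_link) {
		list_del(&req->execlist_link);
		i915_gem_context_unreference(req->ctx);
		kfree(req);
	}
}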

Imo needs some cleanup.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


