[Intel-gfx] [PATCH v2] drm/i915: Priority boost for locked waits

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Jan 19 06:18:56 UTC 2017


On 18/01/2017 16:53, Chris Wilson wrote:
> We always try to do an unlocked wait before resorting to having a
> blocking wait under the mutex - so we very rarely have to sleep under
> the struct_mutex. However, when we do we want that wait to be as short
> as possible as the struct_mutex is our BKL that will stall the driver and
> all clients.
>
> There should be no impact for all typical workloads.
>
> v2: Move down a layer to apply to all waits.
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> ---
>  drivers/gpu/drm/i915/i915_gem_request.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index bacb875a6ef3..7be17d9c304b 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -1054,6 +1054,15 @@ long i915_wait_request(struct drm_i915_gem_request *req,
>  	if (!timeout)
>  		return -ETIME;
>
> +	/* Very rarely do we wait whilst holding the mutex. We try to always
> +	 * do an unlocked wait before using a locked wait. However, when we
> +	 * have to resort to a locked wait, we want that wait to be as short
> +	 * as possible as the struct_mutex is our BKL that will stall the
> +	 * driver and all clients.
> +	 */
> +	if (flags & I915_WAIT_LOCKED && req->engine->schedule)
> +		req->engine->schedule(req, I915_PRIORITY_MAX);
> +
>  	trace_i915_gem_request_wait_begin(req);
>
>  	add_wait_queue(&req->execute, &exec);
>

Would it be worth moving it to after the wait_begin tracepoint? Just 
thinking that the time spent in the schedule call would then be 
accounted in the interval between wait_begin and wait_end, in case 
someone is looking at that.
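In other words, something roughly like this against the quoted hunk (a 
sketch of the suggested reordering, not a tested change):

 	if (!timeout)
 		return -ETIME;

-	if (flags & I915_WAIT_LOCKED && req->engine->schedule)
-		req->engine->schedule(req, I915_PRIORITY_MAX);
-
 	trace_i915_gem_request_wait_begin(req);

+	/* Boost after wait_begin so the schedule cost is accounted
+	 * within the wait_begin/wait_end interval.
+	 */
+	if (flags & I915_WAIT_LOCKED && req->engine->schedule)
+		req->engine->schedule(req, I915_PRIORITY_MAX);
+
 	add_wait_queue(&req->execute, &exec);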

I did not like the locked wait from i915_gem_gtt_finish_pages 
triggering the priority bump, but it seems irrelevant on platforms with 
the scheduler.

Another concern is the set-cache-level ioctl, which does not do an 
unlocked wait first, so it might be usable for illegitimate priority 
bumping?

The same applies to the VMA unbind path, so everything which can reach 
it should be evaluated first.

Regards,

Tvrtko
