[Intel-gfx] [PATCH 4/5] drm/i915: Disable semaphore busywaits on saturated systems

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Tue Apr 30 08:55:59 UTC 2019


On 29/04/2019 19:00, Chris Wilson wrote:
> Asking the GPU to busywait on a memory address, perhaps not unexpectedly
> in hindsight for a shared system, leads to bus contention that affects
> CPU programs trying to concurrently access memory. This can manifest as
> a drop in transcode throughput on highly over-saturated workloads.
> 
> The only clue offered by perf is that the bus-cycles (perf stat -e
> bus-cycles) jumped by 50% when enabling semaphores. This corresponds
> with extra CPU active cycles being attributed to intel_idle's mwait.
> 
> This patch introduces a heuristic to try and detect when more than one
> client is submitting to the GPU pushing it into an oversaturated state.
> As we already keep track of when the semaphores are signaled, we can
> inspect their state on submitting the busywait batch and if we planned
> to use a semaphore but were too late, conclude that the GPU is
> overloaded and not try to use semaphores in future requests. In
> practice, this means we optimistically try to use semaphores for the
> first frame of a transcode job split over multiple engines; if there
> are multiple clients active this fails, and we continue not to use
> semaphores for the subsequent frames in the sequence. Periodically, we
> try to optimistically switch semaphores back on whenever the client
> waits to catch up with the transcode results.
> 
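(For orientation, a condensed sketch of the heuristic described above;
the real change is in the diff further down, and the fields used here,
e.g. saturated and sched.semaphores, are the ones the patch adds or
touches:)

	/* At submit: if the semaphore we installed has already been
	 * signalled, the busywait could only have wasted bus cycles,
	 * so remember which engines were involved. */
	if (rq->sched.semaphores &&
	    i915_sw_fence_signaled(&rq->semaphore))
		rq->hw_context->saturated |= rq->sched.semaphores;

	/* When setting up the next inter-engine wait: if that engine is
	 * already marked, fall back to a plain CPU-signalled fence
	 * instead of emitting a semaphore busywait. */
	if ((to->sched.semaphores | to->hw_context->saturated) &
	    from->engine->mask)
		return i915_sw_fence_await_dma_fence(&to->submit,
						     &from->fence, 0,
						     I915_FENCE_GFP);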

[snipped long benchmark results]

> Indicating that we've recovered the regression from enabling semaphores
> on this saturated setup, with a hint towards an overall improvement.
> 
> Very similar, but of smaller magnitude, results are observed on both
> Skylake(gt2) and Kabylake(gt4). This may be due to the reduced impact of
> bus-cycles: where we see a 50% hit on Broxton, it is only 10% on the big
> core in this particular test.
> 
> One observation to make here is that for a greedy client trying to
> maximise its own throughput, using semaphores is the right choice. It is
> only from the holistic system-wide view, where the semaphores of one
> client impact another and reduce the overall throughput, that we would
> choose to disable semaphores.

Since we acknowledge the problem is the shared nature of the iGPU, my 
concern is that we still cannot account for both partners here when 
deciding to omit semaphore emission. In other words we trade bus 
throughput for submission latency.

Assuming a light GPU task (in the sense of not oversubscribing, but with 
ping-pong inter-engine dependencies) running simultaneously with a 
heavier CPU task, our latency improvement still imposes a performance 
penalty on the latter.

For instance a consumer-level single-stream transcoding session with a 
CPU-heavy part of the pipeline, or a CPU-intensive game.

(Ideally we would need a bus saturation signal to feed into our logic, 
not just engine saturation, but I don't think that is possible.)
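
(If such a signal did exist, purely hypothetically, say a
bus_saturation_high() helper which does not exist today, it could gate
the busywait decision directly, something like:)

	/* bus_saturation_high() is made up; it stands in for whatever
	 * bus/memory contention feedback we would need. */
	static intel_engine_mask_t
	already_busywaiting(struct i915_request *rq)
	{
		if (bus_saturation_high())
			return ~0; /* never emit a semaphore busywait */

		return rq->sched.semaphores | rq->hw_context->saturated;
	}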

So I am still leaning towards being cautious and just abandoning 
semaphores for now.

Regards,

Tvrtko

> The most noticeable negative impact this has is on the no-op
> microbenchmarks, which are also very notable for having no CPU bus load.
> In particular, this increases the runtime and energy consumption of
> gem_exec_whisper.
> 
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin at intel.com>
> Cc: Dmitry Ermilov <dmitry.ermilov at intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_context.c       |  2 ++
>   drivers/gpu/drm/i915/gt/intel_context_types.h |  3 ++
>   drivers/gpu/drm/i915/i915_request.c           | 28 ++++++++++++++++++-
>   3 files changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> index 1f1761fc6597..5b31e1e05ddd 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> @@ -116,6 +116,7 @@ intel_context_init(struct intel_context *ce,
>   	ce->engine = engine;
>   	ce->ops = engine->cops;
>   	ce->sseu = engine->sseu;
> +	ce->saturated = 0;
>   
>   	INIT_LIST_HEAD(&ce->signal_link);
>   	INIT_LIST_HEAD(&ce->signals);
> @@ -158,6 +159,7 @@ void intel_context_enter_engine(struct intel_context *ce)
>   
>   void intel_context_exit_engine(struct intel_context *ce)
>   {
> +	ce->saturated = 0;
>   	intel_engine_pm_put(ce->engine);
>   }
>   
> diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h
> index d5a7dbd0daee..963a312430e6 100644
> --- a/drivers/gpu/drm/i915/gt/intel_context_types.h
> +++ b/drivers/gpu/drm/i915/gt/intel_context_types.h
> @@ -13,6 +13,7 @@
>   #include <linux/types.h>
>   
>   #include "i915_active_types.h"
> +#include "intel_engine_types.h"
>   #include "intel_sseu.h"
>   
>   struct i915_gem_context;
> @@ -52,6 +53,8 @@ struct intel_context {
>   	atomic_t pin_count;
>   	struct mutex pin_mutex; /* guards pinning and associated on-gpuing */
>   
> +	intel_engine_mask_t saturated; /* submitting semaphores too late? */
> +
>   	/**
>   	 * active_tracker: Active tracker for the external rq activity
>   	 * on this intel_context object.
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 8cb3ed5531e3..2d429967f403 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -410,6 +410,26 @@ void __i915_request_submit(struct i915_request *request)
>   	if (i915_gem_context_is_banned(request->gem_context))
>   		i915_request_skip(request, -EIO);
>   
> +	/*
> +	 * Are we using semaphores when the gpu is already saturated?
> +	 *
> +	 * Using semaphores incurs a cost in having the GPU poll a
> +	 * memory location, busywaiting for it to change. The continual
> +	 * memory reads can have a noticeable impact on the rest of the
> +	 * system with the extra bus traffic, stalling the cpu as it too
> +	 * tries to access memory across the bus (perf stat -e bus-cycles).
> +	 *
> +	 * If we installed a semaphore on this request and we only submit
> +	 * the request after the signaler completed, that indicates the
> +	 * system is overloaded and using semaphores at this time only
> +	 * increases the amount of work we are doing. If so, we disable
> +	 * further use of semaphores until we are idle again, whence we
> +	 * optimistically try again.
> +	 */
> +	if (request->sched.semaphores &&
> +	    i915_sw_fence_signaled(&request->semaphore))
> +		request->hw_context->saturated |= request->sched.semaphores;
> +
>   	/* We may be recursing from the signal callback of another i915 fence */
>   	spin_lock_nested(&request->lock, SINGLE_DEPTH_NESTING);
>   
> @@ -785,6 +805,12 @@ i915_request_await_start(struct i915_request *rq, struct i915_request *signal)
>   					     I915_FENCE_GFP);
>   }
>   
> +static intel_engine_mask_t
> +already_busywaiting(struct i915_request *rq)
> +{
> +	return rq->sched.semaphores | rq->hw_context->saturated;
> +}
> +
>   static int
>   emit_semaphore_wait(struct i915_request *to,
>   		    struct i915_request *from,
> @@ -798,7 +824,7 @@ emit_semaphore_wait(struct i915_request *to,
>   	GEM_BUG_ON(INTEL_GEN(to->i915) < 8);
>   
>   	/* Just emit the first semaphore we see as request space is limited. */
> -	if (to->sched.semaphores & from->engine->mask)
> +	if (already_busywaiting(to) & from->engine->mask)
>   		return i915_sw_fence_await_dma_fence(&to->submit,
>   						     &from->fence, 0,
>   						     I915_FENCE_GFP);
> 

