[Intel-gfx] [PATCH 1/5] drm/i915/gt: Wait for new requests in intel_gt_retire_requests()

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Fri Nov 15 12:45:52 UTC 2019


On 14/11/2019 22:57, Chris Wilson wrote:
> Our callers fall into two categories, those passing timeout=0 who just
> want to flush request retirements and those passing a timeout that need
> to wait for submission completion (e.g. intel_gt_wait_for_idle()).
> Currently, we only wait for a snapshot of timelines at the start of the
> wait (but there was an expection that new requests would cause timelines

expectation? exception?

> to appear at the end). However, our callers, such as
> intel_gt_wait_for_idle() before suspend, do require us to wait for the
> power management requests emitted by retirement as well. If we don't,
> then it takes an extra second or two for the background worker to flush
> the queue and mark the GT as idle.
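
To restate the two caller categories in code (a rough sketch only; the
helper names below are mine, and I am assuming intel_gt_retire_requests()
is just the timeout=0 wrapper around intel_gt_retire_requests_timeout()):

/* Category 1: opportunistic flush, e.g. from the background retire
 * worker. With timeout == 0 nothing blocks; we only reap requests that
 * have already completed.
 */
static void flush_retirements_sketch(struct intel_gt *gt)
{
	intel_gt_retire_requests(gt); /* intel_gt_retire_requests_timeout(gt, 0) */
}

/* Category 2: wait for submission completion, e.g. before suspend.
 * Keep retiring for as long as a non-zero "still busy" timeout is
 * reported back.
 */
static int wait_for_idle_sketch(struct intel_gt *gt, long timeout)
{
	while ((timeout = intel_gt_retire_requests_timeout(gt, timeout)) > 0)
		cond_resched();

	return timeout; /* 0 once everything has retired */
}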

So with this change wait_for_idle waits for the kernel context to get 
retired as well. And you say that makes it faster by a second or two? 
Which flush gets so much slower, and where does the extra time come 
from, if we don't wait here?
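
If I am reading it right, the semantic change in the return value boils
down to this (paraphrasing the hunk below; the comment about the kernel
context is my understanding of the parking flow, not something stated in
the diff):

	/* Old: "busy" was decided from the snapshot of timelines we walked
	 * (and managed to lock), so a timeline that only shows up as a side
	 * effect of retirement, e.g. the kernel context request emitted when
	 * the engine parks, was not reported and the caller stopped waiting.
	 */
	return active_count ? timeout : 0;

	/* New: "busy" is decided from the live active_list, so a caller
	 * looping on the return value keeps waiting until those late
	 * arrivals have retired too.
	 */
	return list_empty(&timelines->active_list) ? 0 : timeout;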

Regards,

Tvrtko

> 
> Fixes: 7e8057626640 ("drm/i915: Drop struct_mutex from around i915_retire_requests()")
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_gt_requests.c | 11 +++--------
>   1 file changed, 3 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_requests.c b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> index b73229a84d85..ccbddddbbd52 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_requests.c
> @@ -33,7 +33,6 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
>   {
>   	struct intel_gt_timelines *timelines = &gt->timelines;
>   	struct intel_timeline *tl, *tn;
> -	unsigned long active_count = 0;
>   	unsigned long flags;
>   	bool interruptible;
>   	LIST_HEAD(free);
> @@ -46,10 +45,8 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
>   
>   	spin_lock_irqsave(&timelines->lock, flags);
>   	list_for_each_entry_safe(tl, tn, &timelines->active_list, link) {
> -		if (!mutex_trylock(&tl->mutex)) {
> -			active_count++; /* report busy to caller, try again? */
> +		if (!mutex_trylock(&tl->mutex))
>   			continue;
> -		}
>   
>   		intel_timeline_get(tl);
>   		GEM_BUG_ON(!tl->active_count);
> @@ -74,9 +71,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
>   
>   		/* Resume iteration after dropping lock */
>   		list_safe_reset_next(tl, tn, link);
> -		if (--tl->active_count)
> -			active_count += !!rcu_access_pointer(tl->last_request.fence);
> -		else
> +		if (!--tl->active_count)
>   			list_del(&tl->link);
>   
>   		mutex_unlock(&tl->mutex);
> @@ -92,7 +87,7 @@ long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout)
>   	list_for_each_entry_safe(tl, tn, &free, link)
>   		__intel_timeline_free(&tl->kref);
>   
> -	return active_count ? timeout : 0;
> +	return list_empty(&timelines->active_list) ? 0 : timeout;
>   }
>   
>   int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout)
> 

