[PATCH v2 1/5] drm/i915: Fix request locking during error capture & debugfs dump

John Harrison john.c.harrison at intel.com
Wed Jan 18 17:55:42 UTC 2023


On 1/18/2023 08:22, Tvrtko Ursulin wrote:
> On 17/01/2023 21:36, John.C.Harrison at Intel.com wrote:
>> From: John Harrison <John.C.Harrison at Intel.com>
>>
>> When GuC support was added to error capture, the locking around the
>> request object was broken. Fix it up.
>>
>> The context based search manages the spinlocking around the search
>> internally, so it needs to grab the reference count internally as
>> well. The execlist-only, request based search relies on external
>> locking, so the reference count must also be acquired externally.
>> Hence no change to that code itself, but the context based version
>> does change.
>>
>> The only other caller is the code for dumping engine state to debugfs.
>> That code wasn't previously taking an explicit reference at all, as it
>> does everything while holding the execlist specific spinlock. So it
>> needs updating as well, because that spinlock does not help when using
>> GuC submission. Rather than trying to conditionally get/put depending
>> on the submission model, just change it to always do the get/put.
>>
>> In addition, intel_guc_find_hung_context() was not acquiring the
>> correct spinlock before searching the request list. So fix that up too.
>>
>> Fixes: dc0dad365c5e ("drm/i915/guc: Fix for error capture after full GPU reset with GuC")
>> Fixes: 573ba126aef3 ("drm/i915/guc: Capture error state on context reset")
>> Cc: Matthew Brost <matthew.brost at intel.com>
>> Cc: John Harrison <John.C.Harrison at Intel.com>
>> Cc: Jani Nikula <jani.nikula at linux.intel.com>
>> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
>> Cc: Rodrigo Vivi <rodrigo.vivi at intel.com>
>> Cc: Tvrtko Ursulin <tvrtko.ursulin at linux.intel.com>
>> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio at intel.com>
>> Cc: Andrzej Hajda <andrzej.hajda at intel.com>
>> Cc: Chris Wilson <chris at chris-wilson.co.uk>
>> Cc: Matthew Auld <matthew.auld at intel.com>
>> Cc: Matt Roper <matthew.d.roper at intel.com>
>> Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
>> Cc: Michael Cheng <michael.cheng at intel.com>
>> Cc: Lucas De Marchi <lucas.demarchi at intel.com>
>> Cc: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay at intel.com>
>> Cc: Andy Shevchenko <andriy.shevchenko at linux.intel.com>
>> Cc: Aravind Iddamsetty <aravind.iddamsetty at intel.com>
>> Cc: Alan Previn <alan.previn.teres.alexis at intel.com>
>> Cc: Bruce Chang <yu.bruce.chang at intel.com>
>> Cc: intel-gfx at lists.freedesktop.org
>> Signed-off-by: John Harrison <John.C.Harrison at Intel.com>
>> ---
>>   drivers/gpu/drm/i915/gt/intel_context.c           |  1 +
>>   drivers/gpu/drm/i915/gt/intel_engine_cs.c         |  7 ++++++-
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 11 +++++++++++
>>   drivers/gpu/drm/i915/i915_gpu_error.c             |  5 ++---
>>   4 files changed, 20 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
>> index e94365b08f1ef..df64cf1954c1d 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_context.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
>> @@ -552,6 +552,7 @@ struct i915_request *intel_context_find_active_request(struct intel_context *ce)
>>             active = rq;
>>       }
>> +    active = i915_request_get_rcu(active);
>>       spin_unlock_irqrestore(&parent->guc_state.lock, flags);
>>         return active;
>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>> index 922f1bb22dc68..517d1fb7ae333 100644
>> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>> @@ -2236,10 +2236,13 @@ static void engine_dump_active_requests(struct intel_engine_cs *engine, struct d
>>       guc = intel_uc_uses_guc_submission(&engine->gt->uc);
>>       if (guc) {
>>           ce = intel_engine_get_hung_context(engine);
>> -        if (ce)
>> +        if (ce) {
>> +            /* This will reference count the request (if found) */
>>               hung_rq = intel_context_find_active_request(ce);
>> +        }
>>       } else {
>>           hung_rq = intel_engine_execlist_find_hung_request(engine);
>> +        hung_rq = i915_request_get_rcu(hung_rq);
>
> Looks like intel_engine_execlist_find_hung_request can return NULL 
> which i915_request_get_rcu will not handle.
Doh! That is correct.
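
Will fix in the respin. Presumably just something like this (and the 
equivalent NULL check before the get in the capture_engine() hunk):

	} else {
		hung_rq = intel_engine_execlist_find_hung_request(engine);
		if (hung_rq)
			hung_rq = i915_request_get_rcu(hung_rq);
	}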

>
> Maybe it would come out simpler if intel_context_find_active_request 
> didn't take the reference, and then you could take one here, in a 
> single place for both branches?
That would require moving the spinlock outside of 
intel_context_find_active_request() so that it can still be held while 
acquiring the request reference. And that means bleeding knowledge of 
which internal spinlock is required out of the implementation and into 
the caller. As noted, the ideal would be to extend the execlist 
implementation to tag the hung context/request early, at the point of 
hang detection, rather than rescanning the entire request list again at 
this point. And that would mean the lock used inside 
'context_find_active' depends on whether the backend is GuC or 
execlist. Which is an implementation detail we really should not be 
leaking out to the caller.
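
To be explicit, with the get pulled out, the GuC branch of the dump 
code would need to look something like this (rough sketch only, 
glossing over the parent/child context handling and assuming 
intel_context_find_active_request() no longer takes the lock itself):

	ce = intel_engine_get_hung_context(engine);
	if (ce) {
		spin_lock_irqsave(&ce->guc_state.lock, flags);
		hung_rq = intel_context_find_active_request(ce);
		if (hung_rq)
			hung_rq = i915_request_get_rcu(hung_rq);
		spin_unlock_irqrestore(&ce->guc_state.lock, flags);
	}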

IMHO, it would be better to refactor engine_dump_active_requests() to 
acquire the sched_engine spinlock internally and only around the code 
which actually needs it (some of which is maybe execlist specific and 
not valid with GuC submission?). Certainly grabbing two independent 
spinlocks in a nested manner is not a good idea when there is no reason 
to do so.
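
Something along these lines perhaps (very rough sketch only, and 
assuming the sched_engine lock can simply be dropped by 
intel_engine_dump() before calling in here):

	static void engine_dump_active_requests(struct intel_engine_cs *engine,
						struct drm_printer *m)
	{
		struct i915_request *hung_rq = NULL;
		unsigned long flags;

		if (intel_uc_uses_guc_submission(&engine->gt->uc)) {
			struct intel_context *ce;

			ce = intel_engine_get_hung_context(engine);
			if (ce) {
				/* Takes a reference on the request, if found */
				hung_rq = intel_context_find_active_request(ce);
			}
		} else {
			spin_lock_irqsave(&engine->sched_engine->lock, flags);
			hung_rq = intel_engine_execlist_find_hung_request(engine);
			if (hung_rq)
				hung_rq = i915_request_get_rcu(hung_rq);
			spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
		}

		/*
		 * Dumping hung_rq and the requests list would go here, with
		 * the sched_engine lock taken only around the list walk.
		 */

		if (hung_rq)
			i915_request_put(hung_rq);
	}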

John.

>
>>       }
>>         if (hung_rq)
>> @@ -2250,6 +2253,8 @@ static void engine_dump_active_requests(struct intel_engine_cs *engine, struct d
>>       else
>>           intel_engine_dump_active_requests(&engine->sched_engine->requests,
>>                                             hung_rq, m);
>> +    if (hung_rq)
>> +        i915_request_put(hung_rq);
>>   }
>>     void intel_engine_dump(struct intel_engine_cs *engine,
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index b436dd7f12e42..3b34a82d692be 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -4820,6 +4820,8 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
>>         xa_lock_irqsave(&guc->context_lookup, flags);
>>       xa_for_each(&guc->context_lookup, index, ce) {
>> +        bool found;
>> +
>>           if (!kref_get_unless_zero(&ce->ref))
>>               continue;
>> @@ -4836,10 +4838,18 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
>>                   goto next;
>>           }
>>   +        found = false;
>> +        spin_lock(&ce->guc_state.lock);
>>           list_for_each_entry(rq, &ce->guc_state.requests, sched.link) {
>>               if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE)
>>                   continue;
>>   +            found = true;
>> +            break;
>> +        }
>> +        spin_unlock(&ce->guc_state.lock);
>> +
>> +        if (found) {
>>               intel_engine_set_hung_context(engine, ce);
>>                 /* Can only cope with one hang at a time... */
>> @@ -4847,6 +4857,7 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
>>               xa_lock(&guc->context_lookup);
>>               goto done;
>>           }
>> +
>>   next:
>>           intel_context_put(ce);
>>           xa_lock(&guc->context_lookup);
>
> This hunk I have to leave for someone who know the GuC backend well.
>
> Regards,
>
> Tvrtko
>
>> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
>> index 9d5d5a397b64e..4107a0dfcca7d 100644
>> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
>> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
>> @@ -1607,6 +1607,7 @@ capture_engine(struct intel_engine_cs *engine,
>>       ce = intel_engine_get_hung_context(engine);
>>       if (ce) {
>>           intel_engine_clear_hung_context(engine);
>> +        /* This will reference count the request (if found) */
>>           rq = intel_context_find_active_request(ce);
>>           if (!rq || !i915_request_started(rq))
>>               goto no_request_capture;
>> @@ -1618,13 +1619,11 @@ capture_engine(struct intel_engine_cs *engine,
>>           if (!intel_uc_uses_guc_submission(&engine->gt->uc)) {
>>               spin_lock_irqsave(&engine->sched_engine->lock, flags);
>>               rq = intel_engine_execlist_find_hung_request(engine);
>> +            rq = i915_request_get_rcu(rq);
>>               spin_unlock_irqrestore(&engine->sched_engine->lock,
>>                                      flags);
>>           }
>>       }
>> -    if (rq)
>> -        rq = i915_request_get_rcu(rq);
>> -
>>       if (!rq)
>>           goto no_request_capture;


