[PATCH v2 8/8] drm/amdgpu: Prevent any job recoveries after device is unplugged.

Luben Tuikov luben.tuikov at amd.com
Wed Nov 18 00:46:01 UTC 2020


On 2020-11-17 2:49 p.m., Daniel Vetter wrote:
> On Tue, Nov 17, 2020 at 02:18:49PM -0500, Andrey Grodzovsky wrote:
>>
>> On 11/17/20 1:52 PM, Daniel Vetter wrote:
>>> On Tue, Nov 17, 2020 at 01:38:14PM -0500, Andrey Grodzovsky wrote:
>>>> On 6/22/20 5:53 AM, Daniel Vetter wrote:
>>>>> On Sun, Jun 21, 2020 at 02:03:08AM -0400, Andrey Grodzovsky wrote:
>>>>>> No point in trying recovery if the device is gone; it just messes
>>>>>> things up.
>>>>>>
>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
>>>>>> ---
>>>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 16 ++++++++++++++++
>>>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  8 ++++++++
>>>>>>    2 files changed, 24 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>> index 6932d75..5d6d3d9 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>> @@ -1129,12 +1129,28 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
>>>>>>    	return ret;
>>>>>>    }
>>>>>> +static void amdgpu_cancel_all_tdr(struct amdgpu_device *adev)
>>>>>> +{
>>>>>> +	int i;
>>>>>> +
>>>>>> +	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
>>>>>> +		struct amdgpu_ring *ring = adev->rings[i];
>>>>>> +
>>>>>> +		if (!ring || !ring->sched.thread)
>>>>>> +			continue;
>>>>>> +
>>>>>> +		cancel_delayed_work_sync(&ring->sched.work_tdr);
>>>>>> +	}
>>>>>> +}
>>>>> I think this is a function that's supposed to be in drm/scheduler, not
>>>>> here. Might also just be your cleanup code being ordered wrongly, or your
>>>>> split in one of the earlier patches not done quite right.
>>>>> -Daniel
>>>>
>>>> This function iterates across all the schedulers per amdgpu device and
>>>> accesses amdgpu-specific structures; drm/scheduler deals with a single
>>>> scheduler at most, so this looks to me like the right place for this
>>>> function.
>>> I guess we could keep track of all schedulers somewhere in a list in
>>> struct drm_device and wrap this up. That was kinda the idea.
>>>
>>> Minimally I think we need a tiny wrapper, with docs, for the
>>> cancel_delayed_work_sync(&sched->work_tdr); one which explains what you
>>> must observe to make sure there's no race.
>>
>>
>> Will do
>>
>>
>>> I'm not sure there's any guarantee here that a new tdr work won't get
>>> launched right afterwards, so this looks a bit like a hack.
>>
>>
>> Note that for any TDR work happening after amdgpu_cancel_all_tdr, the
>> drm_dev_is_unplugged check in amdgpu_job_timedout will return true and
>> so the handler will return early. To make it watertight against the
>> race I can switch from drm_dev_is_unplugged to drm_dev_enter/exit.
> 
> Hm, that's confusing. You do a cancel_work_sync, so that at least looks
> like "tdr work must not run after this point".
> 
> If you only rely on drm_dev_enter/exit check with the tdr work, then
> there's no need to cancel anything.
> 
> For race free cancel_work_sync you need:
> 1. make sure whatever is calling schedule_work is guaranteed to no longer
> call schedule_work.
> 2. call cancel_work_sync
> 
> Anything else is cargo-culted work cleanup:
> 
> - 1. without 2. means a work item that got scheduled right before is
>   still a problem.
> - 2. without 1. means a schedule_work right after makes your
>   cancel_work_sync pointless.

This is sound advice. I did something similar for SAS over a decade ago,
where an expander could be disconnected from the domain while many I/Os
were in flight through it to end devices.

You need a tiny DRM function which low-level drivers (such as amdgpu)
call in order to tell DRM that the device is not accepting commands
any more (it sets a flag) and which starts a thread to clean up commands
that are "done" or "incoming". At the same time, the low-level driver
returns commands which are pending in the hardware back out to
DRM (thus those commands go from "pending" to "done"), and
DRM cleans them up. (*)

The point is that you're not bubbling the error up; you're directly
notifying the highest upper layer to hold off, while you clean up
all incoming and pending commands.
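
To make the two-step recipe concrete here, below is a minimal sketch
(not the posted patch; the helper name is made up) of the intended
ordering: the unplug flag goes up first, so amdgpu_job_timedout() bails
out early, and only then does the synchronous cancel run. As Daniel
notes, the path that re-arms work_tdr needs the same guarantee for
this to be fully race-free.

static void amdgpu_device_stop_all_tdr(struct amdgpu_device *adev)
{
	int i;

	/* Step 1: mark the device gone, so amdgpu_job_timedout()
	 * sees the unplugged state and returns early.
	 */
	drm_dev_unplug(adev->ddev);

	/* Step 2: flush out any timeout work queued before step 1. */
	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
		struct amdgpu_ring *ring = adev->rings[i];

		if (!ring || !ring->sched.thread)
			continue;

		cancel_delayed_work_sync(&ring->sched.work_tdr);
	}
}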

Depending on the situation, case 1 above has two sub-cases:

a) the device will not come back--then cancel any new work
   back out to the application client, or
b) the device may come back again, i.e. it is being reset--then
   you can queue up work, assuming the device will come back up
   successfully and you'll be able to send the incoming requests
   down to it. Or cancel everything and let the application client
   do the queueing and resubmission, as in a). The latter will not
   work when the resubmission (and error recovery) happens without
   the knowledge of the application client, for instance with
   communication or parity errors, protocol retries, etc.
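
Purely to make the a)/b) split concrete, a drain routine might look
like the sketch below. Every name in it is hypothetical (there is no
such DRM interface today); hold_for_resubmit() and
complete_with_error() stand in for whatever mechanism the driver and
DRM agree on.

struct pending_cmd {
	struct list_head node;
	/* driver-specific payload */
};

/* Hypothetical helpers, named only for illustration. */
static void hold_for_resubmit(struct pending_cmd *cmd);
static void complete_with_error(struct pending_cmd *cmd, int err);

static void drain_pending(struct list_head *pending, bool may_come_back)
{
	struct pending_cmd *cmd, *tmp;

	list_for_each_entry_safe(cmd, tmp, pending, node) {
		list_del(&cmd->node);
		if (may_come_back)
			/* case b): hold the command and resubmit it once
			 * the reset completes, invisibly to the client.
			 */
			hold_for_resubmit(cmd);
		else
			/* case a): the device is gone for good; complete
			 * with an error and let the client decide.
			 */
			complete_with_error(cmd, -ENODEV);
	}
}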

(*) I have some work coming in the scheduler which could make this
handling easier, or at least provide a mechanism by which it could
be made easier.
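
As for the drm_dev_enter/exit switch Andrey mentions above, the
timeout handler would presumably take the SRCU read section around
the whole recovery path, so that drm_dev_unplug() cannot complete
while a recovery is still in flight--roughly (a sketch, not the
posted patch):

static void amdgpu_job_timedout(struct drm_sched_job *s_job)
{
	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
	int idx;

	if (!drm_dev_enter(ring->adev->ddev, &idx)) {
		DRM_INFO("ring %s timeout, but device unplugged, skipping.\n",
			 s_job->sched->name);
		return;
	}

	/* ... the normal recovery path runs entirely inside the SRCU
	 * read section; drm_dev_unplug() waits for it to finish ...
	 */

	drm_dev_exit(idx);
}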

Regards,
Luben

> 
> So either both or nothing.
> -Daniel
> 
>>
>> Andrey
>>
>>
>>> -Daniel
>>>
>>>> Andrey
>>>>
>>>>
>>>>>> +
>>>>>>    static void
>>>>>>    amdgpu_pci_remove(struct pci_dev *pdev)
>>>>>>    {
>>>>>>    	struct drm_device *dev = pci_get_drvdata(pdev);
>>>>>> +	struct amdgpu_device *adev = dev->dev_private;
>>>>>>    	drm_dev_unplug(dev);
>>>>>> +	amdgpu_cancel_all_tdr(adev);
>>>>>>    	ttm_bo_unmap_virtual_address_space(&adev->mman.bdev);
>>>>>>    	amdgpu_driver_unload_kms(dev);
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>>>> index 4720718..87ff0c0 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>>>> @@ -28,6 +28,8 @@
>>>>>>    #include "amdgpu.h"
>>>>>>    #include "amdgpu_trace.h"
>>>>>> +#include <drm/drm_drv.h>
>>>>>> +
>>>>>>    static void amdgpu_job_timedout(struct drm_sched_job *s_job)
>>>>>>    {
>>>>>>    	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
>>>>>> @@ -37,6 +39,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
>>>>>>    	memset(&ti, 0, sizeof(struct amdgpu_task_info));
>>>>>> +	if (drm_dev_is_unplugged(adev->ddev)) {
>>>>>> +		DRM_INFO("ring %s timeout, but device unplugged, skipping.\n",
>>>>>> +					  s_job->sched->name);
>>>>>> +		return;
>>>>>> +	}
>>>>>> +
>>>>>>    	if (amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
>>>>>>    		DRM_ERROR("ring %s timeout, but soft recovered\n",
>>>>>>    			  s_job->sched->name);
>>>>>> -- 
>>>>>> 2.7.4
>>>>>>
> 


