[PATCH v2 1/2] drm: Add GPU reset sysfs event

Christian König christian.koenig at amd.com
Mon Mar 21 09:30:05 UTC 2022


On 18.03.22 at 16:12, Rob Clark wrote:
> On Fri, Mar 18, 2022 at 12:42 AM Christian König
> <christian.koenig at amd.com> wrote:
>> Am 17.03.22 um 18:31 schrieb Rob Clark:
>>> On Thu, Mar 17, 2022 at 10:27 AM Daniel Vetter <daniel at ffwll.ch> wrote:
>>>> [SNIP]
>>>>> (At some point, I'd like to use scheduler for the replay, and actually
>>>>> use drm_sched_stop()/etc.. but last time I looked there were still
>>>>> some sched bugs in that area which prevented me from deleting a bunch
>>>>> of code ;-))
>>>> Not sure about your hw, but at least on intel replaying tends to just
>>>> result in follow-on fun. And that holds even more so the more complex a
>>>> workload is. This is why vk just dies immediately and does not try to
>>>> replay anything, offloading it to the app. Same with ARB robustness.
>>>> Afaik it's really only media and classic gl which insist that the driver
>>>> stack somehow recover.
>>> At least for us, each submit must be self-contained (i.e. not rely on
>>> previous GPU hw state), so in practice replay works out pretty well.
>>> The worst case is that subsequent submits from the same process fail as
>>> well (if they depended on something the crashing submit failed to write
>>> back to memory, but in that case they just crash as well and we move
>>> on to the next one). The recent gens (a5xx+ at least) are pretty good
>>> about quickly detecting problems and giving us an error irq.
>> Well I absolutely agree with Daniel.
>>
>> The whole replay thing AMD did in the scheduler is an absolute mess
>> and should probably be killed with fire.
>>
>> I strongly recommend not making the same mistake in other drivers.
>>
>> If you want to have some replay feature then please make it driver
>> specific and don't use anything from the infrastructure in the DRM
>> scheduler.
> hmm, perhaps I was not clear, but I'm only talking about re-emitting
> jobs *following* the faulting one (which could be from other contexts,
> etc).. not trying to restart the faulting job.
>
> You *absolutely* need to replay jobs following the faulting one, they
> could be from unrelated contexts/processes.  You can't just drop them
> on the floor.

Well you can, it just means that their contexts are lost as well.

If you re-submit jobs which were already pushed to the hardware you 
absolutely need to make sure of a couple of things (a rough sketch 
follows the list):

1. Don't race with your hardware. E.g. you need a way to stop processing 
on a timeout and then double-check once more whether things have 
finished in the meantime.

2. Make absolutely sure you never re-submit an operation when its 
dma-fence is already signaled. Otherwise you run into memory corruption.

3. When you have multiple engines it becomes really tricky, because then 
even innocent jobs might already have been started on other queues 
which are now hung as well.
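
To make points 1 and 2 a bit more concrete, here is a rough sketch of how 
a driver-specific timeout path can look. This is not amdgpu code; all the 
mydrv_* helpers, the ring/job structures and the active_jobs list are 
purely hypothetical:

#include <linux/dma-fence.h>
#include <linux/list.h>

/* Purely illustrative; struct mydrv_ring/mydrv_job and the mydrv_*
 * helpers are assumed to exist in the driver.
 */
static void mydrv_handle_timeout(struct mydrv_ring *ring)
{
        struct mydrv_job *job;

        /* 1. Stop the hardware first so it can't race with this handler
         * while we inspect and rebuild the ring contents.
         */
        mydrv_stop_ring(ring);

        /* The job which looked hung may have finished between the timeout
         * firing and the ring actually stopping, so double check before
         * declaring the hardware dead.
         */
        if (mydrv_ring_is_idle(ring)) {
                mydrv_start_ring(ring);
                return;
        }

        mydrv_reset_hw(ring);

        /* 2. Never re-emit a job whose dma-fence already signaled; its
         * buffers may have been freed or reused in the meantime, so
         * running it again corrupts memory.
         */
        list_for_each_entry(job, &ring->active_jobs, node) {
                if (dma_fence_is_signaled(job->hw_fence))
                        continue;
                mydrv_emit_job(ring, job);
        }

        mydrv_start_ring(ring);
}

Point 3 isn't covered here; with multiple engines the same dance has to 
be coordinated across all queues which might share state, and that is 
exactly where it becomes hardware specific.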

> Currently it is all driver specific, but I wanted to delete a lot of
> code and move to using scheduler to handle faults/timeouts (but
> blocked on that until [1] is resolved)

Please don't.

Especially don't use the pending_list or any of the scheduler 
infrastructure for GPU reset. We need to get rid of that again sooner or 
later.

This is extremely hardware dependent and pushing the amdgpu-specific 
handling into the GPU scheduler was a mistake we shouldn't repeat for 
other drivers.
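
If the scheduler's timeout callback stays the entry point (as it is for 
most drivers today), nothing stops the recovery itself from living 
entirely in driver code. A minimal sketch, again with hypothetical 
mydrv_* names:

#include <linux/workqueue.h>
#include <drm/gpu_scheduler.h>

static enum drm_gpu_sched_stat
mydrv_sched_timedout_job(struct drm_sched_job *sched_job)
{
        struct mydrv_job *job = to_mydrv_job(sched_job);

        /* Hand the problem to driver-specific reset work; don't walk the
         * scheduler's pending_list and don't rely on
         * drm_sched_resubmit_jobs() for the replay.
         */
        queue_work(job->ring->reset_wq, &job->ring->reset_work);

        return DRM_GPU_SCHED_STAT_NOMINAL;
}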

Regards,
Christian.

>
> [1] https://patchwork.kernel.org/project/dri-devel/patch/1630457207-13107-2-git-send-email-Monk.Liu@amd.com/
>
> BR,
> -R
>
>> Thanks,
>> Christian.
>>
>>> BR,
>>> -R
>>>
>>>> And recovering from a mess in userspace is a lot simpler than trying to
>>>> pull off the same magic in the kernel. Plus it also helps with a few of the
>>>> dma_fence rules, which is a nice bonus.
>>>> -Daniel
>>>>


