[PATCH v3 0/8] drm: Introduce sparse GEM shmem

Christian König christian.koenig at amd.com
Fri Apr 11 13:13:26 UTC 2025


On 11.04.25 at 15:00, Boris Brezillon wrote:
> On Fri, 11 Apr 2025 14:45:49 +0200
> Christian König <christian.koenig at amd.com> wrote:
>
>> On 11.04.25 at 14:02, Boris Brezillon wrote:
>>>>> I guess this leaves older GPUs that don't support incremental
>>>>> rendering in a bad place though.    
>>>> Well what's the handling there currently? Just crash when you're
>>>> OOM?  
>>> It's "alloc(GFP_KERNEL) and crash if it fails or times out", yes.  
>> Oh, please absolutely don't! Using GFP_KERNEL here is as evil as it
>> can be.
> I'm not saying that's what we should do, I'm just telling you what's
> done at the moment. The whole point of this series is to address some
> of that mess :P.

Then it is absolutely welcome that you are taking a look at this :D

Oh my, how did we miss that before?

>
>> Background is that you don't get a crash, nor an error message, nor
>> anything indicating what is happening.
> The job times out at some point, but we might get stuck in the fault
> handler waiting for memory, which is pretty close to a deadlock, I
> suspect.

I don't know those drivers that well, but at least for amdgpu the problem would be that the timeout handler needs to grab some of the locks which the memory management is holding while it waits for the timeout handler to make progress....

So that closes the circle perfectly. With a bit of luck you get a message after some time saying that the kernel is stuck, but since those are all sleeping locks I strongly doubt it.
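
Spelled out, the circle would look something like this (a hypothetical
chain, not taken from an actual hang report):

    GPU fault handler: alloc_page(GFP_KERNEL)
      -> enters direct reclaim
        -> shrinker waits on a fence from the hung job
          -> the fence only signals once the timeout/reset handler ran
            -> the timeout handler blocks on a lock held further up
               this chain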

As an immediate action, please provide patches which change those GFP_KERNEL allocations into GFP_NOWAIT.
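
Something along these lines, perhaps (a minimal sketch under made-up
names; my_bo and my_bo_fault_alloc() are hypothetical, not actual
panfrost/panthor code):

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hypothetical BO type, for illustration only. */
    struct my_bo {
            struct page **pages;
    };

    /*
     * Allocate one backing page from the GPU fault handler without
     * sleeping or entering direct reclaim.
     */
    static int my_bo_fault_alloc(struct my_bo *bo, pgoff_t idx)
    {
            /*
             * GFP_NOWAIT never sleeps, so this path can no longer wait
             * on reclaim while reclaim (indirectly) waits on our fences.
             */
            struct page *page = alloc_page(GFP_NOWAIT | __GFP_NOWARN);

            if (!page)
                    return -ENOMEM; /* fail the fault, let the job be killed */

            bo->pages[idx] = page;
            return 0;
    }

That way an allocation failure surfaces as a job fault/error instead of
a silently stuck system.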

Thanks in advance,
Christian.

>
>> You just get a deadlocked system with the user wondering why the heck
>> the system doesn't respond any more.
> Not sure that's a true deadlock, for the reason explained before (no
> shrinker in panthor, and panfrost only reclaims idle BOs, so no waits
> on fences there either), but that doesn't make things great either.
