[PATCH v4 09/14] drm/panthor: Add the heap logical block

Steven Price <steven.price@arm.com>
Thu Feb 15 09:34:53 UTC 2024


On 14/02/2024 17:33, Boris Brezillon wrote:
> On Mon, 12 Feb 2024 11:40:55 +0000
> Steven Price <steven.price@arm.com> wrote:
> 
>> On 22/01/2024 16:30, Boris Brezillon wrote:
>>> Tiler heap growing requires some kernel driver involvement: when the
>>> tiler runs out of heap memory, it raises an exception that is handled
>>> directly by the firmware if free heap chunks are available in the heap
>>> context, or passed back to the kernel otherwise. The heap helpers will
>>> be used by the scheduler logic to allocate more heap chunks to a heap
>>> context when this situation arises.
>>>
>>> Heap context creation is explicitly requested by userspace (using
>>> the TILER_HEAP_CREATE ioctl), and the returned context is attached to a
>>> queue through some command stream instruction.
>>>
>>> All the kernel does is keep the list of heap chunks allocated to a
>>> context, so the chunks can be freed when TILER_HEAP_DESTROY is called,
>>> and the list extended when the FW requests a new chunk.
>>>
>>> v4:
>>> - Rework locking to allow concurrent calls to panthor_heap_grow()
>>> - Add a helper to return a heap chunk if we couldn't pass it to the
>>>   FW because the group was scheduled out
>>>
>>> v3:
>>> - Add a FIXME for the heap OOM deadlock
>>> - Use the panthor_kernel_bo abstraction for the heap context and heap
>>>   chunks
>>> - Drop the panthor_heap_gpu_ctx struct as it is opaque to the driver
>>> - Ensure that the heap context is aligned to the GPU cache line size
>>> - Minor code tidy ups
>>>
>>> Co-developed-by: Steven Price <steven.price@arm.com>
>>> Signed-off-by: Steven Price <steven.price@arm.com>
>>> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>  
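
For readers skimming the thread, the bookkeeping described in the quoted
cover letter amounts to roughly the following. This is an illustrative
sketch only: every struct, field, and helper name below is hypothetical,
not the actual patch API.

    #include <linux/err.h>
    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct panthor_heap_chunk {
    	struct list_head node;		/* links into panthor_heap::chunks */
    	struct panthor_kernel_bo *bo;	/* backing memory for this chunk */
    };

    struct panthor_heap {
    	struct list_head chunks;	/* all chunks owned by this heap context */
    	struct mutex lock;		/* serializes concurrent grow/destroy */
    	u32 chunk_count;		/* current number of chunks */
    	u32 max_chunks;			/* growth limit fixed at creation time */
    };

    /* Hypothetical helpers, assumed to exist elsewhere in the driver. */
    static struct panthor_heap_chunk *alloc_heap_chunk(struct panthor_heap *heap);
    static u64 heap_chunk_gpu_va(struct panthor_heap_chunk *chunk);

    /* Called by the scheduler when the FW reports a tiler OOM event. */
    static int heap_grow_sketch(struct panthor_heap *heap, u64 *new_chunk_gpu_va)
    {
    	struct panthor_heap_chunk *chunk;
    	int ret = 0;

    	mutex_lock(&heap->lock);
    	if (heap->chunk_count >= heap->max_chunks) {
    		/* Growth limit reached: report OOM, the job will fail. */
    		ret = -ENOMEM;
    		goto out_unlock;
    	}

    	/* Blocking allocation for now; see the FIXME discussion below. */
    	chunk = alloc_heap_chunk(heap);
    	if (IS_ERR(chunk)) {
    		ret = PTR_ERR(chunk);
    		goto out_unlock;
    	}

    	list_add_tail(&chunk->node, &heap->chunks);
    	heap->chunk_count++;
    	*new_chunk_gpu_va = heap_chunk_gpu_va(chunk);

    out_unlock:
    	mutex_unlock(&heap->lock);
    	return ret;
    }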
>>
>> It looks fine, but there's a confusing FIXME comment:
>>
>>> +	/* FIXME: panthor_alloc_heap_chunk() triggers a kernel BO creation, which
>>> +	 * relies on blocking allocations (both for the BO itself, and backing
>>> +	 * memory), which might cause a deadlock because we're called from a context
>>> +	 * where we hold the panthor scheduler lock, thus preventing job cleanups
>>> +	 * that could free up some memory. The jobs themselves will timeout, but
>>> +	 * we'll still be blocked there. The only solution here is to implement
>>> +	 * something similar to shmem_sg_alloc_table() in i915, so we can do
>>> +	 * non-blocking allocations, and just kill the job when we run out-of-memory
>>> +	 * for the tiler context.
>>> +	 */  
>>
>> Whereas at the call site (group_process_tiler_oom()) there's the comment:
>>
>>> 		/* We do the allocation without holding the scheduler lock to avoid
>>> 		 * blocking the scheduling.
>>> 		 */  
>>
>> AFAICT that FIXME comment can just be deleted now. Assuming you agree
>> then with that change:
> 
> The FIXME is no longer accurate indeed, but I'd like to keep a FIXME
> here to reflect the fact the solution we have is not the one we want, as
> it prevents the GPU from immediately falling back to the user provided
> OOM exception handler, or killing the job if there's no way it can
> reclaim tiler memory.
> 
> How about:
> 
> FIXME: panthor_alloc_heap_chunk() triggers a kernel BO creation, which
> goes through the blocking allocation path. Ultimately, we want
> a non-blocking allocation, so we can immediately report to the FW when
> the system is running out of memory. In that case, the FW can call a
> user-provided exception handler, which might try to free some tiler
> memory by issuing an intermediate fragment job. If the exception handler
> can't do anything, it will flag the queue as faulty so the job that
> triggered this tiler chunk allocation and all further jobs in this
> queue fail immediately instead of having to wait for the job
> timeout.

Sounds good - it was mostly the talk of deadlock that worried me. The
code as it stands is good enough for merging (as it's not going to
deadlock the kernel) but we can obviously look to improve the low-memory
behaviour in future.
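
For the record, that future improvement could look roughly like the
sketch below: attempt the chunk allocation with non-blocking GFP flags
and report the failure to the FW instead of waiting in reclaim. This is
untested and purely illustrative, reusing the hypothetical names from
the sketch earlier in the thread.

    /*
     * Non-blocking variant, sketched from the FIXME discussion above. On
     * allocation failure we return immediately so the FW can invoke the
     * user-provided OOM exception handler, or flag the queue as faulty,
     * instead of sleeping in reclaim while holding up scheduling.
     */
    static int heap_grow_nonblocking_sketch(struct panthor_heap *heap,
    					    u64 *new_chunk_gpu_va)
    {
    	struct panthor_heap_chunk *chunk;

    	/* Hypothetical allocator variant that takes gfp flags. */
    	chunk = alloc_heap_chunk_gfp(heap, GFP_NOWAIT | __GFP_NOWARN);
    	if (IS_ERR(chunk))
    		return PTR_ERR(chunk);

    	*new_chunk_gpu_va = heap_chunk_gpu_va(chunk);
    	return 0;
    }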

Steve

>>
>> Reviewed-by: Steven Price <steven.price@arm.com>
