[Intel-gfx] [RFC PATCH 5/7] drm/ttm: add range busy check for range manager

Christian König christian.koenig at amd.com
Thu Mar 17 07:00:11 UTC 2022


On 16.03.22 at 16:28, Robert Beckett wrote:
>
>
> On 16/03/2022 14:39, Christian König wrote:
>> On 16.03.22 at 15:26, Robert Beckett wrote:
>>>
>>> [SNIP]
>>> this is where I replace an existing range check via drm_mm with the 
>>> range check I added in this patch.
>>
>> Mhm, I still don't get the use case from the code, but I don't think 
>> it matters any more.
>>
>>>>> I suppose we could add another drm_mm range tracker just for 
>>>>> testing and shadow-track each allocation in the range, but that 
>>>>> seemed like a lot of extra infrastructure with no general runtime use.
>>>>
>>>> I have no idea what you mean with that.
>>>
>>> I meant that as a potential solution: to track allocations without a 
>>> range check, we would need to add something external, e.g. a shadow 
>>> drm_mm range tracker, a bitmask across the range, or sticking 
>>> objects in a list, etc.
>>
>> Ah! So you are trying to get access to the drm_mm inside the 
>> ttm_range_manager and not add some additional range check function! 
>> Now I got your use case.
>
> Well, specifically I was trying to avoid having to get access to the 
> drm_mm.
> I wanted to maintain an abstract interface at the resource manager 
> level, hence the RFC asking whether we could add a range check to 
> ttm_resource_manager_func.
>
> I don't like the idea of code external to ttm having to poke into the 
> implementation details of the manager to get its underlying drm_mm.

The purpose of the ttm_range_manager is to implement a base class which 
the drivers then extend with more specific functionality.

I have it on my TODO list to properly export the ttm_range_manager 
functions and use them to simplify the amdgpu_gtt_mgr.c implementation.

So accessing the drm_mm for a test case sounds perfectly fine to me as 
long as you document what is happening, e.g. by adding a wrapper 
function that returns a pointer to the drm_mm.
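
Something like this in ttm_range_manager.c should be enough (completely 
untested sketch, and the ttm_range_man_mm name is just a suggestion, it 
doesn't exist yet):

/* Expose the embedded drm_mm, e.g. so tests can inspect allocations. */
struct drm_mm *ttm_range_man_mm(struct ttm_resource_manager *man)
{
	struct ttm_range_manager *rman = to_range_manager(man);

	return &rman->mm;
}
EXPORT_SYMBOL(ttm_range_man_mm);

Plus a matching declaration in include/drm/ttm/ttm_range_manager.h.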

>
>>
>>>>> Would you mind explaining the rationale for removing range checks? 
>>>>> They seem to me like a natural fit for a memory manager.
>>>>
>>>> TTM manages buffer objects and resources, not address space. The 
>>>> lpfn/fpfn parameters for the resource allocators are actually used 
>>>> as two independent parameters and do not define any range. We 
>>>> just keep the names for historical reasons.
>>>>
>>>> The only places where we still use and compare them as ranges are 
>>>> ttm_resource_compat() and ttm_bo_eviction_valuable(), and I already 
>>>> have patches to clean those up and move them into the backend 
>>>> resource handling.
>>>
>>> Except the ttm_range_manager still seems to use them as a range 
>>> specifier.
>>
>> Yeah, because the range manager is the backend which handles ranges 
>> using the drm_mm :)
>>
>>> If the general design going forward is to not consider ranges, how 
>>> would you recommend constructing buffers around pre-allocated 
>>> regions, e.g. UEFI frame buffers whose range is dictated externally?
>>
>> Call ttm_bo_mem_space() with the fpfn/lpfn filled in as required. See 
>> function amdgpu_bo_create_kernel_at() for an example.
>
> ah, I see, thanks.
>
> To allow code similar to before, which was conceptually just checking 
> whether a range is currently free, would you be okay with a new 
> ttm_bo_mem_try_space, which does not force an eviction but 
> instead returns -EBUSY?

You can already do that by setting num_busy_placement in the 
ttm_placement to zero. That should prevent any eviction.
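
Roughly like this completely untested sketch (the probe_range_free name 
and the TTM_PL_VRAM mem_type are just placeholders here, and the BO must 
already be reserved by the caller):

#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_resource.h>

/* Try to place a reserved BO into [fpfn, lpfn] without evicting anything. */
static int probe_range_free(struct ttm_buffer_object *bo,
			    unsigned int fpfn, unsigned int lpfn)
{
	struct ttm_operation_ctx ctx = { .interruptible = false };
	struct ttm_place place = {
		.fpfn = fpfn,
		.lpfn = lpfn,
		.mem_type = TTM_PL_VRAM,	/* placeholder, use the manager under test */
	};
	struct ttm_placement placement = {
		.num_placement = 1,
		.placement = &place,
		.num_busy_placement = 0,	/* no busy placements -> no eviction */
	};
	struct ttm_resource *res;
	int ret;

	ret = ttm_bo_mem_space(bo, &placement, &res, &ctx);
	if (ret)
		return ret;	/* range is not currently free */

	/* the range was free, give the space back again */
	ttm_resource_free(bo, &res);
	return 0;
}

Without busy placements ttm_bo_mem_space() simply fails instead of 
starting to evict when the range is occupied.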

Regards,
Christian.


>
> If so, the test can try to allocate and immediately free again on 
> success, which would imply the range was free.
>
>>
>> Regards,
>> Christian.
>>
>>>
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>>
>>>>>>> Signed-off-by: Robert Beckett <bob.beckett at collabora.com>
>>>>>>> ---
>>>>>>>   drivers/gpu/drm/ttm/ttm_range_manager.c | 21 +++++++++++++++++++++
>>>>>>>   include/drm/ttm/ttm_range_manager.h     |  3 +++
>>>>>>>   2 files changed, 24 insertions(+)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c
>>>>>>> index 8cd4f3fb9f79..5662627bb933 100644
>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_range_manager.c
>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_range_manager.c
>>>>>>> @@ -206,3 +206,24 @@ int ttm_range_man_fini_nocheck(struct ttm_device *bdev,
>>>>>>>       return 0;
>>>>>>>   }
>>>>>>>   EXPORT_SYMBOL(ttm_range_man_fini_nocheck);
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * ttm_range_man_range_busy - Check whether anything is allocated within a range
>>>>>>> + *
>>>>>>> + * @man: memory manager to check
>>>>>>> + * @fpfn: first page number to check
>>>>>>> + * @lpfn: last page number to check
>>>>>>> + *
>>>>>>> + * Return: true if anything is allocated within the range, false otherwise.
>>>>>>> + */
>>>>>>> +bool ttm_range_man_range_busy(struct ttm_resource_manager *man,
>>>>>>> +                  unsigned fpfn, unsigned lpfn)
>>>>>>> +{
>>>>>>> +    struct ttm_range_manager *rman = to_range_manager(man);
>>>>>>> +    struct drm_mm_node *node;
>>>>>>> +
>>>>>>> +    drm_mm_for_each_node_in_range(node, &rman->mm, fpfn, lpfn + 1)
>>>>>>> +        return true;
>>>>>>> +    return false;
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(ttm_range_man_range_busy);
>>>>>>> diff --git a/include/drm/ttm/ttm_range_manager.h b/include/drm/ttm/ttm_range_manager.h
>>>>>>> index 7963b957e9ef..86794a3f9101 100644
>>>>>>> --- a/include/drm/ttm/ttm_range_manager.h
>>>>>>> +++ b/include/drm/ttm/ttm_range_manager.h
>>>>>>> @@ -53,4 +53,7 @@ static __always_inline int ttm_range_man_fini(struct ttm_device *bdev,
>>>>>>>       BUILD_BUG_ON(__builtin_constant_p(type) && type >= TTM_NUM_MEM_TYPES);
>>>>>>>       return ttm_range_man_fini_nocheck(bdev, type);
>>>>>>>   }
>>>>>>> +
>>>>>>> +bool ttm_range_man_range_busy(struct ttm_resource_manager *man,
>>>>>>> +                  unsigned fpfn, unsigned lpfn);
>>>>>>>   #endif
>>>>>>
>>>>
>>


