[PATCH] drm: round_up the size to the alignment value
Arunpravin Paneer Selvam
arunpravin.paneerselvam at amd.com
Wed Mar 30 20:44:55 UTC 2022
On 30/03/22 2:42 pm, Christian König wrote:
> Am 30.03.22 um 11:20 schrieb Arunpravin Paneer Selvam:
>>
>> On 30/03/22 2:37 pm, Christian König wrote:
>>> Am 30.03.22 um 11:04 schrieb Arunpravin Paneer Selvam:
>>>> Round up the size value to the min_page_size and trim the last block to
>>>> the required size.
>>>>
>>>> This fixes a bug that shows up when the size is not aligned to the
>>>> min_page_size. For example, Unigine Heaven issues allocation requests where
>>>> the required number of pages is 257 while the alignment request is 256
>>>> pages. To allocate the leftover 1 page, the loop keeps iterating and finds
>>>> an order value of 0; when that is compared against min_order = 8, it
>>>> triggers the BUG_ON(order < min_order). To avoid this issue we round_up the
>>>> size value to the min_page_size and trim the last block to the computed
>>>> required size.
>>> Well, Matthew and you convinced me to *not* do it like this.
>>>
>>> Has that conclusion changed somehow?
>>>
>> Yes, he is now okay with handling the rounding + trimming in drm buddy
>
> Yeah, but I'm no longer :)
>
> How do we then handle the detection of contiguous allocation?
>
> As I said we can do that like:
> 1. alloc
> 2. check if we only have a single node
I think checking only that the list is a single node would also let all the
power-of-2 requests (1 page, 2 pages, 4 pages, etc.) that come back as a
single node without the CONTIGUOUS flag enabled enter the trim function,
which would then simply return since the original size == roundup_pow_of_two
size.

Can we handle both situations (alignment round-up trimming + contiguous
trimming) with a single if condition like the one below?

if (cur_size != (pages << PAGE_SHIFT))

where
cur_size = the size value before round_up() (alignment rounding up) or
roundup_pow_of_two() (contiguous rounding up)
pages = the size value after round_up() (alignment rounding up) or
roundup_pow_of_two() (contiguous rounding up), expressed in pages

If there is a difference between these two numbers, we enter the trim block:
- For a single node, we pass the original size (contiguous trimming).
- For multiple nodes, we fetch the last block and trim it to the computed
  size (alignment round-up trimming).
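To make that concrete, here is a minimal sketch of the unified trim block I
have in mind (illustrative only, not the final patch; it assumes cur_size
holds the caller's original size and size holds the rounded-up value, as in
the patch quoted below, and it detects the single-node case by the allocated
list becoming empty once the last block is moved out):

        if (cur_size != size) {
                struct drm_buddy_block *block;
                LIST_HEAD(trim_list);
                u64 new_size;

                /* Pull the last (or only) block out for trimming. */
                block = list_last_entry(&allocated, typeof(*block), link);
                list_move_tail(&block->link, &trim_list);

                if (list_empty(&allocated))
                        /* Single node: contiguous case, trim back to the original size. */
                        new_size = cur_size;
                else
                        /* Multiple nodes: cut only the round-up overshoot off the last block. */
                        new_size = drm_buddy_block_size(mm, block) -
                                   (size - cur_size);

                drm_buddy_block_trim(mm, new_size, &trim_list);
                list_splice_tail(&trim_list, &allocated);
        }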
> 3. trim
>
> But if we include the trim here we can't do it any more.
>
> Only alternative would then be to inspect each node and see if it
> follows directly behind the predecessor.
>
OK. So we handle both the contiguous allocation trimming and the alignment
round-up trimming (last block only) in the amdgpu and i915 drivers, and in
drm buddy we just keep a check that returns -EINVAL if the size is not
aligned to min_page_size.
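Something along these lines, as a sketch only (the exact placement inside
drm_buddy_alloc_blocks() is still to be decided):

        /* Reject unaligned sizes instead of rounding them up in drm buddy. */
        if (!IS_ALIGNED(size, min_page_size))
                return -EINVAL;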
If you agree with the above, I have included the alignment round-up trimming
(last block only) in the same place where we currently trim for the
contiguous allocation. I will send the patch for review.
> Regards,
> Christian.
>
>>
>>> Regards,
>>> Christian.
>>>
>>>> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam at amd.com>
>>>> ---
>>>> drivers/gpu/drm/drm_buddy.c | 31 +++++++++++++++++++++++++++++++
>>>> 1 file changed, 31 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
>>>> index 72f52f293249..98d7ec359b08 100644
>>>> --- a/drivers/gpu/drm/drm_buddy.c
>>>> +++ b/drivers/gpu/drm/drm_buddy.c
>>>> @@ -641,6 +641,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
>>>> unsigned int min_order, order;
>>>> unsigned long pages;
>>>> LIST_HEAD(allocated);
>>>> + u64 cur_size;
>>>> int err;
>>>>
>>>> if (size < mm->chunk_size)
>>>> @@ -665,6 +666,11 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
>>>> if (start + size == end)
>>>> return __drm_buddy_alloc_range(mm, start, size, blocks);
>>>>
>>>> + cur_size = size;
>>>> +
>>>> + if (!IS_ALIGNED(size, min_page_size))
>>>> + size = round_up(size, min_page_size);
>>>> +
>>>> pages = size >> ilog2(mm->chunk_size);
>>>> order = fls(pages) - 1;
>>>> min_order = ilog2(min_page_size) - ilog2(mm->chunk_size);
>>>> @@ -702,6 +708,31 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
>>>> break;
>>>> } while (1);
>>>>
>>>> +
>>>> + /*
>>>> + * If size value rounded up to min_page_size, trim the last block
>>>> + * to the required size
>>>> + */
>>>> + if (cur_size != size) {
>>>> + struct drm_buddy_block *trim_block;
>>>> + LIST_HEAD(trim_list);
>>>> + u64 required_size;
>>>> +
>>>> + trim_block = list_last_entry(&allocated, typeof(*trim_block), link);
>>>> + list_move_tail(&trim_block->link, &trim_list);
>>>> + /*
>>>> + * Compute the required_size value by subtracting (aligned size -
>>>> + * original size) from the last block size
>>>> + */
>>>> + required_size = drm_buddy_block_size(mm, trim_block) - (size - cur_size);
>>>> +
>>>> + drm_buddy_block_trim(mm,
>>>> + required_size,
>>>> + &trim_list);
>>>> +
>>>> + list_splice_tail(&trim_list, &allocated);
>>>> + }
>>>> +
>>>> list_splice_tail(&allocated, blocks);
>>>> return 0;
>>>>
>>>>
>>>> base-commit: ec57376fba5abc0e571617ff88e2ade7970c2e4b
>