[PATCH libdrm] amdgpu: add amdgpu_bo_handle_type_kms_noimport

Christian König christian.koenig at amd.com
Tue Jul 17 08:35:15 UTC 2018


Am 17.07.2018 um 10:30 schrieb Michel Dänzer:
> On 2018-07-17 10:19 AM, Christian König wrote:
>> Am 17.07.2018 um 10:03 schrieb Michel Dänzer:
>>> On 2018-07-17 09:59 AM, Christian König wrote:
>>>> Am 17.07.2018 um 09:46 schrieb Michel Dänzer:
>>>>> On 2018-07-17 09:33 AM, Christian König wrote:
>>>>>> Am 17.07.2018 um 09:26 schrieb Michel Dänzer:
>>>>>> [SNIP]
>>>>> All that should be needed is one struct list_head per BO, 16 bytes on
>>>>> 64-bit.
>>>> Plus malloc overhead, and that for *every* BO the application/driver
>>>> allocates.
>>> The struct list_head can be stored in struct amdgpu_bo, no additional
>>> malloc necessary.
>> Well, that sounds like we are not talking about the same code, doesn't it?
>>
>> IIRC the hash table implementation in libdrm uses an ever-growing
>> array for the BOs and *NOT* a linked list.
> So let's use something more suitable, e.g.:
>
> An array of 2^n struct list_head in struct amdgpu_device for the hash
> buckets. The BO's handle is hashed to the bucket number
>
>   handle & (2^n - 1)
>
> and linked in there via struct list_head in struct amdgpu_bo.
> amdgpu_bo_alloc and amdgpu_create_bo_from_user_mem add the handle at the
> end of the list, amdgpu_bo_import adds it at or moves it to the beginning.

Yeah, that would certainly reduce the problem quite a bit and would
allow us to get rid of the util_hash* implementation, which always
seemed a bit of overkill to me.
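
Roughly, such a scheme could look like the sketch below (illustration
only; the struct members, bucket count and helper names are assumptions
made up for this example, not the actual libdrm amdgpu internals):

/* Minimal sketch of the per-device bucket array + per-BO embedded list
 * head that Michel describes above. All names are hypothetical. */

#include <stdint.h>
#include <stddef.h>

#define BO_HASH_BITS    7
#define BO_HASH_BUCKETS (1u << BO_HASH_BITS)      /* 2^n buckets */

struct list_head {
	struct list_head *prev, *next;
};

struct amdgpu_bo {                 /* illustrative subset */
	uint32_t handle;           /* KMS handle used as the hash key */
	struct list_head list;     /* embedded, 16 bytes on 64-bit, no malloc */
};

struct amdgpu_device {             /* illustrative subset */
	struct list_head bo_buckets[BO_HASH_BUCKETS];
};

static void bo_hash_init(struct amdgpu_device *dev)
{
	for (unsigned i = 0; i < BO_HASH_BUCKETS; i++)
		dev->bo_buckets[i].prev = dev->bo_buckets[i].next =
			&dev->bo_buckets[i];
}

static struct list_head *bo_bucket(struct amdgpu_device *dev, uint32_t handle)
{
	/* handle & (2^n - 1) selects the bucket */
	return &dev->bo_buckets[handle & (BO_HASH_BUCKETS - 1)];
}

static void list_add(struct list_head *item, struct list_head *head)
{
	item->next = head->next;
	item->prev = head;
	head->next->prev = item;
	head->next = item;
}

static void list_del(struct list_head *item)
{
	item->prev->next = item->next;
	item->next->prev = item->prev;
}

/* BO creation would append the handle at the end of its bucket; import
 * would add a new handle at, or move an existing one to, the front. */
static void bo_hash_add_tail(struct amdgpu_device *dev, struct amdgpu_bo *bo)
{
	list_add(&bo->list, bo_bucket(dev, bo->handle)->prev);
}

static struct amdgpu_bo *bo_hash_lookup(struct amdgpu_device *dev,
					uint32_t handle)
{
	struct list_head *head = bo_bucket(dev, handle);
	struct list_head *cur;

	for (cur = head->next; cur != head; cur = cur->next) {
		struct amdgpu_bo *bo =
			(struct amdgpu_bo *)((char *)cur -
					     offsetof(struct amdgpu_bo, list));
		if (bo->handle == handle) {
			/* move to front so repeated imports are found quickly */
			list_del(&bo->list);
			list_add(&bo->list, head);
			return bo;
		}
	}
	return NULL;
}

Since the list head lives inside the BO itself, there is no extra
allocation per BO, and moving imported handles to the front keeps the
lookups done by amdgpu_bo_import() short.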

I actually don't see a reason why amdgpu_create_bo_from_user_mem()
should add the handle at all, since those BOs are not exportable.

Christian.

