[PATCH v2 4/7] drm/amdgpu: Fix out-of-bounds issue in user fence

Christian König christian.koenig at amd.com
Tue Dec 17 09:46:35 UTC 2024


Hi Arun,

On 17.12.24 at 07:20, Paneer Selvam, Arunpravin wrote:
> Hi Christian,
>
>
> On 12/13/2024 6:29 PM, Christian König wrote:
>> On 13.12.24 at 12:24, Paneer Selvam, Arunpravin wrote:
>>> Hi Christian,
>>>
>>>
>>> On 12/13/2024 4:13 PM, Christian König wrote:
>>>> On 12.12.24 at 15:25, Arunpravin Paneer Selvam wrote:
>>>>> Fix an out-of-bounds issue in userq fence create when
>>>>> accessing the userq xa structure. Add a lock to
>>>>> protect against the race condition.
>>>>>
>>>>> BUG: KASAN: slab-out-of-bounds in amdgpu_userq_fence_create+0x726/0x880 [amdgpu]
>>>>> [  +0.000006] Call Trace:
>>>>> [  +0.000005]  <TASK>
>>>>> [  +0.000005]  dump_stack_lvl+0x6c/0x90
>>>>> [  +0.000011]  print_report+0xc4/0x5e0
>>>>> [  +0.000009]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000008]  ? kasan_complete_mode_report_info+0x26/0x1d0
>>>>> [  +0.000007]  ? amdgpu_userq_fence_create+0x726/0x880 [amdgpu]
>>>>> [  +0.000405]  kasan_report+0xdf/0x120
>>>>> [  +0.000009]  ? amdgpu_userq_fence_create+0x726/0x880 [amdgpu]
>>>>> [  +0.000405]  __asan_report_store8_noabort+0x17/0x20
>>>>> [  +0.000007]  amdgpu_userq_fence_create+0x726/0x880 [amdgpu]
>>>>> [  +0.000406]  ? __pfx_amdgpu_userq_fence_create+0x10/0x10 [amdgpu]
>>>>> [  +0.000408]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000008]  ? ttm_resource_move_to_lru_tail+0x235/0x4f0 [ttm]
>>>>> [  +0.000013]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000008]  amdgpu_userq_signal_ioctl+0xd29/0x1c70 [amdgpu]
>>>>> [  +0.000412]  ? __pfx_amdgpu_userq_signal_ioctl+0x10/0x10 [amdgpu]
>>>>> [  +0.000404]  ? try_to_wake_up+0x165/0x1840
>>>>> [  +0.000010]  ? __pfx_futex_wake_mark+0x10/0x10
>>>>> [  +0.000011]  drm_ioctl_kernel+0x178/0x2f0 [drm]
>>>>> [  +0.000050]  ? __pfx_amdgpu_userq_signal_ioctl+0x10/0x10 [amdgpu]
>>>>> [  +0.000404]  ? __pfx_drm_ioctl_kernel+0x10/0x10 [drm]
>>>>> [  +0.000043]  ? __kasan_check_read+0x11/0x20
>>>>> [  +0.000007]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000007]  ? __kasan_check_write+0x14/0x20
>>>>> [  +0.000008]  drm_ioctl+0x513/0xd20 [drm]
>>>>> [  +0.000040]  ? __pfx_amdgpu_userq_signal_ioctl+0x10/0x10 [amdgpu]
>>>>> [  +0.000407]  ? __pfx_drm_ioctl+0x10/0x10 [drm]
>>>>> [  +0.000044]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000007]  ? _raw_spin_lock_irqsave+0x99/0x100
>>>>> [  +0.000007]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
>>>>> [  +0.000006]  ? __rseq_handle_notify_resume+0x188/0xc30
>>>>> [  +0.000008]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000008]  ? srso_return_thunk+0x5/0x5f
>>>>> [  +0.000006]  ? _raw_spin_unlock_irqrestore+0x27/0x50
>>>>> [  +0.000010]  amdgpu_drm_ioctl+0xcd/0x1d0 [amdgpu]
>>>>> [  +0.000388]  __x64_sys_ioctl+0x135/0x1b0
>>>>> [  +0.000009]  x64_sys_call+0x1205/0x20d0
>>>>> [  +0.000007]  do_syscall_64+0x4d/0x120
>>>>> [  +0.000008]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>>>> [  +0.000007] RIP: 0033:0x7f7c3d31a94f
>>>>>
>>>>> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam at amd.com>
>>>>> ---
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c | 4 +++-
>>>>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>> index 3a88f754a395..49dc78c2f0d7 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>> @@ -229,6 +229,7 @@ int amdgpu_userq_fence_create(struct amdgpu_usermode_queue *userq,
>>>>>           unsigned long index, count = 0;
>>>>>           int i = 0;
>>>>>
>>>>> +        xa_lock(&userq->fence_drv_xa);
>>>>
>>>> Don't you allocate userq_fence->fence_drv_array after counting the
>>>> number of objects?
>>>>
>>>> If yes, then you need to drop the lock again for that allocation.
>>> We are allocating memory for userq_fence->fence_drv_array using
>>> kvmalloc_array(). Should we drop the lock for this memory allocation
>>> and then reacquire it for the code block that moves the fence_drv
>>> references from userq->fence_drv_xa to userq_fence->fence_drv_array?
>>> Is this correct?
>>
>> Yes, that should probably do it.
> I tried to acquire the lock only for the xa_for_each() blocks and not
> for the memory allocation, but if we don't acquire the lock for the
> kvmalloc_array() allocation as well, I see the KASAN out-of-bounds BUG
> again. Can we acquire the lock for the kvmalloc_array() part too?

Then we would need to allocate that one with GFP_ATOMIC, which we should 
really try to avoid. The underlying problem is that as soon as we drop the 
lock for the allocation, another thread can add entries to 
userq->fence_drv_xa, so the count we just took becomes stale and the copy 
loop can overrun the array.

For now I think you should just use GFP_ATOMIC, and we add a big TODO 
that we need to fix that as soon as everything is working.
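
To make sure we mean the same thing, here is a rough sketch of the locked
section, completely untested and with error handling omitted; all the names
are taken from your patch context:

        xa_lock(&userq->fence_drv_xa);

        /* Count under the lock so the result can't go stale */
        xa_for_each(&userq->fence_drv_xa, index, stored_fence_drv)
                count++;

        /*
         * TODO: GFP_ATOMIC is only needed because we allocate while
         * holding the xa_lock. Rework this so that we can go back to
         * GFP_KERNEL here.
         */
        userq_fence->fence_drv_array =
                kvmalloc_array(count,
                               sizeof(*userq_fence->fence_drv_array),
                               GFP_ATOMIC);

        if (userq_fence->fence_drv_array) {
                xa_for_each(&userq->fence_drv_xa, index, stored_fence_drv) {
                        userq_fence->fence_drv_array[i] = stored_fence_drv;
                        /* __xa_erase() since we already hold the xa_lock */
                        __xa_erase(&userq->fence_drv_xa, index);
                        i++;
                }
        }

        userq_fence->fence_drv_array_count = i;
        xa_unlock(&userq->fence_drv_xa);

One caveat regarding my xa_destroy() suggestion below: IIRC xa_destroy()
takes the xa_lock internally, so it can't be called inside this locked
section. Either keep the __xa_erase() in the loop as above, or do the
xa_destroy() after the xa_unlock().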

Regards,
Christian.

>
> Thanks,
> Arun
>>
>> Regards,
>> Christian.
>>
>>>>
>>>>>           xa_for_each(&userq->fence_drv_xa, index, stored_fence_drv)
>>>>>               count++;
>>>>> @@ -240,12 +241,13 @@ int amdgpu_userq_fence_create(struct amdgpu_usermode_queue *userq,
>>>>>           if (userq_fence->fence_drv_array) {
>>>>>               xa_for_each(&userq->fence_drv_xa, index, stored_fence_drv) {
>>>>>                   userq_fence->fence_drv_array[i] = stored_fence_drv;
>>>>> -                xa_erase(&userq->fence_drv_xa, index);
>>>>> +                __xa_erase(&userq->fence_drv_xa, index);
>>>>
>>>> It's *much* more efficient to release all entries at once by 
>>>> calling xa_destroy() after the loop, I think.
>>> sure, I will try with xa_destroy().
>>>
>>> Thanks,
>>> Arun.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>                   i++;
>>>>>               }
>>>>>           }
>>>>>
>>>>>           userq_fence->fence_drv_array_count = i;
>>>>> +        xa_unlock(&userq->fence_drv_xa);
>>>>>       } else {
>>>>>           userq_fence->fence_drv_array = NULL;
>>>>>           userq_fence->fence_drv_array_count = 0;
>>>>
>>>
>>
>


