[PATCH 29/35] drm/amdgpu: svm bo enable_signal call condition

Felix Kuehling felix.kuehling at amd.com
Thu Jan 7 16:53:25 UTC 2021


On 2021-01-07 at 11:28 a.m., Christian König wrote:
> On 2021-01-07 at 17:16, Felix Kuehling wrote:
>> On 2021-01-07 at 5:56 a.m., Christian König wrote:
>>
>>> On 2021-01-07 at 04:01, Felix Kuehling wrote:
>>>> From: Alex Sierra <alex.sierra at amd.com>
>>>>
>>>> [why]
>>>> To support the SVM BO eviction mechanism.
>>>>
>>>> [how]
>>>> If a BO is created with the AMDGPU_AMDKFD_CREATE_SVM_BO flag set,
>>>> the enable_signal callback will be called inside amdgpu_evict_flags.
>>>> This also guts the BO by removing all placements, so that TTM won't
>>>> actually do an eviction. Instead it will discard the memory held by
>>>> the BO. This is needed for HMM migration to user mode system memory
>>>> pages.
>>> I don't think that this will work. What exactly are you doing here?
>> We discussed this a while ago when we talked about pipelined gutting.
>> And you actually helped us out with a fix for that
>> (https://patchwork.freedesktop.org/patch/379039/).
>
> That's not what I meant. The pipelined gutting is ok, but why the
> enable_signaling()?

That's what triggers our eviction fence callback,
amdkfd_fence_enable_signaling, which schedules the worker that does the
eviction. Without pipelined gutting we'd get that callback from the GPU
scheduler when it tries to execute the job that does the migration. With
pipelined gutting there is no such job, so we have to call this
somewhere ourselves.

I guess we could schedule the eviction worker directly without going
through the fence callback (see the sketch below). I think we did it
this way because it's more similar to our KFD BO eviction handling,
where the worker gets scheduled by the fence callback.
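
I.e., something like this in amdgpu_evict_flags, instead of the
dma_fence_enable_sw_signaling call (untested sketch; the efence local
is just for illustration, to_amdgpu_amdkfd_fence is the existing helper
that returns NULL for non-KFD fences):

    struct amdgpu_amdkfd_fence *efence = to_amdgpu_amdkfd_fence(fence);

    /* Untested: schedule the eviction worker directly rather than
     * triggering it through the fence's enable_signaling callback.
     */
    if (efence)
        kgd2kfd_schedule_evict_and_restore_process(efence->mm, fence);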

Regards,
  Felix


>
> Christian.
>
>>
>> SVM BOs are BOs in VRAM containing data for HMM ranges. When such a BO
>> is evicted by TTM, we do an HMM migration of the data to system memory
>> (triggered by kgd2kfd_schedule_evict_and_restore_process in patch 30).
>> That means we don't need TTM to copy the BO contents to GTT any more.
>> Instead we want to use pipelined gutting to allow the VRAM to be freed
>> once the fence signals that the HMM migration is done (the
>> dma_fence_signal call near the end of svm_range_evict_svm_bo_worker in
>> patch 28).
>>
>> Regards,
>>    Felix
>>
>>
>>> As Daniel pointed out, HMM and dma_fences are fundamentally
>>> incompatible.
>>>
>>> Christian.
>>>
>>>> Signed-off-by: Alex Sierra <alex.sierra at amd.com>
>>>> Signed-off-by: Felix Kuehling <Felix.Kuehling at amd.com>
>>>> ---
>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 14 ++++++++++++++
>>>>    1 file changed, 14 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> index f423f42cb9b5..62d4da95d22d 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> @@ -107,6 +107,20 @@ static void amdgpu_evict_flags(struct
>>>> ttm_buffer_object *bo,
>>>>        }
>>>>          abo = ttm_to_amdgpu_bo(bo);
>>>> +    if (abo->flags & AMDGPU_AMDKFD_CREATE_SVM_BO) {
>>>> +        struct dma_fence *fence;
>>>> +        struct dma_resv *resv = &bo->base._resv;
>>>> +
>>>> +        rcu_read_lock();
>>>> +        fence = rcu_dereference(resv->fence_excl);
>>>> +        if (fence && !fence->ops->signaled)
>>>> +            dma_fence_enable_sw_signaling(fence);
>>>> +
>>>> +        placement->num_placement = 0;
>>>> +        placement->num_busy_placement = 0;
>>>> +        rcu_read_unlock();
>>>> +        return;
>>>> +    }
>>>>        switch (bo->mem.mem_type) {
>>>>        case AMDGPU_PL_GDS:
>>>>        case AMDGPU_PL_GWS:
>