[PATCH 3/5] drm/amdgpu: GPU TLB flush API moved to amdgpu_amdkfd
Felix Kuehling
felix.kuehling at amd.com
Mon Dec 23 19:34:25 UTC 2019
On 2019-12-20 7:01 p.m., Yong Zhao wrote:
>
> On 2019-12-20 6:50 p.m., Yong Zhao wrote:
>> Inline.
>>
>> On 2019-12-20 4:35 p.m., Felix Kuehling wrote:
>>> On 2019-12-20 1:24, Alex Sierra wrote:
>>>> [Why]
>>>> The TLB flush method through the kfd2kgd interface has been
>>>> deprecated. The implementation belongs in the amdgpu_amdkfd API.
>>>>
>>>> [How]
>>>> Implement the TLB flush functions in amdgpu_amdkfd.
>>>>
>>>> Change-Id: Ic51cccdfe6e71288d78da772b6e1b6ced72f8ef7
>>>> Signed-off-by: Alex Sierra <alex.sierra at amd.com>
>>>
>>> Looks good to me. See my comment about the TODO inline.
>>>
>>>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 32 ++++++++++++++++++++++
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 2 ++
>>>> drivers/gpu/drm/amd/amdkfd/kfd_process.c | 8 ++++--
>>>> 3 files changed, 39 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
>>>> index d3da9dde4ee1..b7f6e70c5762 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
>>>> @@ -634,6 +634,38 @@ bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)
>>>> return false;
>>>> }
>>>> +int amdgpu_amdkfd_flush_gpu_tlb_vmid(struct kgd_dev *kgd, uint16_t vmid)
>>>> +{
>>>> + struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
>>>> + /* TODO: condition missing for FAMILY above NV */
>>>
>>> I'm not sure what's missing here. NV and above don't need any
>>> special treatment. Since SDMA is connected to GFXHUB on NV, only the
>>> GFXHUB needs to be flushed.
>>>
>>> Regards,
>>> Felix
>>>
>>>
>>>> + if (adev->family == AMDGPU_FAMILY_AI) {
>>>> + int i;
>>>> +
>>>> + for (i = 0; i < adev->num_vmhubs; i++)
>>>> + amdgpu_gmc_flush_gpu_tlb(adev, vmid, i, 0);
>>>> + } else {
>>>> + amdgpu_gmc_flush_gpu_tlb(adev, vmid, AMDGPU_GFXHUB_0, 0);
>>>> + }
>>
>> This if/else can be unified into:
>>
>> for (i = 0; i < adev->num_vmhubs; i++)
>>     amdgpu_gmc_flush_gpu_tlb(adev, vmid, i, 0);
>>
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +int amdgpu_amdkfd_flush_gpu_tlb_pasid(struct kgd_dev *kgd, uint16_t pasid)
>>>> +{
>>>> + struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
>>>> + uint32_t flush_type = 0;
>>>> + bool all_hub = false;
>>>> +
>>>> + if (adev->gmc.xgmi.num_physical_nodes &&
>>>> + adev->asic_type == CHIP_VEGA20)
>>>> + flush_type = 2;
>>>> +
>>>> + if (adev->family == AMDGPU_FAMILY_AI)
>>>> + all_hub = true;
>>>> +
>>>> + return amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, flush_type, all_hub);
> The all_hub parameter can be inferred from num_vmhubs in
> flush_gpu_tlb_pasid(), so it can be optimized out here.
Hi Yong,
This is incorrect. NV has two VM hubs: GFXHUB and MMHUB. But KFD doesn't
care about the MMHUB on Navi because SDMA is connected to the GFXHUB.
Therefore the all_hub parameter should not be based on num_vmhubs; we
need a special case for NV.
Or rather, the special case could be AI, where SDMA is not connected to
the GFXHUB. So only on AI do we need to flush all hubs for KFD VMs.
Regards,
Felix
>>>> +}
>>>> +
>>>> bool amdgpu_amdkfd_have_atomics_support(struct kgd_dev *kgd)
>>>> {
>>>> struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
>>>> index 069d5d230810..47b0f2957d1f 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
>>>> @@ -136,6 +136,8 @@ int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, enum kgd_engine_type engine,
>>>> uint32_t *ib_cmd, uint32_t ib_len);
>>>> void amdgpu_amdkfd_set_compute_idle(struct kgd_dev *kgd, bool idle);
>>>> bool amdgpu_amdkfd_have_atomics_support(struct kgd_dev *kgd);
>>>> +int amdgpu_amdkfd_flush_gpu_tlb_vmid(struct kgd_dev *kgd, uint16_t vmid);
>>>> +int amdgpu_amdkfd_flush_gpu_tlb_pasid(struct kgd_dev *kgd, uint16_t pasid);
>>>> bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid);
>>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
>>>> index 536a153ac9a4..25b90f70aecd 100644
>>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
>>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
>>>> @@ -32,6 +32,7 @@
>>>> #include <linux/mman.h>
>>>> #include <linux/file.h>
>>>> #include "amdgpu_amdkfd.h"
>>>> +#include "amdgpu.h"
>>>> struct mm_struct;
>>>> @@ -1152,16 +1153,17 @@ int kfd_reserved_mem_mmap(struct kfd_dev *dev, struct kfd_process *process,
>>>> void kfd_flush_tlb(struct kfd_process_device *pdd)
>>>> {
>>>> struct kfd_dev *dev = pdd->dev;
>>>> - const struct kfd2kgd_calls *f2g = dev->kfd2kgd;
>>>> if (dev->dqm->sched_policy == KFD_SCHED_POLICY_NO_HWS) {
>>>> /* Nothing to flush until a VMID is assigned, which
>>>> * only happens when the first queue is created.
>>>> */
>>>> if (pdd->qpd.vmid)
>>>> - f2g->invalidate_tlbs_vmid(dev->kgd, pdd->qpd.vmid);
>>>> + amdgpu_amdkfd_flush_gpu_tlb_vmid(dev->kgd,
>>>> + pdd->qpd.vmid);
>>>> } else {
>>>> - f2g->invalidate_tlbs(dev->kgd, pdd->process->pasid);
>>>> + amdgpu_amdkfd_flush_gpu_tlb_pasid(dev->kgd,
>>>> + pdd->process->pasid);
>>>> }
>>>> }
>>> _______________________________________________
>>> amd-gfx mailing list
>>> amd-gfx at lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>