why we need to do infinite RLC_SPM register setting during VM flush

Liu, Monk Monk.Liu at amd.com
Mon Apr 20 08:32:54 UTC 2020


Christian

What we want to do is essentially this:
Read the register value from RLC_SPM_MC_CNTL into tmp
Set bits [3:0] of tmp to the VMID
Write tmp back to RLC_SPM_MC_CNTL

I didn't find any PM4 packet on GFX9/10 that can achieve the above goal ....
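
For reference, a minimal sketch of that read-modify-write on the CPU/MMIO path; the SOC15-style names used here (mmRLC_SPM_MC_CNTL and the RLC_SPM_VMID field) are assumptions for illustration, not taken from a specific patch:

/* Sketch only: CPU-side read-modify-write of the SPM VMID field.
 * The register and field names are assumed SOC15-style identifiers
 * from the amdgpu headers, not copied from an actual patch.
 */
static void sketch_update_spm_vmid(struct amdgpu_device *adev, unsigned int vmid)
{
        u32 tmp;

        tmp = RREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL);      /* read current value */
        tmp = REG_SET_FIELD(tmp, RLC_SPM_MC_CNTL,
                            RLC_SPM_VMID, vmid & 0xf);      /* set bits [3:0] to vmid */
        WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, tmp);        /* write it back */
}

The sticking point in the thread is the read-modify-write: a plain register-write packet alone would not preserve the other bits of RLC_SPM_MC_CNTL.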


_____________________________________
Monk Liu|GPU Virtualization Team |AMD

From: Christian König <ckoenig.leichtzumerken at gmail.com>
Sent: Monday, April 20, 2020 4:03 PM
To: Liu, Monk <Monk.Liu at amd.com>; He, Jacob <Jacob.He at amd.com>; Koenig, Christian <Christian.Koenig at amd.com>
Cc: amd-gfx at lists.freedesktop.org
Subject: Re: why we need to do infinite RLC_SPM register setting during VM flush

I would also prefer to update the SPM VMID register using PM4 packets instead of the current handling.

Regards,
Christian.

On 20.04.20 09:50, Liu, Monk wrote:
I'm just trying to explain what I want to do here; no real patch has been formalized yet.

_____________________________________
Monk Liu|GPU Virtualization Team |AMD

From: He, Jacob <Jacob.He at amd.com>
Sent: Monday, April 20, 2020 3:45 PM
To: Liu, Monk <Monk.Liu at amd.com>; Koenig, Christian <Christian.Koenig at amd.com>
Cc: amd-gfx at lists.freedesktop.org
Subject: Re: why we need to do infinite RLC_SPM register setting during VM flush


[AMD Official Use Only - Internal Distribution Only]

Did you miss a file which adds spm_updated to the vm structure?
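
For illustration only, the missing piece would presumably be a small hunk along these lines; the exact placement of the new field in struct amdgpu_vm (next to reserved_vmid here) and the hunk position are guesses, not an actual patch:

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ (hunk position approximate) @@ struct amdgpu_vm {
        /* dedicated to vm */
        struct amdgpu_vmid      *reserved_vmid[AMDGPU_MAX_VMHUBS];
+
+       /* set once SPM_VMID has been programmed for this VM */
+       bool                    spm_updated;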
________________________________
From: Liu, Monk <Monk.Liu at amd.com>
Sent: Monday, April 20, 2020 3:32 PM
To: He, Jacob <Jacob.He at amd.com>; Koenig, Christian <Christian.Koenig at amd.com>
Cc: amd-gfx at lists.freedesktop.org
Subject: why we need to do infinite RLC_SPM register setting during VM flush


Hi Jacob & Christian,

As titled, please check the patch below:

commit 10790a09ea584cc832353a5c2a481012e5e31a13
Author: Jacob He <jacob.he at amd.com>
Date:   Fri Feb 28 20:24:41 2020 +0800

    drm/amdgpu: Update SPM_VMID with the job's vmid when application reserves the vmid

    SPM access the video memory according to SPM_VMID. It should be updated
    with the job's vmid right before the job is scheduled. SPM_VMID is a
    global resource

    Change-Id: Id3881908960398f87e7c95026a54ff83ff826700
    Signed-off-by: Jacob He <jacob.he at amd.com>
    Reviewed-by: Christian König <christian.koenig at amd.com>

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6e6fc8c..ba2236a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1056,8 +1056,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
        struct dma_fence *fence = NULL;
        bool pasid_mapping_needed = false;
        unsigned patch_offset = 0;
+       bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
        int r;

+       if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid)
+               adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
+
        if (amdgpu_vmid_had_gpu_reset(adev, id)) {
                gds_switch_needed = true;
                vm_flush_needed = true;

This update_spm_vmid() call looks like complete overkill to me; we only need to do it once per VM ...

In SR-IOV, the register reads/writes for update_spm_vmid() are carried out through the KIQ, so these unnecessary updates put too much burden on the KIQ ...

I want to change it to only do it once per VM, like:



diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6e6fc8c..ba2236a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1056,8 +1056,12 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
        struct dma_fence *fence = NULL;
        bool pasid_mapping_needed = false;
        unsigned patch_offset = 0;
+       bool update_spm_vmid_needed = (job->vm && (job->vm->reserved_vmid[vmhub] != NULL));
        int r;

+       if (update_spm_vmid_needed && adev->gfx.rlc.funcs->update_spm_vmid && !job->vm->spm_updated) {
+               adev->gfx.rlc.funcs->update_spm_vmid(adev, job->vmid);
+               job->vm->spm_updated = true;
+       }
+
        if (amdgpu_vmid_had_gpu_reset(adev, id)) {
                gds_switch_needed = true;
                vm_flush_needed = true;

What do you think?

P.S.: The best way would be to let the GFX ring itself do the update_spm_vmid() instead of having the CPU do it, e.g. by putting more PM4 commands in the VM-flush packets ....

But I prefer to start with the simple way demonstrated above.
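
For reference, a rough sketch of what that ring-based direction could look like using the generic emit_wreg callback; the helper name and the register-offset parameter are made up for illustration, and this only covers a plain register write, not the read-modify-write discussed at the top of the thread:

/* Sketch only: program the SPM VMID from the GFX ring itself during
 * VM flush, via the ring's emit_wreg callback, instead of an MMIO/KIQ
 * access from the CPU.  'spm_reg_offset' would be the RLC_SPM_MC_CNTL
 * offset; note emit_wreg() is a plain write, not a read-modify-write.
 */
static void sketch_emit_spm_vmid(struct amdgpu_ring *ring,
                                 struct amdgpu_job *job, u32 spm_reg_offset)
{
        if (ring->funcs->emit_wreg)
                amdgpu_ring_emit_wreg(ring, spm_reg_offset, job->vmid & 0xf);
}

Whether the RLC_SPM_MC_CNTL update can be folded into the existing VM-flush packet stream this way is exactly the open question in this thread.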

_____________________________________

Monk Liu|GPU Virtualization Team |AMD






_______________________________________________

amd-gfx mailing list

amd-gfx at lists.freedesktop.org

https://lists.freedesktop.org/mailman/listinfo/amd-gfx


