[PATCH 2/7] drm/amdgpu: rework shadow handling during PD clear
Grodzovsky, Andrey
Andrey.Grodzovsky at amd.com
Fri Feb 22 16:34:54 UTC 2019
On 2/22/19 10:51 AM, Zeng, Oak wrote:
> See one comment inline [Oak]
>
> Thanks,
> Oak
>
> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> On Behalf Of Christian König
> Sent: Tuesday, February 19, 2019 8:41 AM
> To: amd-gfx at lists.freedesktop.org
> Subject: [PATCH 2/7] drm/amdgpu: rework shadow handling during PD clear
>
> This way we only deal with the real BO here.
>
> Signed-off-by: Christian König <christian.koenig at amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 64 ++++++++++++++++----------
> 1 file changed, 39 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 12d51d96491e..3c7b98a758c9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -788,38 +788,58 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
>
> r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
> if (r)
> - goto error;
> + return r;
>
> r = amdgpu_ttm_alloc_gart(&bo->tbo);
> if (r)
> return r;
>
> + if (bo->shadow) {
> + r = ttm_bo_validate(&bo->shadow->tbo, &bo->shadow->placement,
> + &ctx);
> + if (r)
> + return r;
> +
> + r = amdgpu_ttm_alloc_gart(&bo->shadow->tbo);
> + if (r)
> + return r;
> +
> + }
> [Oak] I don't have the context on what a shadow BO is used for, but is it possible for a shadow BO to have its own shadow? If yes, you would want to do this in a do...while loop here. Also, although this rework performs better since it needs only one job submission, it makes the code harder to read. The original recursive approach is easier to understand.
AFAIK shadow BOs are used only for page table BOs (system memory
backups for VRAM-resident BOs) and cannot have their own shadow BOs.
Andrey
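
Since shadow BOs cannot nest, the flattened while (1) loop in the patch visits
at most two BOs (the page table BO and its optional shadow) before the single
job submission. Below is a minimal, hypothetical sketch of that control flow;
the names (struct fake_bo, emit_clear, submit_once) are illustrative stand-ins,
not the real amdgpu_vm_clear_bo() interfaces.

/* Hypothetical sketch of the flattened clear loop; not the real amdgpu code. */
#include <stdio.h>

struct fake_bo {
	const char *name;
	struct fake_bo *shadow;	/* system memory backup; never nested */
};

static void emit_clear(const struct fake_bo *bo)
{
	/* The real code records the PDE/PTE clear commands for this BO
	 * into the job's indirect buffer at this point. */
	printf("emit clear commands for %s\n", bo->name);
}

static void submit_once(void)
{
	/* The real code pads the IB and submits the single job here. */
	printf("submit one job covering every BO visited above\n");
}

int main(void)
{
	struct fake_bo shadow = { "shadow BO", NULL };
	struct fake_bo bo = { "page table BO", &shadow };
	const struct fake_bo *cur = &bo;

	/* Because shadows do not nest, this loop runs at most twice and
	 * replaces the old per-BO recursion with one job submission. */
	while (1) {
		emit_clear(cur);
		if (cur->shadow)
			cur = cur->shadow;
		else
			break;
	}
	submit_once();
	return 0;
}

The same loop shape would also follow a longer shadow chain if one could exist,
which, per the explanation above, it cannot.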
> +
> r = amdgpu_job_alloc_with_ib(adev, 64, &job);
> if (r)
> - goto error;
> + return r;
>
> - addr = amdgpu_bo_gpu_offset(bo);
> - if (ats_entries) {
> - uint64_t ats_value;
> + while (1) {
> + addr = amdgpu_bo_gpu_offset(bo);
> + if (ats_entries) {
> + uint64_t ats_value;
>
> - ats_value = AMDGPU_PTE_DEFAULT_ATC;
> - if (level != AMDGPU_VM_PTB)
> - ats_value |= AMDGPU_PDE_PTE;
> + ats_value = AMDGPU_PTE_DEFAULT_ATC;
> + if (level != AMDGPU_VM_PTB)
> + ats_value |= AMDGPU_PDE_PTE;
>
> - amdgpu_vm_set_pte_pde(adev, &job->ibs[0], addr, 0,
> - ats_entries, 0, ats_value);
> - addr += ats_entries * 8;
> - }
> + amdgpu_vm_set_pte_pde(adev, &job->ibs[0], addr, 0,
> + ats_entries, 0, ats_value);
> + addr += ats_entries * 8;
> + }
>
> - if (entries) {
> - uint64_t value = 0;
> + if (entries) {
> + uint64_t value = 0;
>
> - /* Workaround for fault priority problem on GMC9 */
> - if (level == AMDGPU_VM_PTB && adev->asic_type >= CHIP_VEGA10)
> - value = AMDGPU_PTE_EXECUTABLE;
> + /* Workaround for fault priority problem on GMC9 */
> + if (level == AMDGPU_VM_PTB &&
> + adev->asic_type >= CHIP_VEGA10)
> + value = AMDGPU_PTE_EXECUTABLE;
> +
> + amdgpu_vm_set_pte_pde(adev, &job->ibs[0], addr, 0,
> + entries, 0, value);
> + }
>
> - amdgpu_vm_set_pte_pde(adev, &job->ibs[0], addr, 0,
> - entries, 0, value);
> + if (bo->shadow)
> + bo = bo->shadow;
> + else
> + break;
> }
>
> amdgpu_ring_pad_ib(ring, &job->ibs[0]);
> @@ -838,16 +858,10 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
> amdgpu_bo_fence(bo, fence, true);
> dma_fence_put(fence);
>
> - if (bo->shadow)
> - return amdgpu_vm_clear_bo(adev, vm, bo->shadow,
> - level, pte_support_ats);
> -
> return 0;
>
> error_free:
> amdgpu_job_free(job);
> -
> -error:
> return r;
> }
>
> --
> 2.17.1
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx