[PATCH 06/19] drm/amdgpu: implement vm recovery function from shadow
Christian König
deathsimple at vodafone.de
Thu Aug 11 08:39:26 UTC 2016
On 05.08.2016 at 11:38, Chunming Zhou wrote:
> Change-Id: I46783043eecbe9fc9c2ce9230be1085aca3731bd
> Signed-off-by: Chunming Zhou <David1.Zhou at amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 4 +++
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 48 ++++++++++++++++++++++++++++++++++
> 2 files changed, 52 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 53b7039..9cb6fda 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -921,6 +921,8 @@ struct amdgpu_vm {
>
> /* client id */
> u64 client_id;
> +
> + struct fence *shadow_sync_fence;
> };
>
> struct amdgpu_vm_id {
> @@ -1011,6 +1013,8 @@ int amdgpu_vm_bo_unmap(struct amdgpu_device *adev,
> uint64_t addr);
> void amdgpu_vm_bo_rmv(struct amdgpu_device *adev,
> struct amdgpu_bo_va *bo_va);
> +int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
> + struct amdgpu_vm *vm);
>
> /*
> * context related structures
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 017274c..e6576c2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -702,6 +702,52 @@ error_free:
> return r;
> }
>
> +int amdgpu_vm_recover_page_table_from_shadow(struct amdgpu_device *adev,
> + struct amdgpu_vm *vm)
> +{
> + struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> + struct fence *fence;
> + uint64_t pt_idx;
> + int r;
> +
> + if (!vm->page_directory->shadow)
> + return 0;
> +
> + r = amdgpu_bo_reserve(vm->page_directory, false);
> + if (r)
> + return r;
I think the caller should reserve the BOs here.
Especially during CS we only want to reserve everything once.
BTW: How do we handle swapping the PD/PTs in and out?
E.g. we don't need to copy them from VRAM to GART any more.
Regards,
Christian.
> + vm->page_directory->shadow_flag = AMDGPU_SHADOW_FLAG_SYNC_TO_PARENT;
> + r = amdgpu_bo_sync_between_bo_and_shadow(adev, ring,
> + vm->page_directory,
> + NULL, &fence);
> + if (r) {
> + DRM_ERROR("recover page table failed!\n");
> + goto err;
> + }
> + fence_put(vm->shadow_sync_fence);
> + vm->shadow_sync_fence = fence_get(fence);
> + fence_put(fence);
> + for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
> + struct amdgpu_bo *bo = vm->page_tables[pt_idx].entry.robj;
> +
> + if (!bo)
> + continue;
> + bo->shadow_flag = AMDGPU_SHADOW_FLAG_SYNC_TO_PARENT;
> + r = amdgpu_bo_sync_between_bo_and_shadow(adev, ring, bo,
> + NULL, &fence);
> + if (r) {
> + DRM_ERROR("recover page table failed!\n");
> + goto err;
> + }
> + fence_put(vm->shadow_sync_fence);
> + vm->shadow_sync_fence = fence_get(fence);
> + fence_put(fence);
> + }
> +
> +err:
> + amdgpu_bo_unreserve(vm->page_directory);
> + return r;
> +}
> /**
> * amdgpu_vm_frag_ptes - add fragment information to PTEs
> *
> @@ -1556,6 +1602,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> if (r)
> goto error_free_page_directory;
> vm->last_eviction_counter = atomic64_read(&adev->num_evictions);
> + vm->shadow_sync_fence = NULL;
>
> return 0;
>
> @@ -1604,6 +1651,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>
> amdgpu_bo_unref(&vm->page_directory);
> fence_put(vm->page_directory_fence);
> + fence_put(vm->shadow_sync_fence);
> }
>
> /**