[PATCH 12/12] drm/amdgpu/sriov: no shadow buffer recovery

Christian König ckoenig.leichtzumerken at gmail.com
Sun Oct 1 09:36:30 UTC 2017


On 30.09.2017 at 08:03, Monk Liu wrote:
> 1, we have an unresolved deadlock between shadow bo recovery
> and ctx_do_release,
>
> 2, for loose mode gpu reset we always assume VRAM is not lost,
> so there is no need to do that from the beginning
>
> Change-Id: I5259f9d943239bd1fa2e45eb446ef053299fbfb1
> Signed-off-by: Monk Liu <Monk.Liu at amd.com>

NAK, even when VRAM is lost we must restore the page tables, 
otherwise no process would be able to proceed.
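
For reference, the removed hunk quoted below implements a pipelined copy: 
each buffer on the shadow list gets an asynchronous copy back from its 
shadow, and only the fence of the *previous* copy is waited on before 
moving to the next buffer. A minimal, self-contained userspace sketch of 
that pattern follows; all names in it (fence, shadow_bo, start_recover(), 
fence_wait()) are simplified stand-ins for dma_fence, amdgpu_bo, 
amdgpu_recover_vram_from_shadow() and dma_fence_wait(), not the real 
kernel API:

    /* Sketch only: illustrates the pipelined copy/wait pattern of the
     * removed hunk with hypothetical stand-in types and helpers. */
    #include <stdio.h>

    struct fence { int id; };
    struct shadow_bo { const char *name; struct shadow_bo *next; };

    /* stand-in for amdgpu_recover_vram_from_shadow(): "start" a copy */
    static struct fence *start_recover(struct shadow_bo *bo, int seq)
    {
        static struct fence pool[16];
        pool[seq].id = seq;
        printf("start copy of %s (fence %d)\n", bo->name, seq);
        return &pool[seq];
    }

    /* stand-in for dma_fence_wait(); 0 means success */
    static int fence_wait(struct fence *f)
    {
        printf("wait for fence %d\n", f->id);
        return 0;
    }

    int main(void)
    {
        struct shadow_bo pt = { "page tables", NULL };
        struct shadow_bo fb = { "framebuffer", &pt };
        struct shadow_bo *bo;
        struct fence *fence = NULL, *next;
        int seq = 0;

        for (bo = &fb; bo; bo = bo->next) {
            next = start_recover(bo, seq++);
            /* wait on the previous copy while the new one runs */
            if (fence && fence_wait(fence))
                break;
            fence = next;
        }
        if (fence)
            fence_wait(fence); /* drain the last outstanding copy */
        return 0;
    }

Note that the page table BOs are on that shadow list as well, which is 
exactly why dropping the loop breaks recovery when VRAM content is lost.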

Regards,
Christian.

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 29 -----------------------------
>   1 file changed, 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index c3f10b5..8ae7a2c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2840,9 +2840,7 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
>   {
>   	int i, j, r = 0;
>   	int resched;
> -	struct amdgpu_bo *bo, *tmp;
>   	struct amdgpu_ring *ring;
> -	struct dma_fence *fence = NULL, *next = NULL;
>   
>   	/* other thread is already into the gpu reset so just quit and come later */
>   	if (!atomic_add_unless(&adev->in_sriov_reset, 1, 1))
> @@ -2909,33 +2907,6 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
>   	/* release full control of GPU after ib test */
>   	amdgpu_virt_release_full_gpu(adev, true);
>   
> -	DRM_INFO("recover vram bo from shadow\n");
> -
> -	ring = adev->mman.buffer_funcs_ring;
> -	mutex_lock(&adev->shadow_list_lock);
> -	list_for_each_entry_safe(bo, tmp, &adev->shadow_list, shadow_list) {
> -		next = NULL;
> -		amdgpu_recover_vram_from_shadow(adev, ring, bo, &next);
> -		if (fence) {
> -			r = dma_fence_wait(fence, false);
> -			if (r) {
> -				WARN(r, "recovery from shadow isn't completed\n");
> -				break;
> -			}
> -		}
> -
> -		dma_fence_put(fence);
> -		fence = next;
> -	}
> -	mutex_unlock(&adev->shadow_list_lock);
> -
> -	if (fence) {
> -		r = dma_fence_wait(fence, false);
> -		if (r)
> -			WARN(r, "recovery from shadow isn't completed\n");
> -	}
> -	dma_fence_put(fence);
> -
>   	for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
>   		ring = adev->rings[i % AMDGPU_MAX_RINGS];
>   		if (!ring || !ring->sched.thread)



