[PATCH] drm/ttm: replace dma_resv object on deleted BOs v3

Pan, Xinhui Xinhui.Pan at amd.com
Wed Feb 12 06:23:33 UTC 2020



> On Feb 11, 2020, at 23:43, Christian König <ckoenig.leichtzumerken at gmail.com> wrote:
> 
> When non-imported BOs are resurrected for delayed delete we replace
> the dma_resv object to allow for easy reclaiming of the resources.
> 
> v2: move that to ttm_bo_individualize_resv
> v3: add a comment to explain what's going on
> 
> Signed-off-by: Christian König <christian.koenig at amd.com>
> Reviewed-by: xinhui pan <xinhui.pan at amd.com>
> ---
> drivers/gpu/drm/ttm/ttm_bo.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index bfc42a9e4fb4..8174603d390f 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -393,6 +393,18 @@ static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo)
> 
> 	r = dma_resv_copy_fences(&bo->base._resv, bo->base.resv);
> 	dma_resv_unlock(&bo->base._resv);
> +	if (r)
> +		return r;
> +
> +	if (bo->type != ttm_bo_type_sg) {
> +		/* This works because the BO is about to be destroyed and nobody
> +		 * references it any more. The only tricky case is the trylock on
> +		 * the resv object while holding the lru_lock.
> +		 */
> +		spin_lock(&ttm_bo_glob.lru_lock);
> +		bo->base.resv = &bo->base._resv;
> +		spin_unlock(&ttm_bo_glob.lru_lock);
> +	}
> 

How about something like the following?

The basic idea is to do the BO cleanup work in bo_release first, which avoids racing with eviction: while a BO is dying, evict likewise only does that same cleanup work.

If the BO is busy, neither bo_release nor evict can do the cleanup work on it. In the bo_release case we just add the BO back to the LRU list, so it can still be cleaned up by the delayed-delete workqueue and the shrinker, as before.

@@ -405,8 +405,9 @@ static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo)
 
    if (bo->type != ttm_bo_type_sg) {
        spin_lock(&ttm_bo_glob.lru_lock);
-       bo->base.resv = &bo->base._resv;
+       ttm_bo_del_from_lru(bo);
        spin_unlock(&ttm_bo_glob.lru_lock);
+       bo->base.resv = &bo->base._resv;
    }   
 
    return r;
@@ -606,10 +607,9 @@ static void ttm_bo_release(struct kref *kref)
         * shrinkers, now that they are queued for 
         * destruction.
         */  
-       if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT) {
+       if (bo->mem.placement & TTM_PL_FLAG_NO_EVICT)
            bo->mem.placement &= ~TTM_PL_FLAG_NO_EVICT;
-           ttm_bo_move_to_lru_tail(bo, NULL);
-       }
+       ttm_bo_add_mem_to_lru(bo, &bo->mem);
 
        kref_init(&bo->kref);
        list_add_tail(&bo->ddestroy, &bdev->ddestroy);

thanks
xinhui


> 	return r;
> }
> @@ -724,7 +736,7 @@ static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo,
> 
> 	if (bo->base.resv == ctx->resv) {
> 		dma_resv_assert_held(bo->base.resv);
> -		if (ctx->flags & TTM_OPT_FLAG_ALLOW_RES_EVICT || bo->deleted)
> +		if (ctx->flags & TTM_OPT_FLAG_ALLOW_RES_EVICT)
> 			ret = true;
> 		*locked = false;
> 		if (busy)
> -- 
> 2.17.1
> 
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx



More information about the dri-devel mailing list