[Intel-gfx] [PATCH v3] drm/i915: Ensure associated VMAs are inactive when contexts are destroyed

Daniel Vetter daniel at ffwll.ch
Tue Nov 17 08:39:57 PST 2015


On Tue, Nov 17, 2015 at 04:27:12PM +0000, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> 
> In the following commit:
> 
>     commit e9f24d5fb7cf3628b195b18ff3ac4e37937ceeae
>     Author: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>     Date:   Mon Oct 5 13:26:36 2015 +0100
> 
>         drm/i915: Clean up associated VMAs on context destruction
> 
> I added a WARN_ON assertion that the VM's active list must be empty
> at the time the owning context is being freed, but that turned
> out to be a wrong assumption.
> 
> Due to the ordering of operations in i915_gem_object_retire__read,
> where contexts are unreferenced before VMAs are moved to the
> inactive list, the described situation can in fact happen.
> 
> It feels wrong to do things in such an order, so this fix makes
> sure a reference to the context is held until the move to the
> inactive list has completed.
> 
> v2: Rather than hold a temporary context reference move the
>     request unreference to be the last operation. (Daniel Vetter)
> 
> v3: Fix use after free. (Chris Wilson)
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92638
> Cc: Michel Thierry <michel.thierry at intel.com>
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
> ---
>  drivers/gpu/drm/i915/i915_gem.c | 33 ++++++++++++++++++---------------
>  1 file changed, 18 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 98c83286ab68..094ac17a712d 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2404,29 +2404,32 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
>  	RQ_BUG_ON(!(obj->active & (1 << ring)));
>  
>  	list_del_init(&obj->ring_list[ring]);
> -	i915_gem_request_assign(&obj->last_read_req[ring], NULL);
>  
>  	if (obj->last_write_req && obj->last_write_req->ring->id == ring)
>  		i915_gem_object_retire__write(obj);
>  
>  	obj->active &= ~(1 << ring);
> -	if (obj->active)
> -		return;

	if (obj->active) {
		i915_gem_request_assign(&obj->last_read_req[ring], NULL);
		return;
	}

Would result in less churn in the code and drop the unnecessary indent
level. Also, a comment is missing as to why we need to do things in
this specific order.
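
Purely as illustration, the tail of i915_gem_object_retire__read() could
then look roughly like this -- untested sketch only, using the identifiers
from the patch above, just to show the early-return structure and the kind
of ordering comment meant here:

	if (obj->active) {
		i915_gem_request_assign(&obj->last_read_req[ring], NULL);
		return;
	}

	/* Bump our place on the bound list to keep it roughly in LRU order
	 * so that we don't steal from recently used but inactive objects
	 * (unless we are forced to ofc!)
	 */
	list_move_tail(&obj->global_list,
		       &to_i915(obj->base.dev)->mm.bound_list);

	list_for_each_entry(vma, &obj->vma_list, vma_link) {
		if (!list_empty(&vma->mm_list))
			list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
	}

	i915_gem_request_assign(&obj->last_fenced_req, NULL);

	/*
	 * Drop the read request reference only after the VMAs have been
	 * moved to the inactive list: the request pins the context (and
	 * with it the VM), so unreferencing it earlier could free the
	 * context while its VM still has entries on the active list.
	 */
	i915_gem_request_assign(&obj->last_read_req[ring], NULL);
	drm_gem_object_unreference(&obj->base);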
-Daniel

>  
> -	/* Bump our place on the bound list to keep it roughly in LRU order
> -	 * so that we don't steal from recently used but inactive objects
> -	 * (unless we are forced to ofc!)
> -	 */
> -	list_move_tail(&obj->global_list,
> -		       &to_i915(obj->base.dev)->mm.bound_list);
> +	if (!obj->active) {
> +		/* Bump our place on the bound list to keep it roughly in LRU order
> +		* so that we don't steal from recently used but inactive objects
> +		* (unless we are forced to ofc!)
> +		*/
> +		list_move_tail(&obj->global_list,
> +			&to_i915(obj->base.dev)->mm.bound_list);
>  
> -	list_for_each_entry(vma, &obj->vma_list, vma_link) {
> -		if (!list_empty(&vma->mm_list))
> -			list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
> -	}
> +		list_for_each_entry(vma, &obj->vma_list, vma_link) {
> +			if (!list_empty(&vma->mm_list))
> +				list_move_tail(&vma->mm_list,
> +					       &vma->vm->inactive_list);
> +		}
>  
> -	i915_gem_request_assign(&obj->last_fenced_req, NULL);
> -	drm_gem_object_unreference(&obj->base);
> +		i915_gem_request_assign(&obj->last_fenced_req, NULL);
> +		i915_gem_request_assign(&obj->last_read_req[ring], NULL);
> +		drm_gem_object_unreference(&obj->base);
> +	} else {
> +		i915_gem_request_assign(&obj->last_read_req[ring], NULL);
> +	}
>  }
>  
>  static int
> -- 
> 1.9.1
> 
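Stripped of the i915 specifics, the ordering rule the commit message
describes is the generic one: finish updating the state that the owner's
teardown validates before dropping the last reference to that owner. A
minimal, self-contained userspace sketch of that rule -- hypothetical
ctx/retire_entry names, a plain assert() standing in for the WARN_ON:

	#include <assert.h>
	#include <stdio.h>

	/* Hypothetical stand-ins: ctx owns a count of still-active
	 * entries, and ctx_put() mirrors the check done on context
	 * destruction. */
	struct ctx {
		int refcount;
		int nr_active;	/* entries still on the "active list" */
	};

	static void ctx_put(struct ctx *ctx)
	{
		if (--ctx->refcount == 0) {
			/* teardown expects the active list to be empty */
			assert(ctx->nr_active == 0);
			printf("ctx torn down cleanly\n");
		}
	}

	static void retire_entry(struct ctx *ctx)
	{
		/*
		 * Wrong order: calling ctx_put(ctx) first can tear down
		 * ctx while nr_active is still 1, tripping the assertion.
		 *
		 * Right order: finish the bookkeeping, unreference last.
		 */
		ctx->nr_active--;	/* "move VMA to the inactive list" */
		ctx_put(ctx);		/* drop the reference last */
	}

	int main(void)
	{
		struct ctx ctx = { .refcount = 1, .nr_active = 1 };

		retire_entry(&ctx);
		return 0;
	}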

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

