[Intel-gfx] [PATCH 32/39] drm/i915: Allow vma binding to occur asynchronously

Kumar Valsan, Prathap prathap.kumar.valsan at intel.com
Mon Aug 5 20:35:26 UTC 2019


On Fri, Jun 14, 2019 at 08:10:16AM +0100, Chris Wilson wrote:
> If we let pages be allocated asynchronously, we also then want to push
> the binding process into an asynchronous task. Make it so, utilising the
> recent improvements to fence error tracking and struct_mutex reduction.
> 
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> ---
[snip]
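
As I read it, the gist is that a fresh LOCAL_BIND may now reach
i915_vma_bind() before the backing pages exist, so the PTE setup itself gets
packaged up and deferred. Very roughly -- purely as an illustration, since
the real queue_async_bind() rides on the fence/error-tracking machinery
rather than a bare work item, and error propagation is omitted here:

    /* Illustrative only: defer the actual PTE setup to a worker so the
     * caller never has to wait for the backing pages at bind time.
     */
    struct async_bind {
            struct work_struct work;
            struct i915_vma *vma;
            enum i915_cache_level cache_level;
            u32 flags;
    };

    static void do_async_bind(struct work_struct *wrk)
    {
            struct async_bind *ab = container_of(wrk, typeof(*ab), work);

            __vma_bind(ab->vma, ab->cache_level, ab->flags);
            kfree(ab);
    }

with i915_vma_bind() queueing one of these on the async path and still
calling __vma_bind() directly for the synchronous cases.
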
>  /**
>   * i915_vma_bind - Sets up PTEs for a VMA in its corresponding address space.
>   * @vma: VMA to map
> @@ -300,17 +412,12 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
>  	u32 vma_flags;
>  	int ret;
>  
> +	GEM_BUG_ON(!flags);
>  	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
>  	GEM_BUG_ON(vma->size > vma->node.size);
> -
> -	if (GEM_DEBUG_WARN_ON(range_overflows(vma->node.start,
> -					      vma->node.size,
> -					      vma->vm->total)))
> -		return -ENODEV;
> -
> -	if (GEM_DEBUG_WARN_ON(!flags))
> -		return -EINVAL;
> -
> +	GEM_BUG_ON(range_overflows(vma->node.start,
> +				   vma->node.size,
> +				   vma->vm->total));
>  	bind_flags = 0;
>  	if (flags & PIN_GLOBAL)
>  		bind_flags |= I915_VMA_GLOBAL_BIND;
> @@ -325,16 +432,20 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
>  	if (bind_flags == 0)
>  		return 0;
>  
> -	GEM_BUG_ON(!vma->pages);
> +	if ((bind_flags & ~vma_flags) & I915_VMA_LOCAL_BIND)
> +		bind_flags |= I915_VMA_ALLOC_BIND;
>  
>  	trace_i915_vma_bind(vma, bind_flags);
> -	ret = vma->ops->bind_vma(vma, cache_level, bind_flags);
> +	if (bind_flags & I915_VMA_ALLOC_BIND)
> +		ret = queue_async_bind(vma, cache_level, bind_flags);
> +	else
> +		ret = __vma_bind(vma, cache_level, bind_flags);
>  	if (ret)
>  		return ret;

i915_vma_remove() expects the vma to have its pages set, which is no longer
guaranteed with async get-pages. Shouldn't clear_pages() be called only if
the pages are actually set?
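
Something along these lines in i915_vma_remove() would cover it -- just a
sketch of the suggestion, assuming vma->pages stays NULL until the async
get-pages/bind has actually run and that the pages are still released via
vma->ops->clear_pages():

    /* Only release pages that were actually populated; on the async path
     * the worker may not have run (or may have failed) before the vma is
     * removed.
     */
    if (vma->pages)
            vma->ops->clear_pages(vma);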

[snip]
>  static inline void i915_vma_unlock(struct i915_vma *vma)
>  {
>  	reservation_object_unlock(vma->resv);
> diff --git a/drivers/gpu/drm/i915/selftests/i915_vma.c b/drivers/gpu/drm/i915/selftests/i915_vma.c
> index 56c1cac368cc..30b831408b7b 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_vma.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_vma.c
> @@ -204,8 +204,10 @@ static int igt_vma_create(void *arg)
>  		mock_context_close(ctx);
>  	}
>  
> -	list_for_each_entry_safe(obj, on, &objects, st_link)
> +	list_for_each_entry_safe(obj, on, &objects, st_link) {
> +		i915_gem_object_wait(obj, I915_WAIT_ALL, MAX_SCHEDULE_TIMEOUT);
>  		i915_gem_object_put(obj);
> +	}
>  	return err;
>  }
>  
> -- 
> 2.20.1
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

