[Intel-gfx] [PATCH 05/59] drm/i915: Reserve ring buffer space for i915_add_request() commands

Tomas Elf tomas.elf at intel.com
Tue Mar 31 08:58:51 PDT 2015


Three comments in this one. I mention it because two are at the bottom of 
the patch and one is crammed into the middle.

On 19/03/2015 12:30, John.C.Harrison at Intel.com wrote:
> From: John Harrison <John.C.Harrison at Intel.com>
>
> It is a bad idea for i915_add_request() to fail. The work will already have been
> send to the ring and will be processed, but there will not be any tracking or
> management of that work.
>
> The only way the add request call can fail is if it can't write its epilogue
> commands to the ring (cache flushing, seqno updates, interrupt signalling). The
> reasons for that are mostly down to running out of ring buffer space and the
> problems associated with trying to get some more. This patch prevents that
> situation from happening in the first place.
>
> When a request is created, it marks sufficient space as reserved for the
> epilogue commands. Thus guaranteeing that by the time the epilogue is written,
> there will be plenty of space for it. Note that a ring_begin() call is required
> to actually reserve the space (and do any potential waiting). However, that is
> not currently done at request creation time. This is because the ring_begin()
> code can allocate a request. Hence calling begin() from the request allocation
> code would lead to infinite recursion! Later patches in this series remove the
> need for begin() to do the allocate. At that point, it becomes safe for the
> allocate to call begin() and really reserve the space.
>
> Until then, there is a potential for insufficient space to be available at the
> point of calling i915_add_request(). However, that would only be in the case
> where the request was created and immediately submitted without ever calling
> ring_begin() and adding any work to that request. Which should never happen. And
> even if it does, and if that request happens to fall down the tiny window of
> opportunity for failing due to being out of ring space then does it really
> matter because the request wasn't doing anything in the first place?
>
> For: VIZ-5115
> Signed-off-by: John Harrison <John.C.Harrison at Intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h         |    1 +
>   drivers/gpu/drm/i915/i915_gem.c         |   47 +++++++++++++++++++++++++++++
>   drivers/gpu/drm/i915/intel_lrc.c        |   12 ++++++++
>   drivers/gpu/drm/i915/intel_ringbuffer.c |   50 +++++++++++++++++++++++++++++--
>   drivers/gpu/drm/i915/intel_ringbuffer.h |    9 ++++++
>   5 files changed, 117 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 4ebd421..fe6446d 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -2115,6 +2115,7 @@ struct drm_i915_gem_request {
>
>   int i915_gem_request_alloc(struct intel_engine_cs *ring,
>   			   struct intel_context *ctx);
> +void i915_gem_request_cancel(struct drm_i915_gem_request *req);
>   void i915_gem_request_free(struct kref *req_ref);
>
>   static inline uint32_t
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 4824adf..fc383fc 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2344,6 +2344,13 @@ int __i915_add_request(struct intel_engine_cs *ring,
>   	} else
>   		ringbuf = ring->buffer;
>
> +	/*
> +	 * To ensure that this call will not fail, space for it's emissions

1. Nitpick: "space for it's emissions" : "it's" -> "its"

> +	 * should already have been reserved in the ring buffer. Let the ring
> +	 * know that it is time to use that space up.
> +	 */
> +	intel_ring_reserved_space_use(ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
> +
>   	request_start = intel_ring_get_tail(ringbuf);
>   	/*
>   	 * Emit any outstanding flushes - execbuf can fail to emit the flush
> @@ -2426,6 +2433,9 @@ int __i915_add_request(struct intel_engine_cs *ring,
>   			   round_jiffies_up_relative(HZ));
>   	intel_mark_busy(dev_priv->dev);
>
> +	/* Sanity check that the reserved size was large enough. */
> +	intel_ring_reserved_space_end(ringbuf);
> +
>   	return 0;
>   }
>
> @@ -2551,10 +2561,47 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
>   		return ret;
>   	}
>
> +	/*
> +	 * Reserve space in the ring buffer for all the commands required to
> +	 * eventually emit this request. This is to guarantee that the
> +	 * i915_add_request() call can't fail. Note that the reserve may need
> +	 * to be redone if the request is not actually submitted straight
> +	 * away, e.g. because a GPU scheduler has deferred it.
> +	 *
> +	 * Note further that this call merely notes the reserve request. A
> +	 * subsequent call to *_ring_begin() is required to actually ensure
> +	 * that the reservation is available. Without the begin, if the
> +	 * request creator immediately submitted the request without adding
> +	 * any commands to it then there might not actually be sufficient
> +	 * room for the submission commands. Unfortunately, the current
> +	 * *_ring_begin() implementations potentially call back here to
> +	 * i915_gem_request_alloc(). Thus calling _begin() here would lead to
> +	 * infinite recursion! Until that back call path is removed, it is
> +	 * necessary to do a manual _begin() outside.
> +	 */
> +	ret = intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
> +	if (ret) {
> +		/*
> +		 * At this point, the request is fully allocated even if not
> +		 * fully prepared. Thus it can be cleaned up using the proper
> +		 * free code.
> +		 */
> +		i915_gem_request_cancel(request);
> +		return ret;
> +	}
> +
>   	ring->outstanding_lazy_request = request;
>   	return 0;
>   }
>
> +void i915_gem_request_cancel(struct drm_i915_gem_request *req)
> +{
> +	intel_ring_reserved_space_use(req->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
> +	intel_ring_reserved_space_end(req->ringbuf);
> +
> +	i915_gem_request_unreference(req);
> +}
> +
>   struct drm_i915_gem_request *
>   i915_gem_find_active_request(struct intel_engine_cs *ring)
>   {
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 1c3834fc..bdbdcae 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -636,6 +636,9 @@ static int logical_ring_wait_request(struct intel_ringbuffer *ringbuf,
>   	struct drm_i915_gem_request *request;
>   	int ret, new_space;
>
> +	/* The whole point of reserving space is to not wait! */
> +	WARN_ON(ringbuf->reserved_in_use);
> +
>   	if (intel_ring_space(ringbuf) >= bytes)
>   		return 0;
>
> @@ -704,6 +707,9 @@ static int logical_ring_wait_for_space(struct intel_ringbuffer *ringbuf,
>   	unsigned long end;
>   	int ret;
>
> +	/* The whole point of reserving space is to not wait! */
> +	WARN_ON(ringbuf->reserved_in_use);
> +
>   	ret = logical_ring_wait_request(ringbuf, bytes);
>   	if (ret != -ENOSPC)
>   		return ret;
> @@ -750,6 +756,9 @@ static int logical_ring_wrap_buffer(struct intel_ringbuffer *ringbuf,
>   	uint32_t __iomem *virt;
>   	int rem = ringbuf->size - ringbuf->tail;
>
> +	/* Can't wrap if space has already been reserved! */
> +	WARN_ON(ringbuf->reserved_in_use);
> +
>   	if (ringbuf->space < rem) {
>   		int ret = logical_ring_wait_for_space(ringbuf, ctx, rem);
>
> @@ -773,6 +782,9 @@ static int logical_ring_prepare(struct intel_ringbuffer *ringbuf,
>   {
>   	int ret;
>
> +	if (!ringbuf->reserved_in_use)
> +		bytes += ringbuf->reserved_size;
> +
>   	if (unlikely(ringbuf->tail + bytes > ringbuf->effective_size)) {
>   		ret = logical_ring_wrap_buffer(ringbuf, ctx);
>   		if (unlikely(ret))
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
> index a26bdf8..9075008 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
> @@ -2061,6 +2061,9 @@ static int intel_ring_wait_request(struct intel_engine_cs *ring, int n)
>   	struct drm_i915_gem_request *request;
>   	int ret, new_space;
>
> +	/* The whole point of reserving space is to not wait! */
> +	WARN_ON(ringbuf->reserved_in_use);
> +
>   	if (intel_ring_space(ringbuf) >= n)
>   		return 0;
>
> @@ -2093,6 +2096,9 @@ static int ring_wait_for_space(struct intel_engine_cs *ring, int n)
>   	unsigned long end;
>   	int ret;
>
> +	/* The whole point of reserving space is to not wait! */
> +	WARN_ON(ringbuf->reserved_in_use);
> +
>   	ret = intel_ring_wait_request(ring, n);
>   	if (ret != -ENOSPC)
>   		return ret;
> @@ -2143,6 +2149,9 @@ static int intel_wrap_ring_buffer(struct intel_engine_cs *ring)
>   	struct intel_ringbuffer *ringbuf = ring->buffer;
>   	int rem = ringbuf->size - ringbuf->tail;
>
> +	/* Can't wrap if space has already been reserved! */
> +	WARN_ON(ringbuf->reserved_in_use);
> +
>   	if (ringbuf->space < rem) {
>   		int ret = ring_wait_for_space(ring, rem);
>   		if (ret)
> @@ -2190,12 +2199,49 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
>   	return 0;
>   }
>
> -static int __intel_ring_prepare(struct intel_engine_cs *ring,
> -				int bytes)
> +int intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
> +{
> +	/* NB: Until request management is fully tidied up and the OLR is
> +	 * removed, there are too many ways for get false hits on this
> +	 * anti-recursion check! */
> +	/*WARN_ON(ringbuf->reserved_size);*/
> +	WARN_ON(ringbuf->reserved_in_use);
> +
> +	ringbuf->reserved_size = size;
> +
> +	/*
> +	 * Really need to call _begin() here but that currently leads to
> +	 * recursion problems! So just return zero and hope for the best...
> +	 */
> +	return 0;
> +}
> +
> +void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf, int size)
> +{
> +	WARN_ON(size > ringbuf->reserved_size);
> +	WARN_ON(ringbuf->reserved_in_use);
> +
> +	ringbuf->reserved_in_use = true;
> +	ringbuf->reserved_tail   = ringbuf->tail;
> +}
> +
> +void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
> +{
> +	WARN_ON(!ringbuf->reserved_in_use);
> +	WARN_ON(ringbuf->tail > ringbuf->reserved_tail + ringbuf->reserved_size);
> +
> +	ringbuf->reserved_size   = 0;
> +	ringbuf->reserved_in_use = false;
> +}
> +
> +static int __intel_ring_prepare(struct intel_engine_cs *ring, int bytes)
>   {
>   	struct intel_ringbuffer *ringbuf = ring->buffer;
>   	int ret;
>
> +	if (!ringbuf->reserved_in_use)
> +		bytes += ringbuf->reserved_size;
> +
>   	if (unlikely(ringbuf->tail + bytes > ringbuf->effective_size)) {
>   		ret = intel_wrap_ring_buffer(ring);
>   		if (unlikely(ret))
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 6566dd4..b334459 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -104,6 +104,9 @@ struct intel_ringbuffer {
>   	int space;
>   	int size;
>   	int effective_size;
> +	int reserved_size;
> +	int reserved_tail;
> +	bool reserved_in_use;
>
>   	/** We track the position of the requests in the ring buffer, and
>   	 * when each is retired we increment last_retired_head as the GPU
> @@ -442,4 +445,10 @@ intel_ring_get_request(struct intel_engine_cs *ring)
>   	return ring->outstanding_lazy_request;
>   }
>
> +#define MIN_SPACE_FOR_ADD_REQUEST	128

2. How was this value derived? How do we know that it works properly for 
all platforms? Can we be sure that it is enough to guarantee that 
add_request does not fail later?

> +
> +int intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size);
> +void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf, int size);
> +void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf);
> +

3. The relationship between these functions and where they belong on the 
request submission timeline are not obvious here. The functions are 
conceptually not that difficult to grasp once you read the code (and 
read through all of the patches in this patch series), but it could be 
made much easier by adding a decent description of a) the purpose of 
each function and b) when each function should be used in relation to 
the other functions.

Thanks,
Tomas

>   #endif /* _INTEL_RINGBUFFER_H_ */
>
