[Intel-gfx] [PATCH 2/2] drm/i915: Recover all available ringbuffer space following reset

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Wed Oct 28 10:18:06 PDT 2015


On 23/10/15 12:46, Mika Kuoppala wrote:
> Chris Wilson <chris at chris-wilson.co.uk> writes:
>
>> On Fri, Oct 23, 2015 at 02:07:35PM +0300, Mika Kuoppala wrote:
>>> Chris Wilson <chris at chris-wilson.co.uk> writes:
>>>
>>>> Having flushed all requests from all queues, we know that all
>>>> ringbuffers must now be empty. However, since we do not reclaim
>>>> all space when retiring the request (to prevent HEADs colliding
>>>> with rapid ringbuffer wraparound) the amount of available space
>>>> on each ringbuffer upon reset is less than when we start. Do one
>>>> more pass over all the ringbuffers to reset the available space.
>>>>
>>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>>> Cc: Arun Siluvery <arun.siluvery at linux.intel.com>
>>>> Cc: Mika Kuoppala <mika.kuoppala at intel.com>
>>>> Cc: Dave Gordon <david.s.gordon at intel.com>
>>>> ---
>>>>   drivers/gpu/drm/i915/i915_gem.c         | 14 ++++++++++++++
>>>>   drivers/gpu/drm/i915/intel_lrc.c        |  1 +
>>>>   drivers/gpu/drm/i915/intel_ringbuffer.c | 13 ++++++++++---
>>>>   drivers/gpu/drm/i915/intel_ringbuffer.h |  2 ++
>>>>   4 files changed, 27 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>>> index 41263cd4170c..3a42c350fec9 100644
>>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>>> @@ -2738,6 +2738,8 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
>>>>   static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>>>>   					struct intel_engine_cs *ring)
>>>>   {
>>>> +	struct intel_ringbuffer *buffer;
>>>> +
>>>>   	while (!list_empty(&ring->active_list)) {
>>>>   		struct drm_i915_gem_object *obj;
>>>>
>>>> @@ -2783,6 +2785,18 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>>>>
>>>>   		i915_gem_request_retire(request);
>>>>   	}
>>>> +
>>>> +	/* Having flushed all requests from all queues, we know that all
>>>> +	 * ringbuffers must now be empty. However, since we do not reclaim
>>>> +	 * all space when retiring the request (to prevent HEADs colliding
>>>> +	 * with rapid ringbuffer wraparound) the amount of available space
>>>> +	 * upon reset is less than when we start. Do one more pass over
>>>> +	 * all the ringbuffers to reset last_retired_head.
>>>> +	 */
>>>> +	list_for_each_entry(buffer, &ring->buffers, link) {
>>>> +		buffer->last_retired_head = buffer->tail;
>>>> +		intel_ring_update_space(buffer);
>>>> +	}
>>>
>>> This is all in vain, as the i915_gem_context_reset() ->
>>> intel_lr_context_reset() path still sets head and tail to zero.
>>>
>>> So your last_retired_head will still dangle in the pre-reset
>>> world while the rest of the ringbuf state is set to the
>>> post-reset world.
>>
>> It's only setting that so that we compute the full ring space as
>> available; last_retired_head is then set back to -1. So what's
>> dangling?
>> -Chris
>
> My understanding of the ringbuffer code was dangling. It is all
> clear now. We set head = tail and thus reset the ring space to full.
>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=91634
>
> should be added, as this very likely fixes that one.
>
> Reviewed-by: Mika Kuoppala <mika.kuoppala at intel.com>
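
For context, a minimal sketch of the space accounting being discussed,
simplified from __intel_ring_space() and intel_ring_update_space() as
they looked at the time of this patch. The HEAD_ADDR masking and the
struct layout are omitted, and RING_SLACK stands in for the driver's
I915_RING_FREE_SPACE constant, so read it as an illustration rather
than the exact driver code:

#define RING_SLACK 64	/* slack kept so TAIL never runs right up to HEAD */

/* Free bytes between head and tail, accounting for wraparound. */
static int ring_space(int head, int tail, int size)
{
	int space = head - tail;
	if (space <= 0)
		space += size;	/* tail has wrapped past head */
	return space - RING_SLACK;
}

/*
 * The extra pass in the patch sets last_retired_head = tail; the
 * update helper then copies that into head and clears it back to -1,
 * so head == tail and ring_space(tail, tail, size) reports the whole
 * ring, minus the slack, as available again.
 */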

I've merged this one to try out dim; it seems to have worked.

Regards,

Tvrtko