[Intel-gfx] [PATCH 2/2] drm/i915: Recover all available ringbuffer space following reset
Chris Wilson
chris at chris-wilson.co.uk
Mon Sep 28 08:43:12 PDT 2015
On Mon, Sep 28, 2015 at 06:25:04PM +0300, Mika Kuoppala wrote:
>
> Hi,
>
> Chris Wilson <chris at chris-wilson.co.uk> writes:
>
> > Having flushed all requests from all queues, we know that all
> > ringbuffers must now be empty. However, since we do not reclaim
> > all space when retiring the request (to prevent HEADs colliding
> > with rapid ringbuffer wraparound) the amount of available space
> > on each ringbuffer upon reset is less than when we start. Do one
> > more pass over all the ringbuffers to reset the available space.
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Cc: Arun Siluvery <arun.siluvery at linux.intel.com>
> > Cc: Mika Kuoppala <mika.kuoppala at intel.com>
> > Cc: Dave Gordon <david.s.gordon at intel.com>
> > ---
> > drivers/gpu/drm/i915/i915_gem.c | 14 ++++++++++++++
> > drivers/gpu/drm/i915/intel_lrc.c | 1 +
> > drivers/gpu/drm/i915/intel_ringbuffer.c | 13 ++++++++++---
> > drivers/gpu/drm/i915/intel_ringbuffer.h | 2 ++
> > 4 files changed, 27 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 41263cd4170c..3a42c350fec9 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2738,6 +2738,8 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
> > static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
> > struct intel_engine_cs *ring)
> > {
> > + struct intel_ringbuffer *buffer;
> > +
> > while (!list_empty(&ring->active_list)) {
> > struct drm_i915_gem_object *obj;
> >
> > @@ -2783,6 +2785,18 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
> >
> > i915_gem_request_retire(request);
> > }
> > +
> > + /* Having flushed all requests from all queues, we know that all
> > + * ringbuffers must now be empty. However, since we do not reclaim
> > + * all space when retiring the request (to prevent HEADs colliding
> > + * with rapid ringbuffer wraparound) the amount of available space
> > + * upon reset is less than when we start. Do one more pass over
> > + * all the ringbuffers to reset last_retired_head.
> > + */
> > + list_for_each_entry(buffer, &ring->buffers, link) {
> > + buffer->last_retired_head = buffer->tail;
> > + intel_ring_update_space(buffer);
> > + }
> > }
> >
>
> Right after cleaning up the rings in i915_gem_reset(),
> we call i915_gem_context_reset(). That goes through
> all contexts and their ringbuffers and sets tail and head to
> zero.
>
> If we do the space adjustment in intel_lr_context_reset(),
> we can avoid adding a new ring->buffers list for this purpose:
No. The point is that we want to do it in a generic manner so that we can
remove intel_lr_context_reset() (the legacy code is just a degenerate
case of execlists; look at the patches I sent last year to see how simple
the code becomes after applying that transformation).
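To make the space bookkeeping concrete, here is a minimal, self-contained
sketch of the wraparound-safe free-space computation that the reset pass
relies on. The struct fields and the 64-byte headroom constant are
illustrative stand-ins, not the exact kernel definitions, but the
arithmetic follows the same shape as intel_ring_space():

```c
#include <stdint.h>

/* Illustrative stand-in for struct intel_ringbuffer: only the fields
 * involved in the space bookkeeping are shown. */
struct ringbuf {
	uint32_t size;	/* total ring size in bytes */
	uint32_t head;	/* GPU read offset */
	uint32_t tail;	/* CPU write offset */
};

/* Headroom kept between TAIL and HEAD so that a completely full ring
 * remains distinguishable from an empty one (cf. I915_RING_FREE_SPACE). */
#define RING_FREE_SPACE 64

/* Wraparound-safe free space: bytes from tail up to head, modulo the
 * ring size, minus the mandatory headroom. */
static int ring_space(const struct ringbuf *rb)
{
	int space = (int)rb->head - (int)(rb->tail + RING_FREE_SPACE);
	if (space < 0)
		space += rb->size;
	return space;
}
```

With head == tail, the post-reset state the loop in the patch establishes,
ring_space() reports size - RING_FREE_SPACE, i.e. the whole ring is
available again rather than whatever reduced amount was left cached from
before the hang.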
-Chris
--
Chris Wilson, Intel Open Source Technology Centre