[Intel-gfx] [PATCH 29/30] drm/i915: Track fence setup separately from fenced object lifetime

Chris Wilson chris at chris-wilson.co.uk
Wed Apr 13 23:56:57 CEST 2011


On Wed, 13 Apr 2011 22:42:23 +0200, Daniel Vetter <daniel at ffwll.ch> wrote:
> On Tue, Apr 12, 2011 at 09:31:57PM +0100, Chris Wilson wrote:
> > This fixes a bookkeeping error causing an OOPS whilst waiting for an
> > object to finish using a fence. Now we can simply wait for the fence to
> > be written independent of the objects currently inhabiting it (past,
> > present and future).
> > 
> > A large amount of the change is to delay updating the information about
> > the fence on the bo until after we successfully write, or queue the write to,
> > the register. This avoids the complication of undoing a partial change
> > should we fail in pipelining the change.
> > 
> > Cc: Andy Whitcroft <apw at canonical.com>
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> 
> I think that r-b is stale ;-) Still holds though for the general idea. A
> few nitpicks below.

Meh, reviewers are fickle. I'm pretty sure I have not changed the code
since the last time I put it in front of you. Much. ;-)
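
To recap the shape of the change for anyone skimming the thread: attempt the
register write (or queue it) first, and only touch the bookkeeping once that
has succeeded, so a failure leaves nothing to unwind. Reduced to a standalone
sketch, where write_fence_register() and next_request_seqno() are made-up
stand-ins rather than the driver functions:

#include <stdint.h>

struct ring;				/* stand-in for intel_ring_buffer */

struct fence_reg {			/* stand-in for drm_i915_fence_reg */
	uint32_t setup_seqno;
	struct ring *setup_ring;
};

/* made-up stand-in for the per-generation fence register write */
static int write_fence_register(struct fence_reg *reg, struct ring *pipelined)
{
	(void)reg;
	(void)pipelined;
	return 0;		/* pretend the (possibly queued) write succeeded */
}

/* made-up stand-in for i915_gem_next_request_seqno() */
static uint32_t next_request_seqno(struct ring *pipelined)
{
	(void)pipelined;
	return 1;
}

static int update_fence(struct fence_reg *reg, struct ring *pipelined)
{
	int ret;

	ret = write_fence_register(reg, pipelined);
	if (ret)
		return ret;	/* no tracking touched yet, nothing to undo */

	/* only now, with the write in flight, record who owns the register */
	if (pipelined) {
		reg->setup_seqno = next_request_seqno(pipelined);
		reg->setup_ring = pipelined;
	}

	return 0;
}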

> One general comment: I think we should completely get rid of
> last_fenced_ring. There should be no way an object can change rings
> without being at least completely flushed (or even going through the
> inactive list).  Maybe that's for a separate patch but I'm slightly uneasy
> with the fact that we don't seem to systematically clear last_fenced_ring
> _anywhere_.

Ah. That was to make sure you were paying attention. last_fenced_seqno was
the guard.

Sadly, last_fenced_ring is the bit of complexity that holds it all
together. Every time I try to eliminate it, I keep coming back to it as
the cleanest solution.
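
To spell out what I mean by "guard", here is the invariant reduced to a
standalone sketch. The struct and helpers are stand-ins for illustration,
not the driver code, though the comparison mirrors i915_seqno_passed():

#include <stdbool.h>
#include <stdint.h>

struct ring {				/* stand-in for intel_ring_buffer */
	uint32_t completed_seqno;	/* last seqno the ring has retired */
};

struct fenced_obj {			/* stand-in for drm_i915_gem_object */
	uint32_t last_fenced_seqno;	/* 0 => no outstanding fenced access */
	struct ring *last_fenced_ring;
};

/* the same wrap-safe comparison as i915_seqno_passed() */
static bool seqno_passed(uint32_t completed, uint32_t seqno)
{
	return (int32_t)(completed - seqno) >= 0;
}

/*
 * last_fenced_seqno is the guard: last_fenced_ring is only meaningful,
 * and only dereferenced, while the seqno is non-zero.
 */
static bool obj_fence_busy(const struct fenced_obj *obj)
{
	if (obj->last_fenced_seqno == 0)
		return false;

	return !seqno_passed(obj->last_fenced_ring->completed_seqno,
			     obj->last_fenced_seqno);
}

As long as every path that retires the fenced access also clears
last_fenced_seqno, a stale last_fenced_ring is never looked at.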

> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index ca14a86..1949048 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -1731,6 +1731,8 @@ i915_gem_object_move_to_inactive(struct drm_i915_gem_object *obj)
> >  	i915_gem_object_move_off_active(obj);
> >  	obj->fenced_gpu_access = false;
> >  
> > +	obj->last_fenced_seqno = 0;
> > +
> 
> I think we could move that to move_off_active where last_rendering_seqno
> is being reset. Would be slightly more consistent. Resetting
> last_fenced_ring together with last_fenced_seqno probably makes sense, too.

Right, the choice of setting last_fenced_seqno to 0 in move_off_active() or
move_to_inactive() doesn't affect flush_fence.
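
For reference, your placement would look roughly like this. A sketch only:
the real move_off_active() also manages the active and flushing lists, and
the struct here is a made-up stand-in:

#include <stdint.h>
#include <stddef.h>

struct ring;				/* stand-in for intel_ring_buffer */

struct fenced_obj {			/* stand-in for drm_i915_gem_object */
	uint32_t last_rendering_seqno;
	uint32_t last_fenced_seqno;
	struct ring *last_fenced_ring;
};

static void move_off_active(struct fenced_obj *obj)
{
	obj->last_rendering_seqno = 0;

	/* reset the fence tracking alongside the render tracking */
	obj->last_fenced_seqno = 0;
	/* with the seqno as the guard, clearing the ring pointer is optional */
	obj->last_fenced_ring = NULL;
}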

> > @@ -2675,47 +2661,43 @@ i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
> >  	if (reg == NULL)
> >  		return -ENOSPC;
> >  
> > -	ret = i915_gem_object_flush_fence(obj, pipelined);
> > -	if (ret)
> > -		return ret;
> > -
> > -	if (reg->obj) {
> > -		struct drm_i915_gem_object *old = reg->obj;
> > -
> > +	if ((old = reg->obj)) {
> 
> Argh. Can you move the assignment out?

Must remember to use this trick of putting in eyesores to distract from the
rest of the code!
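
For reference, the shape you are asking for, wrapped in a throwaway sketch
(names made up, not the patch itself):

#include <stddef.h>

struct drm_obj;				/* stand-in for drm_i915_gem_object */

struct fence_reg {			/* trimmed-down drm_i915_fence_reg */
	struct drm_obj *obj;
};

static struct drm_obj *old_owner(struct fence_reg *reg)
{
	struct drm_obj *old;

	/* assignment hoisted out of the condition */
	old = reg->obj;
	if (old != NULL) {
		/* steal the register from 'old' here */
	}

	return old;
}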

> > @@ -2732,7 +2714,31 @@ update:
> >  		ret = i830_write_fence_reg(obj, pipelined, regnum);
> >  		break;
> >  	}
> > +	if (ret)
> > +		goto err;
> > +
> > +	if (pipelined) {
> > +		reg->setup_seqno = i915_gem_next_request_seqno(pipelined);
> > +		reg->setup_ring = pipelined;
> > +		if (old) {
> > +			old->last_fenced_ring = pipelined;
> > +			old->last_fenced_seqno = reg->setup_seqno;
> > +		}
> 
> This looks superfluous. flush_fence should take care of this either
> directly or via flush_ring -> process_flushing_list -> move_to_active.
> If it's just paranoia, can this be converted to a WARN_ON? Or is this
> closing a gap I'm not seeing?

Oh, this is absolutely vital. Too tired, and this is definitely one that
has to be explained whilst fresh.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


