[Intel-gfx] [PATCH 2/4] drm/i915: Retire requests along rings
Chris Wilson
chris at chris-wilson.co.uk
Mon Apr 23 09:06:45 UTC 2018
Quoting Tvrtko Ursulin (2018-04-23 09:47:57)
>
> On 20/04/2018 14:20, Chris Wilson wrote:
> > void i915_retire_requests(struct drm_i915_private *i915)
> > {
> > - struct intel_engine_cs *engine;
> > - enum intel_engine_id id;
> > + struct intel_ring *ring, *next;
> >
> > lockdep_assert_held(&i915->drm.struct_mutex);
> >
> > if (!i915->gt.active_requests)
> > return;
> >
> > - for_each_engine(engine, i915, id)
> > - engine_retire_requests(engine);
> > + list_for_each_entry_safe(ring, next, &i915->gt.rings, link)
> > + ring_retire_requests(ring);
>
> [Continuing from previous discussion on try-bot.]
>
> I had a thought that this could be managed more efficiently in a very
> simple manner.
>
> We rename timeline->inflight_seqnos to something like live_requests and
> manage this per ctx timeline, from request_add to request_retire. At the
> same time we only keep ring->ring_link linked into
> i915->gt.(live_)rings while the ctx timeline live_requests count is
> greater than zero. In other words list_add on 0->1, list_del on 1->0.
>
> This way the retire path does not have to walk all known rings, but only
> the rings with live (unretired) requests.
>
> What do you think?
I wouldn't resurrect inflight_seqnos for this; we can simply use
list_empty(&ring->request_list). Slight icky feeling every time we do
anything under struct_mutex, but by this point I itch all over.
Seems like a small enough patch to try afterwards. I class it as an
optimisation, so would like to get the bigger change from engines to
rings soaking first.
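To make the add/del-on-transition idea concrete, here is a minimal
userspace sketch. The list helpers mirror include/linux/list.h, but the
struct names (ring, request, live_rings) are simplified stand-ins for
intel_ring, i915_request and i915->gt.rings, not the real driver types;
locking (struct_mutex) is omitted. The empty/non-empty state of
ring->request_list itself serves as the 0->1 / 1->0 trigger, so no
separate live_requests counter is needed:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal intrusive doubly linked list, as in include/linux/list.h. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }
static bool list_empty(const struct list_head *h) { return h->next == h; }
static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}
static void list_del(struct list_head *n)
{
	n->prev->next = n->next; n->next->prev = n->prev;
	n->prev = n->next = n;
}

struct ring {
	struct list_head request_list; /* pending requests on this ring */
	struct list_head link;         /* membership in live_rings */
};

struct request {
	struct list_head ring_link;    /* position in ring->request_list */
	struct ring *ring;
};

static struct list_head live_rings;    /* i915->gt.rings analogue */

/* On request_add: a 0->1 transition (request_list was empty) links the
 * ring into live_rings, so retire only ever walks rings with work. */
static void request_add(struct ring *ring, struct request *rq)
{
	if (list_empty(&ring->request_list))
		list_add_tail(&ring->link, &live_rings);
	rq->ring = ring;
	list_add_tail(&rq->ring_link, &ring->request_list);
}

/* On request_retire: a 1->0 transition unlinks the ring again. */
static void request_retire(struct request *rq)
{
	list_del(&rq->ring_link);
	if (list_empty(&rq->ring->request_list))
		list_del(&rq->ring->link);
}
```

With this in place, i915_retire_requests() can iterate live_rings with
list_for_each_entry_safe() (safe because retiring the last request
drops the ring from the list mid-walk) instead of every known ring.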
-Chris