[Intel-gfx] [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking

Chris Wilson chris at chris-wilson.co.uk
Mon Oct 23 12:00:59 UTC 2017


Quoting Mika Kuoppala (2017-10-23 12:52:11)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
> 
> > In the idle worker we drop the prolonged GT wakeref used to cover such
> > essentials as interrupt delivery. (When a CS interrupt arrives, we also
> > assert that the GT is awake.) However, it turns out that 10ms is not
> > long enough to be assured that the last CS interrupt has been delivered,
> > so bump that to 200ms, and move the entirety of that wait to before we
> > take the struct_mutex to avoid blocking. As this is now a potentially
> > long wait, restore the earlier behaviour of bailing out early when a new
> > request arrives.
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> > Cc: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> > Cc: Imre Deak <imre.deak at intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c | 31 ++++++++++++++++++++-----------
> >  1 file changed, 20 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 026cb52ece0b..d3a638613857 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -3281,8 +3281,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >  {
> >       struct drm_i915_private *dev_priv =
> >               container_of(work, typeof(*dev_priv), gt.idle_work.work);
> > -     struct drm_device *dev = &dev_priv->drm;
> >       bool rearm_hangcheck;
> > +     ktime_t end;
> >  
> >       if (!READ_ONCE(dev_priv->gt.awake))
> >               return;
> > @@ -3291,14 +3291,22 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >        * Wait for last execlists context complete, but bail out in case a
> >        * new request is submitted.
> >        */
> > -     wait_for(intel_engines_are_idle(dev_priv), 10);
> > -     if (READ_ONCE(dev_priv->gt.active_requests))
> > -             return;
> > +     end = ktime_add_ms(ktime_get(), 200);
> > +     do {
> > +             if (READ_ONCE(dev_priv->gt.active_requests) ||
> > +                 work_pending(work))
> > +                     return;
> > +
> > +             if (intel_engines_are_idle(dev_priv))
> > +                     break;
> > +
> > +             usleep_range(100, 500);
> > +     } while (ktime_before(ktime_get(), end));
> >  
> >       rearm_hangcheck =
> >               cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
> >  
> > -     if (!mutex_trylock(&dev->struct_mutex)) {
> > +     if (!mutex_trylock(&dev_priv->drm.struct_mutex)) {
> >               /* Currently busy, come back later */
> >               mod_delayed_work(dev_priv->wq,
> >                                &dev_priv->gt.idle_work,
> > @@ -3310,13 +3318,14 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >        * New request retired after this work handler started, extend active
> >        * period until next instance of the work.
> >        */
> > -     if (work_pending(work))
> > +     if (dev_priv->gt.active_requests || work_pending(work))
> >               goto out_unlock;
> >
> 
> There might be some value here in introducing a helper such as
> gt_work_pending(), as you could use it both in the early bailout and
> in here. You would get one superfluous READ_ONCE by having it inside
> the helper, but in the idle work that doesn't matter.
> 
> I think it would read better too. But this is in the bikeshed
> department.

Whether it reads better depends on finding the right name.

new_requests_since_last_retire()?
-Chris
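
Something along these lines (untested sketch; the helper name is the one
proposed above, wrapping the two checks already open-coded in the patch):

```c
/*
 * Hypothetical helper: true if a new request has been submitted, or the
 * idle work has been re-queued, since we last retired. Usable both for
 * the early bailout in the wait loop and for the check under
 * struct_mutex; the extra READ_ONCE in the latter is harmless here.
 */
static bool
new_requests_since_last_retire(const struct drm_i915_private *dev_priv)
{
	return READ_ONCE(dev_priv->gt.active_requests) ||
	       work_pending(&dev_priv->gt.idle_work.work);
}
```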
