[Intel-gfx] [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking
Chris Wilson
chris at chris-wilson.co.uk
Fri Oct 20 13:52:17 UTC 2017
Quoting Mika Kuoppala (2017-10-20 14:23:02)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
>
> > Quoting Mika Kuoppala (2017-10-20 14:11:53)
> >> Chris Wilson <chris at chris-wilson.co.uk> writes:
> >>
> >> > In the idle worker we drop the prolonged GT wakeref used to cover such
> >> > essentials as interrupt delivery. (When a CS interrupt arrives, we also
> >> > assert that the GT is awake.) However, it turns out that 10ms is not
> >> > long enough to be assured that the last CS interrupt has been delivered,
> >> > so bump that to 200ms, and move the entirety of that wait to before we
> >> > take the struct_mutex to avoid blocking. As this is now a potentially
> >> > long wait, restore the earlier behaviour of bailing out early when a new
> >> > request arrives.
> >> >
> >> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> >> > Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> >> > Cc: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> >> > Cc: Imre Deak <imre.deak at intel.com>
> >> > ---
> >> > drivers/gpu/drm/i915/i915_gem.c | 31 ++++++++++++++++++++-----------
> >> > 1 file changed, 20 insertions(+), 11 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> >> > index 026cb52ece0b..d3a638613857 100644
> >> > --- a/drivers/gpu/drm/i915/i915_gem.c
> >> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> >> > @@ -3281,8 +3281,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >> >  {
> >> >          struct drm_i915_private *dev_priv =
> >> >                  container_of(work, typeof(*dev_priv), gt.idle_work.work);
> >> > -        struct drm_device *dev = &dev_priv->drm;
> >> >          bool rearm_hangcheck;
> >> > +        ktime_t end;
> >> >
> >> >          if (!READ_ONCE(dev_priv->gt.awake))
> >> >                  return;
> >> > @@ -3291,14 +3291,22 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >> >           * Wait for last execlists context complete, but bail out in case a
> >> >           * new request is submitted.
> >> >           */
> >> > -        wait_for(intel_engines_are_idle(dev_priv), 10);
> >> > -        if (READ_ONCE(dev_priv->gt.active_requests))
> >> > -                return;
> >> > +        end = ktime_add_ms(ktime_get(), 200);
> >> > +        do {
> >> > +                if (READ_ONCE(dev_priv->gt.active_requests) ||
> >> > +                    work_pending(work))
> >> > +                        return;
> >> > +
> >> > +                if (intel_engines_are_idle(dev_priv))
> >> > +                        break;
> >> > +
> >> > +                usleep_range(100, 500);
> >> > +        } while (ktime_before(ktime_get(), end));
> >> >
> >>
> >> Why can't we just wait_for(intel_engines_are_idle(dev_priv), 200)?
> >
> > Because of that early return. We don't really want to block the ordered wq
> > for 200ms when we already know we won't make progress. (Whilst we are
> > running, nothing else that wants to use dev_priv->wq can.)
>
> Ok, that makes sense, but why don't we have our own workqueue for the
> idle worker?
We have a wq for the low-frequency work that needs struct_mutex. We don't
really need it; it just helps to have a named wq when staring at a stuck
machine. One wq per struct work_struct would seem to be overkill ;)
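
To make that concrete, here is a rough sketch (not i915 code; the module,
wq name and handlers below are invented for illustration) of the two points
above: an ordered workqueue runs one work item at a time, so a handler that
sleeps for 200ms holds up everything queued behind it, and the wq name is
what you get to look for when staring at a stuck machine.

/* Illustrative only -- not i915 code; all names here are made up. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *example_wq;

static void park_handler(struct work_struct *work)
{
        /* Sleeping here stalls every other item queued on example_wq,
         * because an ordered wq executes at most one work at a time.
         * This stands in for the idle worker waiting 200ms for the
         * engines to settle.
         */
        msleep(200);
}

static void retire_handler(struct work_struct *work)
{
        /* Cannot start until park_handler() has returned. */
}

static DECLARE_WORK(park_work, park_handler);
static DECLARE_WORK(retire_work, retire_handler);

static int __init example_init(void)
{
        /* A dedicated, named wq: the name is what shows up in worker
         * and hung-task dumps, which is its main value for debugging.
         */
        example_wq = alloc_ordered_workqueue("i915-example", 0);
        if (!example_wq)
                return -ENOMEM;

        queue_work(example_wq, &park_work);
        queue_work(example_wq, &retire_work); /* waits behind park_work */
        return 0;
}

static void __exit example_exit(void)
{
        /* destroy_workqueue() drains anything still queued. */
        destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

In i915 the dedicated wq is dev_priv->wq, and the early bail-out in the
patch is what keeps the idle worker from sitting on it for the full 200ms.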
-Chris