[Intel-gfx] [PATCH v3] drm/i915: get a runtime PM ref for the deferred GPU reset work

Daniel Vetter daniel at ffwll.ch
Tue Apr 22 23:05:11 CEST 2014


On Tue, Apr 22, 2014 at 11:34:01PM +0300, Imre Deak wrote:
> On Tue, 2014-04-22 at 21:38 +0200, Daniel Vetter wrote:
> > On Fri, Apr 18, 2014 at 03:47:45PM +0300, Imre Deak wrote:
> > > Atm we can end up in the GPU reset deferred work in D3 state if the last
> > > runtime PM reference is dropped between detecting a hang/scheduling the
> > > work and executing the work. At least one such case I could trigger is
> > > the simulated reset via the i915_wedged debugfs entry. Fix this by
> > > holding an RPM reference from before the work is scheduled until the
> > > end of the work.
> > > 
> > > v2:
> > > - Instead of getting/putting the RPM reference in the reset work itself,
> > >   get it already before scheduling the work. By this we also prevent
> > >   going to D3 before the work gets to run, in addition to making sure
> > >   that we run the work itself in D0. (Ville, Daniel)
> > > v3:
> > > - fix inverted logic fail when putting the RPM ref on behalf of a
> > >   cancelled GPU reset work (Ville)
> > > 
> > > Signed-off-by: Imre Deak <imre.deak at intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/i915_dma.c |  8 +++++++-
> > >  drivers/gpu/drm/i915/i915_irq.c | 21 ++++++++++++++++++++-
> > >  2 files changed, 27 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
> > > index 0b38f88..f792576 100644
> > > --- a/drivers/gpu/drm/i915/i915_dma.c
> > > +++ b/drivers/gpu/drm/i915/i915_dma.c
> > > @@ -1823,7 +1823,13 @@ int i915_driver_unload(struct drm_device *dev)
> > >  
> > >  	/* Free error state after interrupts are fully disabled. */
> > >  	del_timer_sync(&dev_priv->gpu_error.hangcheck_timer);
> > > -	cancel_work_sync(&dev_priv->gpu_error.work);
> > > +	if (cancel_work_sync(&dev_priv->gpu_error.work))
> > > +		/*
> > > +		 * The following won't make any difference in the PM state,
> > > +		 * since RPM is disabled already, but do it still for
> > > +		 * consistency.
> > > +		 */
> > > +		intel_runtime_pm_put(dev_priv);
> > >  	i915_destroy_error_state(dev);
> > >  
> > >  	if (dev->pdev->msi_enabled)
> > > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > > index a651d0d..5e079d8 100644
> > > --- a/drivers/gpu/drm/i915/i915_irq.c
> > > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > > @@ -2210,6 +2210,9 @@ static void i915_error_work_func(struct work_struct *work)
> > >  		 */
> > >  		i915_error_wake_up(dev_priv, true);
> > >  	}
> > > +
> > > +	/* Drop the ref we took when scheduling this work. */
> > > +	intel_runtime_pm_put(dev_priv);
> > >  }
> > >  
> > >  static void i915_report_and_clear_eir(struct drm_device *dev)
> > > @@ -2353,8 +2356,24 @@ void i915_handle_error(struct drm_device *dev, bool wedged,
> > >  	 * state of outstanding pagelips). Hence it must not be run on our own
> > >  	 * dev-priv->wq work queue for otherwise the flush_work in the pageflip
> > >  	 * code will deadlock.
> > > +	 *
> > > +	 * It's guaranteed that here we are in D0 state, since we can only get
> > > +	 * here via one of the following paths:
> > > +	 * - From an IRQ handler's error detection. -> The driver must make
> > > +	 *   sure that IRQs are unmasked only while holding an RPM ref.
> > > +	 * - From hang-check due to a blocked request. -> The request holds an
> > > +	 *   RPM ref, that's only released in i915_gpu_idle() which in turn
> > > +	 *   won't be called until the request is finished.
> > > +	 * - From hang-check due to a flip hang. -> We have an RPM ref because
> > > +	 *   of the active modeset.
> > > +	 * - From debugfs i915_wedged_set(). -> The caller takes an explicit
> > > +	 *   RPM ref.
> > > +	 * Take here an atomic RPM ref still to make sure that we don't
> > > +	 * re-enter D3 until the error work gets to run and completes the
> > > +	 * reset.
> > >  	 */
> > > -	schedule_work(&dev_priv->gpu_error.work);
> > > +	if (schedule_work(&dev_priv->gpu_error.work))
> > > +		intel_runtime_pm_get_noresume(dev_priv);
> > 
> > Isn't this a bit racy? A 2nd cpu could immediately execute the work while
> > this one here gets interrupted by an NMI ...
> 
> It is, I didn't think about that case..
> 
> > I think we need to grab the ref earlier, before the schedule_work. Or we
> > just grab it in the reset work around the register access. Actually I
> > think this is the right fix, since we can only hit this case by
> > fake-injecting a hang through debugfs. For real hangs we'll be holding a
> > runtime pm ref until the last request is completed by the gpu, which won't
> > happen if the gpu is hung.
> > 
> > Doing the get/put in the worker only also avoids the need to do a get from
> > atomic context, which is fairly fragile.
> 
> It's only an atomic inc.. But in case we get here through the interrupt
> error detection path, there is no guarantee we have any pending GPU
> work, so we could drop the last reference before the work is run.

The interrupt-based error handling is only wired up for pre-gen5, so it's
not really a concern for runtime pm. I still think it would be better to
just grab a runtime pm ref around the reset function itself and not do
crazy dances in the interrupt handler, even if the underlying
implementation is a simple atomic inc ...
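
The window being discussed can be modeled outside the kernel. Below is a
minimal userspace sketch (not kernel code; rpm_get()/rpm_put() are
hypothetical stand-ins for intel_runtime_pm_get_noresume() and
intel_runtime_pm_put(), and the "worker" is just a function call) showing
the worst-case interleaving: schedule_work() returns true, but the work
runs to completion on another CPU, including its final put, before the
scheduling path gets to take its reference, so the refcount momentarily
hits zero and the device is free to enter D3:

```c
#include <assert.h>

/* Toy refcount model: a count of 0 means the device may runtime-suspend. */
static int rpm_refcount;
static int min_refcount;	/* lowest count observed, to expose the window */

static void rpm_get(void)
{
	rpm_refcount++;
}

static void rpm_put(void)
{
	rpm_refcount--;
	if (rpm_refcount < min_refcount)
		min_refcount = rpm_refcount;
}

/*
 * v3 ordering: the ref is taken only after schedule_work() has returned.
 * Worst case, another CPU runs the error work to completion (ending with
 * its rpm_put()) inside that window.  Returns the lowest count seen;
 * 0 means the device could have suspended.
 */
static int racy_order(void)
{
	rpm_refcount = 1;	/* some pre-existing holder */
	min_refcount = 1;

	rpm_put();		/* worker epilogue runs first on another CPU */
	rpm_get();		/* ...only now does the scheduler take its ref */

	return min_refcount;
}

/*
 * Safe ordering: take the ref strictly before scheduling, so the worker's
 * put can never be the operation that drops the count to zero.
 */
static int safe_order(void)
{
	rpm_refcount = 1;
	min_refcount = 1;

	rpm_get();		/* before schedule_work() */
	rpm_put();		/* worker epilogue, whenever it runs */

	return min_refcount;
}
```

With the v3 ordering the count dips to 0 in the window; taking the ref
before scheduling (or doing the get/put entirely inside the worker, as
suggested above) keeps it at 1 or above throughout.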
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


