[Intel-gfx] [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func
Daniel Vetter
daniel at ffwll.ch
Sun Sep 4 21:26:48 CEST 2011
On Sun, Sep 04, 2011 at 10:08:17AM -0700, Ben Widawsky wrote:
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 55518e3..3bc1479 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -415,12 +415,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
> gen6_set_rps(dev_priv->dev, new_delay);
> dev_priv->cur_delay = new_delay;
>
> - /*
> - * rps_lock not held here because clearing is non-destructive. There is
> - * an *extremely* unlikely race with gen6_rps_enable() that is prevented
> - * by holding struct_mutex for the duration of the write.
> - */
> - I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
> + I915_WRITE(GEN6_PMIMR, pm_imr & dev_priv->pm_iir);
> mutex_unlock(&dev_priv->dev->struct_mutex);
> }
For this to work we'd need to hold the rps_lock (to avoid racing with the
irq handler). But imo my approach is conceptually simpler: the work func
grabs all outstanding PM interrupts and then enables them again in one go
(protected by rps_lock). And because the dev_priv->wq workqueue is
single-threaded (there's no point in using multiple threads when all work
items grab dev->struct_mutex anyway) we also cannot make a mess by running
work items in the wrong order (or in parallel).
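
To spell that out, here's a rough sketch of the shape I have in mind (just
an illustration of the locking scheme, not the actual patch; it assumes the
rps_lock spinlock, the dev_priv->pm_iir accumulator and an rps_work member
as used elsewhere in this series):

static void gen6_pm_rps_work(struct work_struct *work)
{
	drm_i915_private_t *dev_priv = container_of(work, drm_i915_private_t,
						    rps_work);
	u32 pm_iir;

	/* Grab all outstanding PM interrupt bits in one go and clear the
	 * mask register again, all under rps_lock so we can't race with
	 * the irq handler adding new bits. */
	spin_lock_irq(&dev_priv->rps_lock);
	pm_iir = dev_priv->pm_iir;
	dev_priv->pm_iir = 0;
	I915_WRITE(GEN6_PMIMR, 0);
	spin_unlock_irq(&dev_priv->rps_lock);

	if (!pm_iir)
		return;

	/* ... compute new_delay from pm_iir and call gen6_set_rps() under
	 * dev->struct_mutex, as the existing code already does ... */
}

The idea is that the irq handler only ever accumulates bits into
dev_priv->pm_iir and masks them in GEN6_PMIMR (under rps_lock), while the
work func is the only place that clears both, so there's no window in which
an interrupt can get lost or stay masked forever.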
-Daniel
--
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48