[Intel-gfx] [PATCH 1/4] drm/i915/bxt: Avoid early timeout during PLL enable
Imre Deak
imre.deak at intel.com
Tue Jun 28 11:16:13 UTC 2016
On Tue, 2016-06-28 at 12:05 +0100, Tvrtko Ursulin wrote:
> On 28/06/16 11:48, Chris Wilson wrote:
> > On Tue, Jun 28, 2016 at 01:37:30PM +0300, Imre Deak wrote:
> > > Since wait_for_atomic doesn't re-check the wait-for condition after
> > > expiry of the timeout it can fail when called from non-atomic context
> > > even if the condition is set correctly before the expiry. Fix this by
> > > using the non-atomic wait_for instead.
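To make the failure mode above concrete, here is a minimal sketch of a
poll loop without a final re-check (not the actual wait_for_atomic macro
from intel_drv.h, just the shape of the problem): if the task is
scheduled out between the last read of the condition and the timeout
check, the condition can become true inside that window and the loop
still returns -ETIMEDOUT.

/* Sketch only; assumes <linux/ktime.h>, <linux/errno.h> and
 * cpu_relax() from <asm/processor.h>.
 */
#define sketch_wait_us_no_recheck(COND, US) ({				\
	ktime_t __end = ktime_add_us(ktime_get_raw(), (US));		\
	int __ret = -ETIMEDOUT;						\
									\
	for (;;) {							\
		if (COND) {						\
			__ret = 0;					\
			break;						\
		}							\
		/* a long preemption here eats the whole budget,	\
		 * and after that COND is never looked at again		\
		 */							\
		if (ktime_after(ktime_get_raw(), __end))		\
			break;						\
		cpu_relax();						\
	}								\
	__ret;								\
})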
> >
> > wait_for_atomic is indeed only safe to be called from atomic context.
> > Likewise, wait_for is only safe to be called from !atomic context.
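Roughly, the two flavours look like this (a sketch of the pattern, not
the i915 implementations): the atomic variant busy-spins between polls,
so it is safe under spinlocks but wasteful in process context, while the
non-atomic variant sleeps between polls and therefore must never run in
atomic context. The final read after expiry is what avoids the spurious
timeout.

/* Sketch only; assumes <linux/io.h>, <linux/delay.h>, <linux/types.h>. */
static inline bool sketch_poll_atomic(void __iomem *reg, u32 mask,
				      unsigned int timeout_us)
{
	while (timeout_us--) {
		if (readl(reg) & mask)
			return true;
		udelay(1);		/* busy-wait: fine under a spinlock */
	}
	return readl(reg) & mask;	/* final re-check after expiry */
}

static inline bool sketch_poll_sleeping(void __iomem *reg, u32 mask,
					unsigned int timeout_us)
{
	unsigned int waited_us = 0;

	while (waited_us < timeout_us) {
		if (readl(reg) & mask)
			return true;
		usleep_range(10, 20);	/* sleeps: illegal in atomic context */
		waited_us += 10;
	}
	return readl(reg) & mask;	/* final re-check after expiry */
}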
> >
> > > I noticed this via the PLL lock wait timing out incorrectly; with this
> > > fix I couldn't reproduce the problem.
> > >
> > > Fixes: 0351b93992aa ("drm/i915: Do not lie about atomic timeout granularity")
> >
> > The bug would be using wait_for_atomic from non-atomic context, and so
> > it is older than that commit.
> >
> >
> > > CC: Chris Wilson <chris at chris-wilson.co.uk>
> > > CC: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > > Signed-off-by: Imre Deak <imre.deak at intel.com>
> > > ---
> > > drivers/gpu/drm/i915/intel_dpll_mgr.c | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/intel_dpll_mgr.c b/drivers/gpu/drm/i915/intel_dpll_mgr.c
> > > index c0eff15..e130c3e 100644
> > > --- a/drivers/gpu/drm/i915/intel_dpll_mgr.c
> > > +++ b/drivers/gpu/drm/i915/intel_dpll_mgr.c
> > > @@ -1374,8 +1374,8 @@ static void bxt_ddi_pll_enable(struct drm_i915_private *dev_priv,
> > > I915_WRITE(BXT_PORT_PLL_ENABLE(port), temp);
> > > POSTING_READ(BXT_PORT_PLL_ENABLE(port));
> > >
> > > - if (wait_for_atomic_us((I915_READ(BXT_PORT_PLL_ENABLE(port)) &
> > > - PORT_PLL_LOCK), 200))
> > > + if (wait_for_us((I915_READ(BXT_PORT_PLL_ENABLE(port)) & PORT_PLL_LOCK),
> > > + 200))
> >
> > Does this work with CONFIG_I915_DEBUG and CONFIG_DEBUG_ATOMIC_SLEEP ?
>
> CONFIG_PREEMPT_COUNT is also required.
>
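For what it's worth, my reading of why both options matter (a sketch,
not the actual i915 checks): might_sleep() in a sleeping wait only warns
when CONFIG_DEBUG_ATOMIC_SLEEP is enabled, and an in_atomic()-based
warning in the busy-wait path can only see spinlock sections when
preempt_count is actually maintained, i.e. with CONFIG_PREEMPT_COUNT.

/* Sketch only; assumes <linux/kernel.h>, <linux/preempt.h>, <linux/bug.h>. */
static inline void sketch_assert_may_sleep(void)
{
	might_sleep();	/* warns only with CONFIG_DEBUG_ATOMIC_SLEEP */
}

static inline void sketch_assert_atomic(void)
{
	/*
	 * Without CONFIG_PREEMPT_COUNT spin_lock() does not bump
	 * preempt_count, so in_atomic() cannot see the misuse.
	 */
	WARN_ON_ONCE(!in_atomic());
}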
> There were a bunch of these WARNs triggering in various places. I think
> I had patches to fix them, but at the same time Mika had more
> comprehensive work in progress for the whole area. I suppose that just
> got delayed too much.
>
> AFAIR the meat of the discussion was what is more important - sleep
> granularity or timeout accuracy. I preferred the former, to avoid waiting
> too long for operations which are normally much quicker than a jiffy and
> normally succeed.
>
> Another issue is that wait_for_us is not the most efficient
> implementation for sleeps < 10us. So another idea I had was to implement
> those via wait_for_atomic but without the in_atomic WARN - and obviously,
> now that Imre has found this, with the extra cond check as well.
For that kind of optimization, the comment at cpu_clock() could be
interesting when comparing cpu_clock(i) against cpu_clock(j) with i != j. I
couldn't see any backward jumps between such timestamps, but I'm not sure
whether that comment can be disregarded. Maybe on Intel/TSC it can.
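A rough sketch of the idea above (the name and shape are made up, this
is not an actual i915 macro): the same busy-wait as in the earlier
sketch, but with no in_atomic WARN so it can be used from any context
for sub-10us timeouts, and with the extra cond check after expiry so a
preemption cannot turn a success into -ETIMEDOUT. Whether ktime or
cpu_clock()/local_clock() is the right timestamp source here is exactly
the open question about backward jumps.

/* Illustrative only; assumes <linux/ktime.h>, <linux/errno.h>,
 * <asm/processor.h>.
 */
#define sketch_busywait_us(COND, US) ({					\
	ktime_t __end = ktime_add_us(ktime_get_raw(), (US));		\
	int __ret;							\
									\
	/* no in_atomic()/preemptible() WARN: short enough to		\
	 * busy-spin from any context					\
	 */								\
	for (;;) {							\
		if (COND) {						\
			__ret = 0;					\
			break;						\
		}							\
		if (ktime_after(ktime_get_raw(), __end)) {		\
			/* the extra cond check, in case we were	\
			 * preempted right before the expiry		\
			 */						\
			__ret = (COND) ? 0 : -ETIMEDOUT;		\
			break;						\
		}							\
		cpu_relax();						\
	}								\
	__ret;								\
})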
> So I think Imre's patches are good in principle, should go in, and
> probably afterwards we can talk about improving wait_for_us for timeouts
> under 10us and potentially the timeout precision as well.
>
> Regards,
>
> Tvrtko