[Intel-gfx] [PATCH] drm/i915: add cherryview specific forcewake in execlists_elsp_write
Daniel Vetter
daniel at ffwll.ch
Wed Sep 10 09:16:45 CEST 2014
On Tue, Sep 09, 2014 at 02:25:50PM -0700, Jesse Barnes wrote:
> On Tue, 09 Sep 2014 21:45:08 +0530
> Deepak S <deepak.s at intel.com> wrote:
>
> >
> > On Monday 08 September 2014 08:10 PM, Daniel Vetter wrote:
> > > On Mon, Sep 08, 2014 at 05:14:23PM +0300, Ville Syrjälä wrote:
> > >> On Mon, Sep 08, 2014 at 05:02:43PM +0300, Ville Syrjälä wrote:
> > >>> On Tue, Sep 09, 2014 at 07:14:16PM +0530, deepak.s at linux.intel.com wrote:
> > >>>> From: Deepak S <deepak.s at linux.intel.com>
> > >>>>
> > >>>> In chv, we have two power wells, Render & Media. We need to use
> > >>>> the corresponding forcewake count. If we don't follow this we get
> > >>>> the error "*ERROR*: Timed out waiting for forcewake old ack
> > >>>> to clear" due to multiple entries into __vlv_force_wake_get.
> > >>>>
> > >>>> Signed-off-by: Deepak S <deepak.s at linux.intel.com>
> > >>>> ---
> > >>>>  drivers/gpu/drm/i915/intel_lrc.c | 29 +++++++++++++++++++++++++----
> > >>>>  1 file changed, 25 insertions(+), 4 deletions(-)
> > >>>>
> > >>>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> > >>>> index bd1b28d..bafd38b 100644
> > >>>> --- a/drivers/gpu/drm/i915/intel_lrc.c
> > >>>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > >>>> @@ -300,8 +300,18 @@ static void execlists_elsp_write(struct intel_engine_cs *ring,
> > >>>>  	 * Instead, we do the runtime_pm_get/put when creating/destroying requests. */
> > >>>>  	spin_lock_irqsave(&dev_priv->uncore.lock, flags);
> > >>>> -	if (dev_priv->uncore.forcewake_count++ == 0)
> > >>>> -		dev_priv->uncore.funcs.force_wake_get(dev_priv, FORCEWAKE_ALL);
> > >>>> +	if (IS_CHERRYVIEW(dev_priv->dev)) {
> > >>>> +		if (dev_priv->uncore.fw_rendercount++ == 0)
> > >>>> +			dev_priv->uncore.funcs.force_wake_get(dev_priv,
> > >>>> +							      FORCEWAKE_RENDER);
> > >>>> +		if (dev_priv->uncore.fw_mediacount++ == 0)
> > >>>> +			dev_priv->uncore.funcs.force_wake_get(dev_priv,
> > >>>> +							      FORCEWAKE_MEDIA);
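
The quote is trimmed at the get side here; the release side would presumably mirror it with the same per-well counters, roughly like the sketch below (an illustration only, not the actual hunk from the patch):

	spin_lock_irqsave(&dev_priv->uncore.lock, flags);
	if (IS_CHERRYVIEW(dev_priv->dev)) {
		/* Drop the per-well references taken on the get side. */
		if (--dev_priv->uncore.fw_rendercount == 0)
			dev_priv->uncore.funcs.force_wake_put(dev_priv,
							      FORCEWAKE_RENDER);
		if (--dev_priv->uncore.fw_mediacount == 0)
			dev_priv->uncore.funcs.force_wake_put(dev_priv,
							      FORCEWAKE_MEDIA);
	} else {
		/* Non-chv platforms keep using the single global counter. */
		if (--dev_priv->uncore.forcewake_count == 0)
			dev_priv->uncore.funcs.force_wake_put(dev_priv,
							      FORCEWAKE_ALL);
	}
	spin_unlock_irqrestore(&dev_priv->uncore.lock, flags);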
> > >>> This will wake both wells. Is that needed or should we just pick
> > >>> one based on the ring?
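
Purely to illustrate the "pick one based on the ring" idea, the get side could instead look roughly like this (a sketch, assuming RCS sits in the render well and the remaining rings in the media well; not what the patch above does):

	if (IS_CHERRYVIEW(dev_priv->dev)) {
		/* Wake only the well the submitting ring lives in. */
		if (ring->id == RCS) {
			if (dev_priv->uncore.fw_rendercount++ == 0)
				dev_priv->uncore.funcs.force_wake_get(dev_priv,
								      FORCEWAKE_RENDER);
		} else {
			if (dev_priv->uncore.fw_mediacount++ == 0)
				dev_priv->uncore.funcs.force_wake_get(dev_priv,
								      FORCEWAKE_MEDIA);
		}
	}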
> > >> Also, unlike what the comment says, runtime_pm_get() can't sleep here
> > >> since someone must already be holding a reference, otherwise we surely
> > >> couldn't go writing any registers. So in theory we should be able to
> > >> call gen6_gt_force_wake_get() here, but maybe that would trigger a
> > >> might_sleep() warning. The current forcewake code duplication
> > >> (esp. outside intel_uncore.c) is rather unfortunate and I'd like
> > >> to see it killed off. Maybe we just need to pull the rpm get/put
> > >> outside gen6_gt_force_wake_get()? I never really liked hiding it
> > >> there anyway.
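
If the rpm get/put really were pulled out of gen6_gt_force_wake_get(), with the runtime pm reference held by whoever created the request, the submit path could in principle just reuse the common helper instead of open-coding the counters. Roughly (a sketch, not tested):

	/* rpm reference assumed to be held by the request owner */
	gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);
	/* ... the ELSP writes ... */
	gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);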
> > > Yeah, this is just broken design. And if you look at the other
> > > mechanism we have for tracking outstanding GPU work (requests), then
> > > it's not needed at all.
> > >
> > > But I'm not sure what the priority of the "rework execlists to use
> > > requests" task is, or when (if ever) that will happen. Jesse is the
> > > arbiter for this stuff anyway, so adding him.
> > > -Daniel
> >
> > Hmm, agreed. Do we have a reworked execlists yet? The reason I added
> > this: on chv, when I enable execlists, the incorrect forcewake count
> > causes multiple entries into forcewake_get, resulting in "*ERROR*:
> > Timed out waiting for forcewake old ack to clear" and a hang.
>
> I'm hoping we can get execlists reworked on top of the request/seqno
> stuff shortly after it lands, but I don't think that's a reason to
> block this fix, since Chris is still busy fixing up the request
> changes.
Queued for -next, thanks for the patch.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch