[PATCH 11/11] drm/msm: Implement preemption for A5XX targets
Jordan Crouse
jcrouse at codeaurora.org
Wed Feb 8 23:00:36 UTC 2017
On Wed, Feb 08, 2017 at 12:30:08PM -0800, Stephen Boyd wrote:
> On 02/06/2017 09:39 AM, Jordan Crouse wrote:
> > diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
> > new file mode 100644
> > index 0000000..348ead7
> > --- /dev/null
> > +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
> > @@ -0,0 +1,367 @@
> >
> > +/*
> > + * Try to transition the preemption state from old to new. Return
> > + * true on success or false if the original state wasn't 'old'
> > + */
> > +static inline bool try_preempt_state(struct a5xx_gpu *a5xx_gpu,
> > + enum preempt_state old, enum preempt_state new)
> > +{
> > + enum preempt_state cur = atomic_cmpxchg(&a5xx_gpu->preempt_state,
> > + old, new);
> > +
> > + return (cur == old);
> > +}
> > +
> > +/*
> > + * Force the preemption state to the specified state. This is used in cases
> > + * where the current state is known and won't change
> > + */
> > +static inline void set_preempt_state(struct a5xx_gpu *gpu,
> > + enum preempt_state new)
> > +{
> > + /* atomic_set() doesn't automatically do barriers, so one before.. */
> > + smp_wmb();
> > + atomic_set(&gpu->preempt_state, new);
> > + /* ... and one after */
> > + smp_wmb();
>
> Should these smp_wmb()s be replaced with smp_mb__before_atomic() and
> smp_mb__after_atomic()? The latter is stronger (mb() instead of wmb())
> but otherwise that's typically what is used with atomics. Also, it would
> be nice if the comments described what we're synchronizing with.
Yep - smp_mb__before_atomic() / smp_mb__after_atomic() are more correct to
use here. We are synchronizing with the other cores that check the state
machine in try_preempt_state() and in the interrupt handler. I'll clarify
that in the comments.
>
> > +
> > +static void a5xx_preempt_worker(struct work_struct *work)
> > +{
> > + struct a5xx_gpu *a5xx_gpu =
> > + container_of(work, struct a5xx_gpu, preempt_work);
> > + struct msm_gpu *gpu = &a5xx_gpu->base.base;
> > + struct drm_device *dev = gpu->dev;
> > + struct msm_drm_private *priv = dev->dev_private;
> > +
> > + if (atomic_read(&a5xx_gpu->preempt_state) == PREEMPT_COMPLETE) {
> > + uint32_t status = gpu_read(gpu,
> > + REG_A5XX_CP_CONTEXT_SWITCH_CNTL);
>
> Why does this worker check the status again? The irq may fire but the
> hardware is still indicating "pending"? And why do we fork off this code
> to a workqueue? Can't we do this stuff in the irq handler or timer
> handler and drop this worker?
That is a great point - I don't think we actually need the worker here. I
don't believe we need a mutex for anything at this point; the hang recovery
will need one, but recovery gets scheduled on its own work item anyway, so
the worker is unneeded. This can be handled directly in the interrupt
handler.
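If the worker goes away, the completion check could move into the
preemption interrupt handler, roughly like this (a sketch only - it assumes
PREEMPT_TRIGGERED/PREEMPT_PENDING states from elsewhere in the patch that
aren't visible in the hunk above):

static void a5xx_preempt_irq(struct msm_gpu *gpu)
{
	struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(to_adreno_gpu(gpu));
	struct drm_device *dev = gpu->dev;
	struct msm_drm_private *priv = dev->dev_private;
	uint32_t status;

	if (!try_preempt_state(a5xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
		return;

	/* The CP signaled completion, so stop the watchdog timer */
	del_timer(&a5xx_gpu->preempt_timer);

	status = gpu_read(gpu, REG_A5XX_CP_CONTEXT_SWITCH_CNTL);
	if (status) {
		/* The hardware still says "pending" - trigger recovery */
		set_preempt_state(a5xx_gpu, PREEMPT_FAULTED);
		dev_err(dev->dev, "%s: Preemption failed to complete\n",
			gpu->name);
		queue_work(priv->wq, &gpu->recover_work);
		return;
	}

	a5xx_gpu->cur_ring = a5xx_gpu->next_ring;
	a5xx_gpu->next_ring = NULL;

	update_wptr(gpu, a5xx_gpu->cur_ring);

	set_preempt_state(a5xx_gpu, PREEMPT_NONE);
}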
> > +
> > + if (status == 0) {
> > + del_timer(&a5xx_gpu->preempt_timer);
> > + a5xx_gpu->cur_ring = a5xx_gpu->next_ring;
> > + a5xx_gpu->next_ring = NULL;
> > +
> > + update_wptr(gpu, a5xx_gpu->cur_ring);
> > +
> > + set_preempt_state(a5xx_gpu, PREEMPT_NONE);
> > + return;
> > + }
> > +
> > + dev_err(dev->dev, "%s: Preemption failed to complete\n",
> > + gpu->name);
> > + } else if (atomic_read(&a5xx_gpu->preempt_state) == PREEMPT_FAULTED)
> > + dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
> > + else
> > + return;
> > +
> > + /* Trigger recovery */
> > + queue_work(priv->wq, &gpu->recover_work);
> > +}
> > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > index f5118ad..4fcccd4 100644
> > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > @@ -334,6 +334,11 @@ static inline void adreno_gpu_write64(struct adreno_gpu *gpu,
> > adreno_gpu_write(gpu, hi, upper_32_bits(data));
> > }
> >
> > +static inline uint32_t get_wptr(struct msm_ringbuffer *ring)
>
> const struct msm_ringbuffer?
The msm_ringbuffer is actively being modified, so we would have to cast
away the const somewhere - I can't tell if there would be a compiler
advantage or not.
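For illustration, the const version would look like this (body assumed, not
part of the hunk above; ring->cur advances as commands are written, which
is where the casting concern comes from):

static inline uint32_t get_wptr(const struct msm_ringbuffer *ring)
{
	/* Pure read: wptr expressed in dwords relative to the ring base */
	return ring->cur - ring->start;
}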
Jordan
--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project