[PATCH 4/5] drm/panthor: Be robust against resume failures

Boris Brezillon boris.brezillon at collabora.com
Thu Nov 14 11:24:13 UTC 2024


On Thu, 14 Nov 2024 10:51:01 +0000
Steven Price <steven.price at arm.com> wrote:

> On 13/11/2024 15:42, Boris Brezillon wrote:
> > When the runtime PM resume callback returns an error, it puts the device
> > in a state where it can't be resumed anymore. Make sure we can recover
> > from such transient failures by calling pm_runtime_set_suspended()
> > explicitly after a pm_runtime_resume_and_get() failure.
> > 
> > Signed-off-by: Boris Brezillon <boris.brezillon at collabora.com>
> > ---
> >  drivers/gpu/drm/panthor/panthor_device.c |  1 +
> >  drivers/gpu/drm/panthor/panthor_device.h | 17 +++++++++++++++++
> >  drivers/gpu/drm/panthor/panthor_drv.c    |  2 +-
> >  drivers/gpu/drm/panthor/panthor_sched.c  |  4 ++--
> >  4 files changed, 21 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
> > index 353f3aabef42..d3276b936141 100644
> > --- a/drivers/gpu/drm/panthor/panthor_device.c
> > +++ b/drivers/gpu/drm/panthor/panthor_device.c
> > @@ -486,6 +486,7 @@ int panthor_device_resume(struct device *dev)
> >  
> >  err_set_suspended:
> >  	atomic_set(&ptdev->pm.state, PANTHOR_DEVICE_PM_STATE_SUSPENDED);
> > +	atomic_set(&ptdev->pm.recovery_needed, 1);
> >  	return ret;
> >  }
> >  
> > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
> > index 0e68f5a70d20..cc74e99e53f9 100644
> > --- a/drivers/gpu/drm/panthor/panthor_device.h
> > +++ b/drivers/gpu/drm/panthor/panthor_device.h
> > @@ -9,6 +9,7 @@
> >  #include <linux/atomic.h>
> >  #include <linux/io-pgtable.h>
> >  #include <linux/regulator/consumer.h>
> > +#include <linux/pm_runtime.h>
> >  #include <linux/sched.h>
> >  #include <linux/spinlock.h>
> >  
> > @@ -180,6 +181,9 @@ struct panthor_device {
> >  		 * is suspended.
> >  		 */
> >  		struct page *dummy_latest_flush;
> > +
> > +		/** @recovery_needed: True when a resume attempt failed. */
> > +		atomic_t recovery_needed;
> >  	} pm;
> >  
> >  	/** @profile_mask: User-set profiling flags for job accounting. */
> > @@ -243,6 +247,19 @@ int panthor_device_mmap_io(struct panthor_device *ptdev,
> >  int panthor_device_resume(struct device *dev);
> >  int panthor_device_suspend(struct device *dev);
> >  
> > +static inline int panthor_device_resume_and_get(struct panthor_device *ptdev)
> > +{
> > +	int ret = pm_runtime_resume_and_get(ptdev->base.dev);
> > +
> > +	/* If the resume failed, we need to clear the runtime_error, which
> > +	 * can be done by forcing the RPM state to suspended.
> > +	 */
> > +	if (ret && atomic_cmpxchg(&ptdev->pm.recovery_needed, 1, 0) == 1)  
> 
> I'm unclear what this atomic achieves. At first glance it appears
> pointless: with this change if panthor_device_resume() fails then
> recovery_needed is set to 1. So logically if ret != 0 then also
> recovery_needed == 1.
> 
> My second thought was: is this to avoid races? If multiple threads are
> calling this then only one will win the cmpxchg and call
> pm_runtime_set_suspended(). But it's not safe - it's quite possible for
> the first thread to get past the cmpxchg and be suspended before the
> second thread comes along and reaches the same point - leading to
> multiple calls to pm_runtime_set_suspended().

Yes, it was there to avoid the race. I don't think there's a risk of
multiple threads calling pm_runtime_set_suspended() without actually
needing such a call, because we won't reach panthor_device_resume()
until the runtime_error has been cleared (runtime PM bails out early
with -EINVAL). So, in practice, there's no way two threads can see
recovery_needed=1 unless the error has already been cleared by the
other thread and the second thread then triggered another resume error,
in which case the second pm_runtime_set_suspended() call is legit.
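
For reference, here's roughly what the whole helper looks like (a
sketch, since the tail of the hunk is cut off by the reply above; the
important bit is that pm_runtime_set_suspended() clears
power.runtime_error, so the next call can reach panthor_device_resume()
again):

static inline int panthor_device_resume_and_get(struct panthor_device *ptdev)
{
	int ret = pm_runtime_resume_and_get(ptdev->base.dev);

	/* A failed resume leaves power.runtime_error set, and runtime PM
	 * rejects further resume attempts until it's cleared. Forcing the
	 * RPM state back to suspended clears the error, and the cmpxchg
	 * makes sure only one of the racing callers does it.
	 */
	if (ret && atomic_cmpxchg(&ptdev->pm.recovery_needed, 1, 0) == 1)
		pm_runtime_set_suspended(ptdev->base.dev);

	return ret;
}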

But now that you mention it, it indeed doesn't prevent the second
thread from calling pm_runtime_resume_and_get() before the PM
runtime_error has been cleared, leading to potential spurious errors,
so that's annoying.
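
To spell out the interleaving I'm worried about (illustrative only, not
something this patch claims to handle):

  Thread A                                Thread B
  --------                                --------
  pm_runtime_resume_and_get() fails,
  runtime_error set, recovery_needed=1
                                          pm_runtime_resume_and_get()
                                            -> -EINVAL (runtime_error
                                               still set)
  cmpxchg(recovery_needed, 1 -> 0) wins,
  pm_runtime_set_suspended() clears
  the error
                                          cmpxchg loses, so the caller
                                          gets a spurious -EINVAL even
                                          though a resume could now
                                          succeed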

