[Intel-gfx] [PATCH 1/6] drm: Add vblank prepare and unprepare hooks.
Daniel Vetter
daniel at ffwll.ch
Thu Aug 4 08:49:09 UTC 2016
On Wed, Aug 03, 2016 at 02:33:34PM -0700, Rodrigo Vivi wrote:
> This will allow drivers to control specific power saving
> feature and power domains when dealing with vblanks.
>
> Vblanks code are protected by spin_locks where we can't
> have anything that can sleep. While power saving features
> and power domain code have mutexes to control the states.
>
> Mutex can sleep so they cannot be used inside spin lock areas.
> So the easiest way to deal with them currently is to add these
> prepare hook for pre enabling vblanks
> and unprepare one for post disabling them.
>
> Let's introduce this optional prepare and unprepare
> hooks so drivers can deal with cases like this and any other
> case that should require sleeping codes interacting with vblanks.
>
> v2: - Rebase after a split on drm_irq.h.
> - Use refcount to minimize the prepare/unprepare calls to
> only when enabling or disabling the vblank irq. (DK's)
> - Improved doc.
> - Fix the negative unprepare.count case.
>
> Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan at intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
> ---
> drivers/gpu/drm/drm_irq.c | 37 ++++++++++++++++++++++++++++++++++++-
> include/drm/drmP.h | 43 +++++++++++++++++++++++++++++++++++++++++++
> include/drm/drm_irq.h | 20 ++++++++++++++++++++
> 3 files changed, 99 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> index 77f357b..d0d1dde 100644
> --- a/drivers/gpu/drm/drm_irq.c
> +++ b/drivers/gpu/drm/drm_irq.c
> @@ -339,6 +339,8 @@ void drm_vblank_cleanup(struct drm_device *dev)
> drm_core_check_feature(dev, DRIVER_MODESET));
>
> del_timer_sync(&vblank->disable_timer);
Needs a schedule_work here, since if you delete the timer and then unload
you'll leak all the pending unprepare calls, which will upset a lot of
things at least on module unload.
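A rough, untested kernel-style sketch of what Daniel is asking for (names follow the patch; the exact placement is an assumption, not his patch):

```c
/* Sketch (untested): after killing the disable timer, kick the worker
 * once more and wait for it, so any unprepare calls still accounted in
 * the counter are drained before the vblank state is freed. */
del_timer_sync(&vblank->disable_timer);
if (atomic_read(&vblank->unprepare.counter) > 0)
	schedule_work(&vblank->unprepare.work);
flush_work(&vblank->unprepare.work);
```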
> +
> + flush_work(&vblank->unprepare.work);
> }
>
> kfree(dev->vblank);
> @@ -347,6 +349,24 @@ void drm_vblank_cleanup(struct drm_device *dev)
> }
> EXPORT_SYMBOL(drm_vblank_cleanup);
>
> +static void drm_vblank_unprepare_work_fn(struct work_struct *work)
> +{
> + struct drm_vblank_crtc *vblank;
> + struct drm_device *dev;
> +
> + vblank = container_of(work, typeof(*vblank), unprepare.work);
> + dev = vblank->dev;
> +
> + if (atomic_read(&vblank->unprepare.counter) == 0)
> + return;
while (atomic_read > 0) {
/* do stuff */
atomic_dec();
}
is simpler. Plus this needs a big comment that the worker here is the
_only_ code which ever decrements the counter. Without that this would be
racy.
> +
> + do {
> + if (dev->driver->unprepare_vblank &&
> + atomic_read(&vblank->refcount) == 0)
> + dev->driver->unprepare_vblank(dev, vblank->pipe);
> + } while (!atomic_dec_and_test(&vblank->unprepare.counter));
> +}
> +
> /**
> * drm_vblank_init - initialize vblank support
> * @dev: DRM device
> @@ -380,6 +400,9 @@ int drm_vblank_init(struct drm_device *dev, unsigned int num_crtcs)
> setup_timer(&vblank->disable_timer, vblank_disable_fn,
> (unsigned long)vblank);
> seqlock_init(&vblank->seqlock);
> +
> + INIT_WORK(&vblank->unprepare.work,
> + drm_vblank_unprepare_work_fn);
> }
>
> DRM_INFO("Supports vblank timestamp caching Rev 2 (21.10.2013).\n");
> @@ -1117,6 +1140,10 @@ static int drm_vblank_get(struct drm_device *dev, unsigned int pipe)
> if (WARN_ON(pipe >= dev->num_crtcs))
> return -EINVAL;
>
> + if (dev->driver->prepare_vblank &&
> + atomic_read(&vblank->refcount) == 0)
This is racy. If you don't want to prepare multiple times then you need
another atomic counter. Tbh that seems overkill, I'm not sure at all why
we should avoid multiple prepare/unprepare calls.
> + dev->driver->prepare_vblank(dev, pipe);
> +
> spin_lock_irqsave(&dev->vbl_lock, irqflags);
> /* Going from 0->1 means we have to enable interrupts again */
> if (atomic_add_return(1, &vblank->refcount) == 1) {
> @@ -1129,6 +1156,10 @@ static int drm_vblank_get(struct drm_device *dev, unsigned int pipe)
> }
> spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
>
> + if (dev->driver->unprepare_vblank &&
> + atomic_read(&vblank->refcount) == 0)
This is racy. You need to check whether _this_ vblank_get call failed -
after you dropped the lock someone else might have successfully completed
a vblank_get breaking your code here.
Note that atomic_t only means the counter stays consistent. It makes
_zero_ guarantees about ordering or anything else going on in the system,
and in general you need a comment explaining why you can access it and
expect some consistency. In the worker you've been lucky, here it's
broken. Please audit your usage of atomic_t carefully.
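One way to read this, as a rough untested kernel-style sketch (structure is an assumption, only the names come from the patch): act on this call's own result instead of re-reading the shared refcount after the lock is dropped.

```c
/* Sketch, not a tested patch: undo the prepare only when *this*
 * drm_vblank_get() failed, using the local ret. Re-reading
 * vblank->refcount after dropping the lock is racy: another caller
 * may have completed a vblank_get in between. */
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);

if (ret && dev->driver->unprepare_vblank)
	dev->driver->unprepare_vblank(dev, pipe);

return ret;
```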
> + dev->driver->unprepare_vblank(dev, pipe);
> +
> return ret;
> }
>
> @@ -1178,6 +1209,11 @@ static void drm_vblank_put(struct drm_device *dev, unsigned int pipe)
> mod_timer(&vblank->disable_timer,
> jiffies + ((drm_vblank_offdelay * HZ)/1000));
> }
> +
> + if (atomic_read(&vblank->refcount) == 0) {
> + atomic_inc(&vblank->unprepare.counter);
> + schedule_work(&vblank->unprepare.work);
So prepare is done once, but unprepare every time we put. I really don't
think the change dk suggested was all that good ;-)
Also, this needs a check for the unprepare hook, we don't want to incur
the overhead for drivers which don't need it.
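A minimal sketch of that check, under the assumption the rest of the put path stays as in the patch:

```c
/* Sketch (untested): only queue the deferred unprepare when the driver
 * actually implements the hook, so drivers without it pay nothing. */
if (dev->driver->unprepare_vblank &&
    atomic_read(&vblank->refcount) == 0) {
	atomic_inc(&vblank->unprepare.counter);
	schedule_work(&vblank->unprepare.work);
}
```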
> + }
> }
>
> /**
> @@ -1603,7 +1639,6 @@ static int drm_queue_vblank_event(struct drm_device *dev, unsigned int pipe,
> }
>
> spin_unlock_irqrestore(&dev->event_lock, flags);
> -
> return 0;
>
> err_unlock:
> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> index d377865..c6ca061 100644
> --- a/include/drm/drmP.h
> +++ b/include/drm/drmP.h
> @@ -422,6 +422,49 @@ struct drm_driver {
> u32 (*get_vblank_counter) (struct drm_device *dev, unsigned int pipe);
>
> /**
> + * prepare_vblank - Optional prepare vblank hook.
> + * @dev: DRM device
> + * @pipe: counter to fetch
> + *
> + * Drivers that need to handle any kind of mutex or any other sleeping
> + * code in combination with vblanks need to implement this hook
> + * that will be called before drm_vblank_get spin_lock gets.
> + *
> + * Prepare/Unprepare was designed to be able to sleep so they are not
> + * free from concurrence. There might be few simultaneous calls.
> + * This concurrence needs to be handled on the driver side that is
> + * implementing these hooks. Probably with protected get/put counters.
"This function can be called concurrently." Says all the same things with
fewer words ;-)
But note that if you decide to implement the "prepare/unprepare only once"
logic (without bugs), then this is not true.
> + *
> + * If this is used with sleeping code you need to make sure you are
> + * not calling drm_crtc_vblank_get inside spinlock areas or with local
> + * irq disabled.
Imo this comment should be moved to drm_crtc_vblank_get under something
like this
Note:
Drivers which use the prepare_vblank hook and which can block in their
callback must ensure that they never call drm_crtc_vblank_get() from
atomic contexts. The DRM core and helpers already make this guarantee.
Also I think this should reference the sanitize_counter function as an
example.
> + *
> + */
> + void (*prepare_vblank) (struct drm_device *dev, unsigned int pipe);
> +
> + /**
> + * unprepare_vblank - Optional unprepare vblank hook.
> + * @dev: DRM device
> + * @pipe: counter to fetch
> + *
> + * Drivers that need to handle any kind of mutex or any other sleeping
> + * code in combination with vblanks need to implement this hook
> + * that will be called in a work queue to be executed after spin lock
> + * areas of drm_vblank_put.
> + *
> + * Prepare/Unprepare was designed to be able to sleep so they are not
> + * free from concurrence. There might be few simultaneous calls.
> + * This concurrence needs to be handled on the driver side that is
> + * implementing these hooks. Probably with protected get/put counters.
This is not true, you only have one worker, there's no concurrency.
> + *
> + * Unprepare is called from a workqueue because asynchronous
asynchronous_ly_
> + * drm_crtc_vblank_put will be in spinlock areas. So it is safe to
s/will/can/ s/spinlock areas/atomic context/.
Also please add () at the end of every function you reference, to enable
the kerneldoc cross-linking. Please also check what make htmldocs
generates to make sure it's all good.
> + * keep drm_crtc_vblank_put in areas that cannot spleep.
> + */
> + void (*unprepare_vblank) (struct drm_device *dev, unsigned int pipe);
> +
> +
> + /**
> * enable_vblank - enable vblank interrupt events
> * @dev: DRM device
> * @pipe: which irq to enable
> diff --git a/include/drm/drm_irq.h b/include/drm/drm_irq.h
> index 2401b14..93a9e9d 100644
> --- a/include/drm/drm_irq.h
> +++ b/include/drm/drm_irq.h
> @@ -45,6 +45,20 @@ struct drm_pending_vblank_event {
> };
>
> /**
> + * struct drm_vblank_unprepare - unprepare vblank
> + */
> +struct drm_vblank_unprepare {
> + /**
> + * @work: Delayed work to handle unprepare out of spinlock areas
> + */
> + struct work_struct work;
> + /**
> + * @counter: Number of vblanks handled
> + */
> + atomic_t counter;
> +};
> +
> +/**
> * struct drm_vblank_crtc - vblank tracking for a CRTC
> *
> * This structure tracks the vblank state for one CRTC.
> @@ -128,6 +142,12 @@ struct drm_vblank_crtc {
> * disabling functions multiple times.
> */
> bool enabled;
> + /**
> + * @unprepare: Tracks the amount of drm_vblank_put handled that needs
> + * a delayed unprepare call so the unprepare can run out of protected
> + * areas where code can sleep.
> + */
> + struct drm_vblank_unprepare unprepare;
Not sure why the substruct, I'd just call them unprepare_work and
unprepare_counter, with less kernel-doc noise. We do substructs a lot in
the i915 device struct because it's so huge, and to be able to split
things up a bit. But in general a substruct means you need to jump around
more, and it makes it harder to find definitions using tools like cscope
or grep or similar.
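For comparison, the flat layout Daniel suggests would look roughly like this (field names are his suggestion; the surrounding fields are elided):

```c
/* Sketch of drm_vblank_crtc with flat fields instead of the
 * drm_vblank_unprepare substruct (untested, placement assumed): */

/**
 * @unprepare_work: runs driver->unprepare_vblank() outside atomic
 * context, after deferred drm_vblank_put() calls.
 */
struct work_struct unprepare_work;
/**
 * @unprepare_counter: number of pending deferred unprepare calls.
 */
atomic_t unprepare_counter;
```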
Cheers, Daniel
> };
>
> extern int drm_irq_install(struct drm_device *dev, int irq);
> --
> 2.4.3
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch