[PATCH v2] drm: do not sleep on vblank while holding a mutex

Daniel Vetter daniel at ffwll.ch
Fri Oct 28 17:58:19 PDT 2011


On Fri, Oct 28, 2011 at 05:42:23PM -0400, Ilija Hadzic wrote:
> drm_wait_vblank must be DRM_UNLOCKED because otherwise it
> will grab the drm_global_mutex and then go to sleep until the vblank
> event it is waiting for arrives. That can wreak havoc in the windowing system
> because if one process issues this ioctl, it will block all other
> processes for the duration of all vblanks between the current and the
> one it is waiting for. In some cases it can block the entire windowing
> system. So, make this ioctl DRM_UNLOCKED, wrap the remaining
> (very short) critical section with dev->vbl_lock spinlock, and
> add a comment to the code explaining what we are protecting with
> the lock.
> 
> v2: incorporate comments received from Daniel Vetter and
>     Michel Daenzer.
> 
> Signed-off-by: Ilija Hadzic <ihadzic at research.bell-labs.com>
> ---
>  drivers/gpu/drm/drm_drv.c |    2 +-
>  drivers/gpu/drm/drm_irq.c |   14 +++++++++++++-
>  2 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index e159f17..c990dab 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -124,7 +124,7 @@ static struct drm_ioctl_desc drm_ioctls[] = {
>  	DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_sg_alloc_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
>  	DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
>  
> -	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, 0),
> +	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
>  
>  	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_modeset_ctl, 0),
>  
> diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
> index 3830e9e..c8b4da8 100644
> --- a/drivers/gpu/drm/drm_irq.c
> +++ b/drivers/gpu/drm/drm_irq.c
> @@ -1160,6 +1160,7 @@ int drm_wait_vblank(struct drm_device *dev, void *data,
>  	union drm_wait_vblank *vblwait = data;
>  	int ret = 0;
>  	unsigned int flags, seq, crtc, high_crtc;
> +	unsigned long irqflags;
>  
>  	if ((!drm_dev_to_irq(dev)) || (!dev->irq_enabled))
>  		return -EINVAL;
> @@ -1193,6 +1194,13 @@ int drm_wait_vblank(struct drm_device *dev, void *data,
>  	}
>  	seq = drm_vblank_count(dev, crtc);
>  
> +	/* the lock protects this section from itself executed in */
> +	/* the context of another PID, ensuring that the process that */
> +	/* wrote dev->last_vblank_wait is really the last process */
> +	/* that was here; processes waiting on different CRTCs */
> +	/* do not need to be interlocked, but rather than introducing */
> +	/* a new per-crtc lock, we reuse vbl_lock for simplicity */

Multiline comments are done with one pair of /* */, see CodingStyle
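For reference, the CodingStyle form would collapse those six comment pairs
into a single block, something like (wording kept from the patch):

```
/*
 * The lock protects this section from itself executed in the context
 * of another PID, ensuring that the process that wrote
 * dev->last_vblank_wait is really the last process that was here;
 * processes waiting on different CRTCs do not need to be interlocked,
 * but rather than introducing a new per-crtc lock, we reuse vbl_lock
 * for simplicity.
 */
```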

> +	spin_lock_irqsave(&dev->vbl_lock, irqflags);
>  	switch (vblwait->request.type & _DRM_VBLANK_TYPES_MASK) {
>  	case _DRM_VBLANK_RELATIVE:
>  		vblwait->request.sequence += seq;
> @@ -1200,12 +1208,15 @@ int drm_wait_vblank(struct drm_device *dev, void *data,
>  	case _DRM_VBLANK_ABSOLUTE:
>  		break;
>  	default:
> +		spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
>  		ret = -EINVAL;
>  		goto done;
>  	}
>  
> -	if (flags & _DRM_VBLANK_EVENT)
> +	if (flags & _DRM_VBLANK_EVENT) {
> +		spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
>  		return drm_queue_vblank_event(dev, crtc, vblwait, file_priv);
> +	}
>  
>  	if ((flags & _DRM_VBLANK_NEXTONMISS) &&
>  	    (seq - vblwait->request.sequence) <= (1<<23)) {
> @@ -1215,6 +1226,7 @@ int drm_wait_vblank(struct drm_device *dev, void *data,
>  	DRM_DEBUG("waiting on vblank count %d, crtc %d\n",
>  		  vblwait->request.sequence, crtc);
>  	dev->last_vblank_wait[crtc] = vblwait->request.sequence;
> +	spin_unlock_irqrestore(&dev->vbl_lock, irqflags);

Imo the only thing we need to protect here is dev->last_vblank_wait
against concurrent access from drm_vblank_info in drm_info.c

But the locking there is horribly broken: It grabs dev->struct_mutex, but
that protects exactly nothing, afaics:
- dev->vblank_refcount is an atomic
- vblank_count is protected by the handrolled seqlock in
  drm_vblank_count
- dev->last_vblank_wait was protected by the global lock, but is now
  protected by the dev->vbl_spinlock on the write side
- dev->vblank_inmodeset is protected by the global lock for ums setups
  (locked ioctl) and by dev->mode_config.mutex for kms

And I just don't see anything else you might protect here - vblwait is
just the ioctl data copied in by the drm core ioctl helpers.

So I don't quite see what you protect here with that spinlock. Imo
- either make dev->last_vblank_wait into an atomic_t
- or just don't care about it, locking of vblank_info is already horribly
  broken.
In both cases you don't need the dev->vbl_lock spinlock.

I personally vote for no additional locking at all here and just drop the
global lock. Or maybe I'm blind and we need this. In that case, please
clue me up.

>  	DRM_WAIT_ON(ret, dev->vbl_queue[crtc], 3 * DRM_HZ,
>  		    (((drm_vblank_count(dev, crtc) -
>  		       vblwait->request.sequence) <= (1 << 23)) ||

One line below there's the curious case of us rechecking dev->irq_enabled
without any locking in place. But afaics this is only changed by the
driver at setup and teardown when we are guaranteed (hopefully) that
there's no open fd and hence no way to call an ioctl. So checking once at
the beginning should be good enough. Whatever.
-Daniel

-- 
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48

