[PATCH 2/2] drm/i915/gvt: Fix inconsistent locks holding sequence

Dong, Chuanxiao chuanxiao.dong at intel.com
Tue Jun 20 10:39:56 UTC 2017



> -----Original Message-----
> From: Zhenyu Wang [mailto:zhenyuw at linux.intel.com]
> Sent: Tuesday, June 20, 2017 6:22 PM
> To: Dong, Chuanxiao <chuanxiao.dong at intel.com>
> Cc: intel-gvt-dev at lists.freedesktop.org
> Subject: Re: [PATCH 2/2] drm/i915/gvt: Fix inconsistent locks holding sequence
> 
> On 2017.06.19 17:04:11 +0800, Chuanxiao Dong wrote:
> > There are two conflicting locking sequences.
> >
> > One is in the thread started by a vfio ioctl to do the iommu
> > unmapping. Its locking sequence is:
> > 	down_read(&group_lock) ---> mutex_lock(&cache_lock)
> >
> > The other is in the vfio release thread, which unpins all the
> > cached pages. Its locking sequence is:
> > 	mutex_lock(&cache_lock) ---> down_read(&group_lock)
> >
> > The cache_lock only protects the rb tree of cache nodes; the vfio
> > unpin itself doesn't require it. So move the vfio unpin out of the
> > cache_lock-protected region to break the circular dependency.
> >
> > Fixes: f30437c5e7bf ("drm/i915/gvt: add KVMGT support")
> > Signed-off-by: Chuanxiao Dong <chuanxiao.dong at intel.com>
> > ---
> >  drivers/gpu/drm/i915/gvt/kvmgt.c | 25 ++++++++++++-------------
> >  1 file changed, 12 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > index 88a4d8f..1405c60 100644
> > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > @@ -192,13 +192,6 @@ static void gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
> >  	kfree(new);
> >  }
> >
> > -static void __gvt_cache_remove_entry(struct intel_vgpu *vgpu,
> > -				struct gvt_dma *entry)
> > -{
> > -	rb_erase(&entry->node, &vgpu->vdev.cache);
> > -	kfree(entry);
> > -}
> > -
> >  static void intel_vgpu_unpin_work(struct work_struct *work)
> >  {
> >  	struct intel_vgpu *vgpu = container_of(work, struct intel_vgpu,
> > @@ -260,16 +253,22 @@ static void gvt_cache_destroy(struct intel_vgpu *vgpu)
> >  	struct device *dev = mdev_dev(vgpu->vdev.mdev);
> >  	unsigned long gfn;
> >
> > -	mutex_lock(&vgpu->vdev.cache_lock);
> > -	while ((node = rb_first(&vgpu->vdev.cache))) {
> > +	do {
> > +		mutex_lock(&vgpu->vdev.cache_lock);
> > +		node = rb_first(&vgpu->vdev.cache);
> > +		if (!node) {
> > +			mutex_unlock(&vgpu->vdev.cache_lock);
> > +			break;
> > +		}
> >  		dma = rb_entry(node, struct gvt_dma, node);
> >  		gvt_dma_unmap_iova(vgpu, dma->iova);
> > -		gfn = dma->gfn;
> > +		rb_erase(&dma->node, &vgpu->vdev.cache);
> > +		mutex_unlock(&vgpu->vdev.cache_lock);
> >
> > +		gfn = dma->gfn;
> >  		vfio_unpin_pages(dev, &gfn, 1);
> > -		__gvt_cache_remove_entry(vgpu, dma);
> > -	}
> > -	mutex_unlock(&vgpu->vdev.cache_lock);
> > +		kfree(dma);
> > +	} while (1);
> >  }
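
The two call chains in the commit message form a classic AB-BA inversion. A rough userspace analogue (hypothetical pthread code, not the kvmgt paths) shows its shape; the actual hang needs a third thread queuing a write on group_lock, since a fair rwsem makes a queued writer block new readers:

/*
 * Hypothetical sketch of the AB-BA inversion (pthread names, not the
 * kvmgt code).  Build: cc abba.c -lpthread
 */
#include <pthread.h>

static pthread_rwlock_t group_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* vfio-ioctl unmap path: read(group_lock) -> cache_lock */
static void *unmap_path(void *arg)
{
	pthread_rwlock_rdlock(&group_lock);
	pthread_mutex_lock(&cache_lock);
	/* ... iommu unmap + cache lookup would happen here ... */
	pthread_mutex_unlock(&cache_lock);
	pthread_rwlock_unlock(&group_lock);
	return NULL;
}

/* vfio release path: cache_lock -> read(group_lock) */
static void *release_path(void *arg)
{
	pthread_mutex_lock(&cache_lock);
	/* with a writer queued on group_lock, this read acquisition
	 * stalls while cache_lock is still held -- the other half of
	 * the deadlock */
	pthread_rwlock_rdlock(&group_lock);
	/* ... unpin cached pages would happen here ... */
	pthread_rwlock_unlock(&group_lock);
	pthread_mutex_unlock(&cache_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, unmap_path, NULL);
	pthread_create(&t2, NULL, release_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Two readers alone won't wedge; but once a writer queues on group_lock, the release path stalls in its read acquisition while holding cache_lock, the unmap path stalls on cache_lock while holding the read side, and the writer waits on both, so nobody makes progress.
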
> 
> For me, I'd prefer to change it like this:
> 
> while ((node = rb_first(...))) {
>       mutex_lock(&cache_lock);

In this case rb_first() fetches the node without cache_lock held, and we still want that lookup protected (a sketch of the possible race follows the quoted loop below).

Thanks
Chuanxiao

>       dma = rb_entry(node, struct gvt_dma, node);
>       gvt_dma_unmap_iova(vgpu, dma->iova);
>       gfn = dma->gfn;
>       __gvt_cache_remove_entry(vgpu, dma);
>       mutex_unlock(&vgpu->vdev.cache_lock);
>       vfio_unpin_pages(dev, &gfn, 1);
> }
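
To make the objection concrete: with rb_first() outside cache_lock, the lookup can race with any path that modifies the tree under the lock (gvt_cache_add() inserting, or the unmap path erasing and freeing an entry). A possible interleaving, with illustrative thread names:

    destroy thread                           unmap thread
    --------------                           ------------
    node = rb_first(&vgpu->vdev.cache);
                                             mutex_lock(&cache_lock);
                                             rb_erase() + kfree() the same entry
                                             mutex_unlock(&cache_lock);
    mutex_lock(&cache_lock);
    dma = rb_entry(node, ...);               /* 'node' now dangles: use-after-free */

Taking cache_lock before calling rb_first(), as the patch does, closes this window.
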
> 
> 
> --
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

