[PATCH v2 1/2] drm/i915/gvt: Fix guest vGPU hang caused by very high dma setup overhead

Zhenyu Wang zhenyuw at linux.intel.com
Tue Feb 13 02:28:06 UTC 2018


On 2018.02.13 08:26:53 +0800, Du, Changbin wrote:
> On Mon, Feb 12, 2018 at 11:17:29AM +0800, Zhenyu Wang wrote:
> > On 2018.02.09 19:14:00 +0800, Du, Changbin wrote:
> > > On Fri, Feb 09, 2018 at 11:14:38AM +0800, Zhenyu Wang wrote:
> > > > On 2018.02.07 10:55:41 +0800, changbin.du at intel.com wrote:
> > > > > From: Changbin Du <changbin.du at intel.com>
> > > [...]  
> > > > > +static void gvt_cache_init(struct intel_vgpu *vgpu)
> > > > > +{
> > > > > +	vgpu->vdev.gfn_cache = RB_ROOT;
> > > > > +	vgpu->vdev.dma_addr_cache = RB_ROOT;
> > > > > +	mutex_init(&vgpu->vdev.cache_lock);
> > > > > +}
> > > > > +
> > > > >  static void kvmgt_protect_table_init(struct kvmgt_guest_info *info)
> > > > >  {
> > > > >  	hash_init(info->ptable);
> > > > > @@ -489,13 +490,19 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
> > > > >  
> > > > >  	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
> > > > >  		struct vfio_iommu_type1_dma_unmap *unmap = data;
> > > > > -		unsigned long gfn, end_gfn;
> > > > > +		struct gvt_dma *entry;
> > > > > +		unsigned long size;
> > > > >  
> > > > > -		gfn = unmap->iova >> PAGE_SHIFT;
> > > > > -		end_gfn = gfn + unmap->size / PAGE_SIZE;
> > > > > +		mutex_lock(&vgpu->vdev.cache_lock);
> > > > > +		for (size = 0; size < unmap->size; size += PAGE_SIZE) {
> > > > > +			entry = __gvt_cache_find_dma_addr(vgpu, unmap->iova + size);
> > > > > +			if (!entry)
> > > > > +				continue;
> > > > 
> > > > I don't think this vfio unmap iova is related to our real hw dma address;
> > > > without vIOMMU it's just gpa, and we don't support vIOMMU now. Could you
> > > > double check? Two cache trees don't look necessary, but splitting out the
> > > > dma interface is fine.
> > > > 
> > > vIOMMU? This is called from the host vfio type1 iommu, not in the guest.
> > > Actually this also fixes a bug in kvmgt.
> > > 
> > > Two caches are needed: we need to search by both gfn and dma addr.
> > >
> > yeah, one cache is for the VFIO "iova", which is just gpa now; the other
> > caches the physical dma address for lookup at unpin time. But you use the
> > physical dma cache for the VFIO unmap here, which seems wrong to me.
> > 
> > And what specific bug do you mean, besides there being no active unmap,
> > only unmap at destroy time?
> >
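(For reference, the two-tree cache under discussion is roughly the sketch
below. This is only an illustration: apart from gfn_cache, dma_addr_cache,
cache_lock and __gvt_cache_find_dma_addr(), which appear in the patch, the
field and helper names are assumptions.)

struct gvt_dma {
	struct intel_vgpu *vgpu;
	struct rb_node gfn_node;	/* linked into vgpu->vdev.gfn_cache */
	struct rb_node dma_addr_node;	/* linked into vgpu->vdev.dma_addr_cache */
	gfn_t gfn;			/* guest page frame, i.e. the VFIO "iova"/gpa */
	dma_addr_t dma_addr;		/* host dma address from dma_map_page() */
};

/* Lookup by gfn; caller holds vgpu->vdev.cache_lock. */
static struct gvt_dma *__gvt_cache_find_gfn(struct intel_vgpu *vgpu, gfn_t gfn)
{
	struct rb_node *node = vgpu->vdev.gfn_cache.rb_node;

	while (node) {
		struct gvt_dma *itr = rb_entry(node, struct gvt_dma, gfn_node);

		if (gfn < itr->gfn)
			node = node->rb_left;
		else if (gfn > itr->gfn)
			node = node->rb_right;
		else
			return itr;
	}
	return NULL;
}

/* __gvt_cache_find_dma_addr() would walk dma_addr_cache the same way, keyed
 * on dma_addr, so that unpin can go from a dma address back to the entry. */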
> zhenyu,
> Thinking about it a little more, it seems we don't need to handle this
> notification at all. On the contrary, it is incorrect. KVMGT manages its own
> dma mappings, not those from vfio; the vfio framework doesn't know about the
> dma addresses mapped by kvmgt.
> 
> So I'd suggest removing this handler.
> 

We need to unpin the pages that were pinned when mapping, so I think we still
need to handle this notification.
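
(For illustration only, a rough sketch of such handling; the lookup is keyed
by gfn since the VFIO iova is just gpa without vIOMMU, and the helper name,
the dev argument and the single-page granularity are assumptions.)

static void __gvt_dma_unmap_one(struct intel_vgpu *vgpu, struct device *dev,
				unsigned long iova)
{
	unsigned long gfn = iova >> PAGE_SHIFT;
	struct gvt_dma *entry;

	mutex_lock(&vgpu->vdev.cache_lock);
	entry = __gvt_cache_find_gfn(vgpu, gfn);
	if (entry) {
		/* undo what the map path did: release the host dma mapping
		 * and unpin the guest page that vfio_pin_pages() pinned */
		dma_unmap_page(dev, entry->dma_addr, PAGE_SIZE,
			       DMA_BIDIRECTIONAL);
		vfio_unpin_pages(mdev_dev(vgpu->vdev.mdev), &gfn, 1);

		/* drop the entry from both trees */
		rb_erase(&entry->gfn_node, &vgpu->vdev.gfn_cache);
		rb_erase(&entry->dma_addr_node, &vgpu->vdev.dma_addr_cache);
		kfree(entry);
	}
	mutex_unlock(&vgpu->vdev.cache_lock);
}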

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

