[PATCH] drm/i915/gvt: Pin vgpu dma address before using
Zhenyu Wang
zhenyuw at linux.intel.com
Mon May 27 05:39:01 UTC 2019
On 2019.05.27 05:21:41 +0000, Zhang, Tina wrote:
>
>
> > -----Original Message-----
> > From: Zhenyu Wang [mailto:zhenyuw at linux.intel.com]
> > Sent: Friday, May 24, 2019 2:57 PM
> > To: Zhang, Tina <tina.zhang at intel.com>
> > Cc: intel-gvt-dev at lists.freedesktop.org; Yuan, Hang <hang.yuan at intel.com>
> > Subject: Re: [PATCH] drm/i915/gvt: Pin vgpu dma address before using
> >
> > On 2019.05.24 10:59:54 +0800, Tina Zhang wrote:
> > > Dma-buf display uses the vgpu dma address saved in the guest part of the
> > > GGTT table, which is updated by the vCPU thread. On the host side, when
> > > the dma address is used by the qemu ui thread, gvt-g must make sure the
> > > dma address is validated before letting it go to the HW. An invalid
> > > guest dma address can easily cause a DMA fault and hang the GPU.
> > >
> > > Signed-off-by: Tina Zhang <tina.zhang at intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/gvt/dmabuf.c    | 59 ++++++++++++++++++++++++++--
> > > drivers/gpu/drm/i915/gvt/hypercall.h | 2 +
> > > drivers/gpu/drm/i915/gvt/kvmgt.c | 27 +++++++++++++
> > > drivers/gpu/drm/i915/gvt/mpt.h | 14 +++++++
> > > 4 files changed, 99 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
> > > index b6d93175bddf..9db4328e9732 100644
> > > --- a/drivers/gpu/drm/i915/gvt/dmabuf.c
> > > +++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
> > > @@ -36,10 +36,28 @@
> > >
> > > #define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12))
> > >
> > > +static int vgpu_pin_dma_address(struct intel_vgpu *vgpu, unsigned long size,
> > > + dma_addr_t dma_addr)
> > > +{
> > > + int ret = 0;
> > > +
> > > + if (intel_gvt_hypervisor_dma_get_guest_page(vgpu, dma_addr))
> > > + ret = -ENOMEM;
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static void vgpu_unpin_dma_address(struct intel_vgpu *vgpu,
> > > + dma_addr_t dma_addr)
> > > +{
> > > + intel_gvt_hypervisor_dma_unmap_guest_page(vgpu, dma_addr);
> > > +}
> > > +
> > > static int vgpu_gem_get_pages(
> > > struct drm_i915_gem_object *obj)
> > > {
> > > struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
> > > + struct intel_vgpu *vgpu;
> > > struct sg_table *st;
> > > struct scatterlist *sg;
> > > int i, ret;
> > > @@ -50,6 +68,10 @@ static int vgpu_gem_get_pages(
> > > if (WARN_ON(!fb_info))
> > > return -ENODEV;
> > >
> > > + vgpu = fb_info->obj->vgpu;
> > > + if (WARN_ON(!vgpu))
> > > + return -ENODEV;
> > > +
> > > st = kmalloc(sizeof(*st), GFP_KERNEL);
> > > if (unlikely(!st))
> > > return -ENOMEM;
> > > @@ -62,21 +84,51 @@ static int vgpu_gem_get_pages(
> > > gtt_entries = (gen8_pte_t __iomem *)dev_priv->ggtt.gsm +
> > > (fb_info->start >> PAGE_SHIFT);
> > > for_each_sg(st->sgl, sg, fb_info->size, i) {
> > > + dma_addr_t dma_addr =
> > > + GEN8_DECODE_PTE(readq(&gtt_entries[i]));
> > > + if (vgpu_pin_dma_address(vgpu, PAGE_SIZE, dma_addr)) {
> > > + ret = -ENOMEM;
> >
> > What do we really want to track here? Is it that when the guest plane
> > flips to a new buffer, we need to make sure the previously exposed dmabuf
> > can't be accessed for display purposes?
> We want to make sure the dma address used by the host is valid. Since dma address map/unmap is handled by the vCPU thread, gvt-g needs to validate the dma address before the GPU HW uses it.
>
Yes, that's true for all guest ggtt mappings, not just for display.
>
> >
> > This only checks whether some guest ggtt range has been mapped with guest
> > pages; how does that translate to display buffer tracking? E.g. how do
> > you know fb_info->start is still a valid offset then?
> Only the guest OS fully understands the plane start address.
>
My question is how this should be resolved. We can detect guest display updates
from the plane register handler or from ggtt changes, so we can manage the
dmabuf lifecycle without issue. Instead of a hack like dma address checking,
where you don't even know whether that ggtt offset is still valid for display
(e.g. the guest may already be using it for other things, with a sane dma
mapping as well), we should manage the dmabuf lifecycle gracefully based on
guest display state.
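For reference, the pin-before-use scheme being debated reduces to a
reference-counted cache lookup: succeed only if the address is still mapped,
and take a reference so the mapping stays alive while the display uses it.
A minimal userspace sketch of that pattern follows (plain C; a toy int
refcount stands in for the kernel's kref, locking is elided, and all names
here are hypothetical illustrations, not the gvt code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for the kvmgt dma cache: each entry maps a guest dma
 * address to a refcount (the kernel uses a tree of gvt_dma entries
 * with a struct kref). */
struct dma_entry {
	uint64_t dma_addr;
	int ref;	/* stand-in for struct kref */
	int in_use;
};

#define CACHE_SIZE 8
static struct dma_entry cache[CACHE_SIZE];

/* Pin: succeed (and take a reference) only if the address is currently
 * mapped, so the caller never hands an unmapped address to the HW. */
static int dma_pin(uint64_t dma_addr)
{
	for (size_t i = 0; i < CACHE_SIZE; i++) {
		if (cache[i].in_use && cache[i].dma_addr == dma_addr) {
			cache[i].ref++;		/* kref_get() */
			return 0;
		}
	}
	return -1;				/* -ENOMEM in the patch */
}

/* Unpin: drop a reference; the last put releases the mapping. */
static void dma_unpin(uint64_t dma_addr)
{
	for (size_t i = 0; i < CACHE_SIZE; i++) {
		if (cache[i].in_use && cache[i].dma_addr == dma_addr) {
			if (--cache[i].ref == 0)	/* kref_put() */
				cache[i].in_use = 0;
			return;
		}
	}
}
```

The key property is the failure path: an address that was never mapped, or
whose mapping has since been released, is rejected instead of reaching the HW.
As noted above, this alone cannot tell whether the ggtt offset still belongs
to the display plane.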
> >
> > I think we need active tracking of guest plane changes, which we don't do
> > now given the polling mode of the gfx dmabuf interface. Besides plane mmio
> > reg tracking, we also need to track the ggtt range used for the display
> > plane, e.g. if the guest doesn't change the ggtt offset but only the
> > backing pages, we need to force an unpin and let a later access fetch the
> > new pages, etc.
> Makes sense. How about putting that idea into another patch? After all, the only thing this patch can do is validate the dma addresses obtained from guest ggtt entries.
> Thanks.
>
> BR,
> Tina
>
> >
> > > + goto out;
> > > + }
> > > sg->offset = 0;
> > > sg->length = PAGE_SIZE;
> > > - sg_dma_address(sg) =
> > > - GEN8_DECODE_PTE(readq(&gtt_entries[i]));
> > > sg_dma_len(sg) = PAGE_SIZE;
> > > + sg_dma_address(sg) = dma_addr;
> > > }
> > >
> > > __i915_gem_object_set_pages(obj, st, PAGE_SIZE);
> > > +out:
> > > + if (ret) {
> > > + dma_addr_t dma_addr;
> > >
> > > - return 0;
> > > + for_each_sg(st->sgl, sg, fb_info->size, i) {
> > > + dma_addr = sg_dma_address(sg);
> > > + if (dma_addr)
> > > + vgpu_unpin_dma_address(vgpu, dma_addr);
> > > + }
> > > + sg_free_table(st);
> > > + kfree(st);
> > > + }
> > > +
> > > + return ret;
> > > }
> > >
> > > static void vgpu_gem_put_pages(struct drm_i915_gem_object *obj,
> > > struct sg_table *pages)
> > > {
> > > + struct scatterlist *sg;
> > > +
> > > + if (obj->base.dma_buf) {
> > > + struct intel_vgpu_fb_info *fb_info = obj->gvt_info;
> > > + struct intel_vgpu_dmabuf_obj *obj = fb_info->obj;
> > > + struct intel_vgpu *vgpu = obj->vgpu;
> > > + int i;
> > > +
> > > + for_each_sg(pages->sgl, sg, fb_info->size, i)
> > > + vgpu_unpin_dma_address(vgpu,
> > > + sg_dma_address(sg));
> > > + }
> > > +
> > > sg_free_table(pages);
> > > kfree(pages);
> > > }
> > > @@ -108,6 +160,7 @@ static void dmabuf_gem_object_free(struct kref *kref)
> > > kfree(obj->info);
> > > kfree(obj);
> > > }
> > > +
> > > }
> > >
> > >
> > > diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
> > > index 4862fb12778e..756adcc71602 100644
> > > --- a/drivers/gpu/drm/i915/gvt/hypercall.h
> > > +++ b/drivers/gpu/drm/i915/gvt/hypercall.h
> > > @@ -62,6 +62,8 @@ struct intel_gvt_mpt {
> > > unsigned long size, dma_addr_t *dma_addr);
> > > void (*dma_unmap_guest_page)(unsigned long handle, dma_addr_t dma_addr);
> > >
> > > + int (*dma_get_guest_page)(unsigned long handle, dma_addr_t dma_addr);
> > > +
> > > int (*map_gfn_to_mfn)(unsigned long handle, unsigned long gfn,
> > > unsigned long mfn, unsigned int nr, bool map);
> > > int (*set_trap_area)(unsigned long handle, u64 start, u64 end,
> > >
> > > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > index 2911818286a5..e33f240ea9d6 100644
> > > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > @@ -213,6 +213,10 @@ static void gvt_dma_unmap_page(struct intel_vgpu *vgpu, unsigned long gfn,
> > > struct device *dev = &vgpu->gvt->dev_priv->drm.pdev->dev;
> > >
> > > dma_unmap_page(dev, dma_addr, size, PCI_DMA_BIDIRECTIONAL);
> > > + if (dma_mapping_error(dev, dma_addr)) {
> > > + gvt_vgpu_err("DMA unmapping failed for dma_addr 0x%llx\n",
> > > + dma_addr);
> > > + }
> > > gvt_unpin_guest_page(vgpu, gfn, size); }
> > >
> > > @@ -1964,6 +1968,28 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
> > > return ret;
> > > }
> > >
> > > +static int kvmgt_dma_get_guest_page(unsigned long handle, dma_addr_t dma_addr)
> > > +{
> > > + struct kvmgt_guest_info *info;
> > > + struct gvt_dma *entry;
> > > + int ret = 0;
> > > +
> > > + if (!handle_valid(handle))
> > > + return -ENODEV;
> > > +
> > > + info = (struct kvmgt_guest_info *)handle;
> > > +
> > > + mutex_lock(&info->vgpu->vdev.cache_lock);
> > > + entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
> > > + if (entry)
> > > + kref_get(&entry->ref);
> > > + else
> > > + ret = -ENOMEM;
> > > + mutex_unlock(&info->vgpu->vdev.cache_lock);
> > > +
> > > + return ret;
> > > +}
> > > +
> > > static void __gvt_dma_release(struct kref *ref)
> > > {
> > > struct gvt_dma *entry = container_of(ref, typeof(*entry), ref);
> > > @@ -2075,6 +2101,7 @@ static struct intel_gvt_mpt kvmgt_mpt = {
> > > .gfn_to_mfn = kvmgt_gfn_to_pfn,
> > > .dma_map_guest_page = kvmgt_dma_map_guest_page,
> > > .dma_unmap_guest_page = kvmgt_dma_unmap_guest_page,
> > > + .dma_get_guest_page = kvmgt_dma_get_guest_page,
> > > .set_opregion = kvmgt_set_opregion,
> > > .set_edid = kvmgt_set_edid,
> > > .get_vfio_device = kvmgt_get_vfio_device,
> > >
> > > diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
> > > index 0f9440128123..4e9391a52098 100644
> > > --- a/drivers/gpu/drm/i915/gvt/mpt.h
> > > +++ b/drivers/gpu/drm/i915/gvt/mpt.h
> > > @@ -254,6 +254,20 @@ static inline void intel_gvt_hypervisor_dma_unmap_guest_page(
> > > intel_gvt_host.mpt->dma_unmap_guest_page(vgpu->handle, dma_addr);
> > > }
> > >
> > > +/**
> > > + * intel_gvt_hypervisor_dma_get_guest_page - get guest dma map addr
> > > + * @vgpu: a vGPU
> > > + * @dma_addr: guest dma addr
> > > + *
> > > + * Returns:
> > > + * 0 on success, negative error code if failed.
> > > + */
> > > +static inline int intel_gvt_hypervisor_dma_get_guest_page(
> > > + struct intel_vgpu *vgpu, dma_addr_t dma_addr)
> > > +{
> > > + return intel_gvt_host.mpt->dma_get_guest_page(vgpu->handle, dma_addr);
> > > +}
> > > +
> > > /**
> > > * intel_gvt_hypervisor_map_gfn_to_mfn - map a GFN region to MFN
> > > * @vgpu: a vGPU
> > > --
> > > 2.17.1
> > >
> >
> > --
> > Open Source Technology Center, Intel ltd.
> >
> > $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827