dma-buf sg mangling

Kasireddy, Vivek vivek.kasireddy at intel.com
Tue May 14 21:29:32 UTC 2024


Hi Rob,

> 
> On Mon, May 13, 2024 at 11:27 AM Christian König
> <christian.koenig at amd.com> wrote:
> >
> > On 10.05.24 at 18:34, Zack Rusin wrote:
> > > Hey,
> > >
> > > so this is a bit of a silly problem but I'd still like to solve it
> > > properly. The tldr is that virtualized drivers abuse
> > > drm_driver::gem_prime_import_sg_table (at least vmwgfx and xen do,
> > > virtgpu and xen punt on it) because there doesn't seem to be a
> > > universally supported way of converting the sg_table back to a list of
> > > pages without some form of gart to do it.
> >
> > Well the whole point is that you should never touch the pages in the
> > sg_table in the first place.
> >
> > The long term plan is actually to completely remove the pages from that
> > interface.
> >
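(For reference, the callback being discussed is roughly the following, going
from memory of include/drm/drm_drv.h, so the exact prototype may differ
slightly:

	struct drm_gem_object *
	(*gem_prime_import_sg_table)(struct drm_device *dev,
				     struct dma_buf_attachment *attach,
				     struct sg_table *sgt);

i.e. the importing driver only gets the attachment and the mapped sg_table,
and per the dma-buf rules it is expected to consume the DMA addresses in that
table rather than the struct pages.)
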
> > > drm_prime_sg_to_page_array is deprecated (for all the right reasons on
> > > actual hardware) but in our kooky virtualized world we don't have
> > > GARTs, so what are we supposed to do with the dma_addr_t from the
> > > imported sg_table? What makes it worse (and definitely breaks xen) is
> > > that with CONFIG_DMABUF_DEBUG the sg page_link is mangled via
> > > mangle_sg_table so drm_prime_sg_to_page_array won't even work.
> >
> > XEN and KVM were actually adjusted to not touch the struct pages any more.
> >
> > I'm not sure if that work is already upstream or not but I had to
> > explain it over and over again why their approach doesn't work.
> >
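Just to illustrate why drm_prime_sg_to_page_array cannot work on an imported
table when CONFIG_DMABUF_DEBUG is set: if I read drivers/dma-buf/dma-buf.c
correctly, the mangling applied on map/unmap is roughly the following
(paraphrased from memory, details may differ):

	/* Scramble page_link so importers that call sg_page() on a mapped
	 * table get caught; only the low chain/termination bits are kept
	 * so the links can be restored on unmap.
	 */
	static void mangle_sg_table(struct sg_table *sgt)
	{
		struct scatterlist *sg;
		int i;

		for_each_sgtable_sg(sgt, sg, i)
			sg->page_link ^= ~0xffUL;
	}

So with that option enabled, any importer peeking at the struct pages of a
mapped table sees garbage by design.
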
> > > The reason why I'm saying it's a bit of a silly problem is that afaik
> > > currently it only affects IGT testing with vgem (because the rest of
> > > external gem objects will be from the virtualized gpu itself which is
> > > different). But do you have any ideas on what we'd like to do with
> > > this long term? i.e. we have virtualized gpus without an iommu, we have
> > > sg_table with some memory and we'd like to import it. Do we just
> > > assume that the sg_table on those configs will always reference cpu
> > > accessible memory (i.e. if it's external it only comes through
> > > drm_gem_shmem_object) and just do some horrific abomination like:
> > > for (i = 0; i < bo->ttm->num_pages; ++i) {
> > >      phys_addr_t pa = dma_to_phys(vmw->drm.dev, bo->ttm->dma_address[i]);
> > >      pages[i] = pfn_to_page(PHYS_PFN(pa));
> > > }
> > > or add an "I know this is CPU accessible, please demangle" flag to
> > > drm_prime_sg_to_page_array or try to have some kind of more permanent
> > > solution?
> >
> > Well there is no solution for that. Accessing the underlying struct page
> > through the sg_table is illegal in the first place.
> >
> > So the question is not how to access the struct page, but rather why do
> > you want to do this?
> 
> I _think_ Zack is trying to map guest page-backed buffers to the host
> GPU?  Which would require sending the PFNs in some form to the host
> VMM.
> 
> virtgpu goes the other direction, mapping host page-backed GEM
> buffers into the guest as "vram" (although for various reasons I kinda
> want to go in the other direction).
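FWIW, my understanding of Christian's point is that an importer is only
supposed to walk the DMA side of the table, along these lines (a minimal
sketch; what "program into the device" means is of course driver specific):

	struct scatterlist *sg;
	int i;

	for_each_sgtable_dma_sg(sgt, sg, i) {
		dma_addr_t addr = sg_dma_address(sg);
		unsigned int len = sg_dma_len(sg);

		/* hand addr/len to the (virtual) device; never sg_page() */
	}

which of course doesn't help if the virtual device has no way of consuming
bus addresses, but that is the contract dma-buf exposes.
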
I just want to mention that I proposed a way for virtio-gpu to import buffers
from other GPU drivers here:
https://lore.kernel.org/dri-devel/20240328083615.2662516-1-vivek.kasireddy@intel.com/

For now, this is only being used for importing scanout buffers, with the
Mutter and Weston (additional_devices feature) use cases in mind.

Thanks,
Vivek

> 
> BR,
> -R
> 
> > Regards,
> > Christian.
> >
> > >
> > > z
> >

