[Linaro-mm-sig] [PATCH 1/3] dma-buf: Add ioctl to query mmap info
Rob Clark
robdclark at gmail.com
Mon Aug 8 13:26:16 UTC 2022
On Mon, Aug 8, 2022 at 4:22 AM Christian König <christian.koenig at amd.com> wrote:
>
> Am 07.08.22 um 21:10 schrieb Rob Clark:
> > On Sun, Aug 7, 2022 at 11:05 AM Christian König
> > <ckoenig.leichtzumerken at gmail.com> wrote:
> >> Am 07.08.22 um 19:56 schrieb Rob Clark:
> >>> On Sun, Aug 7, 2022 at 10:38 AM Christian König
> >>> <ckoenig.leichtzumerken at gmail.com> wrote:
> >>>> [SNIP]
> >>>> And exactly that was declared completely illegal the last time it came
> >>>> up on the mailing list.
> >>>>
> >>>> Daniel implemented a whole bunch of patches into the DMA-buf layer to
> >>>> make it impossible for KVM to do this.
> >>> This issue isn't really with KVM; it is not making any CPU mappings
> >>> itself. KVM is just making the pages available to the guest.
> >> Well I can only repeat myself: This is strictly illegal.
> >>
> >> Please try this approach with CONFIG_DMABUF_DEBUG set. I'm pretty sure
> >> you will immediately run into a crash.
> >>
> >> See this here as well
> >> https://elixir.bootlin.com/linux/v5.19/source/drivers/dma-buf/dma-buf.c#L653
> >>
> >> Daniel intentionally added code to mangle the page pointers to make it
> >> impossible for KVM to do this.
> > I don't believe KVM is using the sg table, so this isn't going to stop
> > anything ;-)
>
> Then I have no idea how KVM actually works. Can you please briefly
> describe that?
>
> >> If the virtio/virtgpu UAPI was built around the idea that this is
> >> possible, then it is most likely fundamentally broken.
> > How else can you envision mmap'ing to guest userspace working?
>
> Well long story short: You can't.
>
> See, userspace mappings are not persistent, but rather faulted in on
> demand. The exporter is responsible for setting them up so that it can
> add reverse tracking, and can therefore invalidate those mappings when
> the backing store changes.
I think that is not actually a problem, at least for how it works on
arm64, and I'm almost positive x86 is similar. I'm not sure how else
you could virtualize the mmu/iommu/etc in a way that didn't have
horrible performance.

There are two levels of pagetable translation: the first controlled by
the guest, the second by the host kernel. From the PoV of the host
kernel, it is just memory mapped to userspace, getting faulted in on
demand, just as normal. First the guest-controlled translation
triggers a fault in the guest, which sets up the guest mapping. Then
the second level of translation, from what the guest sees as PA (but
the host sees as VA) to the actual PA, triggers a fault in the host.
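To make the flow concrete, here is a rough sketch in C of the path a
guest access takes (purely illustrative; the typedefs and the *_walk()
names are made-up stand-ins for the hardware table walks, not real
kernel APIs):

/* Illustrative only, not kernel code. */
typedef unsigned long gva_t;  /* guest virtual address */
typedef unsigned long gpa_t;  /* guest "physical" address; a host userspace VA */
typedef unsigned long hpa_t;  /* host physical address */

/* Stage 1: guest-controlled page tables; a miss faults into the guest. */
gpa_t stage1_walk(gva_t va);

/* Stage 2: host/KVM-controlled tables; a miss faults into the host,
 * which resolves it like any other fault on the VMM's userspace
 * mapping of the buffer. */
hpa_t stage2_walk(gpa_t pa);

static hpa_t guest_access(gva_t guest_va)
{
	gpa_t guest_pa = stage1_walk(guest_va);

	return stage2_walk(guest_pa);
}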
>
> > The guest kernel is the one that controls the guest userspace pagetables,
> > not the host kernel. I guess your complaint is about VMs in general,
> > but unfortunately I don't think you'll convince the rest of the
> > industry to abandon VMs ;-)
>
> I'm not arguing against the usefulness of VMs, it's just that what you
> describe here is technically just utter nonsense as far as I can tell.
>
> I have to confess that I'm totally lacking an understanding of how this
> KVM mapping works, but when the struct page pointers from the sg_table
> are not used I see two possibilities for what was implemented here:
>
> 1. KVM is somehow walking the page tables to figure out what to map into
> the guest VM.
It is just mapping host VA to the guest. The guest kernel sees this
as PA and uses the level of pgtable translation that it controls to
map it to guest userspace. *All* that is needed (which this patch
provides) is the correct cache attributes.
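For what it's worth, from the VMM side the usage would be something
like the following userspace sketch (the struct layout and ioctl number
below are placeholders I made up for illustration; only the
DMA_BUF_INFO_VM_PROT name is taken from the proposed UAPI):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct dma_buf_info {                   /* placeholder layout */
	uint64_t param;
	uint64_t value;
};
#define DMA_BUF_INFO_VM_PROT	1       /* name from the patch; value is a placeholder */
#define DMA_BUF_IOCTL_INFO	_IOWR('b', 2, struct dma_buf_info)  /* placeholder */

static int query_vm_prot(int dmabuf_fd, uint64_t *vm_prot)
{
	struct dma_buf_info arg = { .param = DMA_BUF_INFO_VM_PROT };

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_INFO, &arg) < 0)
		return -1;

	/* e.g. cached vs. write-combined; the VMM uses this to pick
	 * matching guest mapping attributes instead of guessing. */
	*vm_prot = arg.value;
	return 0;
}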
> This would be *HIGHLY* illegal and not just with DMA-buf, but with
> pretty much a whole bunch of other drivers/subsystems as well.
> In other words it would be trivial for the guest to take over the
> host with that because it doesn't take into account that the underlying
> backing store of DMA-buf and other mmap()ed areas can change at any time.
>
> 2. The guest VM triggers the fault handler for the mappings to fill in
> their page tables on demand.
>
> That would actually work with DMA-buf, but then the guest needs to
> somehow use the caching attributes from the host side and not use its own.
This is basically what happens, although via the two levels of pgtable
translation. This patch provides the missing piece, the caching
attributes.
> Because otherwise you can't accommodate the exporter changing those
> caching attributes.
Changing the attributes dynamically isn't going to work, or at least
not easily. If you had some sort of synchronous notification to host
userspace, it could trigger an irq to the guest, I suppose. But it
would mean the host kernel has to block waiting for host userspace to
interrupt the guest, then wait for the guest vgpu process to be
scheduled and handle the irq.
At least in the case of msm, the cache attributes are static for the
life of the buffer, so this scenario isn't a problem. AFAICT this
should work fine for at least all UMA hw. I'm a bit less sure when it
comes to TTM, but shouldn't you at least be able to use worst-case
cache attributes for buffers that are allowed to be mapped to the
guest?
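Purely as a sketch of what I mean by worst-case attributes (all names
below are made up for illustration, not from any real driver):

#include <stdbool.h>

enum vm_prot { VM_PROT_CACHED, VM_PROT_WC };    /* placeholder constants */

struct buf_caps {
	bool may_migrate;   /* e.g. a TTM buffer that can move between system RAM and VRAM */
	bool cpu_maps_wc;   /* host CPU mappings of the buffer are write-combined */
};

/* Report the most restrictive attribute the exporter could ever need
 * for this buffer, so the guest mapping is never more cacheable than
 * any mapping the host side might use. */
static enum vm_prot worst_case_prot(const struct buf_caps *caps)
{
	if (caps->may_migrate || caps->cpu_maps_wc)
		return VM_PROT_WC;

	return VM_PROT_CACHED;
}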
BR,
-R
>
> > But more seriously, let's take a step back here.. what scenarios are
> > you seeing this being problematic for? Then we can see how to come up
> > with solutions. The current situation of host userspace VMM just
> > guessing isn't great.
>
> Well "isn't great" is a complete understatement. When KVM/virtio/virtgpu
> is doing what I guess they are doing here then that is a really major
> security hole.
>
> > And sticking our heads in the sand and
> > pretending VMs don't exist isn't great. So what can we do? I can
> > instead add a msm ioctl to return this info and solve the problem even
> > more narrowly for a single platform. But then the problem still
> > remains on other platforms.
>
> Well once more: This is *not* MSM specific, you just absolutely *can't
> do that* for any driver!
>
> I'm just really wondering what the heck is going on here, because all
> of this was discussed at length before on the mailing list and very
> bluntly rejected.
>
> Either I'm missing something (that's certainly possible) or we have a
> strong case of somebody implementing something without thinking about
> all the consequences.
>
> Regards,
> Christian.
>
>
> >
> > Slightly implicit in this is that mapping dma-bufs to the guest won't
> > work for anything that requires DMA_BUF_IOCTL_SYNC for coherency.. we
> > could add a possible return value for DMA_BUF_INFO_VM_PROT indicating
> > that the buffer does not support mapping to guest or CPU access
> > without DMA_BUF_IOCTL_SYNC. Then at least the VMM can fail gracefully
> > instead of subtly.
> >
> > BR,
> > -R
>