[PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices

Yan Zhao yan.y.zhao at intel.com
Mon Jan 8 06:02:57 UTC 2024


On Fri, Jan 05, 2024 at 03:55:51PM -0400, Jason Gunthorpe wrote:
> On Fri, Jan 05, 2024 at 05:12:37PM +0800, Yan Zhao wrote:
> > This series allows user space to notify KVM of noncoherent DMA status so
> > as to let KVM honor guest memory types in specified memory slot ranges.
> > 
> > Motivation
> > ===
> > A virtio GPU device may want to configure GPU hardware to work in
> > noncoherent mode, i.e. some of its DMAs do not snoop CPU caches.
> 
> Does this mean some DMA reads do not snoop the caches or does it
> include DMA writes not synchronizing the caches too?
Neither DMA reads nor DMA writes are snooped.
The virtio host side will mmap the buffer as WC (pgprot_writecombine)
for CPU access and program the device to access the buffer in an
uncached way.
Meanwhile, the virtio host side will construct a memslot in KVM with the
pointer returned from mmap(), and notify the virtio guest side to map the
same buffer in the guest page table with PAT=WC, too.

> 
> > This is generally done for performance reasons.
> > On certain platforms, GFX performance can improve by 20+% with DMAs
> > taking the noncoherent path.
> > 
> > This noncoherent DMA mode works in below sequence:
> > 1. Host backend driver programs hardware not to snoop memory of target
> >    DMA buffer.
> > 2. Host backend driver instructs guest frontend driver to program guest
> >    PAT to WC for target DMA buffer.
> > 3. Guest frontend driver writes to the DMA buffer without clflush
> >    operations.
> > 4. Hardware does noncoherent DMA to the target buffer.
> > 
> > In this noncoherent DMA mode, both guest and hardware regard a DMA buffer
> > as not cached. So, if KVM forces the effective memory type of this DMA
> > buffer to be WB, hardware DMA may read incorrect data and cause misc
> > failures.
> 
> I don't know all the details, but a big concern would be that the
> caches remain fully coherent with the underlying memory at any point
> where kvm decides to revoke the page from the VM.
Ah, you mean that during page migration the content of the page may not
be copied correctly, right?

Currently on x86, we have two ways to let KVM honor guest memory types:
1. through the KVM memslot flag introduced in this series, for virtio
   GPUs, at memslot granularity.
2. through increasing the noncoherent DMA count, as is done in VFIO, for
   Intel GPU passthrough, for all guest memory.

This page migration issue should not arise for virtio GPU, as both host
and guest are synced to use the same memory type, and the pages are in
fact not anonymous pages.
For GPU passthrough, though the host mmaps with WB, it is still fine for
the guest to use WC, because page migration is not allowed for pages of
VMs with a passthrough device.

But I agree, this could be a problem if user space sets the memslot flag
to honor guest memory types on memslots covering guest system RAM, where
non-enlightened guest components may cause the guest and host to access
the same memory with different memory types.
Or simply when the guest is a malicious one.

> If you allow an incoherence of cache != physical then it opens a
> security attack where the observed content of memory can change when
> it should not.

In this case, will this security attack impact other guests?

> 
> ARM64 has issues like this and due to that ARM has to have explicit,
> expensive, cache flushing at certain points.
>

More information about the dri-devel mailing list