[PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices

Jason Gunthorpe jgg at nvidia.com
Mon Jan 8 15:38:18 UTC 2024


On Mon, Jan 08, 2024 at 04:25:02PM +0100, Daniel Vetter wrote:
> On Mon, Jan 08, 2024 at 10:02:50AM -0400, Jason Gunthorpe wrote:
> > On Mon, Jan 08, 2024 at 02:02:57PM +0800, Yan Zhao wrote:
> > > On Fri, Jan 05, 2024 at 03:55:51PM -0400, Jason Gunthorpe wrote:
> > > > On Fri, Jan 05, 2024 at 05:12:37PM +0800, Yan Zhao wrote:
> > > > > This series allows user space to notify KVM of noncoherent DMA status so as
> > > > > to let KVM honor guest memory types in specified memory slot ranges.
> > > > > 
> > > > > Motivation
> > > > > ===
> > > > > A virtio GPU device may want to configure GPU hardware to work in
> > > > > noncoherent mode, i.e. some of its DMAs do not snoop CPU caches.
> > > > 
> > > > Does this mean some DMA reads do not snoop the caches or does it
> > > > include DMA writes not synchronizing the caches too?
> > > Both DMA reads and writes are not snooped.
> > 
> > Oh that sounds really dangerous.
> 
> So if this is an issue, then we might already have a problem, because with
> many devices it's entirely up to the device programming whether the I/O is
> snooping or not. So the moment you pass such a device to a guest, whether
> there's explicit support for non-coherent DMA or not, you have a
> problem.

No, the IOMMUs (except Intel's, and even then only for the Intel integrated
GPU, IIRC) prohibit the use of non-coherent DMA entirely from a VM.

E.g. AMD systems 100% block non-coherent DMA in VMs at the IOMMU level.
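
For reference, a VMM can probe that coherency enforcement at the VFIO level.
A minimal userspace sketch, assuming a container fd whose IOMMU type has
already been set with VFIO_SET_IOMMU (error handling omitted):

  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /*
   * Non-zero when the IOMMU forces snooped (coherent) DMA for the domain,
   * i.e. the device cannot make its DMA non-coherent behind the CPU's back.
   */
  static int iommu_enforces_coherency(int container_fd)
  {
          return ioctl(container_fd, VFIO_CHECK_EXTENSION,
                       VFIO_DMA_CC_IOMMU) > 0;
  }

This is roughly the same property the kernel's kvm-vfio device consults when
deciding whether to register non-coherent DMA with KVM.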

> _If_ there is a fundamental problem. I'm not sure of that, because my
> assumption was that at most the guest shoots itself, and the data
> corruption doesn't go any further the moment the hypervisor does the
> DMA/IOMMU unmapping.

Who fixes the cache on the unmapping? I didn't see anything that does.
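
To make that concrete: on x86 such a fixup would have to write back and
invalidate every cache line of the range before the pages go back to the
host, something like the sketch below. The hook itself is hypothetical;
I don't see anything equivalent in the unmap path today.

  #include <asm/cacheflush.h>     /* clflush_cache_range() */

  /*
   * Hypothetical: flush any lines the guest's non-coherent DMA may have
   * left stale, so the host never observes them once the memory is reused.
   */
  static void fixup_cache_on_unmap(void *vaddr, unsigned int size)
  {
          clflush_cache_range(vaddr, size);
  }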

Jason

