[PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices
Tian, Kevin
kevin.tian at intel.com
Tue Jan 16 00:45:56 UTC 2024
> From: Jason Gunthorpe <jgg at nvidia.com>
> Sent: Tuesday, January 16, 2024 12:31 AM
>
> On Tue, Jan 09, 2024 at 10:11:23AM +0800, Yan Zhao wrote:
>
> > > Well, for instance, when you install pages into KVM, the hypervisor
> > > will have taken kernel memory, then zeroed it with cacheable writes;
> > > however, the VM can read it incoherently with DMA and access the
> > > pre-zeroed data, since the zeroing writes potentially haven't left the
> > > cache. That is an information leakage exploit.
> >
> > This makes sense.
> > How about having KVM flush the cache before installing/revoking a
> > page when the guest memory type is honored?
>
> I think if you are going to allow the guest to bypass the cache in any
> way, then KVM should fully flush the cache before allowing the guest to
> access memory, and it should fully flush the cache after removing
> memory from the guest.
For GPU passthrough, can we rely on the fact that the entire guest memory
is pinned, so the only occurrence of removing memory is when the guest is
killed, at which point the pages will be zeroed by mm before their next
use? Then we would just need to flush the cache before the first guest run
to avoid the information leak.
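
Roughly something like the sketch below. To be clear, this is only an
illustration: the 'first_run_flushed' field and the helper are
hypothetical, not existing KVM code; wbinvd_on_all_cpus() is just the
bluntest way to write back and invalidate everything on x86:

#include <linux/kvm_host.h>
#include <linux/bitops.h>
#include <asm/smp.h>

/*
 * Hypothetical sketch: before the first vCPU entry, write back and
 * invalidate all CPU caches once, so pre-zeroed (cacheably written)
 * pages cannot be read incoherently through non-cacheable guest
 * mappings. 'first_run_flushed' does not exist in KVM today; it
 * stands in for whatever one-shot tracking we would add.
 */
static void kvm_flush_before_first_run(struct kvm *kvm)
{
	if (!test_and_set_bit(0, &kvm->arch.first_run_flushed))
		wbinvd_on_all_cpus();	/* heavyweight, but covers every cache line */
}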
Yes, it's a more complex issue if the guest is allowed to bypass the cache
in a configuration that mixes host mm activity on guest pages at run time.
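
In that case every page leaving the guest would presumably also need a
flush before the host mm reuses it, along the lines of this rough sketch
(where exactly such a hook would live is an open question; only
clflush_cache_range() and the kmap helpers are real x86/kernel APIs here):

#include <linux/highmem.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical removal-side counterpart: write back and invalidate
 * the cache lines of a page the guest may have mapped non-cacheably,
 * before the page goes back to the host mm. Whether this sits in an
 * mmu notifier, memslot teardown, or the VFIO unpin path is exactly
 * the coordination problem raised below.
 */
static void flush_page_on_removal(struct page *page)
{
	void *va = kmap_local_page(page);

	clflush_cache_range(va, PAGE_SIZE);
	kunmap_local(va);
}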
>
> Noting that fully removing the memory now includes VFIO too, which is
> going to be very hard to coordinate between KVM and VFIO.
If we are only talking about GPU passthrough, do we still need such
coordination?
>
> ARM has the hooks for most of this in the common code already, so it
> should not be outrageous to do, but slow I suspect.
>
> Jason