[RFC PATCH 00/12] Private MMIO support for private assigned dev
Jason Gunthorpe
jgg at nvidia.com
Wed May 14 16:33:39 UTC 2025
On Wed, May 14, 2025 at 03:02:53PM +0800, Xu Yilun wrote:
> > We have an awkward fit for what CCA people are doing to the various
> > Linux APIs. Looking somewhat maximally across all the arches a "bind"
> > for a CC vPCI device creation operation does:
> >
> > - Setup the CPU page tables for the VM to have access to the MMIO
>
> This is a guest side thing, isn't it? Is there anything the host needs to opt in to?
CPU hypervisor page tables.
> > - Revoke hypervisor access to the MMIO
>
> VFIO could choose never to mmap the MMIO, so in this case there is nothing to do?
Yes, if you do it that way.
> > - Setup the vIOMMU to understand the vPCI device
> > - Take over control of some of the IOVA translation, at least for T=1,
> >   and route it to the vIOMMU
> > - Register the vPCI with any attestation functions the VM might use
> > - Do some DOE stuff to manage/validate TDISP/etc
>
> Intel TDX Connect has an extra requirement for "unbind":
>
> - Revoke KVM page table (S-EPT) for the MMIO only after TDISP
> CONFIG_UNLOCK
Maybe you could express this as the S-EPT always has the MMIO mapped
into it as long as the vPCI function is installed to the VM? Is KVM
responsible for the S-EPT?
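Something like this is the lifetime rule I mean (the helpers below do
not exist, the names are made up purely to show the ordering):

/*
 * Illustrative only.  The point is that the MMIO stays mapped in the
 * S-EPT for as long as the vPCI function is installed to the VM, and
 * the unmap may only happen after TDISP CONFIG_UNLOCK.
 */
static int vpci_install(struct kvm *kvm, struct vpci_dev *vdev)
{
        int rc;

        /* Map the device MMIO into the S-EPT as private */
        rc = kvm_sept_map_mmio(kvm, vdev->mmio_gpa, vdev->mmio_hpa,
                               vdev->mmio_size);
        if (rc)
                return rc;

        vdev->installed = true;
        return 0;
}

static void vpci_uninstall(struct kvm *kvm, struct vpci_dev *vdev)
{
        /* TDX Connect: take the TDI out of LOCKED/RUN first */
        tdisp_config_unlock(vdev);

        /* Only now is it safe to tear down the S-EPT MMIO mapping */
        kvm_sept_unmap_mmio(kvm, vdev->mmio_gpa, vdev->mmio_size);
        vdev->installed = false;
}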
> Another thing is, it seems your term "bind" includes all the steps for
> the shared -> private conversion.
Well, I was talking about vPCI creation. I understand that during the
vPCI lifecycle the VM will do "bind" "unbind" which are more or less
switching the device into a T=1 mode. Though I understood on some
arches this was mostly invisible to the hypervisor?
> But in my mind, "bind" only includes
> putting the device in the TDISP LOCKED state & the corresponding host
> setups required by firmware. I.e. "bind" means the host locks down the
> CC setup, waiting for guest attestation.
So we will need to have some other API for this that modifies the vPCI
object.
It might be reasonable to have VFIO reach into iommufd to do that on
an already existing iommufd VDEVICE object. A little weird, but we
could probably make that work.
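Very roughly, the shape I'm thinking of is something like this
(VFIO_DEVICE_TSM_BIND and iommufd_vdevice_tsm_bind() are made-up names,
not existing uAPI; VFIO owns the entry point, the state lives on the
iommufd VDEVICE):

struct vfio_device_tsm_bind {
        __u32 argsz;
        __u32 flags;
};

static long vfio_device_tsm_bind(struct vfio_device *vdev,
                                 struct vfio_device_tsm_bind __user *arg)
{
        struct vfio_device_tsm_bind bind;

        if (copy_from_user(&bind, arg, sizeof(bind)))
                return -EFAULT;

        /* Resolve the already-created iommufd VDEVICE for this device */
        if (!vdev->iommufd_vdevice)
                return -ENODEV;

        /* Lock the TDI and record the CC state on the VDEVICE object */
        return iommufd_vdevice_tsm_bind(vdev->iommufd_vdevice, bind.flags);
}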
But you have some weird ordering issues here: if the S-EPT has to keep
the VFIO MMIO mapped, then you need a close() destruction order where
VFIO removes the S-EPT mapping and releases the KVM, and only then does
iommufd destroy the VDEVICE object.
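I.e. the teardown would have to run something like this (pseudo-code,
none of these functions exist as-is, only the sequence matters):

static void vfio_cc_device_close(struct vfio_device *vdev)
{
        /* 1. VFIO pulls the MMIO back out of the S-EPT ... */
        kvm_sept_unmap_mmio(vdev->kvm, vdev->mmio_gpa, vdev->mmio_size);

        /* 2. ... and drops its reference on the KVM */
        vfio_device_put_kvm(vdev);

        /* 3. Only then may iommufd destroy the VDEVICE object */
        iommufd_vdevice_destroy(vdev->iommufd_vdevice);
}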
> > It doesn't mean that iommufd is suddenly doing PCI stuff, no, that
> > stays in VFIO.
>
> I'm not sure if Alexey's patch [1] illustrates your idea. It calls
> tsm_tdi_bind(), which directly does device stuff and impacts the MMIO.
> VFIO doesn't know about this.
>
> I have to interpret this as VFIO first handing over the device CC
> features and MMIO resources to IOMMUFD, so that VFIO never cares about them.
>
> [1] https://lore.kernel.org/all/20250218111017.491719-15-aik@amd.com/
There is also the PCI layer involved here and maybe PCI should be
participating in managing some of this. Like it makes a bit of sense
that PCI would block the FLR on platforms that require this?
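Something like this in the PCI reset path, purely hypothetical, there is
no tdisp_locked bit in struct pci_dev today:

/*
 * Hypothetical: only shows where the PCI core could refuse an FLR
 * while the TDI is in the LOCKED/RUN state on platforms that need it.
 */
static int pcie_flr_checked(struct pci_dev *dev)
{
        if (dev->tdisp_locked)
                return -EPERM;

        return pcie_flr(dev);
}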
Jason