[Intel-gfx] [PATCH 05/10] vfio: Use IOMMU_CAP_ENFORCE_CACHE_COHERENCY for vfio_file_enforced_coherent()
Jason Gunthorpe <jgg at nvidia.com>
Tue Oct 25 18:17:11 UTC 2022
iommufd doesn't establish the iommu_domains until after the device FD is
opened, even if the container has been set. This design is part of moving
away from the group-centric iommu APIs.
This is fine, except that the normal sequence for establishing KVM's
wbinvd handling won't work:
group = open("/dev/vfio/XX")
ioctl(group, VFIO_GROUP_SET_CONTAINER)
ioctl(kvm, KVM_DEV_VFIO_GROUP_ADD)
ioctl(group, VFIO_GROUP_GET_DEVICE_FD)
The domains don't start existing until GET_DEVICE_FD. Further,
GET_DEVICE_FD requires that KVM_DEV_VFIO_GROUP_ADD already be done, as
that is what sets group->kvm, and thus device->kvm, for the driver to use
during open.
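
For concreteness, a minimal userspace sketch of that ordering (error
handling omitted; the group path, device name and the kvm-vfio device
plumbing are illustrative, not taken from this patch):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>
  #include <linux/vfio.h>

  /* Illustrative only: open the group, wire it into a container, tell KVM
   * about the group, and only then fetch a device FD (which is where
   * iommufd would create the iommu_domain).
   */
  static int open_vfio_device(int kvm_vm_fd)
  {
          int group = open("/dev/vfio/26", O_RDWR);   /* group number made up */
          int container = open("/dev/vfio/vfio", O_RDWR);

          ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

          /* KVM_DEV_VFIO_GROUP_ADD is issued through the kvm-vfio device */
          struct kvm_create_device cd = { .type = KVM_DEV_TYPE_VFIO };
          ioctl(kvm_vm_fd, KVM_CREATE_DEVICE, &cd);

          struct kvm_device_attr attr = {
                  .group = KVM_DEV_VFIO_GROUP,
                  .attr = KVM_DEV_VFIO_GROUP_ADD,
                  .addr = (__u64)(unsigned long)&group,
          };
          ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);

          /* No iommu_domain exists until this point */
          return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:00:02.0");
  }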
Now that we have device-centric cap ops and the new
IOMMU_CAP_ENFORCE_CACHE_COHERENCY, we know what the iommu_domain will be
capable of without having to create it. Use this to compute
vfio_file_enforced_coherent() and resolve the ordering problems.
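
As a hedged sketch (not the in-tree kvm-vfio code), a consumer such as
KVM can now rely on the answer being stable as soon as it is handed the
group file, before any device FD or iommu_domain exists:

  #include <linux/kvm_host.h>
  #include <linux/vfio.h>

  /* Illustrative consumer: if the group's devices cannot enforce cache
   * coherency, KVM must emulate wbinvd for the guest.
   */
  static void example_update_coherency(struct kvm *kvm, struct file *vfio_file)
  {
          if (!vfio_file_enforced_coherent(vfio_file))
                  kvm_arch_register_noncoherent_dma(kvm);
  }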
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
drivers/vfio/container.c | 5 +++--
drivers/vfio/vfio.h | 2 --
drivers/vfio/vfio_main.c | 27 ++++++++++++++-------------
3 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/drivers/vfio/container.c b/drivers/vfio/container.c
index 499777930b08fa..d97747dfb05d02 100644
--- a/drivers/vfio/container.c
+++ b/drivers/vfio/container.c
@@ -188,8 +188,9 @@ void vfio_device_container_unregister(struct vfio_device *device)
device->group->container->iommu_data, device);
}
-long vfio_container_ioctl_check_extension(struct vfio_container *container,
- unsigned long arg)
+static long
+vfio_container_ioctl_check_extension(struct vfio_container *container,
+ unsigned long arg)
{
struct vfio_iommu_driver *driver;
long ret = 0;
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index 54e5a8e0834ccb..247590334e14b0 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -119,8 +119,6 @@ int vfio_container_attach_group(struct vfio_container *container,
void vfio_group_detach_container(struct vfio_group *group);
void vfio_device_container_register(struct vfio_device *device);
void vfio_device_container_unregister(struct vfio_device *device);
-long vfio_container_ioctl_check_extension(struct vfio_container *container,
- unsigned long arg);
int __init vfio_container_init(void);
void vfio_container_cleanup(void);
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 1e414b2c48a511..a8d1fbfcc3ddad 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -1625,24 +1625,25 @@ EXPORT_SYMBOL_GPL(vfio_file_is_group);
bool vfio_file_enforced_coherent(struct file *file)
{
struct vfio_group *group = file->private_data;
- bool ret;
+ struct vfio_device *device;
+ bool ret = true;
if (!vfio_file_is_group(file))
return true;
- mutex_lock(&group->group_lock);
- if (group->container) {
- ret = vfio_container_ioctl_check_extension(group->container,
- VFIO_DMA_CC_IOMMU);
- } else {
- /*
- * Since the coherency state is determined only once a container
- * is attached the user must do so before they can prove they
- * have permission.
- */
- ret = true;
+ /*
+ * If the device does not have IOMMU_CAP_ENFORCE_CACHE_COHERENCY then
+ * any domain later attached to it will also not support it.
+ */
+ mutex_lock(&group->device_lock);
+ list_for_each_entry(device, &group->device_list, group_next) {
+ if (!device_iommu_capable(device->dev,
+ IOMMU_CAP_ENFORCE_CACHE_COHERENCY)) {
+ ret = false;
+ break;
+ }
}
- mutex_unlock(&group->group_lock);
+ mutex_unlock(&group->device_lock);
return ret;
}
EXPORT_SYMBOL_GPL(vfio_file_enforced_coherent);
--
2.38.0