[PATCH v2 00/11] Connect VFIO to IOMMUFD
Matthew Rosato
mjrosato at linux.ibm.com
Tue Nov 15 01:16:57 UTC 2022
On 11/9/22 11:57 AM, Jason Gunthorpe wrote:
> On Tue, Nov 08, 2022 at 11:18:03PM +0800, Yi Liu wrote:
>> On 2022/11/8 17:19, Nicolin Chen wrote:
>>> On Mon, Nov 07, 2022 at 08:52:44PM -0400, Jason Gunthorpe wrote:
>>>
>>>> This is on github: https://github.com/jgunthorpe/linux/commits/vfio_iommufd
>>> [...]
>>>> v2:
>>>> - Rebase to v6.1-rc3, v4 iommufd series
>>>> - Fixup comments and commit messages from list remarks
>>>> - Fix leaking of the iommufd for mdevs
>>>> - New patch to fix vfio modaliases when vfio container is disabled
>>>> - Add a dmesg once when the iommufd provided /dev/vfio/vfio is opened
>>>> to signal that iommufd is providing this
>>>
>>> I've redone my previous sanity tests. Apart from the reported bugs,
>>> things look fine. Once those issues are fixed, GVT and other modules
>>> can run some more stressful tests, I think.
>>
>> Our side is also starting tests (GVT, NIC passthrough) on this version;
>> we need to wait a while for the results.
>
> I've updated the branches with the two functional fixes discussed on
> the list plus all the doc updates.
>
For s390, I tested vfio-pci against some data-mover workloads using QEMU on s390x with CONFIG_VFIO_CONTAINER=y and =n, both with zPCI interpretation assists (over ISM/SMC-D, mlx5, and NVMe) and without them (over mlx5 and NVMe). I will continue testing with more aggressive workloads.
(I did not run with CONFIG_IOMMUFD_TEST other than when building the selftest, but I see you mentioned this to Yi -- I'll incorporate that setting into future runs.)
I ran the selftests on s390 in an LPAR and within a QEMU guest -- all tests pass (using 1M hugepages).
Did light regression testing of vfio-ap and vfio-ccw on s390x with CONFIG_VFIO_CONTAINER=y and =n.
I didn't see it in your branch yet, but I also verified that the proposed change to iommufd_fill_cap_dma_avail (.avail = U32_MAX) would work as expected.
Tested-by: Matthew Rosato <mjrosato at linux.ibm.com>