refactor the i915 GVT support

Wang, Zhi A zhi.a.wang at intel.com
Thu Jul 29 08:19:15 UTC 2021


On 7/28/2021 8:59 PM, Jason Gunthorpe wrote:
> On Wed, Jul 28, 2021 at 01:38:58PM +0000, Wang, Zhi A wrote:
>
>> I guess those APIs you were talking about are KVM-only. Other
>> hypervisors, e.g. Xen and ACRN, cannot use the APIs you mentioned.
>> Not sure if you have already noticed that VFIO is KVM-only right now.
> There is very little hard connection between VFIO and KVM, so no, I
> don't think that is completely true.
>
> In any event, an in-tree version of other hypervisor support for GVT
> needs to go through enabling VFIO support so that the existing API
> multiplexers we have can be used properly, not adding a shim layer
> trying to recreate VFIO inside a GPU driver.

We presented GVT-g at Xen Summit 2018, and during that presentation we 
talked about supporting VFIO in Xen (the video can be found on 
YouTube). But we didn't see any motivation from the Xen community to 
adopt it.

If you look at the PCI-passthrough code in QEMU, Xen is still not using 
VFIO even today. We would be glad to see someone influence that part, 
especially getting all the in-kernel hypervisors to use VFIO for PCI 
passthrough and to support mdev. That would be a huge benefit for all 
users.

>> GVT-g is designed for many hypervisors, not only KVM. In the design,
>> we implemented an abstraction layer for different hypervisors. You
>> can check the link in the previous email, which has an example of how
>> the MPT module "xengt" supports GVT-g running under Xen. For
>> example, injecting an MSI in VFIO/KVM is done via an eventfd,
>> but in Xen we need to issue a hypercall from Dom0.
> This is obviously bad design; Xen should plug into the standardized
> eventfd scheme as well and trigger its hypercall this way. Then it can
> integrate with the existing VFIO interrupt abstraction infrastructure.
>
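Agreed, and that is roughly what the KVM path already does today: QEMU 
registers an eventfd through VFIO_DEVICE_SET_IRQS, and the device model 
only signals that eventfd. A minimal sketch of the driver side follows; 
the my_vgpu/msi_trigger names are illustrative, not the exact GVT code:

#include <linux/eventfd.h>

struct my_vgpu {
	/* eventfd_ctx saved when userspace did VFIO_DEVICE_SET_IRQS */
	struct eventfd_ctx *msi_trigger;
};

static int vgpu_inject_msi(struct my_vgpu *vgpu)
{
	if (!vgpu->msi_trigger)
		return -EINVAL;

	/*
	 * The consumer of the eventfd decides the delivery path: KVM's
	 * irqfd injects the MSI into the guest, while a Xen backend
	 * could wait on the same fd and issue the Dom0 hypercall. The
	 * device model does not need to know which one is listening.
	 */
	eventfd_signal(vgpu->msi_trigger, 1);
	return 0;
}
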
>> There are others, like querying mappings between GFN and HFN.
> This should be done through VFIO containers, there is nothing KVM
> specific there.
>
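For the KVM case this is indeed what the VFIO mdev API already 
provides. A rough sketch of a hypervisor-agnostic lookup through the 
container; vgpu_gfn_to_hfn() is an illustrative helper, not existing 
code:

#include <linux/iommu.h>
#include <linux/mdev.h>
#include <linux/vfio.h>

/*
 * Resolve a guest pfn to a host pfn through the VFIO container. This
 * pins the page via the container's IOMMU mapping instead of calling
 * a hypervisor-specific hook; it must be balanced by a later call to
 * vfio_unpin_pages().
 */
static int vgpu_gfn_to_hfn(struct mdev_device *mdev, unsigned long gfn,
			   unsigned long *hfn)
{
	int ret;

	ret = vfio_pin_pages(mdev_dev(mdev), &gfn, 1,
			     IOMMU_READ | IOMMU_WRITE, hfn);
	if (ret < 0)
		return ret;

	return ret == 1 ? 0 : -EFAULT;
}
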
>> As you can see, to cope with this situation, we have to rely on
>> an abstraction layer so that we can avoid introducing code blocks
>> like this in the core logic:
> No, you have to fix the abstractions we already have to support the
> matrix of things you care about. If this can't be done, then maybe we
> can add new abstractions, but abstractions like this absolutely should
> not be done inside drivers.
>
> Jason

That's a good point, and we were actually thinking about this before; I 
believe that's the correct direction. But just like the situation 
mentioned above, it would be nice if people could really push all the 
in-kernel hypervisors toward VFIO, which would benefit all users.

For now, we are just going to take Christoph's patches.

Zhi


