[virglrenderer-devel] Debian packaging, project status?
Dave Airlie
airlied at gmail.com
Mon Mar 7 02:38:34 UTC 2016
On 5 March 2016 at 00:57, Daniel Pocock <daniel at pocock.pro> wrote:
>
>
> On 04/03/16 14:01, Gerd Hoffmann wrote:
>> Hi,
>>
>>> Does it use any functionality of the GPU on the host (or is that the aim
>>> in future) or does it do everything in the host CPU?
>>
>> Yes, it uses the host gpu.
>>
>>>> Nothing is known on this. No other driver can be used. Red Hat has an
>>>> interest in doing that work at some point, but there is no timeline and no
>>>> real work has been done, so if anyone wants to learn about writing WDDM
>>>> drivers, now is a good time to learn!
>>>>
>>>
>>> Have you thought about advertising that for a GSoC student project?
>>> Student application deadline is 25 March, so now would be the time to
>>> advertise it. It could potentially go under any of these communities:
>>>
>>> http://qemu-project.org/Google_Summer_of_Code_2016
>>
>> I suspect it's too big for a GSoC project.
>>
>
> Is there any portion of the work, such as porting some library, that
> could be feasible?
No, not really. You have to write a complete Windows graphics driver; not many
people capable of that exist, and I doubt any of them will be doing GSoC.
A Windows driver is probably a 2-3 man-year project for experienced developers.
>>> Have you also seen the KVMGT project, how do your aims compare to that?
>>> https://events.linuxfoundation.org/sites/events/files/slides/KVMGT-a%20Full%20GPU%20Virtualization%20Solution_1.pdf
>>
>>> They only support Intel Haswell right now.
>>
>> Haswell support is proof-of-concept only and will not go upstream.
>> Broadwell support is meanwhile available too and will be merged
>> upstream.
>> Skylake support is still WIP.
>>
>> virtio-gpu will run on pretty much anything.
>>
>> kvmgt will run on supported Intel hardware only, and the underlying
>> hardware is not transparent to the guest (i.e. when running on a
>> Broadwell host the guest will see a Broadwell igd). On the flip side, it
>> will most likely deliver better performance than virtio-gpu on the same
>> hardware.
>>
>> The rendering/output path (spice integration etc) will be the same for
>> both virtio-gpu and kvmgt.
>>
>
> Thanks for that feedback, it is really helpful.
>
> I've created a link to this thread from the Debian bug tracker and wiki
> so other people can discover what you are doing and look at getting
> involved or testing.
To clarify what Gerd said:
There are 3 ways of getting a GPU inside qemu:
a: dedicated GPU passthrough
b: KVMGT type HW passthrough
c: paravirt graphics card
They all have different use cases and advantages/disadvantages; they
don't "compete" as such, they are orthogonal, and you really need to
support all of them.
virtio-gpu is targeting (c) only, and trying to make it as
efficient as possible.
(a) and (b) require specific hardware setups and a lot more up-front
knowledge to set up, at least so far.
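
For anyone who wants to experiment, here's a rough sketch of what (a) and
(c) look like on the qemu command line. The PCI address, disk image and
display backend are just placeholders, exact option names depend on your
qemu version, and (b) needs Intel-specific host setup that I won't go into
here:

    # (c) paravirt graphics: virtio-gpu with virgl acceleration
    #     (needs qemu built against virglrenderer and a GL-capable display)
    qemu-system-x86_64 -enable-kvm -m 4G \
        -device virtio-vga,virgl=on \
        -display gtk,gl=on \
        disk.img

    # (a) dedicated GPU passthrough with vfio-pci
    #     (the host GPU must be unbound from its driver and bound to
    #      vfio-pci first; 01:00.0 is just a placeholder address)
    qemu-system-x86_64 -enable-kvm -m 4G \
        -device vfio-pci,host=01:00.0 \
        disk.img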
Dave.