Implementing Miracast?

Martin Peres martin.peres at free.fr
Tue Dec 8 08:39:40 PST 2015


On 08/12/15 13:59, David Herrmann wrote:
> Hi
>
> On Fri, Dec 4, 2015 at 9:07 AM, Daniel Vetter <daniel at ffwll.ch> wrote:
>> On Thu, Dec 03, 2015 at 07:26:31PM +0200, Martin Peres wrote:
>>> You are right Ilia, this is indeed what Jaakko and I had in mind, but they
>>> did not re-use the fuse/cuse framework to do the serialization of the
>>> ioctls.
>>>
>>> Not sure what we can do against allowing proprietary drivers to use this
>>> feature though :s To be fair, nothing prevents any vendor from doing this
>>> shim themselves, and nvidia definitely did it and directly called their
>>> closed-source driver.
>>>
>>> Any proposition on how to handle this case? I guess we could limit that to
>>> screens only, no rendering. That would block any serious GPU manufacturer
>>> from using this code even if any sane person would never write a driver in
>>> the userspace...
>> Hm for virtual devices like this I figured there's no point exporting the
>> full kms api to userspace, but instead we'd just need a simple kms driver
>> with just 1 crtc and 1 connector per drm_device. Plus a special device
>> node (v4l is probably inappropriate since it doesn't do damage) where the
>> miracast userspace can receive events with just the following information:
>> - virtual screen size
>> - fd to the underlying shmem node for the current fb. Or maybe a dma-buf
>>    (but then we'd need the dma-buf mmap stuff to land first).
>> - damage tracking
>>
>> If we want fancy, we could allow userspace to reply (through an ioctl)
>> when it's done reading the previous image, which the kernel could then
>> forward as vblank complete events.
>>
>> Connector configuration could be done by forcing the outputs (we'll send
>> out uevents nowadays for that), so the only thing we need is some configfs
>> to instantiate new copies of this.
>>
>> At least for miracast (as opposed to full-blown hw drivers in userspace) I
>> don't think we need to export everything.
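
For concreteness, the per-frame event you describe could look roughly like
the sketch below. None of these names exist in the kernel today; the struct,
the device node and the ioctl are pure assumptions, just to make the proposed
interface tangible:

#include <stdint.h>
#include <linux/ioctl.h>

/* Hypothetical event read from the special device node, one per frame. */
struct virt_display_frame_event {
	uint32_t width;      /* virtual screen size */
	uint32_t height;
	int32_t  fb_fd;      /* shmem fd (or dma-buf) backing the current fb */
	uint32_t num_clips;  /* damage tracking */
	struct {
		uint32_t x1, y1, x2, y2;
	} clips[];
};

/* Hypothetical reply ioctl: userspace signals it is done reading the frame,
 * which the kernel would then forward as a vblank-complete event. */
#define VIRT_DISPLAY_IOC_FRAME_DONE _IOW('V', 0x01, uint32_t)

The receiving side would then just be a small daemon that reads these events,
encodes the frame and streams it over the Miracast session.
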
> I looked into all this when working on WFD, but I cannot recommend
> going down that road. First of all, you still need heavy modifications
> for gnome-shell, kwin, and friends, as neither of them supports
> seamless drm-device hotplugging.

That would still be needed for USB GPUs though. It seems metacity had no
problem with it in 2011, although I have no idea how heavily patched it was:
https://www.youtube.com/watch?v=g54y80blzRU

Airlied?

> Hence, providing more devices than
> the main GPU just confuses them. Secondly, you really don't win much
> by re-using DRM for all that. On the contrary, you get very heavy
> overhead, need to feed all this through limited ioctl interfaces, and
> fake DRM crtcs/encoders/connectors, when all you really have is an
> mpeg stream.
The overhead is only at init time, is that really relevant? The only added
per-frame cost would be the page-flip ioctl, which is negligible as well
since it happens at most 60 times per second on usual monitors.

> I wouldn't mind if anyone writes a virtual DRM interface, it'd be
> really great for automated testing. However, if you want your
> wifi-display (or whatever else) integrated into desktop environments,
> then I recommend teaching those environments to accept gstreamer sinks
> as outputs.

That is a fair proposal, but it requires a lot more work from compositors
than simply waiting for DRM udev events and reusing all the existing DRM
infrastructure to drive the new type of display.
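
To illustrate how little is needed on the compositor side with the DRM
approach, picking up the new device is just the usual libudev hotplug dance
(rough, untested sketch, error handling omitted):

#include <libudev.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
	struct udev *udev = udev_new();
	struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

	/* Same filter a compositor already needs for USB GPUs and hotplug */
	udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
	udev_monitor_enable_receiving(mon);

	struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };
	for (;;) {
		poll(&pfd, 1, -1);
		struct udev_device *dev = udev_monitor_receive_device(mon);
		if (!dev)
			continue;
		printf("%s %s\n", udev_device_get_action(dev),
		       udev_device_get_sysname(dev));
		/* On "add", open the node and drive it like any other KMS device. */
		udev_device_unref(dev);
	}
}

Everything past that point (modesetting, page flips) goes through the KMS
paths compositors already have.
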

I guess there are benefits to being able to output to a gstreamer backend,
but the DRM driver we propose could do just that without requiring much new
code, especially since that code is already necessary for handling USB GPUs.
Moreover, a gstreamer backend would not be registered as a screen by X, which
means games may not be able to go fullscreen on that screen alone.

I am open to the idea of having compositors render to a gstreamer backend,
but I have never worked with gstreamer myself, so I have no clue how well
suited it is for output management (resolution, refresh rate), and there is
the added difficulty that the X model does not work well with this approach.
We will have a look at it, though.
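
From a very quick look, the compositor side of your proposal would presumably
boil down to pushing frames into an appsrc element, something like the sketch
below (untested, and the caps and sink choice are pure guesses on my part;
a real wifi-display sink would be an H.264 encoder + MPEG-TS mux + RTP):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

int main(int argc, char **argv)
{
	gst_init(&argc, &argv);

	/* autovideosink stands in for whatever the wifi-display sink really is. */
	GstElement *pipeline = gst_parse_launch(
		"appsrc name=src is-live=true do-timestamp=true format=time "
		"! videoconvert ! autovideosink", NULL);
	GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");

	GstCaps *caps = gst_caps_new_simple("video/x-raw",
		"format", G_TYPE_STRING, "BGRx",
		"width", G_TYPE_INT, 1280,
		"height", G_TYPE_INT, 720,
		"framerate", GST_TYPE_FRACTION, 60, 1, NULL);
	gst_app_src_set_caps(GST_APP_SRC(src), caps);
	gst_caps_unref(caps);

	gst_element_set_state(pipeline, GST_STATE_PLAYING);

	for (int i = 0; i < 600; i++) {
		/* In a compositor this would be the rendered frame, not a test fill. */
		gsize size = 1280 * 720 * 4;
		GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
		gst_buffer_memset(buf, 0, i & 0xff, size);
		gst_app_src_push_buffer(GST_APP_SRC(src), buf);
		g_usleep(G_USEC_PER_SEC / 60);
	}

	gst_app_src_end_of_stream(GST_APP_SRC(src));
	gst_element_set_state(pipeline, GST_STATE_NULL);
	gst_object_unref(src);
	gst_object_unref(pipeline);
	return 0;
}

How resolution or refresh-rate changes would be negotiated through such a
pipeline is exactly the output-management question I have no answer to yet.
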

Martin

