[Mesa-dev] [PATCH 10/10] egl/android: Add fallback to kms_swrast driver

Tomasz Figa tfiga at chromium.org
Thu Jul 21 05:10:52 UTC 2016


On Wed, Jul 20, 2016 at 10:34 PM, Rob Herring <robh at kernel.org> wrote:
> On Wed, Jul 20, 2016 at 12:53 AM, Tomasz Figa <tfiga at chromium.org> wrote:
>> On Wed, Jul 20, 2016 at 7:40 AM, Rob Herring <robh at kernel.org> wrote:
>>> On Fri, Jul 15, 2016 at 2:53 AM, Tomasz Figa <tfiga at chromium.org> wrote:
>>>> If no hardware driver is present, it is possible to fall back to
>>>> the kms_swrast driver with any DRI node that supports dumb GEM create
>>>> and mmap IOCTLs with softpipe/llvmpipe drivers. This patch makes the
>>>> Android EGL platform code retry probe with kms_swrast if hardware-only
>>>> probe fails.
>>>
>>> Presumably, you need a gralloc that supports this too? It would be
>>> nice to have access to it to reproduce this setup.
>>
>> Our use case is running the system in Qemu with vgem driver, so our
>> gralloc has a backend for vgem. However it should work with any
>> available card or render node (more about render nodes below), no
>> special support in gralloc really needed. It's just using kms_swrast
>> instead of the native driver.
>
> Okay, interesting. I've not really looked at vgem. GBM also has a path
> for allocating dumb buffers which I was intending to try and get
> working with s/w rendering. Sadly, I've found s/w rendering harder to
> get working than h/w rendering.

I initially investigated using the regular swrast path through the DRI
swrast loader, but that was a real PITA - it allows almost no reuse of
the DRI2 or image loader code. Then I discovered kms_swrast, which
just works with the DRI2 or image loader, so there are no special
requirements on the EGL side other than loading the kms_swrast driver
explicitly, since it doesn't match any DRI node by default.

>
>>> [...]
>>>
>>>>  #define DRM_RENDER_DEV_NAME  "%s/renderD%d"
>>>>
>>>>  static int
>>>> -droid_open_device(_EGLDisplay *dpy)
>>>> +droid_open_device(_EGLDisplay *dpy, int swrast)
>>>>  {
>>>>     struct dri2_egl_display *dri2_dpy = dpy->DriverData;
>>>>     const int limit = 64;
>>>> @@ -933,7 +936,7 @@ droid_open_device(_EGLDisplay *dpy)
>>>>        if (fd < 0)
>>>>           continue;
>>>>
>>>> -      if (!droid_probe_device(dpy, fd))
>>>> +      if (!droid_probe_device(dpy, fd, swrast))
>>>
>>> This only gets here if a render node is present and successfully
>>> opened.
>>
>> This is the case when HAS_GRALLOC_HEADERS is not defined, which means
>> only render nodes are supported. If you look at the other case, it
>> will use whatever FD was provided by gralloc using that private
>> perform call.
>>
>>> I would think in the sw rendering case, we want this to work
>>> when there's only a card node present. Furthermore, you can't do dumb
>>> allocs on a render node, so I don't see how this can work at all.
>>
>> This is only because the dumb alloc ioctl is not allowed, but that's
>> the only thing preventing it from working. We had similar restriction
>> put on mmap, but now everyone can just mmap the PRIME FD directly. We
>> actually have a patch allowing dumb alloc and mmap ioctls for render
>> nodes in our tree, because it makes things like swrast fallback much,
>> much easier and doesn't seem to be harmful at all. It might be worth
>> discussing this again on dri-devel mailing list.
>
> Yes, bypassing permissions is an easy hack, but I believe what's in
> place is by design and you are unlikely to change that.

Sometimes things are broken by design. I wouldn't treat any existing
design as carved in stone. ;)

Still, I'd like a good understanding of the rationale behind this
restriction, and that's why we are discussing it. Obviously, if there
is a better way to do things that wouldn't require changing this
behavior, I'm open to suggestions. It doesn't seem reasonable to me,
though, to keep the status quo when there is no other reasonable way
to achieve the desired results.

> The answer
> always seems to be don't use dumb buffers...

This doesn't really convince me. As far as I understand, being able
to use render nodes for software rendering should be superior to
control nodes in terms of security. Considering that render nodes
already allow allocating GEM buffers through driver-specific IOCTLs,
exposing the dumb alloc IOCTL doesn't seem to pose any additional
security issue.

We don't strictly need to use render nodes. We could hack control
nodes not to require the authentication dance for the PRIME import
and export ioctls (I fail to understand this restriction as well,
since render nodes can already do both freely). However, AFAICT the
proper way forward is to leave control nodes for KMS purposes only
and use render nodes for everything else.

Best regards,
Tomasz
