Split render/display SoCs, Mesa's renderonly, and Wayland dmabuf hints

Eric Anholt eric at anholt.net
Tue Apr 20 23:11:04 UTC 2021


On Tue, Apr 20, 2021 at 3:18 AM Daniel Stone <daniel at fooishbar.org> wrote:
>
> Hi,
>
> On Mon, 19 Apr 2021 at 13:06, Simon Ser <contact at emersion.fr> wrote:
>>
>> I'm working on a Wayland extension [1] that, among other things, allows
>> compositors to advertise the preferred device to be used by Wayland
>> clients.
>>
>> In general, compositors will send a render node. However, in the case
>> of split render/display SoCs, things get a little bit complicated.
>>
>> [...]
>
>
> Thanks for the write-up Simon!
>
>>
>> There are a few solutions:
>>
>> 1. Require compositors to discover the render device by trying to import
>>    a buffer. For each available render device, the compositor would
>>    allocate a buffer, export it as a DMA-BUF, import it to the
>>    display-only device, then try to drmModeAddFB.
>
>
> I don't think this is actually tractable? Assuming that 'allocate a buffer' means 'obtain a gbm_device for the render node directly and allocate a gbm_bo from it', even with compatible formats and modifiers this will fail for more restrictive display hardware. imx-drm and pl111 (combined with vc4 on some Raspberry Pis) will fail this, since they'll take different allocation paths when they're bound through kmsro vs. directly, accounting for things like contiguous allocation. So we'd get false negatives on at least some platforms.
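For reference, a minimal sketch of what the probe in option 1 could look like. The device paths, buffer size, and error handling are illustrative, and GEM handle cleanup is elided; note also that gbm_bo_create() here runs directly on the render node, bypassing kmsro, which is exactly the allocation-path mismatch that produces the false negatives described above:

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Probe whether buffers allocated on render_path can be scanned out
     * on the already-open display_fd. Returns 1 on success. */
    static int probe_render_device(int display_fd, const char *render_path)
    {
        int ok = 0;
        int render_fd = open(render_path, O_RDWR | O_CLOEXEC);
        if (render_fd < 0)
            return 0;

        struct gbm_device *gbm = gbm_create_device(render_fd);
        struct gbm_bo *bo = gbm ?
            gbm_bo_create(gbm, 64, 64, GBM_FORMAT_XRGB8888,
                          GBM_BO_USE_SCANOUT | GBM_BO_USE_LINEAR) : NULL;

        if (bo) {
            int dmabuf_fd = gbm_bo_get_fd(bo);  /* export as DMA-BUF */
            uint32_t handle;
            if (dmabuf_fd >= 0 &&
                drmPrimeFDToHandle(display_fd, dmabuf_fd, &handle) == 0) {
                uint32_t handles[4] = { handle };
                uint32_t pitches[4] = { gbm_bo_get_stride(bo) };
                uint32_t offsets[4] = { 0 };
                uint32_t fb_id;
                /* The real test: does KMS accept this buffer for scanout? */
                if (drmModeAddFB2(display_fd, 64, 64, GBM_FORMAT_XRGB8888,
                                  handles, pitches, offsets, &fb_id, 0) == 0) {
                    ok = 1;
                    drmModeRmFB(display_fd, fb_id);
                }
            }
            if (dmabuf_fd >= 0)
                close(dmabuf_fd);
            gbm_bo_destroy(bo);
        }
        if (gbm)
            gbm_device_destroy(gbm);
        close(render_fd);
        return ok;
    }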
>
>>
>> 2. Allow compositors to query the render device magically opened by
>>    kmsro. This could be done either via EGL_EXT_device_drm, or via a
>>    new EGL extension.
>
>
> This would be my strong preference, and I don't entirely understand anholt's pushback here. The way I see it, GBM is about allocation for scanout, and EGL is about rendering. If, on a split GPU/display system, we create a gbm_device from a KMS display-only device node, then creating an EGLDisplay from that magically binds us to a completely different DRM GPU node, and anything using that EGLDisplay will use that GPU device to render.
>
> Being able to discover the GPU device node through the device query is really useful, because it tells us exactly what implicit magic EGL did under the hood, and about the device that EGL will use. Being able to discover the display node is much less useful; it does tell us how GBM will allocate buffers, but the user already knows which device is in use because they supplied it to GBM. I see the display node as a property of GBM, and the GPU node as a property of EGL, even if EGL does do (*waves hands*) stuff under the hood to ensure the two are compatible.
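Concretely, the query could look something like the sketch below, assuming EGL_EXT_device_query and EGL_EXT_device_drm are available (the KMS device path and error handling are illustrative; the EGL_DRM_RENDER_NODE_FILE_EXT query at the end comes from EGL_EXT_device_drm_render_node, one possible shape for the "new EGL extension" mentioned above):

    #include <fcntl.h>
    #include <stdio.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        /* Create the EGLDisplay from a gbm_device on the KMS node; on a
         * split SoC, EGL/kmsro silently binds a separate GPU node. */
        int kms_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        struct gbm_device *gbm = gbm_create_device(kms_fd);
        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
        eglInitialize(dpy, NULL, NULL);

        PFNEGLQUERYDISPLAYATTRIBEXTPROC query_display_attrib =
            (PFNEGLQUERYDISPLAYATTRIBEXTPROC)
                eglGetProcAddress("eglQueryDisplayAttribEXT");
        PFNEGLQUERYDEVICESTRINGEXTPROC query_device_string =
            (PFNEGLQUERYDEVICESTRINGEXTPROC)
                eglGetProcAddress("eglQueryDeviceStringEXT");

        /* Which EGLDeviceEXT did this display actually end up on? */
        EGLAttrib dev_attrib;
        query_display_attrib(dpy, EGL_DEVICE_EXT, &dev_attrib);
        EGLDeviceEXT dev = (EGLDeviceEXT)dev_attrib;

        /* EGL_EXT_device_drm names the DRM device EGL renders with, which
         * on a split SoC differs from the KMS node we opened above. */
        printf("EGL DRM device: %s\n",
               query_device_string(dev, EGL_DRM_DEVICE_FILE_EXT));

        /* EGL_EXT_device_drm_render_node exposes the render node itself,
         * which is the answer a compositor would advertise to clients. */
        printf("EGL render node: %s\n",
               query_device_string(dev, EGL_DRM_RENDER_NODE_FILE_EXT));
        return 0;
    }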

I guess if we're assuming that the caller definitely knows about the
display device and is asking EGL for the render node in order to do
smarter buffer sharing between display and render, I can see it.  My
objection was that, in that earlier discussion, getting the render node
was apparently a workaround for other brokenness, and it was going to
result in software that didn't work on pl111 and vc4 displays because
it was trying to dodge kmsro.

