[Mesa-dev] [PATCH RFC 0/2] GBM API extension to support fusing KMS and render devices

Thierry Reding thierry.reding at gmail.com
Mon Mar 7 10:19:22 UTC 2016


On Mon, Mar 07, 2016 at 10:46:52AM +0100, Lucas Stach wrote:
> Am Freitag, den 04.03.2016, 18:34 +0000 schrieb Emil Velikov:
> > On 4 March 2016 at 17:38, Lucas Stach <l.stach at pengutronix.de> wrote:
> > > Am Freitag, den 04.03.2016, 17:20 +0000 schrieb Daniel Stone:
> > >> Hi,
> > >>
> > >> On 4 March 2016 at 16:08, Lucas Stach <l.stach at pengutronix.de> wrote:
> > >> > Am Freitag, den 04.03.2016, 15:09 +0000 schrieb Daniel Stone:
> > >> >> Thanks for taking this on, it looks really good! I just have the one
> > >> >> question though - did you look at the EGLDevice extension? Using that
> > >> >> to enumerate the GPUs, we could create the gbm_device using the KMS
> > >> >> device and pass that in to the EGLDisplay, with an additional attrib
> > >> >> to pass in an EGLDevice handle to eglGetPlatformDisplay. This could
> > >> >> possibly be better since it is more independent of DRM as the API, and
> > >> >> also allows people to share device enumeration/selection code with
> > >> >> other platforms (e.g. choosing between multiple GPUs when using a
> > >> >> winsys like Wayland or X11).
> > >> >>
> > >> > I have not looked at this in detail yet, but I think it's just an
> > >> > extension to the interface outlined by this series.
> > >> >
> > >> > If we require the KMS device to have a DRI2/Gallium driver it should be
> > >> > easy to hook up the EGLDevice discovery for them.
> > >> > Passing in a second device handle for the KMS device is then just the
> > >> > EGL implementation calling gbm_device_set_kms_provider() on the render
> > >> > GBM device, instead of the application doing it manually.
> > >>
> > >> It turns the API backwards a bit though ...
> > >>
> > >> Right now, what we require is that the GBM device passed in is the KMS
> > >> device, not the GPU device; what you're suggesting is that we discover
> > >> the GPU device and then add the KMS device.
> > >>
> > >> So, with your proposal:
> > >> gbm_gpu = gbm_device_create("/dev/dri/renderD128");
> > >> egl_dpy = eglGetDisplay(gbm_gpu);
> > >> gbm_kms = gbm_device_create("/dev/dri/card0");
> > >> gbm_device_set_kms_provider(gbm_gpu, gbm_kms);
> > >>
> > >> i.e. the device the user creates first is the GPU device.
> > >>
> > >> With EGLDevice, we would have:
> > >> gbm_kms = gbm_device_create("/dev/dri/card0");
> > >> egl_gpus = eglGetDevicesEXT();
> > >> egl_dpy = eglGetPlatformDisplay(gbm_kms, { EGL_TARGET_DEVICE, egl_gpus[0] });
> > >>
> > >> So, the first/main device the user deals with is the KMS device - same
> > >> as today. This makes sense, since GBM is the allocation API for KMS,
> > >> and EGL should be the one dealing with the GPU ...
> > >>
> > > Right, my API design was from my view of GBM being the API to bootstrap
> > > EGL rendering, but defining it as the KMS allocation API makes a lot
> > > more sense when you think about it.
> > >
> > >> Maybe it would make sense to reverse the API, so rather than creating
> > >> a GBM device for the GPU and then linking that to the KMS device -
> > >> requiring users to make different calls, e.g. gbm_bo_get_kms_bo(),
> > >> which makes it harder to use and means we need to port current users -
> > >> we create a GBM device for KMS and then link that to a GPU device.
> > >> This would then mean that eglGetPlatformDisplay could do the linkage
> > >> internally, and then existing users using gbm_bo_get_handle() etc
> > >> would still work without needing any different codepaths.
> > >
> > > Yes, this will make the implementation inside GBM a bit more involved,
> > > but it seems more natural this way around when thinking about hooking it
> > > up to EGLDevice. I'll try it out and send an updated RFC after the
> > > weekend.
> > >
> > While I'm more inclined to Daniel's suggestion, I wonder why people
> > moved away from Thierry's approach - creating a composite/wrapped DRI
> > module? Is there anything wrong with it - be that from a technical or
> > conceptual POV?
> > 
> The wrapped driver takes away the ability of the application to decide
> which GPUs to bind together - at least if you want to keep things
> tightly coupled at that level.

That was actually the prime objective of the patches I posted back at
the time. =)

> The point of the explicit application control is that we not only solve
> the "SoCs have split render/scanout devices" issue, but gain an API for
> compositors to work properly on PRIME laptop configurations with
> render/render/scanout. We don't want any autodetection to happen there;
> a compositor may well decide to use the Intel GPU as scanout only and do
> all composition on the discrete GPU. Having a tightly coupled wrapped
> driver for every device combination is not really where we want to go,
> right?

To be honest, I don't think we have much of a choice. Most bare-metal
applications don't make a distinction between render and scanout. They
will simply assume that you can do both on the same device, because
that's what their development machine happens to have. So unless we
make a deliberate decision not to support most applications out there,
what other options do we have?
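
To illustrate, the pattern that pretty much every such application uses
today looks roughly like this (a minimal sketch against the existing
GBM/EGL API; error handling omitted):

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    /* A single node is assumed to handle both rendering and scanout. */
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    struct gbm_device *gbm = gbm_create_device(fd);

    /* The same device is handed to EGL for rendering ... */
    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);

    /* ... and used to allocate buffers that are expected to be both
     * renderable and scanout-capable. */
    struct gbm_surface *surface =
        gbm_surface_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                           GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

On a split render/scanout SoC there simply is no single node for which
all of the above succeeds, which is why a sensible default matters.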

While I agree it's good to have an API that allows explicit control over
the association of render and scanout nodes, I think we really want
both: in addition to giving users the flexibility if they request it,
we should give them a sensible default if they don't care.

Especially on systems where there usually isn't a reason to care: most
modern SoCs never need explicit control over the association because
there is typically only a single render node and a single scanout node
in the system.
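
Concretely, I'd expect the two cases to look something like this (a
sketch only: gbm_device_set_kms_provider() is the entry point proposed
in this series, not an existing one, and the node paths are just
examples):

    #include <fcntl.h>
    #include <gbm.h>

    /* PRIME laptop compositor: pick the pairing explicitly, e.g. use
     * the integrated GPU's node for scanout only and render on the
     * discrete GPU's render node. */
    struct gbm_device *gbm_kms =
        gbm_create_device(open("/dev/dri/card0", O_RDWR | O_CLOEXEC));
    struct gbm_device *gbm_gpu =
        gbm_create_device(open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC));
    gbm_device_set_kms_provider(gbm_gpu, gbm_kms);

    /* SoC default: the application opens the one KMS node it knows
     * about and the implementation associates the (single) render node
     * internally, so unmodified applications keep working. */
    struct gbm_device *gbm =
        gbm_create_device(open("/dev/dri/card0", O_RDWR | O_CLOEXEC));

That way the explicit API and the default can coexist without the
application having to know which kind of system it is running on.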

> > I believe it has a few advantages over the above two proposals - it
> > allows greater flexibility as both drivers will be tightly coupled and
> > can communicate directly, does not expand the internal/hidden ABI that
> > we currently have between GBM and EGL, could (in theory) work with
> > GLX.
> 
> As said above: if you want to bind arbitrary combinations of drivers
> together you need to move away from tight coupling to a shared interface
> anyway. I don't see how having this interface inside a wrapped driver
> instead of GBM helps in any way; it's a Mesa-internal interface anyway.
> 
> We don't need any of this for GLX. Etnaviv is working fine with GLX on
> both imx-drm and armada-drm, as the DDX does all the work when binding
> devices together in that case.

In this case the DDX will take the role of the wrapped driver. So you'd
end up with duplication of the "glue" in both Mesa and the DDX, wouldn't
you?

Thierry