[Mesa-dev] [PATCH RFC 0/2] GBM API extension to support fusing KMS and render devices

Lucas Stach l.stach at pengutronix.de
Mon Mar 7 10:52:12 UTC 2016


On Monday, 07.03.2016 at 11:19 +0100, Thierry Reding wrote:
> On Mon, Mar 07, 2016 at 10:46:52AM +0100, Lucas Stach wrote:
> > On Friday, 04.03.2016 at 18:34 +0000, Emil Velikov wrote:
> > > On 4 March 2016 at 17:38, Lucas Stach <l.stach at pengutronix.de> wrote:
> > > > On Friday, 04.03.2016 at 17:20 +0000, Daniel Stone wrote:
> > > >> Hi,
> > > >>
> > > >> On 4 March 2016 at 16:08, Lucas Stach <l.stach at pengutronix.de> wrote:
> > > >> > On Friday, 04.03.2016 at 15:09 +0000, Daniel Stone wrote:
> > > >> >> Thanks for taking this on, it looks really good! I just have the one
> > > >> >> question though - did you look at the EGLDevice extension? Using that
> > > >> >> to enumerate the GPUs, we could create the gbm_device using the KMS
> > > >> >> device and pass that in to the EGLDisplay, with an additional attrib
> > > >> >> to pass in an EGLDevice handle to eglGetPlatformDisplay. This could
> > > >> >> possibly be better since it is more independent of DRM as the API, and
> > > >> >> also allows people to share device enumeration/selection code with
> > > >> >> other platforms (e.g. choosing between multiple GPUs when using a
> > > >> >> winsys like Wayland or X11).
> > > >> >>
> > > >> > I have not looked at this in detail yet, but I think it's just an
> > > >> > extension to the interface outlined by this series.
> > > >> >
> > > >> > If we require the KMS device to have a DRI2/Gallium driver it should be
> > > >> > easy to hook up the EGLDevice discovery for them.
> > > >> > Passing in a second device handle for the KMS device is then just the
> > > >> > EGL implementation calling gbm_device_set_kms_provider() on the render
> > > >> > GBM device, instead of the application doing it manually.
> > > >>
> > > >> It turns the API backwards a bit though ...
> > > >>
> > > >> Right now, what we require is that the GBM device passed in is the KMS
> > > >> device, not the GPU device; what you're suggesting is that we discover
> > > >> the GPU device and then add the KMS device.
> > > >>
> > > >> So, with your proposal:
> > > >> gbm_gpu = gbm_device_create("/dev/dri/renderD128");
> > > >> egl_dpy = eglGetDisplay(gbm_gpu);
> > > >> gbm_kms = gbm_device_create("/dev/dri/card0");
> > > >> gbm_device_set_kms_provider(gbm_gpu, gbm_kms);
> > > >>
> > > >> i.e. the device the user creates first is the GPU device.
> > > >>
> > > >> With EGLDevice, we would have:
> > > >> gbm_kms = gbm_device_create("/dev/dri/card0");
> > > >> egl_gpus = eglGetDevicesEXT();
> > > >> egl_dpy = eglGetPlatformDisplay(gbm_kms, { EGL_TARGET_DEVICE, egl_gpus[0] });
> > > >>
> > > >> So, the first/main device the user deals with is the KMS device - same
> > > >> as today. This makes sense, since GBM is the allocation API for KMS,
> > > >> and EGL should be the one dealing with the GPU ...
> > > >>
> > > > Right, my API design came from my view of GBM as the API to bootstrap
> > > > EGL rendering, but defining it as the KMS allocation API makes a lot
> > > > more sense when you think about it.
> > > >
> > > >> Maybe it would make sense to reverse the API, so rather than creating
> > > >> a GBM device for the GPU and then linking that to the KMS device -
> > > >> requiring users to make different calls, e.g. gbm_bo_get_kms_bo(),
> > > >> which makes it harder to use and means we need to port current users -
> > > >> we create a GBM device for KMS and then link that to a GPU device.
> > > >> This would then mean that eglGetPlatformDisplay could do the linkage
> > > >> internally, and then existing users using gbm_bo_get_handle() etc
> > > >> would still work without needing any different codepaths.
> > > >
> > > > Yes, this will make the implementation inside GBM a bit more involved,
> > > > but it seems more natural this way around when thinking about hooking it
> > > > up to EGLDevice. I'll try it out and send an updated RFC after the
> > > > weekend.
> > > >
> > > While I'm more inclined towards Daniel's suggestion, I wonder why people
> > > moved away from Thierry's approach - creating a composite/wrapped dri
> > > module? Is there anything wrong with it, be that from a technical or
> > > conceptual POV?
> > > 
> > The wrapped driver takes away the ability of the application to decide
> > which GPUs to bind together - at least if you want to keep things
> > tightly coupled at that level.
> 
> That was actually the prime objective of the patches I posted back at
> the time. =)
> 
> > The point of the explicit application control is that we not only solve
> > the "SoCs have split render/scanout devices" issue, but gain an API for
> > compositors to work properly on PRIME laptop configurations with
> > render/render/scanout. We don't want any autodetection to happen there;
> > a compositor may well decide to use the Intel GPU for scanout only and do
> > all composition on the discrete GPU. Having a tightly coupled wrapped
> > driver for every device combination is not really where we want to go,
> > right?
> 
> To be honest, I don't think we have much of a choice. Most bare-metal
> applications don't make a distinction between render and scanout. They
> will simply assume that you can do both on the same device, because
> that's what their development machine happens to have. So unless we
> make a deliberate decision not to support most applications out there,
> what other options do we have?
> 
I would like to encourage applications to take explicit control. But you
are right, we should not impose this as a requirement.

So probably the right thing to do is to put the GBM device on the scanout
device and use EGLDevice to discover render devices. If the application
explicitly passes an EGLDevice to eglGetPlatformDisplay() we use that
one; otherwise the implementation will look for a suitable EGLDevice
itself.
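
Roughly, the bootstrap as seen from the application would then look
something like the sketch below. This is purely illustrative:
EGL_TARGET_DEVICE is just the placeholder attrib name from Daniel's
example above (it would have to be defined by the proposed extension),
and error handling is omitted.

#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLDisplay bootstrap_display(void)
{
        /* The GBM device always lives on the KMS/scanout node. */
        int kms_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        struct gbm_device *gbm = gbm_create_device(kms_fd);

        /* Optionally enumerate render devices (EGL_EXT_device_enumeration). */
        PFNEGLQUERYDEVICESEXTPROC query_devices =
                (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
        EGLDeviceEXT devices[8];
        EGLint num_devices = 0;
        query_devices(8, devices, &num_devices);

        /* Placeholder attrib from this thread, naming the render device. */
        const EGLAttrib attribs[] = {
                EGL_TARGET_DEVICE, (EGLAttrib)devices[0],
                EGL_NONE,
        };
        return eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, attribs);
}

If the attrib is left out, the implementation would fall back to picking
a suitable render device on its own, so existing users that only deal
with a single device keep working unchanged.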

> While I agree it's good to have an API to allow explicit control over
> association of render to scanout nodes, I think that we really want
> both. In addition to giving users the flexibility if they request it,
> I think we want to give them a sensible default if they don't care.
> 
> Especially on systems where there usually isn't a reason to care. Most
> modern SoCs would never want explicit control over the association
> because there usually is only a single render node and a single scanout
> node in the system.
> 
This only holds true as long as you don't plug a UDL device into your
SoC board.

Regards,
Lucas


