[Mesa-dev] [PATCH RFC 0/2] GBM API extension to support fusing KMS and render devices

Lucas Stach l.stach at pengutronix.de
Mon Mar 7 09:46:52 UTC 2016


On Friday, 4 March 2016 at 18:34 +0000, Emil Velikov wrote:
> On 4 March 2016 at 17:38, Lucas Stach <l.stach at pengutronix.de> wrote:
> > On Friday, 4 March 2016 at 17:20 +0000, Daniel Stone wrote:
> >> Hi,
> >>
> >> On 4 March 2016 at 16:08, Lucas Stach <l.stach at pengutronix.de> wrote:
> >> > On Friday, 4 March 2016 at 15:09 +0000, Daniel Stone wrote:
> >> >> Thanks for taking this on, it looks really good! I just have the one
> >> >> question though - did you look at the EGLDevice extension? Using that
> >> >> to enumerate the GPUs, we could create the gbm_device using the KMS
> >> >> device and pass that in to the EGLDisplay, with an additional attrib
> >> >> to pass in an EGLDevice handle to eglGetPlatformDisplay. This could
> >> >> possibly be better since it is more independent of DRM as the API, and
> >> >> also allows people to share device enumeration/selection code with
> >> >> other platforms (e.g. choosing between multiple GPUs when using a
> >> >> winsys like Wayland or X11).
> >> >>
> >> > I have not looked at this in detail yet, but I think it's just an
> >> > extension to the interface outlined by this series.
> >> >
> >> > If we require the KMS device to have a DRI2/Gallium driver it should be
> >> > easy to hook up the EGLDevice discovery for them.
> >> > Passing in a second device handle for the KMS device is then just the
> >> > EGL implementation calling gbm_device_set_kms_provider() on the render
> >> > GBM device, instead of the application doing it manually.
> >>
> >> It turns the API backwards a bit though ...
> >>
> >> Right now, what we require is that the GBM device passed in is the KMS
> >> device, not the GPU device; what you're suggesting is that we discover
> >> the GPU device and then add the KMS device.
> >>
> >> So, with your proposal:
> >> gbm_gpu = gbm_device_create("/dev/dri/renderD128");
> >> egl_dpy = eglGetDisplay(gbm_gpu);
> >> gbm_kms = gbm_device_create("/dev/dri/card0");
> >> gbm_device_set_kms_provider(gbm_gpu, gbm_kms);
> >>
> >> i.e. the device the user creates first is the GPU device.
> >>
> >> With EGLDevice, we would have:
> >> gbm_kms = gbm_device_create("/dev/dri/card0");
> >> egl_gpus = eglGetDevicesEXT();
> >> egl_dpy = eglGetPlatformDisplay(gbm_kms, { EGL_TARGET_DEVICE, egl_gpus[0] });
> >>
> >> So, the first/main device the user deals with is the KMS device - same
> >> as today. This makes sense, since GBM is the allocation API for KMS,
> >> and EGL should be the one dealing with the GPU ...
> >>
> > Right, my API design came from my view of GBM as the API to bootstrap
> > EGL rendering, but defining it as the KMS allocation API makes a lot
> > more sense when you think about it.
> >
> >> Maybe it would make sense to reverse the API, so rather than creating
> >> a GBM device for the GPU and then linking that to the KMS device -
> >> requiring users to make different calls, e.g. gbm_bo_get_kms_bo(),
> >> which makes it harder to use and means we need to port current users -
> >> we create a GBM device for KMS and then link that to a GPU device.
> >> This would then mean that eglGetPlatformDisplay could do the linkage
> >> internally, and then existing users using gbm_bo_get_handle() etc
> >> would still work without needing any different codepaths.
> >
> > Yes, this will make the implementation inside GBM a bit more involved,
> > but it seems more natural this way around when thinking about hooking it
> > up to EGLDevice. I'll try it out and send an updated RFC after the
> > weekend.
> >
> While I'm more inclined towards Daniel's suggestion, I wonder why people
> moved away from Thierry's approach - creating a composite/wrapped dri
> module? Is there anything wrong with it, be that from a technical or
> conceptual POV?
> 
The wrapped driver takes away the ability of the application to decide
which GPUs to bind together - at least if you want to keep things
tightly coupled at that level.

The point of explicit application control is that we not only solve
the "SoCs have split render/scanout devices" issue, but also gain an API
for compositors to work properly on PRIME laptop configurations with
render/render/scanout. We don't want any autodetection to happen there;
a compositor may well decide to use the Intel GPU for scanout only and do
all composition on the discrete GPU. Having a tightly coupled wrapped
driver for every device combination is not really where we want to go,
right?
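To illustrate, that PRIME case under the explicit API could look roughly
like the sketch below. This is hypothetical pseudocode following the
proposal quoted earlier in the thread: gbm_device_set_kms_provider() is
the RFC's proposed (not shipping) call, and the device paths are just
examples.

/* Compositor policy, not autodetection: scan out on the Intel KMS
 * device, render on the discrete GPU's render node. */
gbm_gpu = gbm_device_create("/dev/dri/renderD129"); /* discrete GPU */
gbm_kms = gbm_device_create("/dev/dri/card0");      /* Intel, scanout */

/* Proposed RFC call: fuse the render device with the KMS provider
 * chosen by the application. */
gbm_device_set_kms_provider(gbm_gpu, gbm_kms);

egl_dpy = eglGetDisplay(gbm_gpu);

With the reversed API direction discussed above, the same pairing would
instead start from gbm_kms and let eglGetPlatformDisplay do the linkage
internally.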

Also, the wrapped approach obscures resource usage from the backing GPU
drivers. We get much better resource usage tracking in Etnaviv if we
get rid of the wrapping driver, as it allows us to skip some of the
resource flush requests from the state tracker when the resource has
not changed. Flushing a resource might mean copying a 1080p (or possibly
even bigger) frame around, so having better control over resource usage
is quite a win.

> I believe it has a few advantages over the above two proposals - it
> allows greater flexibility as both drivers will be tightly coupled and
> can communicate directly, does not expand the internal/hidden ABI that
> we currently have between GBM and EGL, could (in theory) work with
> GLX.

As said above: if you want to bind arbitrary combinations of drivers
together, you need to move away from tight coupling to a shared interface
anyway. I don't see how having this interface inside a wrapped driver
instead of GBM helps in any way; it's a Mesa-internal interface either way.

We don't need any of this for GLX. Etnaviv is working fine with GLX on
both imx-drm and armada-drm, as the DDX does all the work when binding
devices together in that case.

Regards,
Lucas



