[Mesa-dev] RFC - libglvnd and GLXVND vendor enumeration to facilitate GLX multi-vendor PRIME GPU offload

Andy Ritger aritger at nvidia.com
Fri Feb 8 21:33:45 UTC 2019

On Fri, Feb 08, 2019 at 03:01:33PM -0500, Adam Jackson wrote:
> On Fri, 2019-02-08 at 10:19 -0800, Andy Ritger wrote:
> > (1) If configured for PRIME GPU offloading (environment variable or
> >     application profile), client-side libglvnd could load the possible
> >     libGLX_${vendor}.so libraries it finds, and call into each to
> >     find which vendor (and possibly which GPU) matches the specified
> >     string. Once a vendor is selected, the vendor library could optionally
> >     tell the X server which GLX vendor to use server-side for this
> >     client connection.
> I'm not a huge fan of the "dlopen everything" approach, if it can be
> avoided.

Yes, I agree.

> I think I'd rather have a new enum for GLXQueryServerString
> that elaborates on GLX_VENDOR_NAMES_EXT (perhaps GLX_VENDOR_MAP_EXT),
> with the returned string a space-delimited list of <profile>:<vendor>.
> libGL could accept either a profile or a vendor name in the environment
> variable, and the profile can be either semantic like
> performance/battery, or a hardware selector, or whatever else.
> This would probably be a layered extension, call it GLX_EXT_libglvnd2,
> which you'd check for in the (already per-screen) server extension
> string before trying to actually use.

That all sounds reasonable to me.

> > At the other extreme, the server could do nearly all the work of
> > generating the possible __GLX_VENDOR_LIBRARY_NAME strings (with the
> > practical downside of each server-side GLX vendor needing to enumerate
> > the GPUs it can drive, in order to generate the hardware-specific
> > identifiers).
> I don't think this downside is much of a burden? If you're registering
> a provider other than Xorg's you're already doing it from the DDX
> driver (I think? Are y'all doing that from your libglx instead?), and
> when that initializes it already knows which device it's driving.

Right.  It will be easy enough for the NVIDIA X driver + NVIDIA server-side GLX.

Kyle and I were chatting about this, and we weren't sure whether people
would object to doing that for the Xorg GLX provider: to create the
hardware names, Xorg's GLX would need to enumerate all the DRM devices
and list them all as possible <profile>:<vendor> pairs for the Xorg
GLX-driven screens.  But, now that I look at it more closely, it looks
like drmGetDevices2() would work well for that.
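For reference, a sketch of the enumeration I have in mind, using libdrm's drmGetDevices2().  The "pci-%04x_%04x" selector format is only an illustration of what a hardware-specific identifier might look like; nothing has settled on that spelling.  Builds with -ldrm, and obviously only produces output on a machine with DRM devices.

```c
/* Hypothetical sketch: how Xorg's GLX provider might enumerate DRM
 * devices to build hardware-specific selectors.  drmGetDevices2() is
 * real libdrm API; the selector format printed below is made up. */
#include <stdio.h>
#include <stdlib.h>
#include <xf86drm.h>

int main(void)
{
    /* First call with NULL just returns the device count. */
    int count = drmGetDevices2(0, NULL, 0);
    if (count <= 0)
        return 1;

    drmDevicePtr *devices = calloc(count, sizeof(*devices));
    if (!devices)
        return 1;
    count = drmGetDevices2(0, devices, count);

    for (int i = 0; i < count; i++) {
        drmDevicePtr dev = devices[i];

        /* Only consider PCI devices that expose a primary node. */
        if (dev->bustype != DRM_BUS_PCI ||
            !(dev->available_nodes & (1 << DRM_NODE_PRIMARY)))
            continue;

        /* One plausible hardware selector: PCI vendor/device IDs. */
        printf("pci-%04x_%04x\n",
               dev->deviceinfo.pci->vendor_id,
               dev->deviceinfo.pci->device_id);
    }

    drmFreeDevices(devices, count);
    free(devices);
    return 0;
}
```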

So, if you're not concerned with that burden, I'm not.  I'll try coding
up the Xorg GLX part of things and see how it falls into place.

Two follow-up questions:

(1) Even when direct rendering, NVIDIA's OpenGL/GLX implementation sends
    GLX protocol (MakeCurrent, etc.).  So, we'd like something client-side
    to be able to request that server-side GLXVND route GLX protocol for the
    calling client connection to a specific vendor (on a per-screen basis).
    Do you think it would be reasonable for GLX_EXT_libglvnd2 to define a
    new protocol request, which client-side libglvnd sends with either the
    profile or vendor name from the selected '<profile>:<vendor>' pair?

(2) Who should decide which vendor/GPU gets the semantic name
    "performance" or "battery"?  Those names are relative, so I don't
    know that vendors can decide for themselves in isolation.  It feels
    like it should be GLXVND's job, but I don't know that GLXVND has
    enough context to infer the right assignment.  I'm curious whether
    anyone else has ideas.

- Andy

> - ajax
