Solo Xgl..
Brian Paul
brian.paul at tungstengraphics.com
Tue Feb 22 10:16:47 PST 2005
Adam Jackson wrote:
> On Tuesday 22 February 2005 11:48, Brian Paul wrote:
>
>>Adam Jackson wrote:
>>
>>>I pounded out most of the rest of the API compat today. This is good
>>>enough to run eglinfo and return mostly correct answers (caveat is always
>>>"slow" for some reason), and of the 25ish egl* entrypoints only around
>>>three are still stubs.
>>>
>>>Apply patch to a newish Mesa checkout, add egl.c to sources in
>>>src/glx/x11/Makefile, build libGL.
>>
>>While you were working on a translation layer I was working on a
>>general-purpose implementation of the EGL API.
>
>
> Excellent! I was hoping our work wouldn't overlap.
>
> I should probably describe where I see this going. All the egl* entrypoints
> would call through a dispatch table (think glapi.c) that determines whether
> to use the GLX translation or the native engine. The native engine would
> fill the role that miniglx currently holds.
My code already does that. The EGL->miniglx translation would just be
another "driver". I always thought it would be nice if the indirect
rendering code for GLX were just another loadable driver. The EGL
code would support that idea.
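To give a feel for it, the dispatch looks roughly like this (a
simplified sketch; the struct and helper names are illustrative, not
the actual symbols in my tree):

/* Illustrative sketch of the driver dispatch, not the real code. */
#include <EGL/egl.h>

/* Each backend (GLX translation, miniglx/native, indirect GLX, ...)
 * fills in one of these tables of function pointers. */
typedef struct _egl_driver {
   const char *Name;
   EGLBoolean (*Initialize)(struct _egl_driver *drv, EGLDisplay dpy,
                            EGLint *major, EGLint *minor);
   EGLBoolean (*Terminate)(struct _egl_driver *drv, EGLDisplay dpy);
   /* ...one hook per egl* entrypoint... */
} _EGLDriver;

/* Made-up helper: find the driver instance bound to a display. */
extern _EGLDriver *_eglLookupDriver(EGLDisplay dpy);

/* The public entrypoints just jump through the table. */
EGLBoolean eglInitialize(EGLDisplay dpy, EGLint *major, EGLint *minor)
{
   _EGLDriver *drv = _eglLookupDriver(dpy);
   return drv ? drv->Initialize(drv, dpy, major, minor) : EGL_FALSE;
}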
> In practical terms, what this means is:
>
> $ Xegl -drm /dev/dri/card0 :0 & # starts a server on the first video card
> $ DISPLAY=:0 Xegl :1 & # runs a nested Xgl server under :0
>
> would work the way you expect. (Obviously I'm handwaving away the fact that
> the Xgl server doesn't support the GLX extension yet, and that there's no EGL
> backend for glitz yet. The latter was actually my motivation for doing the
> GLX translation, so we could have glitz ported before attempting to bring it
> up native.)
>
> So. Naive EGL applications would Just Work, whether or not there's a display
> server already running. The EGL dispatch layer would be responsible for
> checking some magic bit of per-card state that says whether there's currently
> a live display server on the device, and route the EGL API accordingly.
Right.
My code right now does something clunky: the parameter passed to
eglGetDisplay() is interpreted as a string rather than a Display *.
The value of the string determines which driver to load, either by
name or by a screen number like ":0". If the code determines that
the value isn't a string, it treats it as a real X Display *.
Thereafter, each EGLDisplay handle is associated with a particular
driver instance. This is experimental.
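In rough pseudo-C, the hack amounts to this (the _egl* helpers are
made up for illustration):

/* Sketch of the clunky eglGetDisplay() behavior described above. */
#include <X11/Xlib.h>
#include <EGL/egl.h>

extern EGLBoolean  _eglLooksLikeString(const char *p);
extern EGLDisplay  _eglLoadDriverForScreen(const char *spec);
extern EGLDisplay  _eglLoadDriverByName(const char *name);
extern EGLDisplay  _eglLoadDriverForXDisplay(Display *xdpy);

EGLDisplay eglGetDisplay(NativeDisplayType nativeDisplay)
{
   const char *s = (const char *) nativeDisplay;

   if (_eglLooksLikeString(s)) {
      if (s[0] == ':')
         return _eglLoadDriverForScreen(s);   /* e.g. ":0" */
      else
         return _eglLoadDriverByName(s);      /* e.g. "radeon" */
   }

   /* Not a string: assume a real X Display * and use the GLX path. */
   return _eglLoadDriverForXDisplay((Display *) nativeDisplay);
}

The same spot is where the routing you describe would live: query the
per-card state and load either the GLX translation or the native
driver.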
> This magic bit of per-card state would be exposed by some new EGL extension,
> call it EGL_display_server. Non-naive applications like Xegl, in the presence
> of this extension, will register themselves as display servers for the given
> device(s?) when they start up. This bit of state then gets handed down to
> the DRM layer (or its moral equivalent for non-DRI drivers). (Plenty of
> other magic can happen here, for example releasing this display server lock
> on VT switch.) [1]
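Right -- I could imagine exposing that as a pair of entrypoints,
something like this (purely hypothetical, nothing like it exists yet):

/* Hypothetical sketch of EGL_display_server -- not a real extension. */
#include <EGL/egl.h>

/* Claim the display-server role for the device behind dpy.  Returns
 * EGL_FALSE if another process already holds it.  The lock itself
 * would live in the DRM layer (or its non-DRI equivalent). */
EGLBoolean eglBecomeDisplayServerMESA(EGLDisplay dpy);

/* Drop the role again, e.g. on VT switch or server exit. */
EGLBoolean eglReleaseDisplayServerMESA(EGLDisplay dpy);

Naive clients would never call these; the dispatch layer just reads
the same per-card state to decide how to route.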
>
> After which, the only hard part (sigh) is setting video modes. This may want
> to be an EGL extension as well, and would have some precedent (eg
> GLX_MESA_set_3dfx_mode). Of course we can implement this any which way we
> like, it's just that exposing the API through EGL makes it easier for apps to
> do this both across vendors and across platforms.
The eglscreen.c file has some ideas for a few functions for setting
screen size/refresh/etc. This is totally experimental too.
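To give the flavor, the prototypes are along these lines (illustrative
signatures only; none of this is settled):

/* Experimental mode-setting sketch in the spirit of eglscreen.c. */
#include <EGL/egl.h>

typedef EGLint EGLScreenMESA;   /* handle for a physical screen */
typedef EGLint EGLModeMESA;     /* handle for a video mode */

/* Enumerate the modes a screen supports. */
EGLBoolean eglGetModesMESA(EGLDisplay dpy, EGLScreenMESA screen,
                           EGLModeMESA *modes, EGLint modes_size,
                           EGLint *num_modes);

/* Pick modes by attributes (width, height, refresh rate, ...). */
EGLBoolean eglChooseModeMESA(EGLDisplay dpy, EGLScreenMESA screen,
                             const EGLint *attrib_list,
                             EGLModeMESA *modes, EGLint modes_size,
                             EGLint *num_modes);

/* Display a surface on a screen in the given mode. */
EGLBoolean eglShowSurfaceMESA(EGLDisplay dpy, EGLScreenMESA screen,
                              EGLSurface surface, EGLModeMESA mode);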
> Hopefully this doesn't sound completely insane. Comments?
>
> - ajax
>
> 1 - One question at this point would be why not make the first EGL app to
> start on a device always take the lock? I could envision (probably embedded)
> environments that want, essentially, cooperative windowing, where (for
> example) each "window" maps to a hardware quad, textured through a pbuffer or
> fbo, and the Z buffer is used to implement stacking order, with some message
> passing between display apps so they don't fight. This is certainly not a
> use case I care about, but other people might...
Yeah, if you think about things for a while you eventually find that
the EGL API/interface might be used at two different levels: below the
X server and as a user-accessible API.
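As for the cooperative-windowing case in your footnote, each client
would essentially be doing something like this with its pbuffer/fbo
contents (plain immediate-mode GL, just to show the idea):

/* One "window": a textured quad whose Z value encodes stacking
 * order.  'tex' is assumed to hold the window contents. */
#include <GL/gl.h>

static void draw_window(GLuint tex, float x, float y,
                        float w, float h, float depth)
{
   glBindTexture(GL_TEXTURE_2D, tex);
   glEnable(GL_TEXTURE_2D);
   glBegin(GL_QUADS);
   glTexCoord2f(0, 0);  glVertex3f(x,     y,     depth);
   glTexCoord2f(1, 0);  glVertex3f(x + w, y,     depth);
   glTexCoord2f(1, 1);  glVertex3f(x + w, y + h, depth);
   glTexCoord2f(0, 1);  glVertex3f(x,     y + h, depth);
   glEnd();
}

With depth testing enabled, the Z buffer handles the stacking for
free.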
-Brian