Solo Xgl..
Adam Jackson
ajax at nwnk.net
Tue Feb 22 09:55:03 PST 2005
On Tuesday 22 February 2005 11:48, Brian Paul wrote:
> Adam Jackson wrote:
> > I pounded out most of the rest of the API compat today. This is good
> > enough to run eglinfo and return mostly correct answers (caveat is always
> > "slow" for some reason), and of the 25ish egl* entrypoints only around
> > three are still stubs.
> >
> > Apply patch to a newish Mesa checkout, add egl.c to sources in
> > src/glx/x11/Makefile, build libGL.
>
> While you were working on a translation layer I was working on a
> general-purpose implementation of the EGL API.
Excellent! I was hoping our work wouldn't overlap.
I should probably describe where I see this going. All the egl* entrypoints
would call through a dispatch table (think glapi.c) that determines whether
to use the GLX translation or the native engine. The native engine would
fill the role that miniglx currently holds.
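Something like the following is what I have in mind for the dispatch bit. To be clear, this is only a sketch: none of these names exist in Mesa today, and the real thing would be generated the same way glapi.c is. The getenv() check at the bottom is a placeholder for the real selection policy described further down.

/* hypothetical sketch only: routing an egl* entrypoint through a dispatch
 * table to either the GLX translation or the native engine */
#include <stdlib.h>

typedef void *EGLDisplay;              /* stand-ins for the real EGL types */
typedef int   NativeDisplayType;

struct egl_dispatch {
   EGLDisplay (*GetDisplay)(NativeDisplayType native);
   /* ... one slot per egl* entrypoint, like the glapi.c dispatch table ... */
};

/* backend 1: translate onto GLX (the patch earlier in this thread) */
static EGLDisplay glx_GetDisplay(NativeDisplayType native)
{
   static int tag;                     /* placeholder handle */
   (void) native;
   return &tag;
}

/* backend 2: the native engine that takes over miniglx's role */
static EGLDisplay native_GetDisplay(NativeDisplayType native)
{
   static int tag;
   (void) native;
   return &tag;
}

static const struct egl_dispatch glx_table    = { glx_GetDisplay };
static const struct egl_dispatch native_table = { native_GetDisplay };
static const struct egl_dispatch *dispatch;

/* the public entrypoint just jumps through whichever table got picked;
 * the real selection policy is the per-card state check described below */
EGLDisplay eglGetDisplay(NativeDisplayType native)
{
   if (!dispatch)
      dispatch = getenv("DISPLAY") ? &glx_table : &native_table;
   return dispatch->GetDisplay(native);
}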
In practical terms, what this means is:
$ Xegl -drm /dev/dri/card0 :0 & # starts a server on the first video card
$ DISPLAY=:0 Xegl :1 & # runs a nested Xgl server under :0
would work the way you expect. (Obviously I'm handwaving away the fact that
the Xgl server doesn't support the GLX extension yet, and that there's no EGL
backend for glitz yet. The latter was actually my motivation for doing the
GLX translation, so we could have glitz ported before attempting to bring it
up native.)
So. Naive EGL applications would Just Work, whether or not there's a display
server already running. The EGL dispatch layer would be responsible for
checking some magic bit of per-card state that says whether there's currently
a live display server on the device, and for routing the EGL API accordingly.
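In code the routing policy boils down to something like this. The DRM query below is entirely invented (no such interface exists yet); it only stands in for reading that per-card bit:

/* invented for illustration: the dispatch layer asks the kernel (or the
 * non-DRI equivalent) whether the device already has a live display
 * server, then picks the backend for the rest of the process */
#include <fcntl.h>
#include <unistd.h>

static int device_has_display_server(const char *dev)
{
   int fd = open(dev, O_RDWR);
   int owned = 0;

   if (fd < 0)
      return 0;
   /* hypothetical query, standing in for the real per-card state bit:
    * ioctl(fd, DRM_IOCTL_GET_DISPLAY_SERVER, &owned); */
   close(fd);
   return owned;
}

/* selection in the dispatch layer from the previous sketch:
 * dispatch = device_has_display_server("/dev/dri/card0")
 *          ? &glx_table : &native_table;
 */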
This magic bit of per-card state would be exposed by some new EGL extension,
call it EGL_display_server. Non-naive applications like Xegl, in the presence
of this extension, will register themselves as display servers for the given
device(s?) when they start up. This bit of state then gets handed down to
the DRM layer (or its moral equivalent for non-DRI drivers). (Plenty of
other magic can happen here, for example releasing this display server lock
on VT switch.) [1]
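To be explicit about what a consumer of EGL_display_server might look like, here is a sketch of the Xegl side. The entrypoint names are invented, and the stubs just mark where the real work would happen, so the sketch stands alone:

/* hypothetical EGL_display_server usage, e.g. from Xegl's startup path;
 * eglRegisterDisplayServerMESA / eglReleaseDisplayServerMESA are made-up
 * names, stubbed out so this compiles on its own */
typedef void *EGLDisplay;
typedef int   EGLBoolean;

static EGLBoolean eglRegisterDisplayServerMESA(EGLDisplay dpy)
{ (void) dpy; return 1; }   /* would hand the lock down to the DRM layer */

static EGLBoolean eglReleaseDisplayServerMESA(EGLDisplay dpy)
{ (void) dpy; return 1; }   /* would drop it again */

static void xegl_claim_device(EGLDisplay dpy)
{
   /* register as the display server for this card; later EGL clients on
    * the card then get routed through us instead of at the hardware */
   if (!eglRegisterDisplayServerMESA(dpy)) {
      /* somebody else already owns the card; bail or run nested */
   }
}

static void xegl_vt_leave(EGLDisplay dpy)
{
   /* the "other magic": release the display server lock on VT switch */
   eglReleaseDisplayServerMESA(dpy);
}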
After which, the only hard part (sigh) is setting video modes. This may want
to be an EGL extension as well, and would have some precedent (e.g.
GLX_MESA_set_3dfx_mode). Of course we can implement this any which way we
like; it's just that exposing the API through EGL makes it easier for apps to
do this both across vendors and across platforms.
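For flavor, the mode-setting entrypoint could be as dumb as the sketch below. The name and signature are invented; the point is only that an app would make the same call regardless of vendor or platform:

/* invented mode-setting sketch in the spirit of GLX_MESA_set_3dfx_mode;
 * nothing here is a real EGL entrypoint */
typedef void *EGLDisplay;
typedef int   EGLBoolean;
typedef int   EGLint;

static EGLBoolean
eglSetVideoModeMESA(EGLDisplay dpy, EGLint screen,
                    EGLint width, EGLint height, EGLint refresh)
{
   (void) dpy; (void) screen; (void) width; (void) height; (void) refresh;
   return 1;   /* a real driver would program the mode here */
}

static void go_fullscreen(EGLDisplay dpy)
{
   /* an app asks for 1024x768 at 85Hz the same way on any vendor */
   eglSetVideoModeMESA(dpy, 0, 1024, 768, 85);
}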
Hopefully this doesn't sound completely insane. Comments?
- ajax
1 - One question at this point would be: why not make the first EGL app to
start on a device always take the lock? I could envision (probably embedded)
environments that want, essentially, cooperative windowing, where (for
example) each "window" maps to a hardware quad, textured through a pbuffer or
fbo, and the Z buffer is used to implement stacking order, with some message
passing between display apps so they don't fight. This is certainly not a
use case I care about, but other people might...
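(If anyone wants to picture it, each cooperating app in that scheme would do roughly the following per window each frame. Plain fixed-function GL, assuming the pbuffer/FBO contents are already bound as the current texture and the stacking depth has been agreed on over whatever message passing exists:)

#include <GL/gl.h>

/* draw one "window" as a textured hardware quad; stack_depth encodes the
 * agreed stacking order and the Z buffer resolves the overlap */
static void draw_window_quad(float x, float y, float w, float h,
                             float stack_depth)
{
   glBegin(GL_QUADS);
   glTexCoord2f(0, 0); glVertex3f(x,     y,     stack_depth);
   glTexCoord2f(1, 0); glVertex3f(x + w, y,     stack_depth);
   glTexCoord2f(1, 1); glVertex3f(x + w, y + h, stack_depth);
   glTexCoord2f(0, 1); glVertex3f(x,     y + h, stack_depth);
   glEnd();
}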