libdrm issues blocking accelerated indirect GL

Roland Mainz roland.mainz at nrubsig.org
Fri Dec 31 10:54:30 PST 2004


Adam Jackson wrote:
> > I have a seventh option for you. Feel free to flame me if it sounds
> > stupid ...
> >
> > - Option 7: Run the GLX server as a separate process forked by the
> > Xserver. This way you get rid of the problem with the same library
> > linked into the same process multiple times.
> >
> > Pros: No existing ABIs need to be changed. It would also improve the
> > responsiveness of the Xserver when expensive indirect rendering
> > operations are performed (for instance software fallbacks).
> 
> This is indeed a major problem.  Indirect glxgears is extremely laggy at
> processing user input (and worse in 6.8 than it used to be...)

The current "glxgears" implementation is braindead as it spamms the
Xserver with tons of rendering requests to rotate the gear teeth without
waiting for any reponses. Somewhere in my queue is a patch which solves
that using the original XSync() way (but offers a switch to turn that
on/off on demand).
Unfortunately it's not the whole story as the GL implementation in the
Xserver could prevent the problem via putting itself at the end of the
schduler queue after each frame swap. Some GL server implementations do
that and get a _far_ better responsiveness with the current 6.8.x
glxgears implementation than what the Xorg server currently allows.
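For illustration, the client-side throttle would look roughly like
this (a sketch only - the switch name and the surrounding variables
are made up here, not taken from the actual patch):

/* Hypothetical sketch of the XSync() throttle for glxgears.
 * "do_sync" would be set from a command-line switch (the switch
 * parsing itself is not shown and its name is made up). */
#include <GL/glx.h>
#include <X11/Xlib.h>

static Bool do_sync = True;   /* throttle enabled by default */
extern Display *dpy;          /* connection opened elsewhere */
extern Window win;            /* the glxgears window */

static void
draw_frame(void)
{
    /* ... GL calls that rotate and draw the gear teeth ... */
    glXSwapBuffers(dpy, win);
    if (do_sync)
        XSync(dpy, False);    /* wait until the Xserver has processed
                                 everything sent so far instead of
                                 flooding it with further requests */
}

With the switch turned off you get the old flooding behaviour back,
e.g. for benchmarking.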

> > Cons: GLX protocol goes through the same channel as X protocol. So doing
> > GLX in a separate process would involve forwarding GLX protocol from the
> > Xserver to the real GLX server process in some way. Not sure how much
> > overhead in time and code complexity that would introduce.
> 
> I'm not sure either.  I'd take it as a given that people using indirect
> rendering are willing to sacrifice some performance, but they shouldn't be
> made to suffer more than necessary.  We do have something resembling this in
> the form of DMX's glxProxy, but I don't know how much work would be required
> to split that out into a helper process.  I assume it's doable though.

Take a look at Solaris - their GLX implementation uses a separate
per-head thread model to avoid the problem. It also allows _MUCH_
better performance on multi-CPU systems (which will become more and
more common in the future, like the upcoming dual-core x86 machines or
Sun's octa-core Niagara/Rock machines) and on multithreaded/
hyperthreaded CPUs (like Intel's Hyperthreading in the P4, the coming
Itanium systems and Sun's Niagara, which puts 8 CPUs with four threads
per CPU on one die).
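For illustration, such a per-head thread model looks roughly like this
(a sketch only, not the actual Solaris code; the request type and the
queue helpers are assumed):

/* Illustrative sketch only - NOT the Solaris implementation.
 * One worker thread per head drains a queue of decoded GLX requests,
 * so an expensive operation on one head (e.g. a software fallback)
 * cannot stall the main X request dispatch or the other heads. */
#include <pthread.h>

typedef struct glx_request glx_request;   /* opaque here */

typedef struct {
    pthread_t       thread;
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    glx_request    *queue_head;           /* FIFO of pending requests */
} glx_head;

extern glx_request *dequeue(glx_head *h);     /* assumed helpers */
extern void         execute_glx(glx_request *r);

static void *
glx_worker(void *arg)
{
    glx_head    *h = arg;
    glx_request *req;

    for (;;) {
        pthread_mutex_lock(&h->lock);
        while (h->queue_head == NULL)
            pthread_cond_wait(&h->nonempty, &h->lock);
        req = dequeue(h);
        pthread_mutex_unlock(&h->lock);
        execute_glx(req);   /* runs in parallel with the main
                               dispatch loop and the other heads */
    }
    return NULL;            /* not reached */
}

The main dispatch loop then only has to enqueue incoming GLX protocol
on the right head's queue and can immediately go back to serving
ordinary X clients.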

----

Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.mainz at nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)


