[Fwd: Re: CVS Update: xc (branch: trunk)]

Roland Mainz roland.mainz at nrubsig.org
Tue Jan 4 15:01:28 PST 2005

Keith Whitwell wrote:
> >>>Log message:
> >>>  2005-01-04 Roland Mainz <roland.mainz at nrubsig.org>
> >>>    * xc/programs/glxgears/glxgears.c
> >>>    Bugzilla #2220 (https://bugs.freedesktop.org/show_bug.cgi?id=2220)
> >>>    attachment #1630 (https://bugs.freedesktop.org/attachment.cgi?id=1630):
> >>>    Make glxgears a better GL client by calling |glFinish()| between frame
> >>>    swaps so that the GL instruction queue does not get flooded, which
> >>>    sometimes even kills all interactive use of the X server.
> >>
> >>Please don't do this - this is not "better" or recommended GL usage.
> >
> > Uhm... why? Multiple GL experts claimed that X11R6.8.0 glxgears is
> > "broken" (based on the internal feedback from
> > https://bugs.freedesktop.org/show_bug.cgi?id=1793) and suggested either
> > an event-driven application model or at least calling |glFinish()|. The
> > first option wasn't possible (it would be preferable here, as both
> > client and Xserver can run "decoupled" while still preventing the
> > client from sending rendering instructions faster than the server can
> > handle them) as it seems to require GLX 1.3, so I used |glFinish()|.
> Every GL driver can potentially exhibit this behaviour; the fact that
> none do is because it is such an easy condition to trigger that even
> basic usage of a driver brings it to light.  If glxgears causes your
> driver to become unresponsive, think what Quake will do to it.

Well, at least QuakeII doesn't cause this problem (I haven't tested
GLQuake yet) ...

> The trouble with your fix is that it covers up a driver bug in one
> application only, namely glxgears.  It does so by doing something that
> is quite unusual for GL applications and isn't recommended or normal
> coding practice.
> The real problem is that the driver does nothing to throttle the rate it
> accepts GL commands in relation to the speed of the hardware.
> Presumably there is a very large buffer somewhere which is being filled
> up with rendering commands - the simplest way to reduce the problem
> would be to find and reduce the size of that buffer.  It may be that the
> items being buffered are GLX protocol requests, or drawing requests
> internal to the X server, or both, or something else entirely.
> The approach taken in the accelerated drivers is to count the number of
> swapbuffers commands which have been issued vs. the number which have
> been processed and ensure that number remains small.

How do you deal with the problem that even the libX11 buffer may be "too
large" in this case? You can't really reduce that buffer size, so a
different approach is needed here...

> >>The problem as such is with the driver not the application, and GL
> >>applications in general do not do this.   By hiding the behaviour in
> >>glxgears you are removing the incentive to fix the real problem.
> >
> > The original behaviour (in Xorg X11R6.7.x) was to call |XSync()| which
> > was considered an ugly hack.
> I agree with that assessment...
>  > Then I removed that for X11R6.8.x, resulting
> > in the current problem. But if you look at my patch you'll see that you
> > can get all the old behaviour:
> > % glxgears -no_glfinish_after_frame # will restore the original
> > X11R6.8.0 spam-to-death behaviour,
> > % glxgears -no_glfinish_after_frame -xsync_after_frame # will restore
> > the original X11R6.7.0 behaviour. Additionally you can ask for two other
> > methods of sync'ing (see -sched_yield_after_frame and
> > -xflush_after_frame) so everyone should be happy (well, I would prefer
> > the event-driven model but it seems someone will have to update the GLX
> > wire protocol level first... ;-().
> Unfortunately the problem remains for all the 100% minus one GLX
> applications out there in the world - by modifying glxgears you have 1)
> altered the behaviour of an application people use as a known quantity
> when debugging GL installations and 2) only trivially hidden a real
> problem with indirect rendering in Xorg.
> Adding a glFinish() after glXSwapBuffers() is as bad a hack as an
> XSync() in the same spot, and for much the same reasons.

OK, what else should I do here? There have been several complaints that
glxgears' current (X11R6.8.x) behaviour is broken (to the point where its
inclusion in a distribution has been rejected), so a fix is needed. As
the problem affects Solaris, AIX and Xorg/Mesa, a fix within glxgears
seemed the best approach to me (since almost every GL implementation I
currently have access to is then "broken" by ajax's definition (on IRC)).

> This is like saying "oh, there's a bug in Xorg patterned fills, we'll
> just change xtest so that it doesn't exercise that path".  It doesn't
> work because real applications that people use will also trigger the
> behaviour.

Well, the problem didn't exist originally; it appeared when I removed
|XSync()| after X11R6.7.0. So the best solution may be to restore the
original XFree86 and Xorg X11R6.7.0 behaviour by putting XSync() back -
what do you think?



  __ .  . __
 (o.\ \/ /.o) roland.mainz at nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 7950090
 (;O/ \/ \O;)
