[Fwd: Re: CVS Update: xc (branch: trunk)]

Brian Paul brian.paul at tungstengraphics.com
Tue Jan 4 15:27:13 PST 2005


Roland Mainz wrote:
> Brian Paul wrote:
> [snip]
> 
>>>>The problem as such is with the driver not the application, and GL
>>>>applications in general do not do this.   By hiding the behaviour in
>>>>glxgears you are removing the incentive to fix the real problem.
>>>
>>>
>>>The original behaviour (in Xorg X11R6.7.x) was to call |XSync()| which
>>>was considered an ugly hack. Then I removed that for X11R6.8.x, resulting
>>>in the current problem. But if you look at my patch you'll see that you
>>>can get all the old behaviour:
>>>% glxgears -no_glfinish_after_frame # will restore the original
>>>X11R6.8.0 spam-to-death behaviour,
>>>% glxgears -no_glfinish_after_frame -xsync_after_frame # will restore
>>>the original X11R6.7.0 behaviour. Additionally you can ask for two other
>>>methods of sync'ing (see -sched_yield_after_frame and
>>>-xflush_after_frame) so everyone should be happy (well, I would prefer
>>>the event-driven model but it seems someone will have to update the GLX
>>>wire protocol level first... ;-().
>>>
>>
>>Keith is correct, glxgears is not broken.
> 
> 
> Umpf... the problem is that not everyone agrees with that
> (a sort-of deadlock where both the client and the server side claim
> the bug lies with the other party... ;-().
> 
> 
>>There are probably lots of
>>OpenGL apps out there that are written in the same manner.
> 
> 
> Which ones? AFAIK most real-world applications don't exhibit this
> DoS-like behaviour, as they all wait for X events at some point.

glxgears _does_ look for input events.  Note that the program can't 
block in XNextEvent(), because then the animation would stall whenever 
no input is pending.
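
For illustration, the loop is structured roughly like this (just a 
sketch, not the actual glxgears source; handle_event() and draw_frame() 
are placeholders for its internals):

   #include <X11/Xlib.h>
   #include <GL/glx.h>

   /* glxgears-style main loop: drain pending X events without blocking,
    * then render the next frame of the animation. */
   static void
   event_loop(Display *dpy, Window win)
   {
      for (;;) {
         /* Only call XNextEvent() while events are actually queued;
          * calling it unconditionally would block and stall the animation. */
         while (XPending(dpy) > 0) {
            XEvent event;
            XNextEvent(dpy, &event);
            handle_event(&event);     /* resize, keypress, expose, ... */
         }
         draw_frame();                /* render into the back buffer */
         glXSwapBuffers(dpy, win);    /* request the swap; does not wait */
      }
   }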


>>Here's another scenario.  Suppose a GL application renders a few
>>thousand large triangles that happen to require a slow software
>>rendering path in the server.  The client can probably issue the GL
>>commands a lot faster than the server can render them.  How would you
>>propose handling that?
> 
> 
> Well, in theory glxgears would be event-driven, with the GL engine
> sending an event per buffer swap.

Sure, in theory, but we don't have any facility like that available to us.


> In that case glxgears would then wait
> for the event after having prepared the next frame (e.g. for (;;) {
> swap_buffers(); render_background_buffer_content();
> wait_for_swap_event(); swap_buffers(); }) - glxgears would then render
> with maximum speed but wouldn't spam the server to death.

But there is no such thing as 'wait_for_swap_event()'.
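
Just to spell out what that hypothetical loop would look like (same 
headers as the sketch above; wait_for_swap_event() is imaginary, since 
GLX defines no swap-completion event, and render_next_frame() is a 
placeholder):

   /* Hypothetical only: GLX has no swap-completion event, so
    * wait_for_swap_event() cannot be implemented today. */
   static void
   event_driven_loop(Display *dpy, GLXDrawable win)
   {
      for (;;) {
         glXSwapBuffers(dpy, win);   /* request the buffer swap */
         render_next_frame();        /* prepare the next frame in the back buffer */
         wait_for_swap_event(dpy);   /* imaginary: block until the server
                                        reports the swap has completed */
      }
   }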

Again, I can come up with any number of ways an OpenGL application 
might send a stream of rendering commands which will overwhelm a slow 
server and cause a command back-log.
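
For comparison, the throttling discussed earlier in the thread amounts 
to forcing a round trip after each frame, along these lines (a sketch; 
draw_frame() is a placeholder, and the option names refer to Roland's 
patch):

   /* Per-frame throttling: force a round trip so a fast client cannot
    * build up an unbounded command back-log on a slow server. */
   static void
   throttled_loop(Display *dpy, GLXDrawable win)
   {
      for (;;) {
         draw_frame();
         glXSwapBuffers(dpy, win);
         glFinish();    /* wait until the server has executed the frame
                           (the patch's default glfinish_after_frame) */
         /* or the old X11R6.7.x behaviour: XSync(dpy, False); */
      }
   }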

-Brian


