Multibuffer Extension and Stereoscopic Rendering / Doublebuffering and Composite
Wolfgang Draxinger
wdraxinger.maillist at draxit.de
Tue Apr 12 02:52:22 PDT 2011
Hi,
As most of you are aware (or should be), the Multibuffer extension has
been dropped from the latest release of X.Org. However, Multibuffer had
been the way to create stereoscopic drawables -- if one doesn't count
OpenGL quad buffering, which works independently of X Multibuffer.
(Personally I consider this a bad design choice in GLX; a clear
connection between X (multi)buffers and the OpenGL front/back/auxiliary
buffers would have been the better design, IMHO. Also, at least on my
system, some visuals reported as not double buffered by GLX are
reported as double buffered by DBE, so there's some discrepancy there,
too.)
I consider myself a high-profile OpenGL programmer and because of
that, knowing the properties of this "tool", I think the current
trend to do everything graphics-related with OpenGL is, sorry to say
it that bluntly, *stupid*. OpenGL is an excellent API, but to some
people it seems to be their hammer for unfit nails. For example, it is
extremely tedious to implement hardware-accelerated crisp vector
graphics with OpenGL, as you need for font rendering; there are some
techniques like "vector textures", but they are quite heavy on feature
demands, essentially implementing a scanline curve rasterizer in the
fragment shader, with some preprocessing in a geometry shader.
There are serious applications that demand stereoscopic view modes
but don't require 3D rasterizing capabilities at all (stereoscopic
video players, for example). As far as I understand, (some of) the
capabilities of Multibuffer were considered for inclusion in the
Double Buffer Extension. What is the current state of this, especially
on the topic of stereoscopic rendering -- or, even more advanced,
multi-viewpoint rendering? For example, the deprecation of Multibuffer
means that a multi-viewpoint[1] display tool I implemented some time
ago stopped working.
With the popularity of stereoscopic 3D, and some of the advanced things
you can do with it, I see the need for:
- the ability to create or associate at least two, but preferably
multiple, drawables with a window;
- the ability to tether/bind X drawables to corresponding objects in
auxiliary rendering APIs[2];
- extensions for accelerated video display that allow binding to
multi-drawables, either by individual layers/components or as a whole.
Last but not least, DBE still lacks some important functionality
(which should be trivial to implement): defining buffer swap
behaviour, i.e. the time at which to swap the contents of a double
buffer (sync it to V-Sync, swap immediately, or sync to a SYNC
extension counter). There are some GLX extensions for this, but core X
still doesn't provide it. This kind of swap control should interface
with the SYNC and COMPOSITE extensions: compositing is, in a way, a
kind of super double buffer; however, since clients usually don't
signal "rendering done", one can see in-client tearing. Using a
double-buffered visual imposes V-Sync granularity on compositor
updates, perceived as lag and stuttering by the user (unless the
OpenGL driver has been configured accordingly: don't V-Sync by
default, but respect the application's choice to do V-Sync, which the
compositor may use). This could be resolved by conventional use of
SYNC counters for this purpose.
I already wrote a mail to this list about stereoscopy some time ago,
where I tried to start a discussion on this topic. So here it goes
again; hopefully this time more people will answer and share their
thoughts.
Greetings,
Wolfgang
[1]: Instead of just two views, the scene is photographed/rendered
from a large number of horizontally shifted eye positions; a tracker
measures the position of the viewer (technically it's a Wii Remote and
a set of IR LEDs on the 3D glasses) and the appropriate pictures are
shown, making it possible to look "behind" objects up front.
[2]: IMHO GLX needs a major overhaul; there are a lot of
misconceptions out there, even among people who should know better,
especially about the caveats of direct/indirect rendering. Recently I
read an article claiming that the performance improvements of Vertex
Buffer Objects are only accessible through direct rendering and that
no gain is possible in indirect mode. Actually, it is especially when
direct rendering is *not* available that VBOs, placing all data on the
server side, give a HUGE performance boost, as the whole idea of VBOs
is to eliminate any kind of bottleneck between the API consumer and
the renderer.