[Xorg] X on OpenGL
David Reveman
c99drn at cs.umu.se
Sat Jul 10 00:16:32 PDT 2004
On Fri, 2004-07-09 at 23:20 -0400, Adam Jackson wrote:
> On Friday 09 July 2004 20:49, Andy Sy wrote:
> > Adam Jackson wrote:
> > >>The one big problem I see with the OpenGL API is that it does not give
> > >>you any direct pixel-level access to the frame buffer and wouldn't it
> > >>be extremely kludgy to build a windowing system without such?
> > >
> > > man glReadPixels.
> >
> > Right... glReadPixels, glCopyPixels and glDrawPixels... however
> > everyone says that implementations of these are dog-slow (abuse
> > of the hardware) and you're better off writing to a texture (which
> > is kludgy in many contexts)...
> >
> > <snip>
> >
> > Why is it that OpenGL drivers seem to universally have this behaviour?
>
> I suspect that, in order to get consistent results, the gl*Pixels calls are
> implicitly preceded by a glFinish call, which would impose a synchronization
> penalty. So the Pixel function itself could be fast in terms of bandwidth
> but not in terms of latency, and calling it in a loop from 1 to 2000 would
> hurt. This might not be true for glDrawPixels since the results would be
> drawn in sequence with other GL commands, but Read and Copy might need to
> wait for drawing to finish before reading. (If this is a real problem it
> might be possible to design an async Read API that allows the app to request
> the pixels early and only block on the results when it really needs them.)
>
> Several of the DRI drivers have DMA-accelerated gl*Pixels functions, with
> something on the order of 1GB/sec bandwidth not being uncommon. Perhaps
> that's not fast enough. At any rate the DRI drivers know where the
> framebuffer is, and could be extended to give the application direct
> access.
>
> DRI is quite flexible; things like DGA and XV could be implemented in terms of
> the DRI framework. DRI just happens to get used to implement fast GL
> drivers. It might be more accurate to say that the goal is to make the X
> server a DRI client rather than an OpenGL app.
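The async Read API you describe is essentially what the
GL_EXT_pixel_buffer_object extension provides: with a pixel-pack buffer
bound, glReadPixels takes a buffer offset instead of a client pointer
and may return before the transfer has completed; mapping the buffer
later blocks only if the pixels aren't ready yet. A minimal sketch of
the pattern (illustrative only, not the actual glitz code; the usual
glXGetProcAddress setup for the extension entry points is omitted):

#include <GL/gl.h>
#include <GL/glext.h>

static GLuint read_pbo;

/* Request the pixels early; this may return before the DMA completes. */
static void read_request(int width, int height)
{
    glGenBuffersARB(1, &read_pbo);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, read_pbo);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_EXT, width * height * 4,
                    NULL, GL_STREAM_READ_ARB);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE,
                 NULL); /* NULL is an offset into the bound buffer */
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, 0);
    /* ... other GL commands can be issued while the read completes ... */
}

/* Block on the result only when the pixels are actually needed. */
static void *read_complete(void)
{
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_EXT, read_pbo);
    return glMapBufferARB(GL_PIXEL_PACK_BUFFER_EXT, GL_READ_ONLY_ARB);
}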
I've added support for async DMA-accelerated pixel transfers to glitz.
Right now it will only make a difference for hardware/drivers that
support the GL_EXT_pixel_buffer_object extension [1]. However, it
provides a very nice interface for efficient pixel transfers, and the
extension also allows us to specify the purpose of a memory buffer so
that appropriate memory is allocated for us, e.g. write-combined
uncached AGP memory for draw and cached memory for read.
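The draw side looks similar (again just a sketch, not the glitz code);
here GL_STREAM_DRAW_ARB is the hint that tells the driver the
application writes the buffer and GL reads it, so write-combined
uncached AGP memory is a good fit:

static void draw_upload(int width, int height)
{
    GLuint pbo;
    void *ptr;

    glGenBuffersARB(1, &pbo);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_EXT, pbo);
    glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_EXT, width * height * 4,
                    NULL, GL_STREAM_DRAW_ARB);
    ptr = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_EXT, GL_WRITE_ONLY_ARB);
    /* ... fill ptr with width * height * 4 bytes of image data ... */
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_EXT);
    /* With the unpack buffer bound, the last argument is an offset. */
    glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_EXT, 0);
    glDeleteBuffersARB(1, &pbo);
}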
So far I haven't been able to do much testing, but the latest nvidia
drivers support this extension, and a simple glitz video-out module for
mplayer showed a 100% performance increase in the video-out module on a
4-year-old motherboard with a geforce2mx card. The performance gain is
probably even better on newer systems; I think nvidia is now actually
using the 3D engine to do Xv PutImage requests for Xv adapters on
geforce4 and geforceFX cards in their latest driver.
None of this code has been committed to glitz CVS yet, but I'll try to
get it done sometime this weekend.
It would be interesting to see how hard it would be to add support for
the GL_EXT_pixel_buffer_object extension to some of the DRI drivers; as
you said, "DRI is quite flexible".
I think mesa-software supports GL_EXT_pixel_buffer_object.
[1]
http://oss.sgi.com/projects/ogl-sample/registry/EXT/pixel_buffer_object.txt
-David