[Openicc] Xorg low level buffers - colour conversion
Hal V. Engel
hvengel at astound.net
Sat Mar 8 16:22:26 PST 2008
On Saturday 08 March 2008 07:22:52 Tomas Carnecky wrote:
> Gerhard Fuernkranz wrote:
> > Tomas Carnecky wrote:
> >> The fragment shader (which is what is of interest here) is executed
> >> for every fragment (~pixel) separately. A simple (no-op) fragment
> >> shader looks like this:
> >>
> >> void main()
> >> {
> >>     gl_FragColor = gl_Color;
> >> }
> >
> > ... where I guess that gl_FragColor and gl_Color are both 3-element
> > vectors (R, G, B)?
> >
> > Is gl_Color fixed to be a 3-element vector, or can gl_Color also be a
> > vector of different length (e.g. 4 for CMYK), which can then be
> > converted by the shader to RGB (while the image is sent directly in e.g.
> > CMYK color space to OpenGL)?
>
> Both are four-component vectors; how you interpret the components is up
> to you, so you could see them as CMYK, for example. Usually it's RGBA.
>
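For what it's worth, a fragment shader that treats the incoming four-component
colour as CMYK and does a naive conversion to RGB could look something like the
sketch below. This is an illustration only, not a real colour transform; a
proper conversion would go through a CMM and ICC profiles rather than this
formula.

void main()
{
    // interpret the 4-component input as (C, M, Y, K)
    vec4 cmyk = gl_Color;

    // naive conversion: R = (1 - C) * (1 - K), and likewise for G and B
    vec3 rgb = (vec3(1.0) - cmyk.rgb) * (1.0 - cmyk.a);

    gl_FragColor = vec4(rgb, 1.0);
}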
> > One more question, if one does not want to use the complete rendering
> > pipeline, but if one wants to do only the color transformation on the
> > GPU (i.e. send image data to GPU, do the transformation, and copy the
> > data back), is it still possible to do this via OpenGL and
> > GPU-independent shader programs? Or is proprietary GPU programming
> > necessary then?
>
> NVidia released CUDA, a C-like programming language for access to the GPU
> (without requiring OpenGL or X11).
GLEW, http://glew.sourceforge.net/, is currently being used in the panorama
blending program enblend. From a quick look at the GLEW web page, it appears
to be a library that handles detecting and loading OpenGL extensions, so a
program can find out at run time which GPU capabilities are available and use
them portably. On my machine, moving from a version of enblend without GPU
support to one with GPU support (using GLEW & OpenGL) cut enblend times by
about 90%. The first time I ran this I thought that something had not worked,
because there was no way it could be that fast. I have a fairly high-end but
slightly older GPU (NVidia 7950 GT), and I suspect that the amount of speed-up
depends on how many pipelines the GPU has (mine has 24) as well as how fast
they are. A state-of-the-art GPU would likely be significantly faster still.
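To give an idea of what GLEW does, the check a program like enblend has to
make before taking the GPU path could look roughly like the sketch below. This
is not enblend's actual code; the function name is mine, and it assumes an
OpenGL context has already been created.

#include <stdio.h>
#include <GL/glew.h>

/* Sketch: once a GL context exists, GLEW resolves the extension entry
 * points and lets the program test whether the GPU path is usable. */
int gpu_path_available(void)
{
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "GLEW init failed: %s\n",
                (const char *) glewGetErrorString(err));
        return 0;
    }

    /* Fall back to the CPU path if the driver lacks fragment shaders
     * or framebuffer objects. */
    if (!GLEW_ARB_fragment_shader || !GLEW_EXT_framebuffer_object)
        return 0;

    return 1;
}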
My understanding is that CUDA is for using the GPU to do general-purpose work
such as math and science work. Besides, CUDA is not GPU-agnostic, and what we
are talking about doing will be on systems with X11 and OpenGL.
> However, to do what you want you can
> just as well use OpenGL directly. In X11 you can create off-screen
> buffers and operate on those, instead of on visible windows. Apart from
> the windowing-system-dependent parts (X11/Windows/Mac specific OpenGL
> setup) there is nothing 'proprietary' involved.
>
> Don't see OpenGL and the GPUs as an 'image rendering' API/chip. It's much
> more versatile nowadays. Don't look at textures as images but as 'arrays
> of values', and don't look at the GPU as an image rendering chip but as a
> 'floating point processor'.
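To make that concrete, the kind of off-screen target Tomas is describing can
be set up with the EXT_framebuffer_object and ARB_texture_float extensions,
roughly as in the sketch below. This is a hypothetical example (the function
name is mine); it assumes a GL context already exists and GLEW has been
initialized.

#include <GL/glew.h>

/* Sketch: an off-screen floating point buffer, i.e. a width x height
 * "array of values" that a fragment shader can write into and that can
 * later be read back with glReadPixels(). */
GLuint make_float_target(int width, int height)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* Rendering now goes into the texture instead of a window, so a
     * colour conversion shader bound at this point touches every pixel
     * without anything appearing on screen. */
    return fbo;
}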
That view of the GPU as a general floating point processor is exactly why AMD
purchased ATI and why their Fusion architecture will be a combination of a
somewhat conventional multi-core CPU with a GPU. They think that they can get
significant gains for both general-purpose and graphics functionality with
this combination integrated onto a single chip. For example, a smart compiler
would know when a set of general-purpose operations could be vectorized and
use the GPU part of the chip, and the graphics driver could offload some of
its functionality to the CPU part of the chip where this would result in
higher graphics throughput.
>
> Also take a look at http://www.gpgpu.org/ - General-Purpose Computation
> Using Graphics Hardware.
>
> tom