[Mesa-dev] Correct behaviour of glDrawPixels and glBitmap when texturing is enabled.
Matthew Dawson
matthew at mjdsystems.ca
Sun Mar 22 16:50:28 PDT 2015
Hi all,
I've been working on fixing the piglit tests around glDrawPixels, and I'm not
sure what the correct behaviour of glDrawPixels is when any texturing units
are enabled (tested with the draw-pixel-with-texture piglit test). After
asking on IRC, imirkin suggested I bring the issue here, as it isn't clear how
to proceed. I've tried to review the specification, and I have an idea, but
I'm not sure if it is correct.
All of this is done against the OpenGL 3.0 specification (updated September
23, 2008), as that is what the old context reports for gallium swrast and
radeonsi. This only covers the fixed function pipeline.
According to figure 3.1, glDrawPixels (pixel rectangle rasterization in the
diagram) output fragments do go through the regular fixed function fragment
pipeline. The introduction for the rasterization chapter backs this up:
"The color values assigned to a fragment are initially determined by the
rasterization operations (sections 3.4 through 3.8) and modified by either the
execution of the texturing, color sum, and fog operations defined in sections
3.9, 3.10, and 3.11, or by a fragment shader as defined in section 3.12. The
final depth value is initially determined by the rasterization operations and
may be modified or replaced by a fragment shader. The results from rasterizing
a point, line, polygon, pixel rectangle or bitmap can be routed through a
fragment shader." (pg 112).
The question becomes how to implement this. There are two main problems:
1) How to feed the pixel data into the texture stage
2) How to find the texture coordinates for sampling the textures.
Problem 1, I think, is easily resolved. On the texturing side, the following
paragraph says the fragment is passed through unaltered when texturing is
disabled, and otherwise is combined with the sampled texture value:
"If all texturing is disabled, a rasterized fragment is passed on unaltered to
the next stage of the GL (although its texture coordinates may be discarded).
Otherwise, a texture value is found according to the parameter values of the
currently bound texture image of the appropriate dimensionality using the
rules given in sections 3.9.6 through 3.9.8. This texture value is used along
with the incoming fragment in computing the texture function indicated by the
currently bound texture environment. The result of this function replaces the
incoming fragment’s primary R, G, B, and A values. These are the color values
passed to subsequent operations. Other data associated with the incoming
fragment remain unchanged, except that the texture coordinates may be
discarded." (pg 227)
To my understanding, this says that as long as the fragments output by
glDrawPixels look like fragments from the primitive rasterization process,
texturing occurs as normal. This leads into problem 2, which is getting the
texture coordinates for sampling the textures (along with getting a primary
colour).
For this, section 3.7.4 discusses building the fragments' output data. The
relevant paragraph is:
"A fragment arising from a group consisting of color data takes on the color
index or color components of the group and the current raster position’s
associated depth value, while a fragment arising from a depth component takes
that component’s depth value and the current raster position’s associated
color index or color components. In both cases, the fog coordinate is taken
from the current raster position’s associated raster distance, the secondary
color is taken from the current raster position’s associated secondary color,
and texture coordinates are taken from the current raster position’s
associated texture coordinates. Groups arising from DrawPixels with a format
of DEPTH_STENCIL or STENCIL_INDEX are treated specially and are described in
section 4.3.1 ." (pg 163)
This implies the colour is simply the colour generated by the glDrawPixels
operation. The texture coordinates come from the current raster position, are
never interpolated, and are thus *constant* over the entire rectangle
rasterized in the process.
This created the most confusion on IRC, since constant texture coordinates
don't seem to make much sense. As best as I can tell, though, this is what
the specification requires.
So, does it make sense to move forward with this interpretation of the
specification? Are there any errors in my understanding of the specification?
If everything looks good, I have a patch to the draw-pixel-with-texture piglit
test (with an update sitting on my computer) that tests this behaviour more
thoroughly, which I'd like to get committed. I also have a patch against the
mesa state tracker that basically implements this behaviour (except for the
texture coordinates). These patches deal with ~80% of the behaviour, enough
to make piglit happy.
As this is old behaviour, and I assume relatively unused, is it worth it to
test and implement the other parts of the specification surrounding this
behaviour? For instance, glBitmap includes similar language in its section
that implies the same behaviour, but it is currently untested, and the
programmable pipeline probably has some issues as well.
Also, the raster position as used above is described in section 2.18. To
properly implement the above, I need access to the raster position's state.
However, I wasn't able to find it in my investigations of mesa. Is this
currently implemented/tested, and if so, where would I have to go to get
access? If it isn't fully realized, is it worth it to test/implement this
section, since it's removed in the core profile? Would a reduced
implementation that fixes this test be OK (ignoring things like passing it
through the vertex shader)?
Thanks,
--
Matthew