[Bug 89156] r300g: GL_COMPRESSED_RED_RGTC1 / ATI1N support broken
bugzilla-daemon at freedesktop.org
Mon Mar 2 05:34:32 PST 2015
https://bugs.freedesktop.org/show_bug.cgi?id=89156
--- Comment #8 from Stefan Dösinger <stefandoesinger at gmx.at> ---
I ran some more tests; it seems that the format is operating at 3 bits of
precision, since I can only produce 8 distinct output values. Otherwise it seems
to follow the spec, so I don't think we're accidentally feeding the data into an
R3G3B2 texture.
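For reference, here is a minimal sketch of the RGTC1 / ATI1N (unsigned) decode
palette as I read it from EXT_texture_compression_rgtc; the round-to-nearest
conversion back to a byte at the end is my assumption about how the readback
behaves, not something the spec mandates:

/* Reference RGTC1 (unsigned) palette per EXT_texture_compression_rgtc.
 * Returns the decoded red value in [0, 1] for a 3-bit code. */
#include <stdio.h>

static float rgtc1_palette(unsigned char red0, unsigned char red1, int code)
{
    float r0 = red0 / 255.0f, r1 = red1 / 255.0f;

    if (code == 0) return r0;
    if (code == 1) return r1;

    if (red0 > red1)
        /* codes 2..7: six interpolated values between red0 and red1 */
        return ((8 - code) * r0 + (code - 1) * r1) / 7.0f;

    if (code == 6) return 0.0f;  /* MINRED */
    if (code == 7) return 1.0f;  /* MAXRED */
    /* codes 2..5: four interpolated values between red0 and red1 */
    return ((6 - code) * r0 + (code - 1) * r1) / 5.0f;
}

int main(void)
{
    /* Expected spec results for the tests described below;
     * rounding to a byte is my assumption. */
    printf("%02x\n", (int)(rgtc1_palette(0x01, 0x00, 7) * 255.0f + 0.5f)); /* 00 */
    printf("%02x\n", (int)(rgtc1_palette(0x00, 0x01, 7) * 255.0f + 0.5f)); /* ff */
    printf("%02x\n", (int)(rgtc1_palette(0x02, 0xa2, 2) * 255.0f + 0.5f)); /* 22 */
    return 0;
}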
On Windows the format operates at the expected precision: I can get any output
value from 0x00 to 0xff.
I skimmed the GPU docs for clues as to what may cause this behavior but could not
find anything. The things I checked were enabling/disabling filtering, making
sure texture address handling follows the conditional NP2 texture rules, and
disabling alpha blending. For the sake of testing I also tried disabling FBOs and
all our sRGB code.
I'm also quite sure that all 8 bits of the red0 and red1 inputs arrive on the
GPU. I tested that by setting the code of each texel to 7 and then testing
red0=1, red1=0 versus red0=0, red1=1. In the former case this gives the result 0
(an interpolation between red0 and red1), and in the latter case it gives 0xfc
(MAXRED). The same works for the input values 0x80 and 0x7f.
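If I read the spec's interpolation rules right (see the sketch above), code 7
with red0=1, red1=0 should decode to (1*red0 + 6*red1)/7, i.e. roughly 0x00,
while code 7 with red0=0, red1=1 should select MAXRED, i.e. 0xff rather than the
0xfc I'm getting back.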
I tested interpolation codes (e.g. red0=0x2, red1=0xa2, code 2 for each texel,
then trying to reduce red0 or red1 by 1), and it seems that the input into the
interpolation is OK, but either the interpolation happens at a lower precision or
the output is clamped afterwards.
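For the record, the spec value for red0=0x2, red1=0xa2, code 2 would be
(4*red0 + 1*red1)/5 = 170/5 = 0x22, and lowering red0 or red1 by one only moves
that to 33.2 or 33.8 before rounding, so these tests probe single-LSB changes on
the input side.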