[Bug 94345] WebGL conformance2/reading/read-pixels-from-fbo-test.html fails

bugzilla-daemon at freedesktop.org bugzilla-daemon at freedesktop.org
Mon Mar 7 15:15:03 UTC 2016


https://bugs.freedesktop.org/show_bug.cgi?id=94345

--- Comment #7 from Yang Gu <yang.gu at intel.com> ---
I don't have much background on gfx drivers, so please correct me if my
understanding is wrong.
In the following, let's take the case you mentioned as the example. It maps the
RGB5_A1 format (write) to the RGBA format (read). The value of B is 0.7; the
Intel driver outputs 173, while the NVidia driver outputs 181.
First, I don't think using a float number as the expectation is a problem, as
it always comes together with a tolerance value. In this case, any input value
may map to one of 8 different output values. For example, 0 in the 5-bit format
may map to any value in [0, 7] in the 8-bit format, so we can simply write the
expectation + tolerance as 3.5 + 3.5, which covers that range in a convenient
way.
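To illustrate the tolerance idea, here is a small sketch in C (my own, not the
actual conformance-test code; the [v*8, v*8 + 7] window is an assumption about
which 8-bit values are accepted for a 5-bit level v):

/* Minimal sketch: for 5-bit level v, the 8 acceptable 8-bit expansions are
 * assumed to be [v*8, v*8 + 7], so expectation 3.5 with tolerance 3.5
 * covers every acceptable expansion of level 0. */
#include <math.h>
#include <stdio.h>

static int within_tolerance(float expected, float tolerance, int actual)
{
    return fabsf((float)actual - expected) <= tolerance;
}

int main(void)
{
    for (int actual = 0; actual < 16; actual++)
        printf("%2d %s\n", actual,
               within_tolerance(3.5f, 3.5f, actual) ? "pass" : "fail");
    return 0;
}

This prints "pass" for 0 through 7 and "fail" from 8 upward, which is exactly
the coverage the 3.5 + 3.5 pair is meant to give.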
Then let's try to get the ideal result by calculation. 0.7*31 = 21.7, so the
value of B should be 22 in the 5-bit format. Mapping 22 from the 5-bit format
to the 8-bit representation gives the range [176, 183], and the NVidia result
falls into this range. To be precise, the exact expansion is 22*255/31 ≈ 181,
which is exactly what the NVidia driver reports.
Could you please give me more details on why the Intel driver reports 173?
(Note that 21 maps to [168, 175].)
Is 0.7 first converted to 22 by the driver and then fed to the hardware, or
does the driver simply feed 0.7 to the hardware?
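For reference, here is a quick sketch (my own C, not real driver or test code)
of the arithmetic I have in mind; the truncation branch is only my guess at a
path that would land on 173:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float b = 0.7f;

    /* Quantize the float channel to 5 bits, once with rounding and once
     * with truncation (the latter is only a guess, to see which value it
     * would produce). */
    int five_round = (int)roundf(b * 31.0f);   /* 22 */
    int five_trunc = (int)(b * 31.0f);         /* 21 */

    /* Expand 5 bits back to 8 bits with the exact factor 255/31. */
    int eight_round = (int)roundf(five_round * 255.0f / 31.0f); /* 181 - what NVidia reports */
    int eight_trunc = (int)roundf(five_trunc * 255.0f / 31.0f); /* 173 - what Intel reports  */

    printf("round:    5-bit %d -> 8-bit %d, window [%d, %d]\n",
           five_round, eight_round, five_round * 8, five_round * 8 + 7);
    printf("truncate: 5-bit %d -> 8-bit %d, window [%d, %d]\n",
           five_trunc, eight_trunc, five_trunc * 8, five_trunc * 8 + 7);
    return 0;
}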
Anyway, I think the Intel driver doesn't match the ideal expectation well. Can
we make some changes?
