<div dir="ltr">Okay, that makes it easier.<div><br></div><div>Should this change be conditional based on the type of context created?</div><div><br></div><div>Courtney</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Thu, Dec 5, 2013 at 8:52 AM, Brian Paul <span dir="ltr"><<a href="mailto:brianp@vmware.com" target="_blank">brianp@vmware.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">On 12/04/2013 03:46 PM, Courtney Goeltzenleuchter wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">
It's come to my attention that Mesa's handling of GL_TEXTURE_BASE_LEVEL<br>
and GL_TEXTURE_MAX_LEVEL in glTexParameter and glGetTexParameter may be<br>
incorrect. The issue happens with the following sequence:<br>
<br>
glTexStorage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 128, 128);<br>
<br>
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 5);<br>
<br>
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, &n);<br>
<br>
The key question is: What is the value of n?<br>
<br>
Right now, the Mesa driver clamps the value in the glTexParameter call to<br>
the range 0..3 (as implied by the TexStorage call), so n = 3 after the<br>
glGetTexParameter call. However, the Intel Windows driver and NVIDIA's<br>
Linux driver both return 5. This has apparently been<br>
discussed among Khronos members in bug 9342<br>
(<a href="https://cvs.khronos.org/bugzilla/show_bug.cgi?id=9342" target="_blank">https://cvs.khronos.org/bugzilla/show_bug.cgi?id=9342</a>),<br></div><div class="im">
<br>
which I don't have visibility of.<br>
<br>
To match that behavior, the texture object will likely need two copies<br>
each of the BaseLevel and MaxLevel attributes: one that's clamped and<br>
used internally, and another that simply holds the value given by the<br>
application in the glTexParameter call.<br>
<br>
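A minimal sketch of that two-value idea (the struct and function names here are illustrative, not Mesa's actual internals): the texture object stores the raw value from the application, and clamping to the allocated level range happens only when the effective level is computed at use time.<br>
<br>
```c
/* Sketch only: field/function names are hypothetical, not Mesa's. */
struct tex_levels {
   int base_level;   /* as set by glTexParameteri; returned unmodified by queries */
   int max_level;    /* likewise, stored unclamped */
};

/* Use-time clamp of the base level into [0, num_levels - 1],
 * where num_levels is the level count from the TexStorage call. */
static int effective_base_level(const struct tex_levels *t, int num_levels)
{
   int base = t->base_level;
   if (base < 0)
      base = 0;
   if (base > num_levels - 1)
      base = num_levels - 1;
   return base;
}
```
<br>
With this split, the 128x128/4-level example above would query back 5 from base_level while sampling would still use level 3.<br>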
Thoughts?<br>
</div></blockquote>
<br>
From reading the bug report, it sounds like the ARB decided that clamping should be done when the texture is used, not when glTexParameter is called. In the GL 4.3 spec I don't see any language about clamping in glTexParameter either.<br>
<br>
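Concretely, under that resolution the original sequence would behave as follows. This is a sketch of the expected behavior per the ARB decision described in the bug, not something verified against any particular driver, and it assumes a current GL context:<br>
<br>
```c
GLint n;
glTexStorage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 128, 128);          /* levels 0..3 */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 5);      /* stored as given */
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, &n); /* n == 5 */
/* Clamping to the allocated 0..3 range happens later, when the
 * texture is actually used for sampling or rendering. */
```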
We should be doing the use-time clamping already. So I think we just have to remove the clamping step in glTexParameter.<span class="HOEnZb"><font color="#888888"><br>
<br>
-Brian<br>
<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr">Courtney Goeltzenleuchter<br><div>LunarG</div><div><img src="http://media.lunarg.com/wp-content/themes/LunarG/images/logo.png" width="96" height="65"><br>
</div></div>
</div>