[Mesa-dev] [PATCH] mesa: add GL_UNSIGNED_INT_24_8 to _mesa_pack_depth_span

Tapani Pälli tapani.palli at intel.com
Wed Oct 7 23:54:32 PDT 2015



On 10/01/2015 10:12 PM, Ian Romanick wrote:
> On 10/01/2015 09:50 AM, Tapani Pälli wrote:
>> On 10/01/2015 03:17 PM, Iago Toral wrote:
>>> On Thu, 2015-10-01 at 08:28 +0300, Tapani Pälli wrote:
>>>> Patch adds missing type (used with NV_read_depth) so that it gets
>
>                                         NV_read_depth_stencil?
>
>>>> handled correctly. Also add type to _mesa_problem output to aid
>>>> debugging.
>>>>
>>>> Signed-off-by: Tapani Pälli <tapani.palli at intel.com>
>>>> ---
>>>>    src/mesa/main/pack.c | 4 +++-
>>>>    1 file changed, 3 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/src/mesa/main/pack.c b/src/mesa/main/pack.c
>>>> index 7147fd6..54a0c42 100644
>>>> --- a/src/mesa/main/pack.c
>>>> +++ b/src/mesa/main/pack.c
>>>> @@ -1074,6 +1074,7 @@ _mesa_pack_depth_span( struct gl_context *ctx, GLuint n, GLvoid *dest,
>>>>             }
>>>>          }
>>>>          break;
>>>> +   case GL_UNSIGNED_INT_24_8:
>>> Is it okay to store 32-bit integers in this case? That's what the code
>>> below does. The spec says that the 8 stencil bits are undefined, but
>>> don't we need to convert the depth value to a 24-bit integer scale?
>>> (i.e. make 1.0 translate to 2^24-1 not 2^32-1).
>
> I was going to make the same comment.
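
Yes, that makes sense; 1.0 should map to 2^24-1 in the top 24 bits. If
the conversion is needed, the case could look roughly like this (just a
sketch for discussion, not the final patch; which scaling/rounding
helper to use is still open):

    case GL_UNSIGNED_INT_24_8:
       {
          GLuint *dst = (GLuint *) dest;
          GLuint i;
          for (i = 0; i < n; i++) {
             /* Scale the float depth in [0, 1] to a 24-bit integer
              * and place it in the top 24 bits; the low 8 (stencil)
              * bits are undefined per the spec, so just zero them.
              */
             GLuint z = (GLuint) (depthSpan[i] * (GLfloat) 0xffffff);
             dst[i] = z << 8;
          }
       }
       break;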
>
>> Oh, I thought stencil was included, I'll check if there's any existing
>> app/test that would actually use this.
>
> The incoming depthSpan is a float*, so there's no stencil there.  I
> think we'll need to cobble together a piglit test for this case.  Can

FYI, I'm now working on a Piglit test for NV_read_depth, but I sent the 
patch already because there's some pressure to get this error out of 
the CTS run. Meanwhile, the patch can be tested with the following CTS 
test:

ES3-CTS.gtf.GL3Tests.packed_pixels.packed_pixels

I've also verified with my unfinished test that I get the same values 
back as Nvidia. The test currently runs a loop that renders a rectangle 
at depths from -1.0 to 1.0, reads the depth back, and compares it to 
the expected result. I'm working on iterating over some more formats.
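
To give an idea, the loop is shaped roughly like this
(draw_rect_at_depth() stands in for the actual drawing helper, the
tolerance is just a guess for now, and the usual GL/ES and math headers
are assumed):

    static bool
    test_depth_readback(void)
    {
       GLfloat z;
       for (z = -1.0f; z <= 1.0f; z += 0.25f) {
          GLfloat actual, expected;

          draw_rect_at_depth(z);   /* placeholder drawing helper */

          /* NV_read_depth allows GL_DEPTH_COMPONENT/GL_FLOAT reads
           * with glReadPixels in ES
           */
          glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT,
                       &actual);

          /* with the default glDepthRange(0, 1), window-space depth
           * is (z + 1) / 2
           */
          expected = (z + 1.0f) * 0.5f;

          if (fabsf(actual - expected) > 1.0f / 255.0f)
             return false;
       }
       return true;
    }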

> this only be hit if NV_read_depth_stencil is exported?  I'm just
> wondering because Mesa doesn't support that extension.  How is this even
> being hit?
>
>>> Iago
>>>
>>>>       case GL_UNSIGNED_INT:
>>>>          {
>>>>             GLuint *dst = (GLuint *) dest;
>>>> @@ -1124,7 +1125,8 @@ _mesa_pack_depth_span( struct gl_context *ctx, GLuint n, GLvoid *dest,
>>>>          }
>>>>          break;
>>>>       default:
>>>> -      _mesa_problem(ctx, "bad type in _mesa_pack_depth_span");
>>>> +      _mesa_problem(ctx, "bad type in _mesa_pack_depth_span (%s)",
>>>> +                    _mesa_enum_to_string(dstType));
>>>>       }
>>>>         free(depthCopy);
>
> I definitely approve of this part.  If you want to split that into its
> own patch, that's
>
> Reviewed-by: Ian Romanick <ian.d.romanick at intel.com>
>

