[Piglit] [PATCH] Add test case to verify large textures are handled correctly in mesa
Brian Paul
brianp at vmware.com
Mon Feb 27 14:37:24 PST 2012
On 02/27/2012 03:02 PM, Anuj Phogat wrote:
> On Mon, Feb 27, 2012 at 10:54 AM, Anuj Phogat <anuj.phogat at gmail.com> wrote:
>
> On Mon, Feb 27, 2012 at 7:44 AM, Brian Paul <brianp at vmware.com> wrote:
>
> On 02/23/2012 08:27 PM, Anuj Phogat wrote:
>
> Intel driver throws GL_OUT_OF_MEMORY and assertion failure /
> segmentation fault with large textures.
>
> Reproduces the errors reported in:
> https://bugs.freedesktop.org/show_bug.cgi?id=44970
> https://bugs.freedesktop.org/show_bug.cgi?id=46303
>
> Signed-off-by: Anuj Phogat <anuj.phogat at gmail.com>
> ---
> V3: Added the test using proxy textures, deleted redundant code,
> and allocated the data array based on the size returned.
>
> ToDo: getMaxTarget, getProxyTarget and isValidTexSize functions
> can be added as piglit utility functions in piglit-util-gl.c.
> I will do it in a separate patch.
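A rough, untested sketch of what the proposed isValidTexSize() helper could
look like, probing the proxy target and checking whether the driver accepted
the requested size (names taken from the ToDo above; the eventual
piglit-util-gl.c version may well differ):

static GLboolean
isValidTexSize(GLenum proxyTarget, GLint internalformat, int side)
{
   GLint width;

   switch (proxyTarget) {
   case GL_PROXY_TEXTURE_1D:
      glTexImage1D(proxyTarget, 0, internalformat, side, 0,
                   GL_RGBA, GL_FLOAT, NULL);
      break;
   case GL_PROXY_TEXTURE_2D:
   case GL_PROXY_TEXTURE_CUBE_MAP:
      glTexImage2D(proxyTarget, 0, internalformat, side, side, 0,
                   GL_RGBA, GL_FLOAT, NULL);
      break;
   case GL_PROXY_TEXTURE_3D:
      glTexImage3D(proxyTarget, 0, internalformat, side, side, side, 0,
                   GL_RGBA, GL_FLOAT, NULL);
      break;
   }

   /* the proxy reports a width of 0 if the allocation was rejected */
   glGetTexLevelParameteriv(proxyTarget, 0, GL_TEXTURE_WIDTH, &width);
   return width != 0;
}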
>
> tests/all.tests | 1 +
> tests/bugs/CMakeLists.gl.txt | 1 +
> tests/bugs/large-textures.c | 352 ++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 354 insertions(+), 0 deletions(-)
> create mode 100644 tests/bugs/large-textures.c
>
>
>
> diff --git a/tests/bugs/large-textures.c b/tests/bugs/large-textures.c
> new file mode 100644
> index 0000000..c43f51e
> --- /dev/null
> +++ b/tests/bugs/large-textures.c
> @@ -0,0 +1,352 @@
>
>
> The test should be moved to tests/texturing/. I think we
> agreed to move away from the tests/bugs/ directory.
>
>
>
> [...]
>
>
> +void piglit_init(int argc, char **argv)
> +{
> +	GLuint tex;
> +	int i, j;
> +
> +	glMatrixMode(GL_PROJECTION);
> +	glPushMatrix();
> +	glLoadIdentity();
> +	glOrtho(0, piglit_width, 0, piglit_height, -1, 1);
> +	glMatrixMode(GL_MODELVIEW);
> +	glPushMatrix();
> +	glLoadIdentity();
> +	glClearColor(0.2, 0.2, 0.2, 1.0);
> +	glClear(GL_COLOR_BUFFER_BIT);
> +
> +	for (i = 0; i < ARRAY_SIZE(target); i++) {
> +
> +		glGenTextures(1, &tex);
> +		glBindTexture(target[i], tex);
> +		glTexParameteri(target[i], GL_TEXTURE_MIN_FILTER, GL_NEAREST);
> +		glTexParameteri(target[i], GL_TEXTURE_MAG_FILTER, GL_NEAREST);
> +
> +		for (j = 0; j < ARRAY_SIZE(internalformat); j++) {
> +
> +			if (internalformat[j] == GL_RGBA16F ||
> +			    internalformat[j] == GL_RGBA32F)
> +				piglit_require_extension("GL_ARB_texture_float");
>
>
> I think you should simply skip the float formats if
> GL_ARB_texture_float is not supported (don't exit with SKIP
> result).
>
> Yes. That's the right thing I should do.
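For what it's worth, a minimal sketch of that skip inside the format loop,
assuming piglit_is_extension_supported() from piglit-util is available:

   /* skip (rather than abort) the float formats when
    * GL_ARB_texture_float is not supported
    */
   if ((internalformat[j] == GL_RGBA16F ||
        internalformat[j] == GL_RGBA32F) &&
       !piglit_is_extension_supported("GL_ARB_texture_float"))
      continue;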
>
>
> I ran the test with NVIDIA's driver. Here's what it printed:
>
> GL_PROXY_TEXTURE_1D, Internal Format = GL_RGBA8, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_1D, Internal Format = GL_RGBA16, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_1D, Internal Format = GL_RGBA32F, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_2D, Internal Format = GL_RGBA8, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_2D, Internal Format = GL_RGBA16, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_2D, Internal Format = GL_RGBA32F, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_CUBE_MAP, Internal Format = GL_RGBA8, Largest
> Texture Size = 16384
> GL_PROXY_TEXTURE_CUBE_MAP, Internal Format = GL_RGBA16,
> Largest Texture Size = 16384
> GL_PROXY_TEXTURE_CUBE_MAP, Internal Format = GL_RGBA32F,
> Largest Texture Size = 16384
> GL_PROXY_TEXTURE_3D, Internal Format = GL_RGBA8, Largest
> Texture Size = 2048
> GL_PROXY_TEXTURE_3D, Internal Format = GL_RGBA16, Largest
> Texture Size = 2048
> GL_PROXY_TEXTURE_3D, Internal Format = GL_RGBA32F, Largest
> Texture Size = 2048
>
> GL_TEXTURE_1D, Internal Format = GL_RGBA8, Largest Texture
> Size = 16384
> Unexpected GL error: GL_INVALID_VALUE 0x501
> GL_TEXTURE_1D, Internal Format = GL_RGBA16, Largest Texture
> Size = 16384
> Unexpected GL error: GL_INVALID_VALUE 0x501
> GL_TEXTURE_1D, Internal Format = GL_RGBA32F, Largest Texture
> Size = 16384
> Unexpected GL error: GL_INVALID_VALUE 0x501
> GL_TEXTURE_2D, Internal Format = GL_RGBA8, Largest Texture
> Size = 16384
> Unexpected GL error: GL_INVALID_VALUE 0x501
> GL_TEXTURE_2D, Internal Format = GL_RGBA16, Largest Texture
> Size = 16384
> Unexpected GL error: GL_INVALID_VALUE 0x501
> GL_TEXTURE_2D, Internal Format = GL_RGBA32F, Largest Texture
> Size = 16384
> Unexpected GL error: GL_INVALID_VALUE 0x501
> Unexpected GL error: GL_INVALID_VALUE 0x501
> Unexpected GL error: GL_INVALID_VALUE 0x501
> Unexpected GL error: GL_INVALID_VALUE 0x501
> Unexpected GL error: GL_INVALID_VALUE 0x501
> Unexpected GL error: GL_INVALID_VALUE 0x501
> GL_TEXTURE_CUBE_MAP, Internal Format = GL_RGBA16, Largest
> Texture Size = 16384
> [I killed it here after 10+ minutes of runtime]
>
> I don't know what's causing the INVALID_VALUE error.
>
> The GL_INVALID_VALUE error is thrown by the driver when we test with
> textures larger than the maximum supported size, so this is an
> expected error.
I haven't double-checked the code, but why are you testing a texture
size that's larger than the advertised max size?
OK, I'm looking at the loop at line 186:
for (side = maxSide - 100; side < maxSide + 2; side++) {
switch (target) {
case GL_TEXTURE_1D:
I guess I don't understand why you're doing that. It seems to me you
just have to query the max texture size (both via glGetIntegerv() and
the proxy test) then try to create a texture of that size. Testing
102 other size variations isn't necessary.
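Roughly something like this (untested sketch, 2D case only):

   GLint maxSide, proxyWidth;

   /* max size advertised by the driver */
   glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSide);

   /* double-check it with the proxy target */
   glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, maxSide, maxSide, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);
   glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,
                            &proxyWidth);

   /* then one real allocation at the size that passed, no size sweep */
   if (proxyWidth == maxSide)
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, maxSide, maxSide, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);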
> Another expected error is GL_OUT_OF_MEMORY when the driver is unable
> to map a previously created large texture in glTexSubImage1D/2D/3D.
> I will include both of these errors as expected errors and add a
> comment about it.
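For reference, accepting that as an expected result after the glTexSubImage*
call could look roughly like this (sketch only):

   /* after updating the huge texture, GL_OUT_OF_MEMORY is an
    * acceptable outcome; any other error is a failure
    */
   GLenum err = glGetError();
   if (err != GL_NO_ERROR && err != GL_OUT_OF_MEMORY)
      piglit_report_result(PIGLIT_FAIL);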
>
> The test runs _extremely_ slowly. It looks like the driver is
> allocating some really large image buffers so we're falling
> back to virtual memory. I don't know a way around this. On
> one hand, it's good to test this, but nobody wants to wait 10+
> minutes for the outcome.
>
> The purpose of this test case is to make sure that the driver doesn't
> segfault with large textures and that it throws the relevant errors
> when it is unable to allocate memory for or map large textures. I was
> able to identify a few issues in the Intel driver using this test
> case. If the NVIDIA driver takes a long time to run the test, there's
> a possibility that it is not handling such cases properly; maybe it
> should return GL_OUT_OF_MEMORY in that case.
> Alternatively, we could comment out GL_RGBA16 and GL_RGBA32F and add
> a comment informing users about the time the test takes with those
> formats.
Let's try taking out the 102-iteration loop and see what happens.
> I noticed the comment below in tests/texturing/tex3d-maxsize.c,
> which might explain the issue you are seeing on the NVIDIA driver:
> [...]
> /* use proxy texture to find working max texture size */
> find_max_tex3d_size(maxsize, &width, &height, &depth);
> #ifdef NVIDIA_HACK
> /* XXX NVIDIA's proxy texture mechanism is broken.
>  * If this code is enabled, a smaller texture is used and
>  * the test passes. Only halving the texture size isn't enough:
>  * we try to allocate a gigantic texture which typically brings
>  * the machine to its knees, swapping, then dying.
>  */
> width /= 4;
> height /= 4;
> depth /= 4;
> #endif
> [...]
>
I wish I'd noted which version of the NVIDIA driver that was. I don't
see any problem with driver 280.13, so we could probably remove that
code (it's disabled anyway).
-Brian