[Piglit] Piglit tests that leave noise in dmesg on nvidia proprietary drivers

Dan Kegel dank at kegel.com
Thu Oct 20 19:56:40 UTC 2016


I guess the underlying question is "how much work would it
be to get piglit to the point where errors it reports when
run against nvidia's proprietary drivers are actual problems in those drivers".
I suspect it'd be a good summer project.

On Thu, Oct 20, 2016 at 12:39 PM, Ilia Mirkin <imirkin at alum.mit.edu> wrote:
> On Thu, Oct 20, 2016 at 3:27 PM, Dan Kegel <dank at kegel.com> wrote:
>> Hi folks,
>> a few piglit tests seem to leave a little noise in dmesg when run on
>> nvidia proprietary drivers.
>> Is this expected?
>>
>> 1)
>>
>> dmesg output:
>> texsubimage[27980]: segfault at 20e0d20 ip 00007fc0492ce5d5 sp 00007ffc97347800 error 4 in libnvidia-glcore.so.370.28[7fc0481ad000+13bd000]
>>
>> reproducers:
>> piglit/bin/texsubimage array pbo -auto -fbo
>> piglit/bin/texsubimage array -auto -fbo
>>
>> 2)
>>
>> dmesg output:
>> ext_texture_for[28147]: segfault at 0 ip 00007f35dc1ca385 sp 00007ffd0b13dda8 error 4 in libc-2.23.so[7f35dc07d000+1c0000]
>>
>> reproducer:
>> piglit/bin/ext_texture_format_bgra8888-api-errors -auto -fbo
>>
>> 3)
>>
>> dmesg output:
>> glslparsertest[28167]: segfault at 8 ip 00007fad84a0bb1e sp 00007ffe20214850 error 4 in libnvidia-glcore.so.370.28[7fad8463b000+13bd000]
>>
>> reproducers:
>> piglit/bin/glslparsertest piglit/tests/spec/glsl-1.10/compiler/void/void-equal.vert fail 1.10
>> piglit/bin/glslparsertest piglit/tests/spec/glsl-1.10/compiler/void/void-lt.vert fail 1.10
>>
>> 4)
>>
>> dmesg output: traps: glslparsertest[28205] trap divide error ip:7f871d7db4a3 sp:7ffc1b0064e8 error:0 in libnvidia-glcore.so.370.28[7f871d4ba000+13bd000]
>>
>> reproducers:
>> piglit/bin/glslparsertest piglit/tests/spec/glsl-1.10/preprocessor/divide-by-zero.vert fail 1.10
>> piglit/bin/glslparsertest piglit/tests/spec/glsl-1.10/preprocessor/modulus-by-zero.vert fail 1.10
>
> It's expected that the Linux kernel makes a note in dmesg when it
> kills a process as a result of a signal (like SIGSEGV or SIGILL or
> whatever). In all but case 2, the IP causing the fault is inside the
> NVIDIA proprietary driver. I suspect that the piglit test for case 2
> isn't sufficiently resilient to drivers doing odd things, but who
> knows. (Or it could be the NVIDIA blob calling into libc with bogus
> arguments.)
>
> Not sure if that answers your question...
>
>   -ilia
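
For anyone triaging reports like the four above, the interesting dmesg lines can be filtered mechanically. A minimal sketch (the sample file and grep pattern are illustrative, not part of piglit; in practice you would pipe real `dmesg` output in instead of the heredoc):

```shell
# Build a small sample log from the lines quoted in this thread,
# plus one line of ordinary driver chatter that should be ignored.
cat <<'EOF' > /tmp/dmesg-sample.txt
texsubimage[27980]: segfault at 20e0d20 ip 00007fc0492ce5d5 sp 00007ffc97347800 error 4 in libnvidia-glcore.so.370.28[7fc0481ad000+13bd000]
NVRM: loading NVIDIA UNIX x86_64 Kernel Module
traps: glslparsertest[28205] trap divide error ip:7f871d7db4a3 sp:7ffc1b0064e8 error:0 in libnvidia-glcore.so.370.28[7f871d4ba000+13bd000]
EOF

# Keep only the crash notes (segfaults and traps); drop normal chatter.
grep -E 'segfault at|trap divide error' /tmp/dmesg-sample.txt
```

Running this prints only the `texsubimage` and `glslparsertest` lines, which is the "noise" being discussed; anything that survives the filter after a piglit run is a candidate driver bug to report.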

