[Mesa-dev] Determinism in the results of llvmpipe?

Andrew A. andj2223 at gmail.com
Tue Nov 29 14:00:56 UTC 2016


On Fri, Nov 18, 2016 at 2:58 AM, Jose Fonseca <jfonseca at vmware.com> wrote:
> On 17/11/16 07:37, Andrew A. wrote:
>>
>> Hello,
>>
>> I'm using Mesa's software renderer for the purposes of regression
>> testing in our graphics software. We render various scenes, save a
>> screencap of the framebuffer for each scene, then compare those
>> framebuffer captures to previously known-good captures.
>>
>> Across runs of these tests on the same hardware, the results seem to
>> be 100% identical. When running the same tests on a different
>> machine, the results are *slightly* different. They still match
>> within a small tolerance, so this is usable, but I was hoping for
>> fully deterministic behavior even if the hardware is slightly
>> different. Are there any compile-time settings or code changes that
>> would make Mesa's llvmpipe renderer/rasterizer fully deterministic
>> in its output?
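>>
>> (A minimal sketch of the kind of tolerance check involved, assuming
>> 8-bit RGBA captures of equal size:)
>>
>>   #include <stdbool.h>
>>   #include <stdint.h>
>>   #include <stdlib.h>
>>
>>   /* Pass if every channel of every pixel in the test capture is
>>    * within `tolerance` of the known-good reference capture. */
>>   static bool
>>   captures_match(const uint8_t *ref, const uint8_t *test,
>>                  size_t num_bytes, uint8_t tolerance)
>>   {
>>      for (size_t i = 0; i < num_bytes; i++) {
>>         if (abs((int)ref[i] - (int)test[i]) > tolerance)
>>            return false;
>>      }
>>      return true;
>>   }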
>>
>> I'm using llvmpipe, and these are the two different CPUs I'm using to
>> run the tests:
>> Intel(R) Xeon(R) CPU E3-1275 v3
>> Intel(R) Xeon(R) CPU X5650
>
>
>>
>> Thanks,
>>
>> Andrew
>> _______________________________________________
>> mesa-dev mailing list
>> mesa-dev at lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/mesa-dev
>
>
> llvmpipe changes its behavior at _runtime_ based on the CPU features (like
> SSE, AVX, AVX2, etc.)
>
>
> You could hack u_cpu_detect.c and the LLVM source code to mask away extra
> CPU features and reduce the perceived CPUID flags to the common denominator.
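>
> For instance, a minimal sketch of such a mask (the util_cpu_caps field
> names are assumed; verify them against struct util_cpu_caps in your
> checkout):
>
>   /* At the end of util_cpu_detect(): clamp the detected capabilities
>    * to an SSE-level baseline so both machines JIT identical code.
>    * FMA in particular changes results, since a fused multiply-add
>    * rounds once where a separate mul and add round twice. */
>   util_cpu_caps.has_avx  = 0;
>   util_cpu_caps.has_avx2 = 0;
>   util_cpu_caps.has_fma  = 0;
>   util_cpu_caps.has_f16c = 0;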
>
> In fact, for the two CPUs you mention above, the differences probably go
> away if you set this environment variable:
>
>   LP_NATIVE_VECTOR_WIDTH=128
>
> as it will force llvmpipe to ignore AVX/AVX2/FMA/F16C.
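>
> For example, you could pin it from the test harness itself before the
> first GL context is created (a sketch using POSIX setenv; llvmpipe
> reads the variable during startup):
>
>   #include <stdlib.h>
>
>   /* Force llvmpipe down to 128-bit vectors so AVX-capable and
>    * SSE-only machines generate the same code. */
>   setenv("LP_NATIVE_VECTOR_WIDTH", "128", /*overwrite=*/ 1);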
>
>
> But probably the best option is to use x86 virtualization to clamp CPUID.
> Having a virtual machine image will also solve the problem of ensuring the
> whole runtime environment is the same, etc.
>
>
> Intel's Software Development Emulator
> (https://software.intel.com/en-us/articles/intel-software-development-emulator)
> can also do the same without virtualization (via binary translation), but it
> might impact performance.
>
>
> Jose

Thanks for all the help. Forcing LP_NATIVE_VECTOR_WIDTH=128 did indeed
produce deterministic results, but it slowed things down enough that I
opted to set up another machine with AVX instead.

After doing this, I ran into an odd problem: on a scene that uses a
BC1 texture (S3TC_DXT1 in OpenGL terms), I see strange artifacts on
one machine but not on the other.

Intel(R) Xeon(R) CPU E5-2673 v3 (running in a VM) shows the artifacts
(note the area around the second "D"):
http://imgur.com/a/sUPVF

Intel(R) Xeon(R) CPU E3-1275 v3 (running bare metal) shows no such artifacts:
http://imgur.com/a/mONF5

Any hunches on what could cause these kinds of artifacts? I do not
notice these problems with any of the other (uncompressed) texture
formats I've tried.

I'm using Mesa 05533ce and LLVM 62c10d6.

Thanks,

Andrew
Attachments:
llvmpipe-S3TC_DXT1-Xeon-E5-2673-v3.png (image/png):
<https://lists.freedesktop.org/archives/mesa-dev/attachments/20161129/f6e1c7f9/attachment-0002.png>
llvmpipe-S3TC_DXT1-Xeon-E3-1275-v3.png (image/png):
<https://lists.freedesktop.org/archives/mesa-dev/attachments/20161129/f6e1c7f9/attachment-0003.png>

