[Mesa-users] llvmpipe os mesa with my own llvm
Burlen Loring
burlen.loring at gmail.com
Fri May 24 16:24:46 PDT 2013
On 5/23/2013 1:09 PM, Burlen Loring wrote:
> On 5/23/2013 6:57 AM, Brian Paul wrote:
>> On 05/22/2013 04:40 PM, Burlen Loring wrote:
>>> On 5/22/2013 9:02 AM, Brian Paul wrote:
>>>> On 05/21/2013 04:43 PM, Burlen Loring wrote:
>>>>> I have recently built and installed llvmpipe OSMesa using my own
>>>>> llvm build on a new server. Using my regression tests I've observed
>>>>> that varying LP_NUM_THREADS has no effect on performance with this
>>>>> build, but it had a dramatic effect on my workstation (60 sec w/ 1
>>>>> thread down to 11 sec w/ 16 threads). The threading performance
>>>>> should be even better on the new server, since it has vastly better
>>>>> CPUs, each with 8 physical cores vs the old 4-core CPUs on my
>>>>> workstation. Also, comparing outputs of runs on my workstation and
>>>>> the server, the pixels are ~40% different, but the results look
>>>>> pretty close to the eye.
>>>>>
>>>>> I'm wondering what can explain the pixel difference and also the
>>>>> performance difference? Could it be that I'm not really using the
>>>>> llvmpipe driver, or LLVM JIT compilation for my shaders? I am sure
>>>>> llvmpipe and the OSMesa state tracker are built and installed, but
>>>>> I'm at a loss as to how to debug further. Setting
>>>>> GALLIUM_PRINT_OPTIONS does nothing. Any advice greatly appreciated.
>>>>
>>>> As a first step you can call/print glGetString(GL_RENDERER) to verify
>>>> that you're using llvmpipe. I'm assuming this is your own app.
>>>>
>>>> -Brian
>>>>
>>>
>>> OK, so far so good:
>>>
>>> 1010: GL_VENDOR: VMware, Inc.
>>> 1010: GL_VERSION: 2.1 Mesa 9.2.0 (git-1e88d14)
>>> 1010: GL_RENDERER: Gallium 0.4 on llvmpipe (LLVM 3.2, 256 bits)
>>
>
> Thanks for helping with this Brian,
>
>> As for the 40% different pixels, could you post a couple small
>> screenshots to compare? Do you know if the difference comes from
>> texturing or complex fragment shaders, etc?
> I'll send them to you off-list, as my post was rejected for being too
> large. The code uses a vertex shader for lighting and screen-space
> vector transformation, and a number of fragment shaders for
> screen-space computation and rendering. Here it's being used as part
> of our regression suite on a trivially small test dataset with 10s to
> 100s of tris.
>
>> Not sure what to say about the performance difference. If the app is
>> dominated by vertex transformation, then varying LP_NUM_THREADS might
>> not make a lot of difference.
> You mentioned that in the past; I don't think that is the case here.
> But now that you mention it, I've been wondering: why aren't vertex
> ops in the programmable pipeline threaded? Don't they fit a very
> similar pattern to fragment ops?
>
>> Have you compared other Mesa demos or apps to see how they behave?
> Superficially, by running the regression suite. On the server it looks
> very much like the old OSMesa results; certain tests have been failing
> forever with OSMesa. On the workstation I have a nearly perfect
> regression suite run, and all the old failures are cleaned up. It's
> very exciting!
>
> Since the data and code are the same on both machines, I'm expecting
> to see similar performance behavior on both, and I absolutely expect a
> pixel-for-pixel match. If a test fails, it should fail on both
> systems. Unfortunately, the server is a Cray XC30, and one doesn't
> have direct access to the compute nodes, so I can't use top to verify
> thread counts, and using gdb is a pain (though it is possible). I was
> kind of hoping that you'd say "if you didn't build llvm right, OSMesa
> would fall back to softpipe" or something along those lines.
>
I am embarrassed to say that I had not rebased my recent work onto my
branch on the server. This explains the pixel differences; they had
nothing to do with Mesa.

I'm still looking at why threading doesn't seem to be working as
expected.
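For reference, a quick way to sweep LP_NUM_THREADS over the regression binary and log wall-clock times per setting; "./render_test" is a hypothetical stand-in for your own test app, passed as the first argument:

```shell
# Sketch: time one run of an OSMesa app under each LP_NUM_THREADS value.
# Pass your own binary as $1; "./render_test" is a made-up default.
run_sweep() {
    app="${1:-./render_test}"
    for n in 1 2 4 8 16; do
        start=$(date +%s)
        LP_NUM_THREADS="$n" "$app" >/dev/null 2>&1
        end=$(date +%s)
        echo "LP_NUM_THREADS=$n elapsed=$((end - start))s"
    done
}
```

If the elapsed times barely move between 1 and 16 threads, llvmpipe's rasterizer threads likely aren't being exercised.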