[Mesa-dev] [PATCH v2] glsl: Modify exec_list to avoid strict-aliasing violations
Eero Tamminen
eero.t.tamminen at intel.com
Wed Jul 1 05:32:21 PDT 2015
Hi,
On 06/25/2015 04:56 PM, Davin McCall wrote:
> On 25/06/15 14:32, Eero Tamminen wrote:
>> On 06/25/2015 03:53 PM, Davin McCall wrote:
>>> On 25/06/15 12:27, Eero Tamminen wrote:
>>>> On 06/25/2015 02:48 AM, Davin McCall wrote:
>>>>> In terms of performance:
>>>>>
>>>>> (export LIBGL_ALWAYS_SOFTWARE=1; time glmark2)
>>>>
>>>> For Intel driver, INTEL_NO_HW=1 could be used.
>>>>
>>>> (Do other drivers have something similar?)
>>>
>>> Unfortunately I do not have an Intel display set up.
>>
>> If you can get libdrm to use libdrm_intel, you can fake the desired
>> Intel HW to Mesa with the INTEL_DEVID_OVERRIDE environment variable.
>>
>> Similarly to INTEL_NO_HW, it prevents batches from being submitted
>> to GPU.
>
> Ok, thanks, I'll look into this shortly. Any pointers on how to get
> libdrm to use libdrm_intel?
I think you need to change the libdrm code; I don't think there's any
built-in support for that.
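Once libdrm_intel is in use, the override itself is just an environment
variable on the benchmark invocation; a rough sketch (the device id here is
only a placeholder for whichever Intel PCI id you want to fake):

  (export INTEL_DEVID_OVERRIDE=<devid>; time glmark2)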
>> When testing 3D driver CPU side optimizations, one should either use a
>> test specifically written for measuring driver CPU overhead (proprietary
>> benchmarks have such tests) or force the test case to be CPU bound, e.g.
>> with INTEL_NO_HW.
>
> Understood. The 'user' time divided by the glmark2 score should however
> give a relative indication of the CPU processing required per frame, right?
Not necessarily. When the CPU isn't fully utilized, it gets downclocked,
and getting the speed back up can take some time. This can be pretty bad
for programs that have fairly equal CPU & GPU loads. You really should
select a test case that is completely CPU bound, or make it so.
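For example, a run forced CPU bound with INTEL_NO_HW and timed the same way
as the software rasterizer example above (just a sketch of the idea):

  (export INTEL_NO_HW=1; time glmark2)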
When you want to measure performance impact, you want to avoid power
management affecting your results. In the worst case you can see improved
performance even though you increased CPU load, just because the CPU
happened to be running at a higher frequency at that moment.
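One way to keep that noise out of the results, assuming a Linux machine
that exposes the cpufreq sysfs interface, is to pin the scaling governor to
"performance" for the duration of the run, e.g.:

  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      echo performance | sudo tee $g
  done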
- Eero