[Mesa-dev] [PATCH] radeonsi: enable 32-bit denormals on VI+

Roland Scheidegger sroland at vmware.com
Wed Jan 11 21:00:37 UTC 2017


Am 11.01.2017 um 21:08 schrieb Samuel Pitoiset:
> 
> 
> On 01/11/2017 07:00 PM, Roland Scheidegger wrote:
>> I don't think there's any glsl, es or otherwise, specification which
>> would require denorms (since obviously lots of hw can't do it, d3d10
>> forbids them), with any precision qualifier. Hence these look like bugs
>> of the test suite to me?
>> (Irrespective if it's a good idea or not to enable denormals, which I
>> don't really know.)
> 
> That test works on NVIDIA hw (both with blob and nouveau) and IIRC it
> also works on Intel hw. I don't think it's buggy there.
The question then is why it needs denorms on radeons...

Roland


> 
>>
>> Roland
>>
>>
>> Am 11.01.2017 um 18:29 schrieb Samuel Pitoiset:
>>> Only VI can do 32-bit denormals at full rate while previous
>>> generations can do it only for 64-bit and 16-bit.
>>>
>>> This fixes some dEQP tests with the highp type qualifier.
>>>
>>> Bugzilla:
>>> https://bugs.freedesktop.org/show_bug.cgi?id=99343
>>> Signed-off-by: Samuel Pitoiset <samuel.pitoiset at gmail.com>
>>> ---
>>>  src/gallium/drivers/radeonsi/si_shader.c | 11 ++++++++---
>>>  1 file changed, 8 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/src/gallium/drivers/radeonsi/si_shader.c b/src/gallium/drivers/radeonsi/si_shader.c
>>> index 5dfbd6603a..e9cb11883f 100644
>>> --- a/src/gallium/drivers/radeonsi/si_shader.c
>>> +++ b/src/gallium/drivers/radeonsi/si_shader.c
>>> @@ -6361,8 +6361,10 @@ int si_compile_llvm(struct si_screen *sscreen,
>>>
>>>      si_shader_binary_read_config(binary, conf, 0);
>>>
>>> -    /* Enable 64-bit and 16-bit denormals, because there is no performance
>>> -     * cost.
>>> +    /* Enable denormals when there is no performance cost.
>>> +     *
>>> +     * Only VI can do 32-bit denormals at full rate while previous
>>> +     * generations can do it only for 64-bit and 16-bit.
>>>       *
>>>       * If denormals are enabled, all floating-point output modifiers are
>>>       * ignored.
>>> @@ -6373,7 +6375,10 @@ int si_compile_llvm(struct si_screen *sscreen,
>>>       *   have to stop using those.
>>>       * - SI & CI would be very slow.
>>>       */
>>> -    conf->float_mode |= V_00B028_FP_64_DENORMS;
>>> +    if (sscreen->b.chip_class >= VI)
>>> +        conf->float_mode |= V_00B028_FP_ALL_DENORMS;
>>> +    else
>>> +        conf->float_mode |= V_00B028_FP_64_DENORMS;
>>>
>>>      FREE(binary->config);
>>>      FREE(binary->global_symbol_offsets);
>>>
>>


