[RFC PATCH v3 1/6] drm/doc: Color Management and HDR10 RFC

Harry Wentland harry.wentland at amd.com
Mon Aug 16 12:40:14 UTC 2021



On 2021-08-16 7:10 a.m., Brian Starkey wrote:
> On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
>> Hello Brian,
>> (+Uma in cc)
>>
>> Thanks for your comments. Let me try to fill in for Harry to keep the design
>> discussion going. Please find my comments inline.
>>

Thanks, Shashank. I'm back at work now. Had to cut my trip short
due to rising Covid cases and concern for my kids.

>> On 8/2/2021 10:00 PM, Brian Starkey wrote:
>>>
> 
> -- snip --
> 
>>>
>>> Android doesn't blend in linear space, so any API shouldn't be built
>>> around an assumption of linear blending.
>>>

This seems incorrect, but I guess ultimately the OS is in control of
this. If we want to allow blending in non-linear space with the new
API we would need to either describe the blending space or the
pre/post-blending gamma/de-gamma.

Any idea if this blending behavior in Android might get changed in
the future?
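
To make the difference concrete, here's a quick userspace sketch
(purely illustrative) of a 50/50 alpha blend of white over black,
done once directly on sRGB-encoded values and once in linear light:

#include <math.h>
#include <stdio.h>

/* sRGB decode (EOTF) and encode (inverse EOTF), scalar form */
static double srgb_eotf(double v)
{
	return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

static double srgb_inv_eotf(double v)
{
	return v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
}

int main(void)
{
	double fg = 1.0, bg = 0.0, alpha = 0.5;

	/* blend directly on encoded values (non-linear blending) */
	double nonlinear = alpha * fg + (1.0 - alpha) * bg;

	/* decode to linear light, blend, re-encode */
	double linear = srgb_inv_eotf(alpha * srgb_eotf(fg) +
				      (1.0 - alpha) * srgb_eotf(bg));

	printf("non-linear: %f, linear: %f\n", nonlinear, linear);
	return 0;
}

The encoded results come out around 0.50 vs. 0.74, so switching
between the two blending models produces a visible shift. That's why
the API would need to know, or dictate, which space blending happens
in.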

>>
>> If I am not wrong, we still need linear buffers for accurate gamut
>> transformation (sRGB -> BT2020 or the other way around), don't we?
> 
> Yeah, you need to transform the buffer to linear for color gamut
> conversions, but then back to non-linear (probably sRGB or gamma 2.2)
> for actual blending.
> 
> This is why I'd like to have the per-plane "OETF/GAMMA" separate
> from tone-mapping, so that the composition transfer function is
> independent.
> 
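
If I follow, the per-plane ordering you have in mind is roughly the
following (a strawman only; none of these names are real properties):

/*
 * Strawman -- the point is the ordering: the gamut CSC needs linear
 * input, and the final OETF is picked to match the compositor's
 * blending space, independent of any tone mapping.
 */
enum plane_color_stage {
	STAGE_EOTF,	/* per-plane degamma: decode to linear light */
	STAGE_GAMUT_CSC,/* 3x3 matrix, e.g. BT.709 -> BT.2020, on linear */
	STAGE_TONEMAP,	/* luminance scaling/mapping, its own block */
	STAGE_OETF,	/* re-encode into the common blending space */
};
/* after STAGE_OETF the CRTC blends all planes in that common space */
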
>>
> 
> ...
> 
>>>> +
>>>> +Tonemapping in this case could be a simple nits value or `EDR`_ to describe
>>>> +how to scale the :ref:`SDR luminance`.
>>>> +
>>>> +Tonemapping could also include the ability to use a 3D LUT which might be
>>>> +accompanied by a 1D shaper LUT. The shaper LUT is required in order to
>>>> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
>>>> +in perceptual (non-linear) space, so as to spread the limited
>>>> +entries evenly across the perceived space.
>>>
>>> Some terminology care may be needed here - up until this point, I
>>> think you've been talking about "tonemapping" being luminance
>>> adjustment, whereas I'd expect 3D LUTs to be used for gamut
>>> adjustment.
>>>
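
On the shaper LUT specifically: it's not about gamut, it's about
where the sparse 3D LUT entries land. A rough sketch of the lookup
(illustrative C, not driver code); without the shaper, 17 uniformly
spaced entries in linear light would spend almost all their
resolution on highlights:

#define DIM 17

static float shaper_lut[256];          /* 1D shaper: linear -> perceptual */
static float lut3d[DIM][DIM][DIM][3];  /* sparse 3D LUT, RGB entries */

static void lut3d_lookup(const float in[3], float out[3])
{
	float shaped[3];
	int idx[3];
	int i;

	for (i = 0; i < 3; i++) {
		/* shaper first: index the 3D LUT in perceptual space */
		shaped[i] = shaper_lut[(int)(in[i] * 255.0f)];
		/* nearest entry for brevity; real HW interpolates */
		idx[i] = (int)(shaped[i] * (DIM - 1) + 0.5f);
	}

	for (i = 0; i < 3; i++)
		out[i] = lut3d[idx[0]][idx[1]][idx[2]][i];
}
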
>>
>> IMO, what Harry wants to say here is that which HW block gets picked and
>> how tone mapping is achieved can be a very driver/HW-specific thing,
>> where one driver can use a 1D/fixed-function block, whereas another one
>> can choose more complex HW like a 3D LUT for the same.
>>
>> The DRM layer needs to define only the property to hook the API into the
>> core driver, and the driver can decide which HW to pick and configure
>> for the job. So when we have a tone-mapping property, we might not have
>> a separate 3D-LUT property, or the driver may fail atomic_check() if
>> both of them are programmed for different usages.
> 
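
The atomic_check() conflict Shashank mentions could look roughly like
this in a driver (sketch only; the xyz driver, its plane state struct,
and the tonemap_mode/lut_3d fields are all made up):

static int xyz_plane_atomic_check(struct drm_plane *plane,
				  struct drm_atomic_state *state)
{
	struct drm_plane_state *new_state =
		drm_atomic_get_new_plane_state(state, plane);
	struct xyz_plane_state *s = to_xyz_plane_state(new_state);

	/* one 3D LUT block: it can do tone mapping or a gamut LUT, not both */
	if (s->tonemap_mode != TONEMAP_NONE && s->lut_3d) {
		drm_dbg_atomic(plane->dev,
			       "3D LUT busy with tone mapping, can't also apply gamut LUT\n");
		return -EINVAL;
	}

	return 0;
}
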
> I still think that directly exposing the HW blocks and their
> capabilities is the right approach, rather than a "magic" tonemapping
> property.
> 
> Yes, userspace would need to have a good understanding of how to use
> that hardware, but if the pipeline model is standardised that's the
> kind of thing a cross-vendor library could handle.
> 

One problem with cross-vendor libraries is that they might struggle
to really be cross-vendor when it comes to unique HW behavior, or
they might pick sub-optimal configurations because they aren't aware
of the power impact of a given configuration. What's optimal can
differ greatly between different HW.

We're seeing this problem with "universal" planes as well.

> It would definitely be good to get some compositor opinions here.
> 

For this we'll probably have to wait for Pekka's input when he's
back from his vacation.

> Cheers,
> -Brian
> 


