The state of Quantization Range handling

Sebastian Wick sebastian.wick at redhat.com
Mon Nov 14 23:11:56 UTC 2022


There are still regular bug reports about monitors (sinks) and sources
disagreeing about the quantization range of the pixel data, in
particular sources sending full range data when the sink expects
limited range. From a user space perspective this is all hidden in the
kernel: we send full range data to the kernel and hope it does the
right thing, but as the bug reports show, some combinations of
displays and drivers result in problems.

In general, the handling of the quantization range on Linux is
neither defined nor documented. User space sends full range data
because that is what seems to work most of the time, but technically
this is all undefined and user space cannot fix those issues. Some
compositors have resorted to giving users the option to choose the
quantization range, but this really should only be necessary for
straight up broken hardware.

The quantization range can be controlled explicitly via AVI
InfoFrames or HDMI General Control Packets. This is the ideal case:
when the source uses them, there is not a lot that can go wrong. Not
all displays support those explicit controls, however, in which case
the chosen video format (IT, CE, SD; details in CTA-861-H 5.1)
determines which quantization range the sink expects.
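
To illustrate the fallback, the rule from CTA-861 boils down to
something like the sketch below. The enum and function names here are
made up purely for illustration; the kernel encodes this rule in a
helper along these lines (drm_default_rgb_quant_range()).

enum quant_range { QUANT_RANGE_LIMITED, QUANT_RANGE_FULL };
enum video_format_class { FORMAT_CE, FORMAT_SD, FORMAT_IT };

/* Default range a sink expects when there is no explicit control:
 * CE and SD video formats default to limited range, IT formats
 * default to full range (CTA-861-H 5.1, paraphrased). */
static enum quant_range default_quant_range(enum video_format_class fmt)
{
        switch (fmt) {
        case FORMAT_CE:
        case FORMAT_SD:
                return QUANT_RANGE_LIMITED; /* e.g. 1080p60, a CE mode */
        case FORMAT_IT:
        default:
                return QUANT_RANGE_FULL;    /* e.g. typical PC modes */
        }
}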

This means we have to expect that sometimes we have to send limited
range and sometimes full range content. The big question the docs do
not answer, however, is: who is responsible for making sure the data
is in the correct range, the kernel or user space?

If it's the kernel: does user space supply full range or limited
range content? Each option has a disadvantage. If we send full range
content and the driver scales it down to limited range, we can't use
the out-of-range bits to transfer information. If we send limited
range content and the driver scales it up, we lose information.

Either way, this must be documented. My suggestion is to say that the
kernel always expects full range data as input and is responsible for
scaling it down whenever the sink expects limited range data.
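
For 8 bit RGB that scaling is just the usual mapping of [0, 255] onto
[16, 235]. A minimal sketch (not an existing kernel function; hardware
would normally do this in its output/CSC stage):

#include <stdint.h>

/* Scale one 8 bit full range component ([0, 255]) to limited range
 * ([16, 235]), with rounding. Going the other way maps 220 input
 * levels onto 256 output levels, which is why scaling limited range
 * content up loses information. */
static uint8_t full_to_limited_8bpc(uint8_t full)
{
        return 16 + (uint8_t)((full * 219 + 127) / 255);
}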

Another problem is that some displays do not behave correctly. It
must be possible to override the kernel when the user detects such a
situation. This override then controls whether the driver converts
the full range data coming from the client (Default, Force Limited,
Force Full); it does not try to control what range the sink expects.
Let's call this the Quantization Range Override property, which
should be implemented by all drivers.
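
To make that concrete, a compositor could set such a property through
libdrm roughly as sketched below. The property name and the values it
takes here simply follow the proposal above; nothing with this name
exists in the current UAPI.

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Set a connector property by name. The name and the enum values
 * ("Default", "Force Limited", "Force Full") are the ones proposed
 * above and are not part of today's UAPI. */
static int set_connector_prop(int fd, uint32_t connector_id,
                              const char *name, uint64_t value)
{
        drmModeObjectProperties *props;
        int ret = -1;

        props = drmModeObjectGetProperties(fd, connector_id,
                                           DRM_MODE_OBJECT_CONNECTOR);
        if (!props)
                return -1;

        for (uint32_t i = 0; i < props->count_props; i++) {
                drmModePropertyRes *prop =
                        drmModeGetProperty(fd, props->props[i]);

                if (!prop)
                        continue;
                if (strcmp(prop->name, name) == 0)
                        ret = drmModeObjectSetProperty(fd, connector_id,
                                                       DRM_MODE_OBJECT_CONNECTOR,
                                                       prop->prop_id, value);
                drmModeFreeProperty(prop);
        }

        drmModeFreeObjectProperties(props);
        return ret;
}

/* e.g. "Force Full", assuming it ends up as enum value 2:
 * set_connector_prop(fd, connector_id,
 *                    "Quantization Range Override", 2); */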

All drivers should make sure their behavior is correct:

* choose the correct default quantization range for the selected mode
* whenever explicit control is available, use it and set the
quantization range to full
* make sure that the hardware converts from full range to limited
range whenever the sink expects limited range
* implement the Quantization Range Override property (see the sketch
after this list)
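
As a sketch of how the last point could look on the kernel side, the
property could be created and attached roughly like this. Again, the
name and value names just mirror the proposal; nothing like this
exists in the tree today.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <drm/drm_connector.h>
#include <drm/drm_property.h>

/* Values mirror the proposal: Default, Force Limited, Force Full. */
static const struct drm_prop_enum_list quant_range_override_list[] = {
        { 0, "Default" },
        { 1, "Force Limited" },
        { 2, "Force Full" },
};

static int attach_quant_range_override(struct drm_connector *connector)
{
        struct drm_property *prop;

        prop = drm_property_create_enum(connector->dev, 0,
                                        "Quantization Range Override",
                                        quant_range_override_list,
                                        ARRAY_SIZE(quant_range_override_list));
        if (!prop)
                return -ENOMEM;

        drm_object_attach_property(&connector->base, prop, 0);
        return 0;
}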

I'm volunteering for the documentation, UAPI and maybe even the drm
core parts if there is willingness to tackle the issue.

Appendix A: Broadcast RGB property

A few drivers already implement the Broadcast RGB property to control
the quantization range. However, it is pointless: it can be set to
Auto, Full and Limited when the sink supports explicitly setting the
quantization range. The driver expects full range content and converts
it to limited range content when necessary. Selecting Limited never
makes any sense: the out-of-range bits can't be used because the input
is full range. Selecting Auto never makes sense either: relying on the
default quantization range is risky because sinks often get it wrong,
and, as we established, there is no reason to select limited range if
it is not necessary. The Limited and Full options are also not
suitable as an override because the property is not available if the
sink does not support explicitly setting the quantization range.
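
For reference, where a driver exposes it the property shows up as a
plain connector enum; on i915 at least the value names are
"Automatic", "Full" and "Limited 16:235", which is what I call
Auto/Full/Limited above. Dumping it with libdrm looks roughly like
this:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Print the enum values of the "Broadcast RGB" property, if the
 * connector exposes it at all. Purely for inspection. */
static void dump_broadcast_rgb(int fd, uint32_t connector_id)
{
        drmModeObjectProperties *props =
                drmModeObjectGetProperties(fd, connector_id,
                                           DRM_MODE_OBJECT_CONNECTOR);
        if (!props)
                return;

        for (uint32_t i = 0; i < props->count_props; i++) {
                drmModePropertyRes *prop =
                        drmModeGetProperty(fd, props->props[i]);

                if (!prop)
                        continue;
                if (strcmp(prop->name, "Broadcast RGB") == 0) {
                        for (int j = 0; j < prop->count_enums; j++)
                                printf("%llu: %s\n",
                                       (unsigned long long)prop->enums[j].value,
                                       prop->enums[j].name);
                }
                drmModeFreeProperty(prop);
        }

        drmModeFreeObjectProperties(props);
}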


