[RFC PATCH 1/3] drm/color: Add RGB Color encodings
Pekka Paalanen
ppaalanen at gmail.com
Fri Apr 30 09:04:02 UTC 2021
On Mon, 26 Apr 2021 22:08:55 +0300
Ville Syrjälä <ville.syrjala at linux.intel.com> wrote:
> On Mon, Apr 26, 2021 at 02:56:26PM -0400, Harry Wentland wrote:
> > On 2021-04-26 2:07 p.m., Ville Syrjälä wrote:
> > > On Mon, Apr 26, 2021 at 01:38:50PM -0400, Harry Wentland wrote:
> > >> From: Bhawanpreet Lakha <Bhawanpreet.Lakha at amd.com>
> > >>
> > >> Add the following color encodings
> > >> - RGB versions for BT601, BT709, BT2020
> > >> - DCI-P3: Used for digital movies
> > >>
> > >> Signed-off-by: Bhawanpreet Lakha <Bhawanpreet.Lakha at amd.com>
> > >> Signed-off-by: Harry Wentland <harry.wentland at amd.com>
> > >> ---
> > >> drivers/gpu/drm/drm_color_mgmt.c | 4 ++++
> > >> include/drm/drm_color_mgmt.h | 4 ++++
> > >> 2 files changed, 8 insertions(+)
> > >>
> > >> diff --git a/drivers/gpu/drm/drm_color_mgmt.c b/drivers/gpu/drm/drm_color_mgmt.c
> > >> index bb14f488c8f6..a183ebae2941 100644
> > >> --- a/drivers/gpu/drm/drm_color_mgmt.c
> > >> +++ b/drivers/gpu/drm/drm_color_mgmt.c
> > >> @@ -469,6 +469,10 @@ static const char * const color_encoding_name[] = {
> > >> [DRM_COLOR_YCBCR_BT601] = "ITU-R BT.601 YCbCr",
> > >> [DRM_COLOR_YCBCR_BT709] = "ITU-R BT.709 YCbCr",
> > >> [DRM_COLOR_YCBCR_BT2020] = "ITU-R BT.2020 YCbCr",
> > >> + [DRM_COLOR_RGB_BT601] = "ITU-R BT.601 RGB",
> > >> + [DRM_COLOR_RGB_BT709] = "ITU-R BT.709 RGB",
> > >> + [DRM_COLOR_RGB_BT2020] = "ITU-R BT.2020 RGB",
> > >> + [DRM_COLOR_P3] = "DCI-P3",
> > >
> > > These are a totally different thing than the YCbCr stuff.
> > > The YCbCr stuff just specifies the YCbCr<->RGB converison matrix,
> > > whereas these are I guess supposed to specify the primaries/whitepoint?
> > > But without specifying what we're converting *to* these mean absolutely
> > > nothing. Ie. I don't think they belong in this property.
> > >
> >
> > If this is the intention I don't see it documented.
> >
> > I might have overlooked something, but do we have a way today to
> > explicitly specify what format the YCbCr color encodings convert *to*?
>
> These just specify which YCbCr<->RGB matrix to use, as specified
> in the relevant standards. The primaries/whitepoint/etc. don't
> change at all.
Ville is correct here.
> > Would that be a combination of the output color encoding specified via
> > colorspace_property and the color space encoded in the primaries and
> > whitepoint of the hdr_output_metadata?
Conversion between YCbCr and RGB is not a color space conversion in the
sense of color spaces (chromaticity of primaries and white point). It
is more like a color model conversion, or a color encoding conversion.
A benefit of YCbCr is that you can use less bandwidth to transmit the
same image and people won't realise that you lost anything: chroma
sub-sampling. Sub-sampling with RGB wouldn't work that well. It's a
lossy compression technique, but different standards use different
compression algorithms (well, matrices) to balance what gets lost.
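To make the "different matrices" point concrete, here is a rough sketch
(illustration only, not kernel code) of the full-range decode step;
limited-range variants add offsets and scaling on top, and I hope I am
remembering the coefficients from the standards correctly:

/*
 * Full-range YCbCr -> RGB, with Y in [0, 1] and Cb/Cr in [-0.5, 0.5].
 * Only the matrix coefficients differ between BT.601 and BT.709;
 * primaries, white point and dynamic range are untouched.
 */
struct rgb { double r, g, b; };

static struct rgb ycbcr_to_rgb_bt601(double y, double cb, double cr)
{
        struct rgb c = {
                .r = y + 1.402000 * cr,
                .g = y - 0.344136 * cb - 0.714136 * cr,
                .b = y + 1.772000 * cb,
        };
        return c;
}

static struct rgb ycbcr_to_rgb_bt709(double y, double cb, double cr)
{
        struct rgb c = {
                .r = y + 1.574800 * cr,
                .g = y - 0.187324 * cb - 0.468124 * cr,
                .b = y + 1.855600 * cb,
        };
        return c;
}

The only difference between the two is the set of constants, which is
exactly what the COLOR_ENCODING property selects.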
> Those properties only affect the infoframes. They don't apply any
> color processing to the data.
Indeed.
An example:
You start with YUV video you want to display. That means you have YCbCr
data using color space X and EOTF Foo. When you convert that to RGB,
the RGB data still has color space X and EOTF Foo. Then you use the
infoframe to tell your monitor that the data is in color space X and
uses EOTF Foo.
At no point in that pipeline is there a color space transformation,
until the data actually reaches the monitor, which may do magic things
to map color space X and EOTF Foo into what it can actually produce as
light.
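As a sketch of that flow (all the types here are hypothetical, purely
for illustration), the conversion step re-encodes the pixels but
deliberately leaves the color space and EOTF tags alone:

enum colorspace { COLORSPACE_X };
enum eotf { EOTF_FOO };

struct image {
        enum colorspace colorspace;     /* stays X through the pipeline */
        enum eotf eotf;                 /* stays Foo through the pipeline */
        void *pixels;                   /* YCbCr before, RGB after */
};

static void ycbcr_to_rgb_inplace(struct image *img)
{
        /* re-encode img->pixels from YCbCr to RGB here;
         * img->colorspace and img->eotf are intentionally not touched
         */
}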
Or, the traditional way: you don't care what color space or EOTF your
video uses or what your monitor expects. You just hope they are close
enough that the result looks good and people don't see anything wrong.
Close your eyes and sing a happy song. With HDR and WCG, that totally
breaks down.
> > Fundamentally I don't see how the use of this property differs, whether
> > you translate from YCbCr or from RGB. In either case you're converting
> > from the defined input color space and pixel format to an output color
> > space and pixel format.
>
> The gamut does not change when you do YCbCr<->RGB conversion.
Right. Neither does dynamic range.
> > > The previous proposals around this topic have suggested a new
> > > property to specify a conversion matrix either explicitly, or
> > > via a separate enum (which would specify both the src and dst
> > > colorspaces). I've always argued the enum approach is needed
> > > anyway since not all hardware has a programmable matrix for
> > > this. But a fully programmable matrix could be nice for tone
> > > mapping purposes/etc, so we may want to make sure both are
> > > possible.
> > >
> > > As for the transfer func, the proposals so far have mostly just
> > >> been to expose programmable degamma/gamma LUTs for each plane.
> > > But considering how poor the current gamma uapi is we've thrown
> > > around some ideas how to allow the kernel to properly expose the
> > > hw capabilities. This is one of those ideas:
> > > https://lists.freedesktop.org/archives/dri-devel/2019-April/212886.html
> > > I think that basic idea could also be extended to allow fixed
> > > curves in case the hw doesn't have a fully programmable LUT. But
> > > dunno if that's relevant for your hw.
> > >
> >
> > The problem with exposing gamma, whether per-plane or per-crtc, is that
> > it is hard to define an API that works for all the HW out there. The
> > capabilities for different HW differ a lot, not just between vendors but
> > also between generations of a vendor's HW.
> >
> > Another reason I'm proposing to define the color space (and gamma) of a
> > plane is to make this explicit. Up until now the color space and gamma
> > of a plane or framebuffer have not been well defined, which leads to
> > drivers assuming the color space and gamma of a buffer (for blending and
> > other purposes) and might lead to sub-optimal outcomes.
>
> The current state is that things just get passed through as is
> (apart from the crtc LUTs/CTM).
Right. I would claim that the kernel does not even want to know about
color spaces or EOTFs. Instead, the kernel should offer userspace ways
to program the hardware to do the color *transformations* it wants to
do. The color_encoding KMS property is already like this: it defines
the conversion matrix, not what the input or output are.
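For reference, this is roughly how userspace already picks the matrix
through the existing per-plane property. A sketch using libdrm, with
error handling omitted; fd and plane_id are assumed to be known and
DRM_CLIENT_CAP_ATOMIC to be enabled:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch: select the BT.709 YCbCr matrix on a plane through the
 * existing COLOR_ENCODING property.
 */
static int set_bt709_encoding(int fd, uint32_t plane_id)
{
        drmModeObjectProperties *props;
        drmModeAtomicReq *req;
        uint32_t prop_id = 0;
        uint64_t enum_val = 0;
        uint32_t i;
        int j, ret;

        props = drmModeObjectGetProperties(fd, plane_id,
                                           DRM_MODE_OBJECT_PLANE);
        for (i = 0; i < props->count_props; i++) {
                drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);

                if (strcmp(p->name, "COLOR_ENCODING") == 0) {
                        prop_id = p->prop_id;
                        for (j = 0; j < p->count_enums; j++)
                                if (!strcmp(p->enums[j].name,
                                            "ITU-R BT.709 YCbCr"))
                                        enum_val = p->enums[j].value;
                }
                drmModeFreeProperty(p);
        }
        drmModeFreeObjectProperties(props);

        req = drmModeAtomicAlloc();
        drmModeAtomicAddProperty(req, plane_id, prop_id, enum_val);
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
        drmModeAtomicFree(req);

        return ret;
}

Note that the enum strings are exactly the ones from
color_encoding_name[], and they name a transformation, not what the
content is.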
Infoframes being sent to displays are a different thing. They just tell
the monitor what kind of image data userspace has configured KMS to
send it, but they do not change what KMS actually does with pixels.
Also, please, let's talk about EOTF and EOTF^-1 instead of gamma when
appropriate.
Electro-optical transfer function (EOTF) is very clear in what it
means: it is the mapping from electrical values (the non-linear pixel
values you are used to, good for storage and transmission) to optical
values (values that are linear in light intensity, therefore good for
things like blending and filtering).
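For example, the sRGB EOTF (used here purely as an illustration) maps
an electrical value to an optical one:

#include <math.h>

/* sRGB EOTF: electrical (non-linear) value in [0.0, 1.0] to optical
 * (linear-light) value in [0.0, 1.0].
 */
static double srgb_eotf(double electrical)
{
        if (electrical <= 0.04045)
                return electrical / 12.92;
        return pow((electrical + 0.055) / 1.055, 2.4);
}

Blending and filtering should happen on the optical values; the inverse
function takes you back to electrical values for storage or scanout.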
Gamma is kind of the same, but when you use it in sentences it easily
becomes ambiguous. If you have "gamma corrected pixels", what does that
mean? Are they electrical values, optical values, or maybe electrical
values with a different EOTF? Which EOTF?
However, in KMS "gamma LUT" is kind of standardised terminology, and it
does not need to be an EOTF or inverse EOTF. One can use a gamma LUT
as an EETF (electro-electrical transfer function), mapping from one
EOTF to another, e.g. from data encoded with the content's EOTF to
data encoded with the monitor's EOTF.
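As a sketch of that (the LUT size and the two curves are just examples
I picked, nothing mandated), filling the existing GAMMA_LUT blob so it
acts as an EETF from sRGB-encoded content to a display expecting a pure
2.2 power curve could look like:

#include <math.h>
#include <stdint.h>
#include <xf86drmMode.h>        /* struct drm_color_lut via drm_mode.h */

/* Fill a gamma LUT so it decodes sRGB to linear light and re-encodes
 * with a pure 2.2 power curve: an EETF, not an EOTF.
 */
static void fill_eetf_lut(struct drm_color_lut *lut, unsigned int size)
{
        unsigned int i;

        for (i = 0; i < size; i++) {
                double e = (double)i / (size - 1);        /* sRGB electrical */
                double o = e <= 0.04045 ? e / 12.92 :
                           pow((e + 0.055) / 1.055, 2.4); /* optical */
                double out = pow(o, 1.0 / 2.2);           /* re-encode */
                uint16_t v = (uint16_t)lround(out * 0xffff);

                lut[i].red = lut[i].green = lut[i].blue = v;
        }
}

The filled table would then be wrapped in a property blob (e.g. with
drmModeCreatePropertyBlob()) and attached to the CRTC GAMMA_LUT
property, which is the same uapi the discussion above is about.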
Thanks,
pq