[PATCH RFC 3/6] drm: Plane YCbCr to RGB conversion related properties

Ville Syrjälä ville.syrjala at linux.intel.com
Mon Apr 24 15:13:04 UTC 2017


On Mon, Apr 24, 2017 at 04:21:39PM +0300, Jyri Sarha wrote:
> On 04/21/17 16:52, Ville Syrjälä wrote:
> > On Fri, Apr 21, 2017 at 04:39:16PM +0300, Jyri Sarha wrote:
> >> On 04/21/17 14:17, Ville Syrjälä wrote:
> >>>> +static char *ycbcr_to_rgb_mode_name[] = {
> >>>> +	[DRM_PLANE_YCBCR_BT601_LIM_TO_RGB_BT601_FULL] =
> >>>> +		"YCbCr BT.601 limited range TO RGB BT.601 full range",
> >>>> +	[DRM_PLANE_YCBCR_BT601_FULL_TO_RGB_BT601_FULL] =
> >>>> +		"YCbCr BT.601 full range TO RGB BT.601 full range",
> >>>> +	[DRM_PLANE_YCBCR_BT709_LIM_TO_RGB_BT709_FULL] =
> >>>> +		"YCbCr BT.709 limited range TO RGB BT.709 full range",
> >>>> +	[DRM_PLANE_YCBCR_BT2020_LIM_TO_RGB_BT2020_FULL] =
> >>>> +		"YCbCr BT.2020 limited range TO RGB BT.2020 full range",
> >>>> +	[DRM_PLANE_YCBCR_BT601_LIM_TO_RGB_BT709_FULL] =
> >>>> +		"YCbCr BT.601 limited range TO RGB BT.709 full range",
> >>> We probably don't want to conflate the YCbCr->RGB part with the colorspace
> >>> conversion because the YCbCr->RGB part should be performed on gamma encoded
> >>> data and the colorspace conversion on linear data. So we need a degamma
> >>> stage in between. At least that seemed to be the general consensus after
> >>> the last round of mails on this topic.
> >>>
> >>
> >> I do not really have the expertise to argue with that. I merely copied
> >> the idea from the mail thread I referred to in the cover letter.
> >> However, there are several display HWs out there that do not have all
> >> the bolts and knobs to do the color-space conversion in exactly the
> >> ideal order, omap DSS being one of them.
> > 
> > Yeah. Intel hardware is in the same boat for the time being. On current
> > hw I think we can only really expose the YCbCr->RGB and degamma stages.
> > 
> > On some limited set of platforms we could expose a blob as well, and I
> > suppose it would then be possible to use it for color space conversion
> > if you ignore gamma and/or only deal with linear RGB data. But it's such
> > a limited subset of hardware for us that I don't think I'm interested
> > in exposing it.
> > 
> > In the future we should be getting a more fully fledged pipeline.
> > 
> 
> If we go forward with these patches, then maybe it is better to stick
> with the enum options that we are sure of, e.g. BT.601, BT.709, and
> BT.2020 to full range RGB. It is easy enough to add more enum values in
> the future.
> 
> What is more important is the naming approach: do we keep the
> conversion mode names explicit about the output format, or do we just
> specify implicitly that the output is always full range (non-linear) RGB?
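
Something along those lines could look roughly like this, if we trim it
down and make full range (non-linear) RGB output implicit. This is only
a sketch; all of the names below are made up and still up for
bikeshedding:

enum drm_plane_ycbcr_encoding {
	/* output is implicitly full range (non-linear) RGB */
	DRM_PLANE_YCBCR_BT601_LIMITED,
	DRM_PLANE_YCBCR_BT601_FULL,
	DRM_PLANE_YCBCR_BT709_LIMITED,
	DRM_PLANE_YCBCR_BT709_FULL,
	DRM_PLANE_YCBCR_BT2020_LIMITED,
	DRM_PLANE_YCBCR_BT2020_FULL,
};

static const char * const ycbcr_encoding_name[] = {
	[DRM_PLANE_YCBCR_BT601_LIMITED]  = "YCbCr BT.601 limited range",
	[DRM_PLANE_YCBCR_BT601_FULL]     = "YCbCr BT.601 full range",
	[DRM_PLANE_YCBCR_BT709_LIMITED]  = "YCbCr BT.709 limited range",
	[DRM_PLANE_YCBCR_BT709_FULL]     = "YCbCr BT.709 full range",
	[DRM_PLANE_YCBCR_BT2020_LIMITED] = "YCbCr BT.2020 limited range",
	[DRM_PLANE_YCBCR_BT2020_FULL]    = "YCbCr BT.2020 full range",
};
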
> 
> >>
> >>> After staring at the v4l docs on this stuff I kinda like their
> >>> "encoding" terminology to describe the YCbCr->RGB part, so I'm now a
> >>> little partial to calling the prop something like YCBCR_ENCODING. OTOH
> >>
> >> I guess this property should be called YCBCR_DECODING.
> > 
> > Only if you think of it as a verb.
> > 
> 
> No strong opinions here. I am fine with any of the names that have been
> suggested so far.
> 
> >>
> >>> if we want to expose the raw matrix as a blob then maybe calling it a
> >>> CSC might be better. Not sure there's much point in exposing it though.
> >>
> >> In my first version it was called just CSC, but then I wanted to make it
> >> explicit what this CSC was used for, to avoid mixing up the YCbCr
> >> decoding matrix with the YCbCr encoding matrix. At least on OMAP DSS we
> >> have pieces of HW that can do only one or the other, e.g. the offset
> >> calculations are supported only in one direction.
> > 
> > Are you planning to do RGB->YCbCr conversion in the plane as well? I
> > think we'll be only doing that at crtc/connector level.
> > 
> 
> No, but we need such a property for CRTC output, and there we certainly
> need to expose the CSC, because we do not have a CTM (before gamma)
> matrix and may have to use the output CSC in its place.

We're sort of in the same boat. We have just one matrix currently, but
the gamma can be applied either before it, after it, or both. So mixing
CTM and YCbCr output is a bit of a challenge, especially when gamma is
involved. Shashank has recently posted a patch set to do YCbCr output
with i915.

So what we're going to be doing is using our matrix as the output CSC
or CTM as needed. And if the user asks to use both, well, then we get
to either multiply the matrices together (assuming the user didn't want
some gamma in between) or refuse the entire thing. So I think it's
better to just expose the standard properties and map the hardware
resources to those dynamically. Otherwise there's going to be too many
ways to do things, and that just leads to confusion for userspace.
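
The atomic check side of that "map dynamically, refuse what can't be
done" idea would presumably boil down to something like the sketch
below. All names are made up for illustration; this is not actual i915
code:

#include <errno.h>	/* EINVAL */
#include <stdbool.h>

/* How the single hw matrix gets used for a given crtc state. */
enum hw_matrix_use {
	HW_MATRIX_BYPASS,	/* no matrix needed */
	HW_MATRIX_CTM,		/* user supplied CTM */
	HW_MATRIX_OUTPUT_CSC,	/* RGB->YCbCr output conversion */
	HW_MATRIX_COMBINED,	/* CTM and output CSC folded together */
};

static int pick_matrix_use(bool ctm_requested, bool ycbcr_output,
			   bool gamma_between, enum hw_matrix_use *use)
{
	if (ctm_requested && ycbcr_output) {
		/*
		 * Both requested: only doable by multiplying the two
		 * matrices together, and only if no gamma stage was
		 * supposed to sit between them.
		 */
		if (gamma_between)
			return -EINVAL;
		*use = HW_MATRIX_COMBINED;
	} else if (ctm_requested) {
		*use = HW_MATRIX_CTM;
	} else if (ycbcr_output) {
		*use = HW_MATRIX_OUTPUT_CSC;
	} else {
		*use = HW_MATRIX_BYPASS;
	}

	return 0;
}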

> 
> >>
> >>> I don't think most people are in the habit of cooking up new ways to
> >>> encode their pixel data.
> >>>
> >>
> >> On the embedded side I can imagine there could be some custom appliances
> >> where one may want to do some custom thing with the CSC, and not needing
> >> a custom kernel for that could make life easier... but then again I am
> >> not really an expert in this area.
> > 
> > I would assume most custom things you'd do in the crtc (e.g. color
> > correction and whatnot). But it could be that I just lack imagination.
> > 
> 
> I am on thin ice here :), but surely there is no harm in specifying an
> optional property for exposing the CSC, as many display HWs provide a way
> to set an arbitrary CSC.

Well, if we don't have a clear use case we're more likely to specify it
wrong. And when the real use case appears we might discover that what we
specified isn't good enough. Hence I would avoid leaning forward too
heavily.

> 
> What about the uapi structs? In the patch there is an explicit struct
> that names each coefficient for what it is used for in the YCbCr to RGB
> conversion. Is this OK, or should we go with a generic (CTM style)
> matrix that could be used for RGB to YCbCr conversion too?

Not sure what we're talking about here, but like I said I think we
should stick to a fairly limited set of standard props and just have
each driver map the hardware resources to them as best it can.
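
For reference, if the question is named coefficients vs. a generic
matrix blob, the difference would be roughly the following. The first
struct is the existing drm_color_ctm from the uapi headers (quoted from
memory); the second is entirely made up here just to illustrate the
named-coefficient idea, it is not what the patch actually defines:

/* existing generic form, include/uapi/drm/drm_mode.h
 * (3x3 matrix, entries in sign-magnitude S31.32 fixed point) */
struct drm_color_ctm {
	__u64 matrix[9];
};

/* hypothetical YCbCr->RGB specific form with named coefficients */
struct drm_plane_ycbcr_to_rgb_csc {
	__s64 y_offset, cb_offset, cr_offset;	/* applied before the matrix */
	__s64 r_y, r_cb, r_cr;			/* R = r_y*Y + r_cb*Cb + r_cr*Cr */
	__s64 g_y, g_cb, g_cr;
	__s64 b_y, b_cb, b_cr;
};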

If you just have csc+(de)gamma then I guess it might make sense
to just expose YCbCr->RGB and degamma. If you have
degamma+csc+gamma then it might make sense to expose
YCbCr->RGB, degamma, CTM, and gamma, and just refuse any
combination that can't be done. E.g. you can't do YCbCr->RGB if
degamma is used, and YCbCr->RGB + CTM would require multiplying
the matrices together, which you may or may not want to bother
with. I guess we could try to put some matrix math helpers into
the core to make such things less painful for drivers?
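
A starting point for such a helper could be a plain 3x3 fixed point
multiply along these lines. Sketch only: it assumes two's complement
S31.32 entries (so the sign-magnitude uapi values would need converting
first), and the __int128 intermediate is just for illustration:

#include <stdint.h>

/*
 * out = a * b for row-major 3x3 matrices of S31.32 two's complement
 * fixed point values, i.e. applying "out" to a pixel is the same as
 * applying b first and then a. out must not alias a or b.
 */
static void fixed_3x3_multiply(int64_t out[9],
			       const int64_t a[9], const int64_t b[9])
{
	int i, j, k;

	for (i = 0; i < 3; i++) {
		for (j = 0; j < 3; j++) {
			__int128 acc = 0;

			for (k = 0; k < 3; k++)
				acc += (__int128)a[i * 3 + k] * b[k * 3 + j];

			out[i * 3 + j] = (int64_t)(acc >> 32);
		}
	}
}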

-- 
Ville Syrjälä
Intel OTC

