[PATCH] unstable: add HDR Mastering Meta-data Protocol

Sharma, Shashank shashank.sharma at intel.com
Thu Mar 7 10:55:39 UTC 2019


Regards
Shashank

> -----Original Message-----
> From: Graeme Gill [mailto:graeme2 at argyllcms.com]
> Sent: Thursday, March 7, 2019 7:05 AM
> To: Sharma, Shashank <shashank.sharma at intel.com>; graeme at argyllcms.com;
> Pekka Paalanen <ppaalanen at gmail.com>; Nautiyal, Ankit K
> <ankit.k.nautiyal at intel.com>
> Cc: e.burema at gmail.com; Kps, Harish Krupo <harish.krupo.kps at intel.com>;
> niels_ole at salscheider-online.de; sebastian at sebastianwick.net; wayland-
> devel at lists.freedesktop.org
> Subject: Re: [PATCH] unstable: add HDR Mastering Meta-data Protocol
> 
> Sharma, Shashank wrote:
> 
> >> From: Graeme Gill [mailto:graeme2 at argyllcms.com]
> 
> >>>> From: Ankit Nautiyal <ankit.k.nautiyal at intel.com>
> >>>>
> >>>> This protocol enables a client to send the HDR metadata: MAX-CLL,
> >>>> MAX-FALL, Max Luminance and Min Luminance as defined by SMPTE ST.2086.
> >>
> >> Hmm. I'm wondering what the intent of this is.
> 
> > The main reason is to pass it to the compositor so that it can do the
> > tone mapping properly. As you know, there could be cases where we are
> > dealing with one HDR and multiple SDR frames. Now, the HDR content
> > comes with its own metadata information, which needs to be passed to
> > the display using the AVI infoframes, for the correct HDR experience,
> > but we need to make others non-HDR frames also compatible to this brightness
> range of metadata, so we do tone mapping.
> 
> I've been thinking a bit about that, and currently my view is that this is not the best way
> of arranging things. MAX-CLL and MAX-FALL are specific to certain current sources of
> HDR imagery, and not all HDR sources will have those specific numbers, and video
> standards may change in the future (i.e. frame by frame meta-data). Exactly how they
> can be used is unclear - i.e. they are observations from a video stream, not parameters
> for a specific tone mapping. So punting them to the Compositor doesn't seem such a
> good approach.

I kind of agree on this point: the usage model is not very clear right now, and this stack and usage model will mature with time. But I would still like to keep this here, for the following reasons:
 - If you look closely at the HDMI AVI infoframes specified in the CEA-861-G spec, the HDR metadata section expects MAX-CLL and MAX-FALL, which means many monitors might already be using these values to provide a better viewing experience. So if the content can set these values, there is no harm in passing them through.
 - The tone mapping API implemented by libVA expects this data, as it uses it to weight the brightness values in a frame. So the compositor will pass these values to libVA too.
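For reference, the static metadata fields that the CEA-861-G Dynamic Range and Mastering InfoFrame carries (the SMPTE ST 2086 color volume plus the content light levels) can be sketched as a plain struct. The names and layout here are illustrative only, not the proposed protocol's actual interface:

```c
#include <stdint.h>

/* Illustrative sketch of SMPTE ST 2086 static metadata plus the
 * content light level values from CEA-861-G; field names are
 * hypothetical, not taken from the proposed Wayland protocol. */
struct hdr_static_metadata {
	/* Mastering display color volume (SMPTE ST 2086) */
	uint16_t display_primaries_x[3]; /* coded in 0.00002 units */
	uint16_t display_primaries_y[3];
	uint16_t white_point_x;
	uint16_t white_point_y;
	uint16_t max_display_mastering_luminance; /* 1 cd/m^2 units */
	uint16_t min_display_mastering_luminance; /* 0.0001 cd/m^2 units */
	/* Content light levels (CEA-861-G) */
	uint16_t max_cll;  /* maximum content light level, cd/m^2 */
	uint16_t max_fall; /* maximum frame-average light level, cd/m^2 */
};

/* Decode the coded min mastering luminance into cd/m^2. */
static double min_luminance_nits(const struct hdr_static_metadata *m)
{
	return m->min_display_mastering_luminance * 0.0001;
}
```

Note the asymmetric units: max mastering luminance is coded in whole cd/m^2, while the min is coded in 0.0001 cd/m^2 steps, so both endpoints fit in 16 bits.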
> 
> Instead what I'd suggest is providing some concrete tone mapping operators for HDR
> handling (at least for V1), with well defined behavior and parameters. The video
> playback application can do the conversion from source format specific observations
> like MAX-CLL and MAX-FALL into the specific compositor tone mapping parameters,
> or it has the option of determining those parameters some other way (user setting
> perhaps, or it's own analysis of the video stream), or it can even do the adaptation of
> the source to the display itself, since it has access to the display characteristics via the
> ICC profile.
> 
> I think it's quite possible to have the Compositor do sane HDR conversions with no
> other extra information than the HDR nominal diffuse white brightness for each HDR
> profile, and perhaps this could be a base Compositor tone mapping behavior, with
> other tone mapping algorithms (and their associated parameters) added as they are
> identified and defined or standardized.
> 
Yep, this is very close to what we are doing in our compositor code.
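For illustration, a base operator along the lines Graeme suggests (map the content's nominal diffuse white linearly to the display's white, then roll highlights off toward the display's peak) could look like the sketch below. This is a minimal example, not our compositor code, and the suggested constants are assumptions:

```c
/* Minimal sketch of a base tone mapping operator using only the HDR
 * nominal diffuse white: content at or below diffuse white is scaled
 * linearly to the display's white level, and highlights above it are
 * rolled off with a simple Reinhard-style shoulder that approaches
 * the display's peak asymptotically. All values are in cd/m^2. */
static double tone_map(double in_nits,
		       double content_diffuse_white, /* e.g. 203.0 */
		       double display_white,         /* e.g. 100.0 */
		       double display_peak)          /* e.g. 300.0 */
{
	double scaled = in_nits * display_white / content_diffuse_white;

	if (scaled <= display_white)
		return scaled; /* diffuse range passes through linearly */

	/* Roll off highlights asymptotically toward display_peak. */
	double excess = scaled - display_white;
	double headroom = display_peak - display_white;
	return display_white + headroom * excess / (excess + headroom);
}
```

The curve is continuous and monotonic, needs only three parameters, and degrades gracefully for content brighter than the mastering assumptions, which is roughly the "sane default" behavior Graeme describes.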
 
> >> If the idea is that somehow the compositor is to do dynamic HDR
> >> rendering, then I have my doubts about its suitability and
> >> practicality, unless you are prepared to spell out in detail the algorithm it should
> use for performing this.
> >>
> > The GL tone mapping algorithms will be opensource methods, and will be
> > available for the code review. HW tone mapping is using the libVA interface and
> media-driver handler.
> 
> Right, but there is a distinction between a particular implementation and a Compositor
> specification.
> 
> Can you write a specification of how the tone curves you are proposing to use work, in
> such a way that someone else can implement them, and that someone dealing with
> other HDR sources can use them ?
Sure; in fact, we will publish the code soon, and you can have a look and provide feedback.
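As a self-contained illustration of the client-side conversion Graeme describes (turning stream observations like MAX-CLL/MAX-FALL into generic tone mapping parameters, rather than forwarding the raw values), a video player might do something like this. The struct and function are purely hypothetical:

```c
#include <stdint.h>

/* Hypothetical generic parameters a compositor tone mapper might
 * accept, decoupled from any one video standard's metadata. */
struct tone_params {
	double content_peak_nits; /* brightest level to preserve */
	double content_avg_nits;  /* frame-average brightness hint */
};

/* Derive generic parameters from CEA-861-G style observations.
 * A value of 0 conventionally means "unknown" in those fields,
 * so we fall back to the mastering display peak (an assumption
 * made for this sketch). */
static struct tone_params params_from_cll(uint16_t max_cll,
					  uint16_t max_fall,
					  double mastering_peak_nits)
{
	struct tone_params p;

	p.content_peak_nits = max_cll ? (double)max_cll
				      : mastering_peak_nits;
	p.content_avg_nits = max_fall ? (double)max_fall
				      : p.content_peak_nits * 0.1;
	return p;
}
```

With this split, a client that has no MAX-CLL (an HDR photo, a game) can still fill in the same parameters from its own knowledge of the content.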

> (i.e. HDR photographs or synthetic imagery from games etc.)
> 
> > Right now we are not targeting this, but eventually we want to be
> there. As we all know, it's a huge area to cover in a single attempt,
> > and gets very cumbersome. So the plan is to first implement a modular
> > limited capability HDR video playback, and  use that as a driving and
> > testing vehicle for color management, and then later enhance it with adding more
> and more modules.
> 
> Sure. I'm a little worried though, that video specific HDR considerations will end up
> getting set in stone, and hamper more general HDR compatibility.

I completely understand this concern, and it is a genuine one. But honestly, given the scope of the complete work, this feels like a Himalayan task, and a modular approach is one practical way of achieving something gradually. With small, modular targets we can slowly expand the stack, and we can try to make it as generic as possible. I am pretty sure and hopeful that with inputs like the ones we got here, and with appropriate code review, we will come up with a promising software stack.

- Shashank 
> 
> Cheers,
> 
> Graeme Gill.

