HDR support in Wayland/Weston

Sharma, Shashank shashank.sharma at intel.com
Fri Jan 18 03:35:53 UTC 2019



On 1/17/2019 5:33 PM, Pekka Paalanen wrote:
> On Wed, 16 Jan 2019 09:25:06 +0530
> "Sharma, Shashank" <shashank.sharma at intel.com> wrote:
>> On 1/14/2019 6:51 PM, Pekka Paalanen wrote:
>>> On Thu, 10 Jan 2019 20:32:18 +0530
>>> "Sharma, Shashank" <shashank.sharma at intel.com> wrote:
>>>> Hello All,
>>>> This mail is to propose a design for enabling HDR support in
>>>> Wayland/Weston stack, using display engine capabilities, and get more
>>>> feedback and input from community.
> *snip*
>>> I understand your aim is to leverage display hardware capabilities to
>>> the fullest, but we must also consider hardware that lacks some or all
>>> of the conversion/mapping/other features while the monitor is well
>>> HDR-capable. We also need to consider what happens when a monitor is
>>> not HDR-capable or is somehow lacking. OTOH, whether a compositor
>>> implements HDR support at all would be obvious in the advertised
>>> Wayland globals and pixel formats.
>> Very valid point. We have given good thought to how to handle such
>> scenarios, and we can come up with some kind of compositor policies
>> which will decide whether HDR video playback should be allowed or not,
>> depending on the combination of content type, SW stack (compositor and
>> kernel), HW capabilities and connected monitor capabilities. A sample
>> policy may look like this (also attached as a txt file, in case the
>> table gets distorted):
>> +------------------------------------------------+----------------------------------+
>> |Content |SW (C|K)    |HW         |Monitor       | HDR Playback
> *clip*
> Talking in terms of "allowed" and "not allowed" sounds very much like
> we would need a "complicated" Wayland protocol to let applications
> fail gracefully at runtime, letting them know dynamically when things
> would or would not work. I believe we could do something much simpler
> in protocol terms, as follows:
> Does a compositor advertise HDR support extensions at all?
> - This would depend on the compositor implementation, obviously; it
>    cannot advertise anything it does not implement.
> - Optionally, it could depend on the graphics card hardware/driver
>    capabilities: if there is no card that could support HDR, then there
>    is no reason to advertise HDR support through Wayland, because it would
>    always fall back to conversion to SDR. However, note that GPU hotplug
>    might be a thing, which might bring HDR support later at runtime.
> - Third, optionally again, a compositor might choose to not advertise
>    HDR support if it knows it will never have an HDR-capable monitor
>    attached. This is a much longer stretch, and probably only for embedded
>    devices you cannot plug arbitrary monitors to.
> Once the Wayland HDR-related extensions have been advertised to clients
> at runtime, taking them away will be hard. You may want to consider
> never revoking the extension interfaces once they have been published
> in the lifetime of a compositor instance, because revoking Wayland
> globals has some caveats, mostly around clients still using HDR
> extensions until they actually notice the compositor wants to revoke
> them. It can be made to work, but I'm not sure what the benefit would
> be.
> So, once a compositor advertises the extensions, they have to keep on
> working at all times. Specifically, this means that if a client has
> submitted a frame in HDR, and the compositor suddenly loses the ability
> to physically display HDR, e.g. the only HDR monitor gets unplugged and
> only SDR monitors remain, the compositor must still be able to show the
> window that has HDR content lingering. So converting HDR to SDR
> on-demand is a mandatory feature.
> The allowed vs. not allowed is only applicable with respect to what
> capabilities the compositor has advertised.
> A client is not "allowed" to submit HDR content if the compositor does
> not expose the Wayland extensions. Actually this is not about allowing,
> but being able to submit HDR content at all: if the interfaces are not
> advertised, a client simply has no interface to poke at.
> Pixel formats, color spaces, and so on are more interesting. The
> compositor should advertise what it supports by enumerating them
> explicitly or saying what description formats it supports. Then a
> client cannot use anything outside of those; if it attempts to, that
> will be a fatal protocol error, not a recoverable failure.
> If a client does everything according to what a compositor advertises,
> it must "work": the compositor must be able to consume the client
> content regardless of what will happen next, e.g. with monitor
> hot-unplug, or scenegraph changing such that overlay plane is no longer
> usable. This is why the fallback path through GL-renderer must exist,
> and it must be able to do HDR->SDR mapping, and so on.
> In summary, rather than dynamically allow or not allow, a compositor
> needs to live by its promise on what works. It cannot pull the rug from
> under a client by suddenly hiding the window or showing it corrupted.
> That is the baseline in behaviour.
Makes a lot of sense, and sounds like a stable design too.
As per this design policy, at startup the compositor can analyze the 
static environment conditions like:
- HW support for HDR
- Kernel support for HDR

and based on these two it can decide whether to advertise the HDR 
capabilities via the protocol; once it does so, it has to make sure 
that it lives up to that expectation.
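As a sketch, that startup decision could look roughly like this (all names here are hypothetical and illustrative, not actual Weston or DRM API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical capability flags gathered once at compositor start-up. */
struct hdr_env {
	bool hw_supports_hdr;      /* e.g. display engine has degamma/CSC/gamma blocks */
	bool kernel_supports_hdr;  /* e.g. kernel exposes HDR metadata properties */
};

/* Decide once, at start-up, whether to advertise the HDR extension
 * globals.  Monitor capability is deliberately NOT part of this check:
 * monitors are hot-pluggable, so they must be handled at runtime. */
static bool should_advertise_hdr(const struct hdr_env *env)
{
	return env->hw_supports_hdr && env->kernel_supports_hdr;
}
```

Once advertised, the extensions stay advertised for the lifetime of the compositor instance, as discussed above.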
Now, the only variable in the environment is a hot-pluggable monitor, 
which may or may not support HDR playback, and that needs to be handled 
at runtime by:
- doing H2S (HDR-to-SDR) tone mapping, if the monitor can't support HDR 
playback,
- doing REC2020->REC709 conversion using CSC, and
- using GL blending as a fallback if we are not able to prepare a 
plane-only state.

Does this seem like a correct interpretation of your suggestions?
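For reference, the CSC and tone-mapping steps could be sketched in C roughly as below. The matrix is the standard linear-light BT.2020-to-BT.709 primaries conversion; the Reinhard-style curve is just one possible H2S operator, purely illustrative (a real implementation would use the mastering/target luminance from the HDR metadata):

```c
#include <assert.h>
#include <math.h>

/* Linear-light BT.2020 -> BT.709 primaries conversion matrix
 * (standard values, rounded to four decimals). */
static const float csc_2020_to_709[3][3] = {
	{  1.6605f, -0.5876f, -0.0728f },
	{ -0.1246f,  1.1329f, -0.0083f },
	{ -0.0182f, -0.1006f,  1.1187f },
};

/* Apply the 3x3 colour-space conversion to one linear RGB pixel. */
static void csc_apply(const float in[3], float out[3])
{
	for (int r = 0; r < 3; r++) {
		out[r] = 0.0f;
		for (int c = 0; c < 3; c++)
			out[r] += csc_2020_to_709[r][c] * in[c];
	}
}

/* Simple Reinhard-style tone-mapping on linear light, as one example
 * of an H2S operator. */
static float tonemap_reinhard(float x)
{
	return x / (1.0f + x);
}
```

In the plane-only path the same two steps would instead be programmed into the display engine's CSC and gamma/degamma blocks; this sketch is the shape of the GL-renderer fallback.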
> Then we can have some mechanisms in place to inform clients about
> changed conditions, like sending HDR content becomes useful or stops
> being useful. The first mechanism here is the wl_surface.enter/leave
> events: if the wl_surface enters the first HDR output, or leaves the
> last HDR output, the client will know and may adapt if it wants to.
Again, sounds like a good idea.
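On the client side, the enter/leave bookkeeping could be as simple as counting the HDR-capable outputs the surface currently occupies; the 0 -> 1 transition means sending HDR content becomes useful, and 1 -> 0 means it stops being useful. A self-contained sketch (real code would drive this from a wl_surface_listener's enter/leave callbacks):

```c
#include <assert.h>
#include <stdbool.h>

/* Per-surface bookkeeping for wl_surface.enter/leave events. */
struct surface_hdr_state {
	int hdr_outputs; /* HDR-capable outputs the surface is currently on */
};

/* Both helpers return true if submitting HDR content is currently
 * useful for this surface, i.e. it is on at least one HDR output. */
static bool surface_output_enter(struct surface_hdr_state *s,
				 bool output_is_hdr)
{
	if (output_is_hdr)
		s->hdr_outputs++;
	return s->hdr_outputs > 0;
}

static bool surface_output_leave(struct surface_hdr_state *s,
				 bool output_is_hdr)
{
	if (output_is_hdr)
		s->hdr_outputs--;
	return s->hdr_outputs > 0;
}
```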
>> Also,
>> - HW can always declare its capability/preference to handle HDR playback
>> using something like a DRM cap or similar.
>> - We can define a fallback method for when a run-time plane
>> assignment fails during atomic commit, and so on.
>> IMHO, with gradual discussion in the community about what the
>> compositor should do and how, I think we can all come up with what
>> these policies should look like.
> *snip*
>>> Since you need a proving vehicle for new kernel UABI, this may be
>>> difficult.
>> I know, and honestly, I was planning to tweak the compositor policies
>> which prefer GL over overlays during plane assignment. We know that HDR
>> playback is an advanced playback scenario, and we know that many HW
>> vendors are adding special capabilities to their graphics engines just
>> to handle HDR, so why not use these capabilities and prefer HW overlays
>> over GL when it comes specifically to HDR playback? A policy which
>> checks something like this during plane assignment:
>> - if (any of the views in the list of views is an HDR surface && HW is
>>    capable and interested)
>>       try_preparing_plane_only_state()
>>    if (didn't_work)
>>       try_falling_back_to_renderer()
> Yes, that is doable and good. In fact, that is what Weston's
> DRM-backend does today already. It does not look at HDR status
> obviously, but it does very much attempt a planes-only configuration
> first, leaving the renderer unused if possible. It's just that more often
> than not, the planes-only mode doesn't work out and the compositor
> falls back to mixed mode or renderer-only mode.
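For illustration, that policy could be sketched in C roughly as below; the function names follow the pseudocode earlier in this thread and are not actual Weston API, and the TEST_ONLY probe is reduced to a stand-in flag:

```c
#include <assert.h>
#include <stdbool.h>

enum assign_mode {
	MODE_PLANES_ONLY, /* all views on HW planes, renderer unused */
	MODE_RENDERER,    /* mixed or renderer-only fallback */
};

/* Stand-in for probing the configuration with a DRM atomic
 * TEST_ONLY commit; here it simply reports the HW capability. */
static bool try_preparing_plane_only_state(bool hw_hdr_capable)
{
	return hw_hdr_capable;
}

/* Prefer a planes-only state when any view carries HDR content and
 * the hardware advertises HDR pipe capabilities; otherwise, or on
 * failure, fall back to the renderer. */
static enum assign_mode assign_planes(bool any_view_is_hdr,
				      bool hw_hdr_capable)
{
	if (any_view_is_hdr && hw_hdr_capable &&
	    try_preparing_plane_only_state(hw_hdr_capable))
		return MODE_PLANES_ONLY;
	return MODE_RENDERER;
}
```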
>>> Maybe it would be possible to start with Wayland extensions that are
>>> specific to Weston. You can cut corners there, so that you get content
>>> to feed into Weston. At some point, the Wayland extensions need to be
>>> promoted to wayland-protocols and that is when you need more buy-in
>>> from the community, and where the protocol design needs to be
>>> well-thought. OTOH, the unstable protocols in wayland-protocols are
>>> supposed to be similarly WIP, so maybe there is no need to merge
>>> weston-specific extensions.
>> If you can elaborate this input a bit further, we can very well think
>> about this.
> I just meant that if you want to experiment with Weston internals
> sooner rather than later, it is ok to use weston-specific protocol
> extensions if the extensions meant for wayland-protocols
> standardisation are stuck in discussions. If you do that however, you
> need to plan to migrate to the wayland-protocols extensions when they
> have landed and drop the weston-specific extensions.
> This means that you need to avoid interface name conflicts between
> weston and wayland-protocols extensions.
Got it, I think you are suggesting something like the way we 
implemented the aspect-ratio protocol in weston.
- Shashank
> Thanks,
> pq
