4K@60 YCbCr420 missing mode in usermode

Emil Velikov emil.l.velikov at gmail.com
Tue Jun 26 17:11:12 UTC 2018


On 26 June 2018 at 17:23, Michel Dänzer <michel at daenzer.net> wrote:
> On 2018-06-26 05:43 PM, Emil Velikov wrote:
>> Hi Jerry,
>>
>> On 25 June 2018 at 22:45, Zuo, Jerry <Jerry.Zuo at amd.com> wrote:
>>> Hello all:
>>>
>>> We are working on an issue where a 4K@60 HDMI display fails to
>>> light up and only shows 4K@30; see
>>> https://bugs.freedesktop.org/show_bug.cgi?id=106959 and others.
>>>
>>> On some displays (e.g., the ASUS PA328), the HDMI port exposes a
>>> YCbCr420 CEA extension block with 4K@60 supported. Such HDMI 4K@60
>>> is not real HDMI 2.0; it still follows the HDMI 1.4 spec, with a
>>> maximum TMDS clock of 300MHz instead of 600MHz.
>>>
>>> To get such 4K@60 modes supported, the bandwidth needs to be
>>> limited by restricting the color space to YCbCr420 only. We have
>>> already raised the YCbCr420-only flag (attached patch) on the
>>> kernel side to pass mode validation and expose it to user space.
>>>
>>> We think that part of the problem is usermode pruning the 4K@60
>>> mode from the modelist (attached Xorg.0.log). It seems that when
>>> usermode receives all the modes, it does not take the 4K@60
>>> YCbCr4:2:0-specific mode into account. To pass validation and be
>>> added to the usermode modelist, its pixel clock needs to be
>>> divided by 2 so that it does not exceed the maximum physical TMDS
>>> pixel clock (300MHz). That might explain the difference in modes
>>> between our usermode and modeset.
>>>
>>> Such a YCbCr4:2:0 4K@60 special mode is marked in DRM by raising a
>>> flag (y420_vdb_modes) inside the connector's display_info, as can
>>> be seen in do_y420vdb_modes(). Usermode could rely on that flag to
>>> pick up such modes and halve the required pixel clock to prevent
>>> them from being pruned out.
>>>
>>> We were hoping someone could help look at this from the usermode
>>> perspective. Thanks a lot.
>>>
>> Just some observations, while going through some coffee. Take them
>> with a pinch of salt.
>>
>> Currently the kernel EDID parser (in DRM core) handles the
>> EXT_VIDEO_DATA_BLOCK_420 extended block.
>> Additionally, the kernel allows such modes only when the (per
>> connector) ycbcr_420_allowed bool is set by the driver.
>>
>> A quick look shows that it's only enabled by i915 on gen10 &&
>> Geminilake hardware.
>>
>> At the same time, X does its own fairly partial EDID parsing and
>> doesn't handle any(?) extended blocks.
>>
>> One solution is to update the X parser, although that seems like an
>> endless game of cat and mouse.
>> IMHO a much better approach is to not use the EDID codepaths for
>> KMS drivers (of which AMDGPU is one).
>> On those, the supported modes are advertised by the kernel module
>> via drmModeGetConnector.
>
> We are getting the modes from the kernel; the issue is that they are
> then pruned (presumably by xf86ProbeOutputModes =>
> xf86ValidateModesClocks) for violating the clock limits, as
> described by Jerry above.
>
I might have been too brief there. Here is a more elaborate
suggestion; please point out any misunderstandings.

If we look into the drivers we'll see a call to xf86InterpretEDID(),
followed by xf86OutputSetEDID().
The former does a partial parse of the EDID, creating an xf86MonPtr
(timing information et al.), and the latter attaches it to the
output.
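For the KMS drivers the pattern is roughly this (a paraphrased sketch
of drmmode_output_get_modes(); the EDID blob fetch is elided and
modes_from_kernel() is an illustrative stand-in):

    static DisplayModePtr
    drmmode_output_get_modes(xf86OutputPtr output)
    {
        ScrnInfoPtr scrn = output->scrn;
        xf86MonPtr mon = NULL;

        /* edid_blob: the raw EDID, as fetched from the kernel's
         * "EDID" connector property (fetch elided here). */
        if (edid_blob)
            mon = xf86InterpretEDID(scrn->scrnIndex, edid_blob->data);
        xf86OutputSetEDID(output, mon);

        /* The mode list itself is built from the kernel's
         * drmModeConnector::modes, not from the parsed EDID. */
        return modes_from_kernel(output);
    }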

Thus, as we get into xf86ProbeOutputModes()/xf86ValidateModesClocks(),
the X server checks each mode against the given timing/bandwidth
constraints, discarding it where applicable.
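The gist of that check is something like this (a simplified sketch;
the real xf86ValidateModesClocks() takes arrays of clock ranges, and
minClock/maxClock stand in for those here):

    /* Prune any probed mode whose pixel clock falls outside the
     * driver-supplied range. */
    for (mode = modeList; mode; mode = mode->next) {
        if (mode->Clock < minClock)
            mode->status = MODE_CLOCK_LOW;
        else if (mode->Clock > maxClock)
            mode->status = MODE_CLOCK_HIGH; /* 594MHz 4K@60 trips this */
    }
    /* A YCbCr4:2:0-only mode drives the TMDS link at half the pixel
     * clock (297MHz here, under the 300MHz limit), but nothing above
     * knows that, hence the 4K@60 mode gets pruned. */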

Considering that the DRM driver already does similar checks, X could
side-step the parsing and filtering/validation altogether.
Trusting the kernel should be reasonable, considering Weston (and I
would imagine other Wayland compositors) already does so.
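Everything X needs is already in the kernel-validated list that
libdrm exposes. Roughly (assuming fd is an open DRM device and
connector_id is known):

    #include <stdio.h>
    #include <xf86drmMode.h>

    void dump_modes(int fd, uint32_t connector_id)
    {
        drmModeConnector *conn = drmModeGetConnector(fd, connector_id);
        if (!conn)
            return;
        for (int i = 0; i < conn->count_modes; i++) {
            drmModeModeInfo *m = &conn->modes[i];
            /* m->clock is in kHz; for a 420-only mode the kernel has
             * already checked it against the link's TMDS limit. */
            printf("%s %dx%d@%u %ukHz\n", m->name, m->hdisplay,
                   m->vdisplay, m->vrefresh, m->clock);
        }
        drmModeFreeConnector(conn);
    }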

Obviously, manually added modelines (via a config file) would still
need to be validated.
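Conceptually something like this (hand-wavy sketch; there is no
from_kernel flag today, so we'd have to start tracking where each
mode came from, and validate_clock_limits() is a made-up helper):

    for (mode = output->probed_modes; mode; mode = mode->next) {
        if (mode->from_kernel)           /* hypothetical origin flag */
            mode->status = MODE_OK;      /* the DRM driver vetted it */
        else
            validate_clock_limits(mode); /* user/config modelines */
    }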

Thanks
Emil

