How do we want to deal with 4k tiled displays?

Alexander E. Patrakov patrakov at gmail.com
Wed Jan 22 20:24:28 PST 2014


2014/1/23 Keith Packard <keithp at keithp.com>:
> Andy Ritger <aritger at nvidia.com> writes:
>
>> Keith, did you mean to say "driver" or "X server"?
>
> Well, I meant to say 'driver', but I can see reasons for wanting to at
> least have some code in hw/xfree86/modes that could help out. In any
> case, definitely within the X server, but beyond that I'd say we should
> make as much code common as possible.
>
>> The case of connectors sharing DP lanes seems like a hardware-specific
>> constraint best arbitrated within the hardware's driver.  But these tiled
>> monitors requiring multiple CRTCs+outputs doesn't seem hardware-specific,
>> so arguably doesn't belong in a hardware-specific driver.  At the least,
>> it would be unfortunate if each driver chose to solve this configuration
>> differently.
>
> Right, which is where a helper function might be a better solution, in
> case the driver did want to do things differently.

What would be the case where the driver would want to do things differently?

>> But, even if we hide the two tiles within the X server, rather than
>> within drivers, there would be behavioral quirks.  E.g.,
>>
>> * The EDID of each tile indicates the modetimings for that tile (as well
>>   as the physical position of the tile within the whole monitor).  When we
>>   provide the EDID to RandR clients through the EDID output property,
>>   which tile's EDID should we provide?  Or should we construct a fake
>>   EDID that describes the combined resolution?  Maybe in practice no
>>   RandR clients care about this information.
>
> Interesting. Sounds like we have three choices:
>
>  1) Report both EDIDs, presumably using some new convention
>  2) Construct a fake unified EDID
>  3) Don't report EDID at all
>
> Obviously 3) is the easiest :-)

(3) breaks color managers, because they rely upon the EDID to
associate color profiles with outputs.

What's wrong with my proposal to report "mirrored screens" to clients
even though the outputs are not really mirrors? In that case, each
mirrored output can report its own EDID, implementing option (1).
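To make the proposal concrete, here is a minimal sketch (names and types
are hypothetical, not actual X server code) of what such a report could
look like: both tile outputs advertise the full monitor geometry, yet each
keeps its own EDID, so option (1) is preserved:

```python
from dataclasses import dataclass

@dataclass
class ReportedOutput:
    name: str
    edid: bytes       # this tile's own EDID, exposed unchanged
    geometry: tuple   # (x, y, width, height) as seen by RandR clients

def report_as_mirrors(tiles, full_width, full_height):
    """Report every tile with the full monitor geometry ("mirrored"),
    while each output still carries its own per-tile EDID property."""
    return [ReportedOutput(name, edid, (0, 0, full_width, full_height))
            for name, edid in tiles]
```

A client that only looks at geometry sees one mirrored screen; a color
manager that reads the EDID property still gets per-tile data.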

>> * How should hotplug of the monitor's second tile be handled by the
>>   server if it is hiding the two tiles?  Should such a hotplug generate
>>   a connected event to RandR clients?  Maybe a hotplug on either tile
>>   gets reported as a connect event on the one API-facing output.
>
> Presumably you'd only want to report 'connected' if both wires were
> hooked up? Otherwise, the monitor isn't really useful.

In the pathological situation when only one wire is hooked up, I guess
that the monitor is still usable at its original size and aspect
ratio, just not at full resolution. Someone has to verify this. If
this is so, then reporting it as "unplug one 1920x1080 monitor, plug
two mirrored 4K monitors" sounds more appropriate. Or, if the
existence of the other half is detectable from the EDID, we can indeed
decline to support single-wire operation.
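On the detectability question: a tiled monitor can describe its topology
in a DisplayID extension block of the EDID (extension tag 0x70), which may
carry a Tiled Display Topology data block (tag 0x12 in DisplayID 1.3). A
rough sketch of the check follows; the byte offsets are from my reading of
the spec, so please verify against the standard before relying on them:

```python
def find_tiled_topology_block(edid: bytes):
    """Return the payload of a DisplayID Tiled Display Topology data
    block if the EDID carries one, else None. Offsets assume DisplayID
    1.3 embedded as an EDID extension; verify against the VESA spec."""
    if len(edid) < 128:
        return None
    for i in range(1, 1 + edid[126]):              # byte 126: extension count
        block = edid[i * 128:(i + 1) * 128]
        if len(block) < 128 or block[0] != 0x70:   # 0x70: DisplayID extension
            continue
        pos, end = 5, min(128, 5 + block[2])       # data blocks after 5-byte header
        while pos + 3 <= end:
            tag, length = block[pos], block[pos + 2]
            if tag == 0x12:                        # Tiled Display Topology
                return block[pos + 3:pos + 3 + length]
            if tag == 0 and length == 0:           # padding: stop scanning
                break
            pos += 3 + length
    return None
```

If this block is present on a single connected tile, the server could know
the monitor is half of a larger whole even with one wire plugged in.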

>
>> Also, there are a variety of other scenarios where one large virtual
>> monitor has multiple tiles of input: powerwalls, multi-projector
>> setups, etc.  In these cases, users already complain about windows
>> getting maximized to only one output, or fullscreen applications only
>> setting a mode on one of the outputs.
>
> Which is why we must synthesize a single output.

+1, but I repeat that mirrors would work here, too, and nicely account
for busy CRTCs.

>> Those installations are admittedly niche and generally have savvy
>> administrators who can beat a configuration into submission, while the
>> tiled 4k monitors are coming to the average user.  Still, it seems like
>> both tiled 4k monitors and powerwalls present the same general problem,
>> so it would be nice if we can solve them with one general solution.
>>
>> I think I lean slightly towards trying to handle this client-side.
>
> I don't see how this will work as we have multiple RandR bindings now,
> and one (XCB) is explicitly very low-level. We'd have to interpose a new
> library into the system and convert all applications to using that. I
> think it'd be a whole lot easier to do this in the X server.
>
>> It seems like that information could be conveyed through Aaron's
>> OutputGroup idea.  Maybe that is too much detail for clients, but
>> maybe we could have a client-side library (either added to libXrandr
>> or something new) that can abstract the details from clients who prefer
>> to use that than to have full flexibility themselves.
>
> Much as core clients can't see multiple RandR outputs today, and instead
> use the screen geometry directly, I think we have to make existing RandR
> aware applications "work" reasonably with the current protocol, which
> means synthesizing a large output out of multiple smaller outputs. If
> you want to *also* extend the RandR protocol so that smarter clients can
> drill through that large output and see the individual monitors, that
> sounds like a separable problem.

+1 to "separate problem". Indeed, there are two problems: making
existing applications work without changing the protocol, and
extending the protocol so that interested clients can learn about
combined displays or make power walls on the fly.
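For the first problem, the synthesis step the server would perform is
essentially a bounding-box computation over the per-tile rectangles. A
simplified sketch (ignoring CRTC assignment and rotation):

```python
def synthesize_output(tiles):
    """Combine per-tile (x, y, width, height) rectangles into the
    single rectangle advertised to RandR clients as one output."""
    x0 = min(x for x, y, w, h in tiles)
    y0 = min(y for x, y, w, h in tiles)
    x1 = max(x + w for x, y, w, h in tiles)
    y1 = max(y + h for x, y, w, h in tiles)
    return (x0, y0, x1 - x0, y1 - y0)
```

E.g., two side-by-side 1920x2160 tiles combine into one 3840x2160
output, which is what maximizing and fullscreen clients expect to see.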

-- 
Alexander E. Patrakov


More information about the xorg-devel mailing list