How do we want to deal with 4k tiled displays?
aritger at nvidia.com
Wed Jan 22 16:04:58 PST 2014
On Sun, Jan 19, 2014 at 01:26:07PM -0800, Keith Packard wrote:
> Aaron Plattner <aplattner at nvidia.com> writes:
> > 1. Hide the presence of the second tile in the X server.
> > Somehow combine the two tiles into a single logical output at the RandR
> > protocol level. The X server would be responsible for setting up the right
> > configuration to drive the logical output using the correct physical
> > resources.
> This is effectively what we do with the wacky ivybridge 3-output
> setup. With that chipset, there are 4 DP lanes available and two
> connectors. If you suck up all 4 lanes for the first connector, then you
> have none left for the second one. If you've already configured the
> second one, then you can't use the higher resolution modes on the first
> one. The only way to tell is to try and see what RandR says.
> RandR is always allowed to say 'no' to any particular configuration, and
> in the tiled case, I'd suggest that the correct approach would be to
> have the driver pretend that the monitor is connected to only one of the
> outputs, and that configuring the 4k mode would require that sufficient
> resources be available to drive both physical links.
Keith, did you mean to say "driver" or "X server"?
The case of connectors sharing DP lanes seems like a hardware-specific
constraint best arbitrated within the hardware's driver. But the fact that
these tiled monitors require multiple CRTCs+outputs doesn't seem
hardware-specific, so arguably doesn't belong in a hardware-specific driver.
At the least, it would be unfortunate if each driver chose to solve this
configuration problem differently.
But, even if we hide the two tiles within the X server, rather than
within drivers, there would be behavioral quirks. E.g.,
* The EDID of each tile describes the mode timings for that tile (as well
as the physical position of the tile within the whole monitor). When we
provide the EDID to RandR clients through the EDID output property,
which tile's EDID should we provide? Or should we construct a fake
EDID that describes the combined resolution? Maybe in practice no
RandR clients care about this information.
* How should hotplug of the monitor's second tile be handled by the
server if it is hiding the two tiles? Should such a hotplug generate
a connected event to RandR clients? Maybe a hotplug on either tile
gets reported as a connect event on the one API-facing output.
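To make the last option concrete, here is a small sketch of that coalescing policy. All of the names (LogicalOutput, the tile output names) are invented for illustration; this is not real X server code, just one way the "report either tile's hotplug on the one API-facing output" behavior could work:

```python
# Hypothetical sketch: one API-facing output backed by multiple physical
# tile outputs, with per-tile hotplugs coalesced into logical events.

class LogicalOutput:
    """Tracks per-tile connection state for one logical output."""

    def __init__(self, tile_names):
        self.connected = {name: False for name in tile_names}

    def hotplug(self, tile_name, connected):
        """Record a hotplug on one tile.  Returns the event to report to
        RandR clients ("connected"/"disconnected"), or None if the
        logical output's state did not change.  Policy here: the logical
        output is connected while any tile is connected."""
        was_connected = any(self.connected.values())
        self.connected[tile_name] = connected
        now_connected = any(self.connected.values())
        if now_connected != was_connected:
            return "connected" if now_connected else "disconnected"
        return None

out = LogicalOutput(["DP-1.left", "DP-1.right"])
print(out.hotplug("DP-1.left", True))   # logical output becomes connected
print(out.hotplug("DP-1.right", True))  # no logical state change
print(out.hotplug("DP-1.left", False))  # still connected via the other tile
```

A stricter policy (logical output connected only when all tiles are connected) would just change the any() calls to all(); which is right probably depends on whether a half-connected monitor should be configurable at all.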
Also, there are a variety of other scenarios where one large virtual
monitor is composed of multiple tiles: powerwalls, multi-projector
setups, etc. In these cases, users already complain about windows
getting maximized to only one output, or fullscreen applications only
setting a mode on one of the outputs.
Those installations are admittedly niche and generally have savvy
administrators who can beat a configuration into submission, while the
tiled 4k monitors are coming to the average user. Still, it seems like
both tiled 4k monitors and powerwalls present the same general problem,
so it would be nice if we could solve them with one general solution.
I think I lean slightly towards trying to handle this client-side.
If the goals are:
* Configuration utilities know which outputs are part of the same monitor
(and the physical orientation), to make intelligent configuration decisions.
* Window managers know which outputs are part of the same monitor,
to make intelligent maximize and snap-to-edge behavior.
* Fullscreen applications know which outputs are part of the same monitor,
to make intelligent modesetting decisions.
It seems like that information could be conveyed through Aaron's
OutputGroup idea. Maybe that is too much detail for clients, but
maybe we could have a client-side library (either added to libXrandr
or something new) that abstracts the details for clients that would
rather use it than have full flexibility themselves.
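As a sketch of what such a client-side consumer might do with grouping information: the function below takes outputs tagged with a group id (standing in for the hypothetical OutputGroup property; the names and tuple layout are invented) and computes the union bounding box per group, which is what a window manager would maximize a window to:

```python
# Hypothetical sketch of client-side output grouping: a window manager
# groups outputs by an (assumed) OutputGroup-style id and maximizes to
# the union bounding box of the group rather than a single output.

from collections import defaultdict

def group_outputs(outputs):
    """outputs: list of (name, group_id, x, y, width, height) tuples.
    Returns {group_id: (x, y, width, height)} union bounding boxes."""
    groups = defaultdict(list)
    for name, group, x, y, w, h in outputs:
        groups[group].append((x, y, w, h))
    boxes = {}
    for group, rects in groups.items():
        x0 = min(x for x, y, w, h in rects)
        y0 = min(y for x, y, w, h in rects)
        x1 = max(x + w for x, y, w, h in rects)
        y1 = max(y + h for x, y, w, h in rects)
        boxes[group] = (x0, y0, x1 - x0, y1 - y0)
    return boxes

# Two tiles of one 4k monitor, each 1920x2160, side by side:
tiles = [("DP-1", "monitor-A", 0, 0, 1920, 2160),
         ("DP-2", "monitor-A", 1920, 0, 1920, 2160)]
print(group_outputs(tiles))  # {'monitor-A': (0, 0, 3840, 2160)}
```

The same computation covers the powerwall case: a 2x2 projector group just contributes four rectangles to one group.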
Granted, clients would probably hate this idea...
> keith.packard at intel.com