[Feature request] Multiple X servers on one graphics card?

Prof. Dr. Klaus Kusche klaus.kusche at computerix.info
Tue Aug 2 12:11:53 PDT 2011


On 2011-08-02 17:48, Alex Deucher wrote:
> On Tue, Aug 2, 2011 at 11:28 AM, Prof. Dr. Klaus Kusche
> <klaus.kusche at computerix.info>  wrote:
>> On 2011-08-02 16:34, Alex Deucher wrote:
>>>
>>> On Tue, Aug 2, 2011 at 10:22 AM, Prof. Dr. Klaus Kusche
>>> <klaus.kusche at computerix.info> wrote:
>>>>
>>>> On 2011-08-02 14:59, Alex Deucher wrote:
>>>>>
>>>>> On Mon, Aug 1, 2011 at 3:41 PM, Prof. Dr. Klaus Kusche
>>>>> <klaus.kusche at computerix.info> wrote:
>>>>>>
>>>>>> Hmmm, what about the opposite approach?
>>>>>> To me, it sounds simpler and more logical if the kernel always
>>>>>> creates one device node per output (or maybe dynamically, one per
>>>>>> connected output),
>>>>>> without any need for configuration or device assignment.
>>>>>
>>>>> You almost always have more connectors than display controllers (e.g.,
>>>>> you might have displayport, svideo, DVI-I and VGA, but only two
>>>>> display controllers so you can only use two of the connectors at any
>>>>> time).  Also certain combinations of connectors are not possible
>>>>> depending on the hw (e.g., the svideo and the VGA port may share the
>>>>> same DAC, so you can only use one or the other at the same time).
>>>>
>>>> Hmmm, for my purposes I was only thinking about new, current hardware,
>>>> not about previous-generation cards, and only about digital outputs:
>>>>
>>>> * The professional, high-quality solution would be ATI's FirePro 2460:
>>>> 4 mini Displayports, all active at the same time, single slot
>>>> (passive cooling, < 20 W, so that's a great energy saver, too,
>>>> competing with thin and zero clients,
>>>> and it's silent and long-lived)
>>>>
>>>> * The XFX HD-677X-Z5F3 most likely offers the most ports per Euro
>>>> and per unit of space:
>>>> 5 mini Displayports, all active at the same time, single slot,
>>>> for less than 100 Euro
>>>>
>>>> (this would result in 16 or 20 seats with any quad-crossfire
>>>> mainboard, and 28 or 35 seats with some server mainboards if the
>>>> BIOS is able to assign addresses to 7 graphics cards)
>>>>
>>>> Even the low-cost 6450 supports 3 and the 6570 supports 4
>>>> independent simultaneous outputs, so any ATI 6xxx card can drive
>>>> all its outputs at the same time
>>>> (and I believe that was also true for the ATI 5xxx series).
>>>> However, cards with 3 or 4 digital outputs are hard to find
>>>> in that price range... (the XFX HD6570 is one of them)
>>>>
>>>> But you're correct, my suggestion above needs to be refined:
>>>> One DRI device per display controller.
>>>
>>> Even then it gets a little tricky.  AMD cards are fairly flexible, but
>>> some other cards may have restrictions about which encoders can be
>>> driven by which display controllers.  Then how do you decide which
>>> display controller gets assigned to which connector(s)?  You also need
>>> to factor in things like memory bandwidth.  E.g., a low end card may
>>> not be able to drive four huge displays properly, but can drive four
>>> smaller displays.
>>
>> What is your suggestion to "do things right"?
>> How would you assign DRI device nodes to multiple monitors?
>> Do you have better suggestions for building multi-seat systems
>> beyond 4 seats with 4 single-output cards?
>>
>> How does xrandr currently solve those problems?
>> It might also "see" more outputs than there are display controllers,
>> it has the same job of assigning connectors to display controllers,
>> and it also has the problem that setting all outputs to their
>> maximum resolution might cause the card to run out of memory bandwidth.
>> So either the logic needed is already there,
>> or the problems are not multiseat-specific,
>> but affect today's multi-screen environments in general.
>>
>> I think there is no need to do better than xrandr currently does.
>> In fact, that's the multiseat solution we have today:
>> Configure one X server (most likely using xrandr) with one huge display
>> spanning all the outputs and monitors connected to one card,
>> and start one Xephyr per monitor and user within that display.
>> This just lacks any acceleration and Xv.
>>
>> (the fact that xrandr already seems to handle most of this
>> was one of the reasons why I suggested that the kernel should just
>> export every output the hardware offers to userland: I believed
>> that the userland already knows how to allocate and configure outputs,
>> and the only thing missing is the ability to access the same card
>> from more than one X server or to assign the outputs of one card
>> to two or more X servers)
>
> Some drivers can already do this (the radeon driver at least).  google
> for zaphod mode.  You basically start one instance of the driver for
> each display controller and then assign which randr outputs are used
> by each instance of the driver.  It already works and supports
> acceleration.  The problem is, each instance shows up as an X protocol
> screen rather than a separate X server, so you'd have to fix the input
> side.

Hmmmm...
* Zaphod mode seems to have even fewer active users, less support, and
less of a future than Xephyr, so putting work into it doesn't seem to
be a future-proof investment.
* It would be of interest only if it were possible to configure two
zaphod driver instances, assigned to two different outputs (but the
same PCI ID!), in two *different* X servers, but I'm quite sure that's
not supported...
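(For reference, this is roughly what a zaphod-mode xorg.conf looks like
on the radeon driver today - two instances of the driver on the same
BusID, each pinned to one connector via "ZaphodHeads"; the BusID and
connector names below are only examples, and both screens still belong
to a single X server:

  Section "Device"
      Identifier  "Radeon-Head0"
      Driver      "radeon"
      BusID       "PCI:1:0:0"
      Screen      0
      Option      "ZaphodHeads" "DisplayPort-0"
  EndSection

  Section "Device"
      Identifier  "Radeon-Head1"
      Driver      "radeon"
      BusID       "PCI:1:0:0"
      Screen      1
      Option      "ZaphodHeads" "DisplayPort-1"
  EndSection

Each Device then gets its own "Screen" section, and both screens are
listed in one "ServerLayout" - which is exactly the limitation above:
one server, two protocol screens.)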

>> I also believe and accept that there will be no solution
>> supporting all graphics cards existing today and 10 years back.
>> Only some cards offer KMS, only some cards offer 3D acceleration,
>> some older cards don't even offer dual-screen support for one X server,
>> only some cards will offer multi-seat support in future.
>> If somebody wants to build a high-density shared-card multiseat system,
>> he has to choose suitable hardware.
>
> Even if you only support KMS-supported hardware, which seems reasonable
> to me, you still have a lot of cards out there with interesting
> display configurations.  We still make cards with DVI and VGA ports on
> them or more connectors than display controllers.  You don't really
> want to go through the whole effort if it only works on a handful of
> special cards.  It wouldn't take that much more work to come up with
> something suitable for the majority of hardware that is out there.  In
> most cases, it will probably be a custom configuration for each
> person, so as long as the right knobs are there, you can configure
> something that works for your particular system.

Any idea what "something suitable" could be,
or what the missing "right knobs" are?

Back to the beginning of the discussion:
The primary interest is not how to configure outputs.
xrandr already does that, and that should be used, not duplicated.
We only want what xrandr is able to do today (at most), not more.
However, we want it for more than one X server.
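Concretely, "what xrandr is able to do today" is the kind of thing a
single server can already express with plain xrandr calls (the output
names below are just examples and differ per card):

  # query the outputs and modes the driver exposes
  xrandr --query

  # drive two outputs side by side at fixed modes
  xrandr --output DisplayPort-0 --mode 1920x1080 --pos 0x0 \
         --output DisplayPort-1 --mode 1920x1080 --pos 1920x0

  # release one output again
  xrandr --output DisplayPort-1 --off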

The central question is:
How do two or more X servers share access to a single graphics engine?
The second question is:
How do xrandr outputs get assigned to X servers such that each server
gets exclusive access to its outputs?
And the third item on the todo list is perhaps tightening security
and server/user separation...

Airlied's prototype implementation was a working demonstration
of the first item (for radeon only).
His suggestion for the second question was purely kernel-based
(using configfs). If I understood it correctly:
* For each card, first configure the number of DRM devices
you want to have (one per X server).
* Then, assign xrandr outputs to these devices.

This way, each X server opening its render device should only "see"
the outputs assigned to that device.
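Purely to illustrate the two steps (the paths and file names below are
invented - no such configfs hierarchy exists in any kernel today, this
is only how I picture the proposal):

  # step 1: split card0 into two DRM devices, one per X server
  mkdir /config/drm/card0/seat0
  mkdir /config/drm/card0/seat1

  # step 2: assign the xrandr outputs to the two devices
  echo DisplayPort-0 > /config/drm/card0/seat0/connectors
  echo DisplayPort-1 > /config/drm/card0/seat1/connectors

Each X server would then open only "its" device node and never see the
other server's outputs.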

Is this agreed?
Any alternatives?

Basically, I think multiseat configurations will be static in most
cases - after all, a multiseat configuration usually has
quite a cabling mess, with a fixed number of monitors,
each having a keyboard and mouse, which are statically mapped
to fixed evdev devices using udev. Re-cabling is an effort anyway,
so editing a config file in this case would be acceptable (after all,
most existing multi-Xephyr solutions are also statically configured).
Hence several xorg.conf files selected with -config, or one large
xorg.conf with several layouts selected with -layout, would suffice,
each specifying input devices, a graphics card, and an xrandr output,
similar to zaphod mode.
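As a sketch of what one such per-seat configuration could look like
(connector names, device paths and the output restriction are made up
for illustration - I reuse the existing "ZaphodHeads" option only as a
stand-in for whatever mechanism eventually assigns outputs to a server;
the server would be started with e.g. "X :1 -config xorg.conf.seat1"):

  Section "ServerLayout"
      Identifier   "Seat1"
      Screen       "Screen-Seat1"
      InputDevice  "Kbd-Seat1"   "CoreKeyboard"
      InputDevice  "Mouse-Seat1" "CorePointer"
  EndSection

  Section "ServerFlags"
      Option       "AutoAddDevices" "off"  # don't grab other seats' input
  EndSection

  Section "Device"
      Identifier   "Card0-Seat1"
      Driver       "radeon"
      BusID        "PCI:1:0:0"
      Option       "ZaphodHeads" "DisplayPort-1"  # this seat's output
  EndSection

  Section "Screen"
      Identifier   "Screen-Seat1"
      Device       "Card0-Seat1"
  EndSection

  Section "InputDevice"
      Identifier   "Kbd-Seat1"
      Driver       "evdev"
      Option       "Device" "/dev/input/seat1-kbd"    # udev-created symlink
  EndSection

  Section "InputDevice"
      Identifier   "Mouse-Seat1"
      Driver       "evdev"
      Option       "Device" "/dev/input/seat1-mouse"  # udev-created symlink
  EndSection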

So what would be needed to make that work?

If someone wants dynamic configuration without xorg.conf,
I think the only thing needed is some bookkeeping in the kernel
about which server is currently using which xrandr output:
* If a server is started without specific configuration,
it just grabs the next available output
(or all unused outputs on that card?).
* If a server activates an unused output, this output should
be assigned to it exclusively until it is disabled.
* If a server tries to activate an output already in use by
another server, it should get an error.
* If a server disables an output, this output becomes available
to other servers.

What would be needed for that? Is the information about enabled
and disabled outputs currently stored in the kernel or in userland?
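(If I read the current sysfs layout correctly, KMS drivers already
export the per-connector state to userland, e.g.

  cat /sys/class/drm/card0-DVI-D-1/status    # connected / disconnected
  cat /sys/class/drm/card0-DVI-D-1/enabled   # enabled / disabled
  cat /sys/class/drm/card0-DVI-D-1/modes     # modes read from the EDID

with the connector name varying per card - but I don't know whether
"who is currently driving this output" is tracked anywhere.)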


Klaus.

-- 
Prof. Dr. Klaus Kusche
Private address: Rainstraße 9/1, 88316 Isny, Germany
+49 7562 6211377 Klaus.Kusche at computerix.info http://www.computerix.info
Office address: NTA Isny gGmbH, Seidenstraße 12-35, 88316 Isny, Germany
+49 7562 9707 36 kusche at nta-isny.de http://www.nta-isny.de


