[Feature request] Multiple X servers on one graphics card?

Alan Cox alan at lxorguk.ukuu.org.uk
Mon Aug 1 13:22:56 PDT 2011

On Mon, 1 Aug 2011 20:47:42 +0100
Dave Airlie <airlied at gmail.com> wrote:

> >
> > Hmmm, what about the opposite approach?
> > To me, it sounds simpler and more logical when the kernel always creates
> > one device node per output (or maybe dynamically per connected output),
> > without any need for configuration or device assignment.
> It just doesn't fit in with how the drm device nodes work. It might seem
> simpler in the kernel, but I think it would just complicate userspace.

It also doesn't fit some cases of reality (e.g. the USB DisplayLink stuff)
where the output and the GPU are effectively decoupled.
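To illustrate the model being discussed: on a typical Linux system, DRM exposes one device node per card (/dev/dri/card0), while the card's outputs show up only as sysfs connector entries beneath it (card0-HDMI-A-1, card0-eDP-1, ...). A minimal sketch, assuming the standard /sys/class/drm layout (the helper name and the sysfs parameter are illustrative, not part of any real API):

```python
import glob
import os

# Sketch: list the connectors (outputs) that hang off one DRM card.
# Assumes the usual sysfs layout where each connector is a directory
# named "<card>-<connector>" containing a "status" file reporting
# "connected" / "disconnected". The sysfs parameter exists only so the
# sketch can be pointed at a fake tree for testing.
def connectors(card="card0", sysfs="/sys/class/drm"):
    found = {}
    for path in sorted(glob.glob(os.path.join(sysfs, card + "-*"))):
        name = os.path.basename(path)
        status = "unknown"
        status_file = os.path.join(path, "status")
        if os.path.exists(status_file):
            with open(status_file) as f:
                status = f.read().strip()
        found[name] = status
    return found
```

The point of the thread is visible here: userspace opens one node per GPU and discovers however many outputs that GPU happens to drive, rather than opening a separate device node per output.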

There are also some interesting security issues with a lot of GPUs: you'd
be very hard pushed to stop one task spying on the display of another, as
there isn't much in the way of MMU contexts on the GPU side.
