[Feature request] Multiple X servers on one graphics card?
e0425955 at student.tuwien.ac.at
Tue Aug 2 08:43:13 PDT 2011
On 08/01/2011 10:22 PM, Alan Cox wrote:
> On Mon, 1 Aug 2011 20:47:42 +0100
> Dave Airlie <airlied at gmail.com> wrote:
>>> Hmmm, what about the opposite approach?
>>> To me, it sounds simpler and more logical when the kernel always creates
>>> one device node per output (or maybe dynamically per connected output),
>>> without any need for configuration or device assignment.
>> It just doesn't fit in with how the drm device nodes work, like it might seem
>> simpler in the kernel but I think it would just complicate userspace.
> It also doesn't fit some cases of reality (eg the USB displaylink stuff)
> where the output and the GPU are effectively decoupled.
> There are also some interesting security issues with a lot of GPUs where
> you'd be very very hard pushed to stop one task spying on the display of
> another as there isn't much in the way of MMU contexts on the GPU side.
Actually, GeForce 8 and later have proper (and working) virtual memory,
i.e. a per-context page directory and page tables.
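The node-per-GPU model being debated above can be illustrated with a small
conceptual sketch (hypothetical names, not the real libdrm API): today
userspace opens one device node per GPU and enumerates all of its connectors
through it, while the proposal would expose one node per output. A DisplayLink
adapter already appears as its own node, since its output is decoupled from
any rendering GPU.

```python
# Conceptual sketch only -- the node paths, class, and helper are illustrative
# assumptions, not the actual DRM/libdrm interface.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    node: str                          # e.g. "/dev/dri/card0"
    connectors: list = field(default_factory=list)

# Today: one node per GPU; all outputs are enumerated through that node.
card0 = Gpu("/dev/dri/card0", ["DVI-I-1", "HDMI-A-1", "LVDS-1"])

def connectors_for(gpu):
    """All outputs reachable through a single DRM device node."""
    return gpu.connectors

# A USB DisplayLink device is effectively output-only and decoupled from the
# rendering GPU, so it shows up as a separate node of its own.
displaylink = Gpu("/dev/dri/card1", ["DVI-I-2"])

# Under the proposed per-output scheme, each connector of card0 would instead
# become its own device node -- three nodes for this one GPU:
per_output_nodes = [f"/dev/dri/output-{c}" for c in connectors_for(card0)]
print(per_output_nodes)
```

This makes the userspace-complexity objection concrete: a mode-setting client
would have to open and coordinate several nodes that all belong to one GPU,
instead of one node that describes the whole device.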