VGA Arbiter

Benjamin Herrenschmidt benh at
Fri Oct 26 21:18:01 PDT 2007

On Fri, 2007-10-26 at 19:39 -0700, Keith Packard wrote:
> On Sat, 2007-10-27 at 11:35 +1000, Benjamin Herrenschmidt wrote:
> > The interface was something I quickly threw together, it doesn't
> > necessarily have to be that way. In fact, I wonder if a sysfs file might
> > be better. Anyway, it's good to have a proof of concept to work from.
> Does this have to be visible from user mode at all? With kernel mode
> modesetting and graphics access, would it not be better to just provide
> this as a library to the internal graphics drivers and avoid exposing it
> to user mode?
> > This is important because you cannot reliably use interrupts for
> > example, like the DRM does on these, if your MMIO decoding can be
> > switched off by another cards driver trying to get to its own legacy IOs
> > or memory space.
> With the arbitration handled from within the kernel, I'm wondering if
> we can't make even interrupts work...

So I had a look at the code I did back then, and I did indeed add a
vga_tryget() function that can be used at interrupt time. If it fails,
the driver has to disable_irq(), schedule a workqueue, and do a proper
vga_get() from there. That can be pretty bad if the line is shared with
other devices, since the interrupt stays masked for a while.

So yes, you can, but it's really better to remove yourself from the
arbitration entirely, if you can disable decoding of the legacy regions
on your card during normal operation.

That is what the vga_set_legacy_decoding() call does (or the "decodes"
part of the user API).

Another thing that comes to mind, which might conflict with the
modularity of the whole thing, is that we need to deal with the text
console, which probably means hooking something like vga_get/put(NULL)
into the VT code when switching in or out of KD_TEXT.
