VGA arbitration: API proposal

Benjamin Herrenschmidt benh at kernel.crashing.org
Fri Mar 4 15:44:25 PST 2005


I've thought a lot about it; it's nasty...

The problem is that as soon as a card decodes legacy addresses, it must
not be left enabled for IO/MEM accesses when another card needs to do
VGA things.

I think what we need is an arbiter with the following kind of APIs
(which could be built on top of some sysfs manipulation & kernel
support, or hidden in a library if the kernel is out of the loop); a
rough C sketch of the whole interface follows the list:

 - vga_set_legacy_decoding(card, io_state, mem_state);

   This one would inform the arbiter whether your card is decoding VGA
accesses or not. It should be called by the driver; by default, the
arbiter assumes all cards are decoding VGA addresses. I need to double
check what is needed for a radeon to stop decoding VGA accesses, I'm not
100% sure about that. CRTC_EXT_DISP_EN plus some other bit in DAC_CNTL
may be enough, but we should make sure of that. I think nVidia cards can
remap the VGA IOs to some other place in PCI space with real decoding,
so they become a non-issue.

   We should probably separate VGA IO and VGA memory decoding, since
some cards may be able to disable the latter but not the former. Since a
lot of cards can be used entirely without doing any IO (like radeons:
you can operate a radeon entirely without a single IO access), that case
can be dealt with by leaving IO disabled in the PCI cmd register.

 - cookie = vga_get(card, io/mem);

   Request the VGA IO and/or memory resources. This triggers the
disabling of IO/MEM on any other card that still has decoding of them
enabled.

 - vga_put(cookie);

   Release.
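
To make this concrete, here is a rough C sketch of the whole proposed
interface; all names, types and flag values here are placeholders,
nothing is final:

    /* vgaarb.h -- sketch only, names/types/values are placeholders */

    #include <linux/pci.h>

    #define VGA_RSRC_IO    0x01   /* legacy VGA IO (0x3b0-0x3bb, 0x3c0-0x3df) */
    #define VGA_RSRC_MEM   0x02   /* legacy VGA memory (0xa0000-0xbffff) */

    /* Tell the arbiter whether this card decodes legacy VGA IO and/or
     * memory cycles; by default every card is assumed to decode both. */
    void vga_set_legacy_decoding(struct pci_dev *card,
                                 int io_state, int mem_state);

    /* Acquire the legacy VGA resources given in 'rsrc'. Any other card
     * still decoding them gets its IO/MEM enables turned off in its PCI
     * command register. Returns a cookie for vga_put(), or NULL on
     * failure (whether it blocks or fails when busy is to be decided). */
    void *vga_get(struct pci_dev *card, unsigned int rsrc);

    /* Release the resources acquired by the matching vga_get(). */
    void vga_put(void *cookie);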

I don't think we need a callback to "clients" to inform them that they
are losing VGA access; this would be too complicated to deal with, as
clients can be separate processes and/or live in kernel land.

For this to work, any VGA access must be bracketed by vga_get/vga_put.
The implementation can be "lazy" (the actual cmd register switching is
only done when an actual change of ownership is needed).
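
As an illustration, bracketing in a driver could look like this (a
sketch using the hypothetical names from above; needs linux/pci.h,
linux/errno.h and asm/io.h):

    /* hypothetical example: bracketing a legacy VGA access */
    static int touch_vga_regs(struct pci_dev *card)
    {
            void *cookie = vga_get(card, VGA_RSRC_IO | VGA_RSRC_MEM);
            if (cookie == NULL)
                    return -EBUSY;
            /* safe to touch legacy VGA IO/memory until vga_put() */
            outb(0x01, 0x3c4);      /* e.g. select a sequencer register */
            vga_put(cookie);        /* actual re-enables may happen lazily */
            return 0;
    }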

That also puts some restrictions on interrupts, since I expect the above
API to be callable only from normal process context (otherwise it's
impossible to deal with userland clients properly). It's not suitable to
be called from an interrupt, since vga_get may fail or block (to be
decided; maybe we can have 2 versions, as sketched below...)
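
If we do end up with 2 versions, I'd imagine something like this
(sketch; vga_tryget is a made-up name):

    /* may block waiting for ownership; process context only */
    void *vga_get(struct pci_dev *card, unsigned int rsrc);

    /* never blocks; returns NULL if the resources are busy */
    void *vga_tryget(struct pci_dev *card, unsigned int rsrc);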

Typically, a driver should only enable IRQ generation on the card in one
of these circumstances:

 - It has VGA decoding disabled entirely (it called
vga_set_legacy_decoding with no IO and no MEM decoding set). That
basically puts the card out of the arbitration domain; see the sketch
after this list.

 - It has VGA decoding enabled only for IO (decoding of the VGA memory
aperture is disabled, that is, it called vga_set_legacy_decoding with
only IO set) and the interrupt handler doesn't need to do IO accesses.
That is, it can afford to take interrupts while IO accesses are disabled
in the config space (memory accesses stay enabled).

 - It holds the VGA semaphore (vga_get), but I don't recommend this
scenario unless the driver knows the interrupt will come very soon
(maybe suitable for vblank waiting).
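
In driver terms, the first case boils down to something like this
(enable_card_irqs is a made-up, hardware-specific helper):

    /* case 1 sketch: take the card out of the arbitration domain, then
     * IRQs can be enabled without any restriction on the handler */
    vga_set_legacy_decoding(card, 0, 0);   /* decodes no IO, no MEM */
    enable_card_irqs(card);                /* hypothetical hw helper */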

Note that a driver that called vga_set_legacy_decoding(0,0) can still
use vga_get()/vga_put() in case it needs to temporarily re-enable
decoding, for things like POSTing; a possible sequence is sketched
below. It must just make sure it re-disables decoding on the card before
vga_put().
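
A possible POSTing sequence, with made-up hardware-specific helpers:

    /* sketch: temporarily rejoin the arbitration domain to POST */
    void *cookie = vga_get(card, VGA_RSRC_IO | VGA_RSRC_MEM);
    if (cookie != NULL) {
            card_enable_vga_decode(card);    /* hypothetical helper */
            card_post(card);                 /* run the video BIOS POST */
            card_disable_vga_decode(card);   /* re-disable before release! */
            vga_put(cookie);
    }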
 
Egbert, Jon, any comments?

It could be implemented as a userland library for "default" OSes, with
that library using kernel facilities (to be written) on OSes that
provide them (Linux).

As far as Linux is concerned, we need the toggling of VGA access to be
done under an in-kernel semaphore shared between userland (via a sysfs
toggle) and in-kernel drivers (vgacon), along with some indication (in
sysfs too) of the decoding mode of the card as set by the driver. That
mode could be set by the kernel driver or by X; I suppose if they have
different settings, they can switch it at VT switch time.
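
As a strawman, the sysfs side could expose something like this per PCI
device (an entirely hypothetical layout, just to make the idea
concrete):

    # hypothetical attributes, names made up
    /sys/bus/pci/devices/0000:01:00.0/vga_decodes  # "io+mem", "io", "mem", "none"
    /sys/bus/pci/devices/0000:01:00.0/vga_lock     # write to acquire/release
                                                   # the VGA semaphore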

Ben.
