PCI rework
Egbert Eich
eich at suse.de
Fri May 5 09:06:48 PDT 2006
Jesse Barnes writes:
> On Sunday, April 30, 2006 12:52 pm, Mark Kettenis wrote:
> > I'm talking specifically about pci_device_map_region() and
> > pci_device_unmap_region(). These interfaces are clearly there
> > because the Linux sysfs provides these "regions" as files which you
> > can open and then mmap. Other operating systems don't support this
> > view, and there's a good reason not to do it: Many PCI devices put
> > mappable memory addresses in config space outside the standard PCI
> > BARs.
>
> You mean for ISA legacy port space? Yes, that's another area that needs
> addressing (I mentioned it in the early threads about libpciaccess),
> but beyond that I'm not sure what you mean?
I know of a single PCI device - an S3 card - that was used beyond its
specs by one vendor.
I'm sure that it is not supported by anything later than XFree86 3.3, as
the only person who knows about it never ported the XFree86 3.3 driver
over to a later version.
(Had I thought of this an hour ago, I would have asked the guy -
he was with me here at the show.)
Apart from this I only know about the legacy ranges in memory and I/O
space. In I/O space these are the VGA ranges and the ... 8514 sparse
ranges.
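
For anyone following along, here is a minimal sketch of the mechanism
Mark describes: on Linux the per-device sysfs resource files can simply
be open()ed and mmap()ed, which is what pci_device_map_region() builds
on there. The device path below is only an example and error handling
is cut to the bone:

/* Sketch: map BAR 0 of a (hypothetical) device through Linux sysfs. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {
        perror("fstat");
        return 1;
    }

    /* The file length is the BAR size; mmap() hands back a CPU
     * virtual address for the device memory behind it. */
    void *base = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("BAR 0 (%lld bytes) mapped at %p\n",
           (long long)st.st_size, base);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}
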
>
> > What you really need is an interface to read from PCI config
> > space and an interface to map physical addresses on the PCI bus into
> > memory. Some standard helper functions to decode the standard BARs
> > is certainly desirable.
>
> In my experience that's the wrong interface. Reading PCI config space
> isn't enough to tell you what or where memory should be mapped. Only
> the OS knows enough to actually map things correctly, e.g. in the case
> of PCI bridges of various types (host->pci, pci->pci, etc.). Hiding
> this knowledge behind a library (ideally one that talks to the OS) is
> the only way to go.
Yes, we made this mistake when we implemented that code years ago.
We looked at the PC and did everything needed there. Later on we discovered
that others had different requirements - a lot of the kludges that people
complain about in this thread are due to this. The first platform we learned
this on was AXP. Others were to follow, but AXP was really the craziest.
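
To make Mark's suggestion of standard helpers concrete, here is a rough
sketch of what decoding the flag bits of a raw 32-bit BAR value could
look like - with the caveat Jesse gives above that the value alone does
not tell you how to map the range on every platform. The structure and
names are purely illustrative, not an existing API:

#include <stdbool.h>
#include <stdint.h>

struct bar_info {
    bool     is_io;         /* I/O port range rather than memory    */
    bool     is_64bit;      /* memory BAR consumes two BAR slots    */
    bool     prefetchable;  /* memory BAR marked prefetchable       */
    uint64_t base;          /* low bits of the decoded base address */
};

static struct bar_info decode_bar(uint32_t bar)
{
    struct bar_info info = { 0 };

    if (bar & 0x1) {                     /* bit 0: I/O space indicator */
        info.is_io = true;
        info.base  = bar & ~0x3u;        /* bits [1:0] are flags       */
    } else {
        info.is_64bit     = ((bar >> 1) & 0x3) == 0x2;  /* type field  */
        info.prefetchable = (bar >> 3) & 0x1;
        info.base         = bar & ~0xFu; /* bits [3:0] are flags       */
        /* For a 64-bit BAR the following BAR register holds the
         * upper 32 bits of the base address; not shown here. */
    }
    return info;
}
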
>
> > > Note that the prototype is different in that it takes a full pci
> > > info structure rather than just a tag. This gives arch specific
> > > implementations more flexibility and eases porting.
> >
> > But a PCITAG is already opaque; there's no reason why you couldn't
> > extend it to include the additional information you might need.
>
> It's true that arch implementations could use PCITAG as some sort of
> mapping key, but passing a whole structure makes things easier since
> you can just have an arch specific void * containing an arch specific
> structure.
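
Purely as an illustration of that last point, such a structure could
carry nothing more than the usual addressing fields plus an opaque,
arch-owned pointer; none of these names correspond to an existing
interface:

#include <stdint.h>

struct pci_device_info {
    uint16_t domain;        /* segment/domain number */
    uint8_t  bus, dev, func;
    uint16_t vendor_id, device_id;

    void    *arch_private;  /* whatever the port needs: a sysfs fd,
                             * a /dev/pci handle, a firmware node, ... */
};
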
>
> > > I disagree with this, the xf86Pci interface is pretty screwy:
> > > o X does things with PCI devices it has no business doing (e.g.
> > > remapping BARs)
> >
> > Remapping as in writing different addresses into them? X might need
> > to do that if the firmware doesn't properly initialize them. I've
> > seen many, many buggy firmware implementations (ok they're mostly
> > BIOSes) that don't do this properly.
>
> X shouldn't be working around buggy BIOSes. If this knowledge belongs
> anywhere, it's in the OS (or at the very least some sort of fairly
> centralized library).
Yes, in a perfect world. But not all worlds are perfect (yet).
>
> > > o the distinction between mapping domain and regular PCI memory
> > > is arbitrary and should be removed
> >
> > I'm not quite sure what is actually meant by a "domain". I'm
> > presuming it's similar to what the ACPI specification calls a
> > "segment": a completely seperate PCI bus hierarchy. Yes, the way the
> > current interface handles that is awkward, but it should be easy to
> > fix this if you add the domain to the PCITAG.
>
> I'm talking about xf86MapDomainMemory vs. xf86MapPciMem, but it sounds
> like we agree that it's a silly distinction. PCI domains (or segments
> in ACPI as you mention) need to be dealt with on a high level.
Yes, it does. I don't know the reason behind this distinction - other than
laziness about fixing the former interface and all its consumers.
>
> > > o the PCI device discovery code needed by drivers is
> > > unnecessarily complex
> >
> > I'm not so sure about that. Some amount of complexity will be needed
> > to deal with badly designed or buggy hardware and firmware. Most of
> > these issues will be specific to particular PCI hardware. Shoving
> > those into the domain of a separate library, or further down into
> > the operating system isn't a solution.
>
> Well, fundamentally a driver wants to bond to a particular PCI device
> (or class of devices). The current code makes that more difficult than
> it needs to be (though it's not that bad I suppose).
Well, there are certain provisions to persuade the code to use a different
PCI ID than the device reports - these are there to persuade the driver
to run on a newer generation of hardware.
This has been useful in some cases.
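
As a hedged sketch of what Jesse calls bonding to a device: a driver
essentially walks a vendor/device match table, and the override I
mention above boils down to substituting a user-supplied device ID
before the lookup. The names and the second ID here are illustrative
only:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct pci_id_match {
    uint16_t vendor_id;
    uint16_t device_id;
};

static const struct pci_id_match supported[] = {
    { 0x5333, 0x8811 },   /* S3 Trio32/64, as an example   */
    { 0x5333, 0x8a01 },   /* another S3 part, illustrative */
};

static bool driver_matches(uint16_t vendor, uint16_t device,
                           int override_device /* -1 = none */)
{
    if (override_device >= 0)
        device = (uint16_t)override_device;  /* user-supplied chip ID */

    for (size_t i = 0; i < sizeof(supported) / sizeof(supported[0]); i++)
        if (supported[i].vendor_id == vendor &&
            supported[i].device_id == device)
            return true;
    return false;
}
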
>
> > > o ROM mapping is hard to port and buggy in some cases
> >
> > By their very nature ROMs are unportable. I really can't see how
> > libpciaccess would alleviate that situation. The X hardware drivers
> > really should try to avoid depending on ROMs.
>
> I agree with the last statement, but ROMs can be supported with
> emulation, and for many devices that's the only way to make them usable
> (i.e. if you don't POST them with a ROM you can't really program them
> at all). So getting at the ROMs in a portable way is an important
> feature for non-x86 platforms.
There are numerous reasons why you want the ROM content - we are living
in an imperfect world.
Also, the ROM is the only source of information on the details of the
hardware in a system (for example memory size, layout, limits, etc.).
That's why we use the ROM to POST secondary cards instead of trying to
do it ourselves.
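
For what it's worth, on Linux one way to get at the ROM content is the
sysfs "rom" attribute, which has to be enabled by writing "1" to it
before it can be read. The device path below is hypothetical and the
fixed-size buffer is only for the sketch:

#include <stdio.h>

int main(void)
{
    const char *rom_path = "/sys/bus/pci/devices/0000:01:00.0/rom";

    FILE *rom = fopen(rom_path, "r+");
    if (!rom) {
        perror("fopen");
        return 1;
    }

    /* Enable the ROM attribute; until then it reads back empty. */
    fputs("1", rom);
    fflush(rom);
    rewind(rom);

    /* Read the image; a real consumer would verify the 0x55AA
     * signature and walk the PCI data structure inside it. */
    unsigned char buf[64 * 1024];
    size_t n = fread(buf, 1, sizeof(buf), rom);
    printf("read %zu bytes of ROM image\n", n);

    /* Disable the attribute again. */
    rewind(rom);
    fputs("0", rom);
    fclose(rom);
    return 0;
}
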
>
> > There are not many applications that need a PCI abstraction layer.
> > Apart from X and some debugging tools to look at PCI config space I
> > can't really think of any applications that need them.
>
> Well, they're out there. I don't really like the idea of userspace
> drivers in general, but for certain devices a userspace driver is the
> path of least resistance. And there are quite a few of them out there,
> mostly used for special purpose applications.
>
> > The debugging
> > tools used on Linux, pciutils, already come with their own
> > abstraction layer. So libpciaccess was basically developed just for
> > X, and is unlikely to be used for any other software packages. I
> > really don't see your point, especially since adapting them requires
> > changes to almost all X drivers, even those that don't abuse the
> > current PCI abstraction layer.
>
> Yeah, it's a shame that Ian had to write his own, but pciutils doesn't
> have a license suitable for many projects...
I think we need to go far beyond what pciutils does.
Ciao,
Egbert.