AGP GART clarifications, please!

Émeric MASCHINO emeric.maschino at gmail.com
Thu Jun 19 10:17:32 PDT 2014


DRI gurus,

If I'm not mistaken, the current Linux graphics stack is as follows
(excluding the Wayland protocol and LLVM- or GLAMOR-based approaches):

X11/OpenGL app -> libX/Mesa -> DDX driver/Mesa DRI module -> kernel
DRM -> hardware

What's unclear to me is where, in the case of an AGP graphics
adapter, the AGP GART fits into this stack (if applicable).

Say I have an AGP ATI R300-based graphics adapter. In the above
stack, the DDX driver is xf86-video-ati, the Mesa DRI module is r300
(Classic or Gallium3D) and the kernel DRM is radeon. (Am I still right?)

This AGP graphics adapter nevertheless works flawlessly without AGP
GART compiled into the kernel or as a module; I've tested this, at
least with the open-source stack. Is my AGP graphics adapter thus
running in what's known as PCI/PCIe mode? I've read all about AGP
scatter/gather, texturing and fast writes, but I can't see any
performance difference between having AGP GART compiled into the
kernel or as a module and having no AGP GART at all. Is it because my
usage doesn't stress the graphics subsystem enough, or is it because
PCI/PCIe mode is so amazing that AGP GART doesn't provide any
performance enhancement? AGP GART does, however, provide me with nice
stability issues ;-)
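
To make the question about where AGP GART sits in this stack more
concrete, here is how I currently picture the kernel side of it. This
is only a minimal sketch: the <linux/agp_backend.h> calls are the
real in-kernel agpgart interface, but the function name and the
overall flow are just my guess at how a DRM driver binds scattered
system pages into the AGP aperture so the GPU sees them as one
contiguous, GART-translated region:

#include <linux/pci.h>
#include <linux/agp_backend.h>

static int drv_agp_bind_example(struct pci_dev *pdev, size_t pages)
{
        struct agp_bridge_data *bridge;
        struct agp_memory *mem;
        int ret;

        bridge = agp_backend_acquire(pdev);
        if (!bridge)
                return -ENODEV;         /* no agpgart driver bound */

        /* Allocate 'pages' GART entries (type 0 = default memory type). */
        mem = agp_allocate_memory(bridge, pages, 0);
        if (!mem) {
                agp_backend_release(bridge);
                return -ENOMEM;
        }

        /* Map them at the start of the AGP aperture. */
        ret = agp_bind_memory(mem, 0);
        if (ret) {
                agp_free_memory(mem);
                agp_backend_release(bridge);
        }
        return ret;
}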

When compiled into the kernel or as a module, is AGP GART used only
for 3D hardware acceleration, by the r300 Mesa DRI module (or by the
radeon DRM, or both?), or also by the xf86-video-ati DDX driver for
XAA/EXA acceleration? And what about video acceleration?

What happens when the AGP GART isn't compiled into the kernel or as a
module? Is it simply a matter of skipping a participant (the AGP
GART) in the graphics stack, or are there different code paths in the
DDX driver, Mesa DRI module and/or kernel DRM depending on whether
AGP GART is available?
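
For instance, I picture the init-time decision looking roughly like
the sketch below. This is purely illustrative and every name in it is
hypothetical; it is not meant to be the actual radeon code, just the
kind of branch I imagine:

#include <linux/types.h>

/* Hypothetical names throughout -- only a sketch of the decision I
 * imagine a DRM driver making at init time. */
enum drv_gart_backend { DRV_GART_AGP, DRV_GART_PCI };

struct drv_device {
        bool is_agp;                    /* AGP card on an AGP bus? */
        enum drv_gart_backend gart;
};

/* Hypothetical helper: acquire agpgart, enable the bridge, map the
 * aperture; fails when agpgart isn't built in or loaded. */
int drv_agp_init(struct drv_device *ddev);

static void drv_pick_gart(struct drv_device *ddev)
{
        if (ddev->is_agp && drv_agp_init(ddev) == 0) {
                /* The chipset's AGP GART translates GPU accesses to
                 * scattered system pages. */
                ddev->gart = DRV_GART_AGP;
        } else {
                /* The GPU's own on-card PCI GART does the same job;
                 * agpgart is simply not involved. */
                ddev->gart = DRV_GART_PCI;
        }
}

Is that roughly how it works, or is reality more entangled than a
single branch like this?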

Is the code path the same in the following two situations (see the
sketch after this list)?
- no AGP GART at all;
- AGP GART compiled into the kernel or as a module, but "options
radeon agpmode=-1" set in /etc/modprobe.d/radeon-kms.conf.

Is setting a different AGP mode (1x, 2x, 4x, 8x) in
/etc/modprobe.d/radeon-kms.conf only a hardware thing, or are there
different code paths taken in the various components of the graphics
stack depending on the current AGP mode?
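
From what I've read, I'd expect the selected rate to end up in the
mode word handed to the agpgart backend, roughly as sketched below.
agp_copy_info() and agp_enable() are the real in-kernel API, while
the helper name is mine; the rate bits follow the AGP 2.0 encoding
(bit 0 = 1x, bit 1 = 2x, bit 2 = 4x), and 8x, which needs the AGP 3.0
re-encoding of that field, is left out for brevity:

#include <linux/types.h>
#include <linux/agp_backend.h>

static void drv_set_agp_rate(struct agp_bridge_data *bridge, int rate)
{
        struct agp_kern_info info;
        u32 mode;

        agp_copy_info(bridge, &info);
        mode = info.mode & ~0x7UL;      /* drop the advertised rate bits */

        switch (rate) {
        case 4:  mode |= 0x4; break;
        case 2:  mode |= 0x2; break;
        default: mode |= 0x1; break;    /* fall back to 1x */
        }

        /* From here on the chosen rate is a bus-signalling matter
         * negotiated between chipset and card. */
        agp_enable(bridge, mode);
}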

What happens if you compile AGP GART into the kernel or as a module
with a PCI/PCIe graphics adapter? Is it simply ignored? How? Is this
out of Linux's control at the hardware level, or are there simply no
code paths taking advantage of the AGP GART in a PCI/PCIe graphics
stack?
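
I'd naively expect detection to boil down to a PCI capability check,
something like the sketch below. pci_find_capability() and
PCI_CAP_ID_AGP are real kernel interfaces; the wrapper name is made
up, and whether the radeon stack really does it this way is precisely
what I'm asking:

#include <linux/pci.h>

/* A PCI or PCIe card simply has no AGP capability in its
 * configuration space, so everything agpgart-related can be
 * skipped. */
static bool drv_device_is_agp(struct pci_dev *pdev)
{
        return pci_find_capability(pdev, PCI_CAP_ID_AGP) != 0;
}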

Finally, is this assertion from the "radeon-KMS with AGP gfxcards"
section of the radeonBuildHowTo [1] still true?

"AGP gfxcards have a lot of problems so if you have one it is good
idea to test PCI/PCIE mode using radeon.agpmode=-1."

Thanks,

     Émeric


[1] http://www.x.org/wiki/radeonBuildHowTo/

