DRI gurus,
If I'm not mistaken, the current Linux graphics stack is as follows (excluding the Wayland protocol and LLVM- or GLAMOR-based approaches):
X11/OpenGL app -> libX/Mesa -> DDX driver/Mesa DRI module -> kernel DRM -> hardware
What's unclear to me is, in the case of an AGP graphics adapter, where does the AGP GART fit into this stack (if applicable)?
Say I have an AGP ATI R300-based graphics adapter. In the above stack, the DDX driver is xf86-video-ati, the Mesa DRI module is r300 (Classic or Gallium3D) and the kernel DRM is radeon. (Am I still right?)
Obviously, this AGP graphics adapter nevertheless works flawlessly without AGP GART compiled into the kernel or as a module. This is at least true for the open source stack; I've tested it. Is my AGP graphics adapter thus running in what's known as PCI/PCIe mode? I've read all about the AGP scatter/gather, texturing and fast writes features, but I can't see any performance difference between having AGP GART compiled into the kernel or as a module and having no AGP GART at all. Is it because my usage doesn't stress the graphics subsystem enough, or is it because PCI/PCIe mode is so amazing that the AGP GART doesn't provide any performance enhancement? The AGP GART does, however, provide me with nice stability issues ;-)
When it is compiled into the kernel or as a module, is the AGP GART only used for 3D hardware acceleration by the r300 Mesa DRI module (or is it used by the radeon DRM? Or both?), or also by the xf86-video-ati DDX driver for XAA/EXA acceleration? And what about video acceleration?
What happens when the AGP GART isn't compiled into the kernel or as a module? Is it simply a matter of skipping a participant (the AGP GART) in the graphics stack, or are there different code paths in the DDX driver, Mesa DRI module and/or kernel DRM depending on whether the AGP GART is available?
Is the code path the same in the following situations:
- no AGP GART at all;
- AGP GART compiled into the kernel or as a module, but with "options radeon agpmode=-1" set in /etc/modprobe.d/radeon-kms.conf?
Is setting a different AGP mode (1x, 2x, 4x, 8x) in /etc/modprobe.d/radeon-kms.conf purely a hardware matter, or are different code paths taken in the various components of the graphics stack depending on the current AGP mode?
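For reference, what I mean by that is just a one-liner of this kind (the file name is my own choice, anything under /etc/modprobe.d/ should do):

    # /etc/modprobe.d/radeon-kms.conf
    # force PCI/PCIe GART mode instead of AGP:
    options radeon agpmode=-1
    # or select an explicit AGP speed (1x, 2x, 4x, 8x), e.g.:
    # options radeon agpmode=4
    # (equivalently, radeon.agpmode=-1 on the kernel command line when the
    # driver is built into the kernel)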
What happens if you compile AGP GART into the kernel or as a module with a PCI/PCIe graphics adapter? Is it simply ignored? How? Is it out of Linux's control at the hardware level, or are there simply no code paths taking advantage of the AGP GART in a PCI/PCIe graphics stack?
Finally, is this assertion from the "radeon-KMS with AGP gfxcards" section of the radeonBuildHowTo [1] still true?
"AGP gfxcards have a lot of problems so if you have one it is good idea to test PCI/PCIE mode using radeon.agpmode=-1."
Thanks,
Émeric
On 20 June 2014 03:17, Émeric MASCHINO <emeric.maschino@gmail.com> wrote:
> DRI gurus,
> If I'm not mistaken, the current Linux graphics stack is as follows (excluding the Wayland protocol and LLVM- or GLAMOR-based approaches):
> X11/OpenGL app -> libX/Mesa -> DDX driver/Mesa DRI module -> kernel DRM -> hardware
> What's unclear to me is, in the case of an AGP graphics adapter, where does the AGP GART fit into this stack (if applicable)?
With KMS, AGP support is now handled entirely by the kernel drivers.
AGP is just a GART that sits on the chipset side of the AGP port, along with faster lane speeds than plain PCI and weird enhancements like fast writes.
Early PCI GPUs couldn't access data like textures in main memory; you had to DMA stuff to the GPU. Some GPUs got PCI GARTs, which were very simple lookup tables from GPU linear addresses to host page addresses; however, these suffered from lots of bandwidth issues when being updated, so AGP put a GART on the chipset side to do the same. PCIe puts the GART back on the GPU side.
So to run in AGP mode you need a chipset-specific driver to manage the chipset's AGP GART and other features, which the GPU drivers can talk to.
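In kernel config terms, that's roughly the agpgart core plus whichever per-chipset driver matches the host bridge, something like:

    # core AGP/GART support
    CONFIG_AGP=m
    # plus the driver for the host chipset, e.g. one of:
    CONFIG_AGP_INTEL=m
    CONFIG_AGP_VIA=m
    CONFIG_AGP_AMD64=m

    # at runtime both should then show up:
    $ lsmod | grep agp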
> Obviously, this AGP graphics adapter nevertheless works flawlessly without AGP GART compiled into the kernel or as a module. This is at least true for the open source stack; I've tested it. Is my AGP graphics adapter thus running in what's known as PCI/PCIe mode? I've read all about the AGP scatter/gather, texturing and fast writes features, but I can't see any performance difference between having AGP GART compiled into the kernel or as a module and having no AGP GART at all. Is it because my usage doesn't stress the graphics subsystem enough, or is it because PCI/PCIe mode is so amazing that the AGP GART doesn't provide any performance enhancement? The AGP GART does, however, provide me with nice stability issues ;-)
I'm not sure how best to measure a speed difference; it should be noticeable with games and stuff, maybe not with plain desktop usage, anything that uploads or downloads lots of data or uses main RAM for textures.
Dave.
2014-06-20 2:06 GMT+02:00 Dave Airlie <airlied@gmail.com>:
> So to run in AGP mode you need a chipset-specific driver to manage the chipset's AGP GART and other features, which the GPU drivers can talk to.
Do the GPU drivers then talk differently to the graphics card when there's a GART? I mean, are there different code paths in the GPU drivers depending on the presence of a GART, or is the command stream built by the DRI module exactly the same with or without a GART? Next, in the kernel DRM, are there different code paths taken depending on the presence of a GART? Or is it something "transparently" managed at the hardware level (from the DRI module/kernel DRM point of view): the exact same data are sent to the graphics card through the DRI module and the kernel DRM, but, depending on the presence of a GART, they aren't handled the same way at the hardware level and are "intercepted" and managed differently (e.g. reorganized) by the GART before being effectively delivered to the graphics adapter?
> I'm not sure how best to measure a speed difference; it should be noticeable with games and stuff, maybe not with plain desktop usage, anything that uploads or downloads lots of data or uses main RAM for textures.
So, my OpenSceneGraph datasets, old Quake II/III demos and the SPECviewperf 7.1.1 benchmark are too limited nowadays ;-) But they were contemporary with my hardware, so they should be representative of that era.
Thanks Dave for this first batch of explanations,
Émeric
On 20 June 2014 18:27, Émeric MASCHINO <emeric.maschino@gmail.com> wrote:
> 2014-06-20 2:06 GMT+02:00 Dave Airlie <airlied@gmail.com>:
>> So to run in AGP mode you need a chipset-specific driver to manage the chipset's AGP GART and other features, which the GPU drivers can talk to.
> Do the GPU drivers then talk differently to the graphics card when there's a GART? I mean, are there different code paths in the GPU drivers depending on the presence of a GART, or is the command stream built by the DRI module exactly the same with or without a GART?
The userspace command streams are the same; the kernel driver is all that contains differences.
In the radeon driver, look for RADEON_IS_AGP to see the differences.
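For illustration only, the split has roughly this shape; the RADEON_IS_AGP flag is the real thing to grep for, while the struct layout and helper names below are made up for the example:

    /* Sketch of the AGP-vs-PCI(e) split in the kernel driver.  Only the
     * RADEON_IS_AGP flag is real; everything else here is invented for
     * illustration. */
    struct radeon_device { unsigned int flags; };
    #define RADEON_IS_AGP (1u << 0)            /* stand-in for the real bit */

    static int setup_agp_aperture(struct radeon_device *rdev) { (void)rdev; return 0; }
    static int setup_onchip_gart(struct radeon_device *rdev)  { (void)rdev; return 0; }

    static int init_gpu_gart(struct radeon_device *rdev)
    {
        if (rdev->flags & RADEON_IS_AGP)
            /* AGP card with agpgart available: use the chipset-side
             * GART/aperture that the AGP driver set up */
            return setup_agp_aperture(rdev);

        /* PCI/PCIe card, no agpgart, or agpmode=-1: program the GPU's
         * own GART instead */
        return setup_onchip_gart(rdev);
    }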
> Next, in the kernel DRM, are there different code paths taken depending on the presence of a GART? Or is it something "transparently" managed at the hardware level (from the DRI module/kernel DRM point of view): the exact same data are sent to the graphics card through the DRI module and the kernel DRM, but, depending on the presence of a GART, they aren't handled the same way at the hardware level and are "intercepted" and managed differently (e.g. reorganized) by the GART before being effectively delivered to the graphics adapter?
The GPUs have an internal memory layout; all the GART does is allow the GPU to see scatter/gather objects in main RAM as linear objects in its address space. So it's only about how the GPU accesses objects and how the CPU sets things up.
Nothing else, like shaders or command streams, is affected.
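If it helps to picture it, a GART is essentially just a page-remapping table. A toy model in C, purely illustrative with made-up addresses, not code from any real driver:

    #include <stdint.h>
    #include <stdio.h>

    #define GART_PAGE_SHIFT 12                       /* 4 KiB pages */
    #define GART_PAGE_SIZE  (1UL << GART_PAGE_SHIFT)
    #define GART_NUM_PAGES  4                        /* tiny aperture */

    /* entry i = bus address of the host page backing GPU page i */
    static uint64_t gart_table[GART_NUM_PAGES];

    /* what the driver does: point one GPU page at one host page */
    static void gart_bind(unsigned int gpu_page, uint64_t host_page_addr)
    {
        gart_table[gpu_page] = host_page_addr;
    }

    /* what the GPU's memory controller conceptually does on every access */
    static uint64_t gart_translate(uint64_t gpu_addr)
    {
        uint64_t page   = gpu_addr >> GART_PAGE_SHIFT;
        uint64_t offset = gpu_addr & (GART_PAGE_SIZE - 1);
        return gart_table[page] + offset;
    }

    int main(void)
    {
        /* three host pages scattered anywhere in RAM (made-up addresses)... */
        gart_bind(0, 0x7a345000);
        gart_bind(1, 0x0012b000);
        gart_bind(2, 0x66e01000);

        /* ...yet the GPU sees them as one contiguous 12 KiB object */
        printf("GPU 0x1008 -> host 0x%llx\n",
               (unsigned long long)gart_translate(0x1008));
        return 0;
    }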
Booting with radeon.benchmark=1 might show some difference.
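That is, radeon.benchmark=1 on the kernel command line, or the modprobe equivalent; the results end up in the kernel log, so roughly:

    # module option instead of a boot parameter:
    options radeon benchmark=1
    # then, after boot, look for the throughput numbers:
    dmesg | grep -i radeon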
Dave.