[RFC] Deprecate AGP GART support for Radeon/Nouveau/TTM
ckoenig.leichtzumerken at gmail.com
Fri May 22 10:41:37 UTC 2020
Am 20.05.20 um 18:18 schrieb Alex Deucher:
> On Wed, May 20, 2020 at 10:43 AM Christian König
> <ckoenig.leichtzumerken at gmail.com> wrote:
>> Am 13.05.20 um 13:03 schrieb Christian König:
>>> Unfortunately AGP is still too widely used for us to just drop support for using its GART.
>>> Not using the AGP GART doesn't mean a loss of functionality either, since drivers will just fall back to the driver-specific PCI GART.
>>> For now, just deprecate the code and don't enable the AGP GART in TTM even when general AGP support is available.
>> So I've used an ancient (32-bit) system to set up a test box for this.
>> The first GPU I could test is an RV280 (Radeon 9200 PRO), which is easily
>> 15 years old.
>> What happens in AGP mode is that glxgears shows artifacts during
>> rendering on this system.
>> In PCI mode those rendering artifacts are gone and glxgears seems to
>> draw everything correctly now.
>> Performance is obviously not comparable, because in AGP mode we don't
>> render all triangles correctly.
>> The second GPU I could test is an RV630 PRO (Radeon HD 2600 PRO AGP)
>> which is more than 10 years old.
>> As far as I can tell this one works perfectly in both AGP and PCIe mode.
>> Since this is only a 32-bit system, I couldn't really test any OpenGL
>> game that well.
>> But for glxgears switching from AGP to PCIe mode seems to result in a
>> roughly 5% performance drop.
>> The surprising reason for this is not the better TLB performance, but
>> the lack of USWC support for the PCIe GART in radeon.
>> So if anybody wants to get their hands dirty and squeeze a bit more
>> performance out of the old hardware, porting USWC from amdgpu to radeon
>> shouldn't be too much of a problem.
> We do support USWC on radeon, although I think we had separate flags
> for cached and WC. That said, we had a lot of problems with WC on 32
> bit (see radeon_bo_create()). The other problem is that, at least on
> the really old radeons, the PCI GART didn't support both snooped and
> unsnooped access; it was always snooped. It wasn't until PCIe that the
> GART hardware got support for both. For AGP, the expectation was that
> AGP provided the uncached memory.
Oh, indeed. I didn't remember that.
Interestingly, in that case I have no idea where the performance
difference is coming from.
>> Summing it up, I'm still leaning towards disabling AGP completely by
>> default for radeon and deprecating it in TTM as well.
>> Thoughts? Especially Alex, what do you think?
> Works for me.
I will take that as an rb and commit at least the first patch.