[Nouveau] [PATCH 6/6] mmu: gk20a: implement IOMMU mapping for big pages

Terje Bergstrom tbergstrom at nvidia.com
Thu Apr 16 12:55:16 PDT 2015


On 04/16/2015 12:31 PM, Ilia Mirkin wrote:
> Two questions --
>
> (a) What's the perf impact of doing this? Less work for the GPU MMU
> but more work for the IOMMU...
> (b) Would it be a good idea to do this for desktop GPUs that are on
> CPUs with IOMMUs in them (VT-d and whatever the AMD one is)? Is there
> some sort of shared API for this stuff that you should be (or are?)
> using?
a) Using an IOMMU mapping is the best way of getting a contiguous post-GMMU 
address space. Contiguity is required to be able to use frame 
buffer compression, so the overall performance impact, once compression 
is factored in, is about 20-30%.

If compression is left out of the equation, the impact of SMMU translation 
and of small versus large pages should not be noticeable, but I haven't 
measured it. We have measured large versus small pages, with compression 
disabled in both cases, on gk20a, and the difference was noise.

An additional advantage is extra protection against the GPU accidentally 
walking over kernel memory if the kernel driver has a bug.

b) This is a Tegra-specific mechanism, and for dGPUs sysmem is handled 
differently, so I don't have a good answer to that. I *believe* that on 
dGPUs sysmem does not support compression, so it would be a question of 
memory protection, not performance.

(I'm hoping this email does not get a corporate boilerplate appended - if 
it does, I apologize; feel free to ignore it)


