[Nouveau] [PATCH 4/6] drm: enable big page mapping for small pages when IOMMU is available
Terje Bergstrom
tbergstrom at nvidia.com
Fri Apr 17 08:24:46 PDT 2015
On 04/17/2015 12:28 AM, Vince Hsu wrote:
>
> On 04/17/2015 02:26 PM, Alexandre Courbot wrote:
>> I wonder if we should not just use the same size heuristics as for
>> VRAM above?
>>
>> Here, unless your buffer size is an exact multiple of the big page
>> size (128K for GK20A), you will not use big pages at all. In effect,
>> many buffers will be rejected for this reason. A behavior like "if the
>> buffer size is more than 256KB, increase the size of the buffer to the
>> next multiple of 128K and use big pages" would probably yield better
>> results.
> I'm told that the user space would align the buffer to the big page
> size. So I made this patch based on that impression. I'm not very
> familiar with the user space behavior, so sorry if I say something
> wrong here.
>
> And are we allowed to extend the buffer size even if the exporter
> doesn't know about that?
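To make the suggested heuristic concrete, it would look roughly like the
sketch below. The helper names are made up; only the 128k big page size and
the 256k threshold come from the quoted suggestion.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BIG_PAGE_SIZE   (128u << 10)  /* GK20A big page size */
#define BIG_PAGE_THRESH (256u << 10)  /* suggested cut-off for padding */

/* Round a size up to the next multiple of the big page size. */
static uint64_t round_up_big(uint64_t size)
{
        return (size + BIG_PAGE_SIZE - 1) & ~(uint64_t)(BIG_PAGE_SIZE - 1);
}

/* Made-up helper for the suggested heuristic: pad buffers above the
 * threshold to a big page multiple so they can use big pages, and
 * leave smaller buffers untouched. */
static uint64_t pick_alloc_size(uint64_t size, bool *use_big_pages)
{
        *use_big_pages = size > BIG_PAGE_THRESH;
        return *use_big_pages ? round_up_big(size) : size;
}

int main(void)
{
        uint64_t sizes[] = { 120u << 10, 260u << 10, 1u << 20 };
        for (unsigned i = 0; i < 3; i++) {
                bool big;
                uint64_t alloc = pick_alloc_size(sizes[i], &big);
                printf("%lluk -> %lluk, big pages: %s\n",
                       (unsigned long long)(sizes[i] >> 10),
                       (unsigned long long)(alloc >> 10),
                       big ? "yes" : "no");
        }
        return 0;
}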
The alignment of the IOMMU VA base address and size has to be correct. As
a consequence, you might think this means the allocated size of the buffer
has to be big page aligned.
You could also just allocate the buffer's actual size with whatever
alignment, and take care of alignment at buffer mapping time. If the buffer
is 120k, we could allocate 128k of IOMMU VA at a 128k-aligned base address,
leave the remaining 8k of IOMMU VA unmapped, and map it as a single large
page to the GPU MMU. But that would allow triggering IOMMU faults from GPU
command buffers, and IOMMU faults are much more difficult to debug than GPU
MMU faults, so I don't recommend this.
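For that 120k example, the numbers work out roughly like this (illustrative
sketch only):

#include <stdint.h>
#include <stdio.h>

#define BIG_PAGE_SIZE (128u << 10)

int main(void)
{
        uint64_t buf_size = 120u << 10;         /* 120k buffer */
        uint64_t va_size  = BIG_PAGE_SIZE;      /* 128k aligned IOMMU VA window */
        uint64_t unmapped = va_size - buf_size; /* 8k with no IOMMU backing */

        /* The GPU MMU sees one 128k large page covering the whole window,
         * but a GPU access to the last 8k has nothing behind it in the
         * IOMMU, so it faults in the IOMMU instead of the GPU MMU. */
        printf("window %lluk, mapped %lluk, unmapped tail %lluk\n",
               (unsigned long long)(va_size >> 10),
               (unsigned long long)(buf_size >> 10),
               (unsigned long long)(unmapped >> 10));
        return 0;
}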
So in summary, I prefer taking care of buffer size alignment at
allocation time, and IOMMU VA base alignment at mapping time.
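A rough sketch of that split, with made-up names and only the 128k big page
size taken from this discussion:

#include <stdint.h>
#include <stdio.h>

#define BIG_PAGE_SIZE (128u << 10)
#define BIG_PAGE_MASK ((uint64_t)BIG_PAGE_SIZE - 1)

/* Allocation time: pad the backing allocation to a big page multiple so
 * every byte of the GPU-visible window has IOMMU backing. */
static uint64_t padded_alloc_size(uint64_t size)
{
        return (size + BIG_PAGE_MASK) & ~BIG_PAGE_MASK;
}

/* Mapping time: only the IOMMU VA base address needs to be big page
 * aligned; this stands in for whatever IOVA allocator is used. */
static uint64_t aligned_iova_base(uint64_t next_free_iova)
{
        return (next_free_iova + BIG_PAGE_MASK) & ~BIG_PAGE_MASK;
}

int main(void)
{
        uint64_t buf = 120u << 10;        /* 120k request */
        uint64_t cursor = 0x10002000;     /* arbitrary example IOVA cursor */

        printf("allocate %lluk, map at IOVA 0x%llx\n",
               (unsigned long long)(padded_alloc_size(buf) >> 10),
               (unsigned long long)aligned_iova_base(cursor));
        return 0;
}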