[Nouveau] [PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible

Alexandre Courbot gnurou at gmail.com
Fri Apr 17 02:11:09 PDT 2015


On Thu, Apr 16, 2015 at 8:06 PM, Vince Hsu <vinceh at nvidia.com> wrote:
> This patch implements a way to aggregate the small pages and make them be
> mapped as big page(s) by utilizing the platform IOMMU if supported. And then
> we can enable compression support for these big pages later.
>
> Signed-off-by: Vince Hsu <vinceh at nvidia.com>
> ---
>  drm/nouveau/include/nvkm/subdev/mmu.h |  16 ++++
>  drm/nouveau/nvkm/subdev/mmu/base.c    | 158 ++++++++++++++++++++++++++++++++--

I believe (although I may have missed something) that this patch (and
patch 6/6) can be rewritten such that these two files remain untouched.
IOW, no new data structures (because the PTE will contain all the
information you need), and no change to base.c (because IOMMU is
chip-specific logic, although one may argue that the use of the IOMMU
API makes it more generic).

But let's review the extra data structures first:

>  lib/include/nvif/os.h                 |  12 +++
>  3 files changed, 179 insertions(+), 7 deletions(-)
>
> diff --git a/drm/nouveau/include/nvkm/subdev/mmu.h b/drm/nouveau/include/nvkm/subdev/mmu.h
> index 3a5368776c31..3230d31a7971 100644
> --- a/drm/nouveau/include/nvkm/subdev/mmu.h
> +++ b/drm/nouveau/include/nvkm/subdev/mmu.h
> @@ -22,6 +22,8 @@ struct nvkm_vma {
>         struct nvkm_mm_node *node;
>         u64 offset;
>         u32 access;
> +       struct list_head bp;
> +       bool has_iommu_bp;

Whether a chunk of memory is mapped through the IOMMU can be tested by
checking if the IOMMU bit is set in the address recorded in the PTE.
So has_iommu_bp looks redundant here.

>  };
>
>  struct nvkm_vm {
> @@ -37,6 +39,13 @@ struct nvkm_vm {
>         u32 lpde;
>  };
>
> +struct nvkm_vm_bp_list {
> +       struct list_head head;
> +       u32 pde;
> +       u32 pte;
> +       void *priv;
> +};
> +

Tracking the PDE and PTE of each memory chunk can probably be avoided
if you change your unmapping strategy. Currently you are going through
the list of nvkm_vm_bp_list, but you know your PDE and PTE are always
going to be adjacent, since a nvkm_vma represents a contiguous block
in the GPU VA. So when unmapping, you can simply check for each PTE
entry whether the IOMMU bit is set, and unmap from the IOMMU space
after unmapping from the GPU VA space, in a loop similar to that of
nvkm_vm_unmap_at().

Then we only need priv. You are keeping the nvkm_mm_node of the IOMMU
space into it, and you need it to free the IOMMU VA space. If only we
could find another way to store it, we could get rid of the whole
structure and associated list_head in nvkm_vma...

I need to give it some more thought, and we will probably need to
change a few things in base.c to make the hooks more flexible, so
please give me some more time to think about it. :) I just wanted to
share my thoughts so far in case this puts you on track.
