[PATCH v19 30/30] drm/panfrost: Switch to generic memory shrinker
Boris Brezillon <boris.brezillon@collabora.com>
Thu Jan 25 09:49:32 UTC 2024
On Fri, 5 Jan 2024 21:46:24 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -328,6 +328,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
> struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
> struct sg_table *sgt;
> int prot = IOMMU_READ | IOMMU_WRITE;
> + int ret = 0;
>
> if (WARN_ON(mapping->active))
> return 0;
> @@ -335,15 +336,32 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
> if (bo->noexec)
> prot |= IOMMU_NOEXEC;
>
> + if (!obj->import_attach) {
> + /*
> + * Don't allow shrinker to move pages while pages are mapped.
> + * It's fine to move pages afterwards because shrinker will
> + * take care of unmapping pages during eviction.
> + */
That's not exactly what this shmem_pin() is about, is it? I think it's
here to meet the drm_gem_shmem_get_pages_sgt() rule stating that pages
must stay pinned while the returned sgt is manipulated. You actually
unpin the GEM just after the mmu_map_sg() call, which means pages could
very well be reclaimed while the MMU still has a mapping referencing
those physical pages. And that's fine, because what's supposed to
protect against that is the fence we register to the GEM resv at job
submission time.
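In other words, I'd reword the comment to describe the
drm_gem_shmem_get_pages_sgt() pinning rule rather than eviction. A
rough, untested sketch of what I have in mind (the wording is mine,
adjust as you see fit):

	if (!obj->import_attach) {
		/*
		 * drm_gem_shmem_get_pages_sgt() requires the pages to stay
		 * pinned while the returned sgt is accessed. It's fine to
		 * unpin once the MMU mapping is installed: the fence added
		 * to the GEM resv at job submission time is what keeps the
		 * pages resident while the mapping is in use.
		 */
		ret = drm_gem_shmem_pin(shmem);
		if (ret)
			return ret;
	}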
> + ret = drm_gem_shmem_pin(shmem);
> + if (ret)
> + return ret;
> + }
> +
> sgt = drm_gem_shmem_get_pages_sgt(shmem);
> - if (WARN_ON(IS_ERR(sgt)))
> - return PTR_ERR(sgt);
> + if (WARN_ON(IS_ERR(sgt))) {
> + ret = PTR_ERR(sgt);
> + goto unpin;
> + }
>
> mmu_map_sg(pfdev, mapping->mmu, mapping->mmnode.start << PAGE_SHIFT,
> prot, sgt);
> mapping->active = true;
>
> - return 0;
> +unpin:
> + if (!obj->import_attach)
> + drm_gem_shmem_unpin(shmem);
> +
> + return ret;
> }
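For the record, the submission-time protection I'm referring to is the
usual attach-the-job-fence-to-the-resv dance. A minimal sketch, with a
made-up helper name (panfrost does the equivalent in panfrost_job.c,
with the resv locks taken beforehand via drm_gem_lock_reservations()):

	#include <linux/dma-fence.h>
	#include <linux/dma-resv.h>
	#include <drm/drm_gem.h>

	/* Hypothetical helper: callers must hold each bos[i]->resv lock. */
	static void attach_job_fences(struct drm_gem_object **bos, int bo_count,
				      struct dma_fence *done_fence)
	{
		int i;

		/*
		 * A resv-aware shrinker tests these fences (e.g. with
		 * dma_resv_test_signaled()) and skips objects that are still
		 * busy, so the pages stay resident for as long as a job can
		 * reach them through the MMU mapping, even after the unpin
		 * in panfrost_mmu_map().
		 */
		for (i = 0; i < bo_count; i++)
			dma_resv_add_fence(bos[i]->resv, done_fence,
					   DMA_RESV_USAGE_WRITE);
	}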