[PATCH v2] drm/panfrost: Add the panfrost_gem_mapping concept
Steven Price
steven.price at arm.com
Wed Dec 11 12:38:46 UTC 2019
On 10/12/2019 23:08, Rob Herring wrote:
> From: Boris Brezillon <boris.brezillon at collabora.com>
>
> With the introduction of per-FD address spaces, the same BO can be mapped
> in different address spaces if the BO is globally visible (GEM_FLINK)
> and opened in different contexts, or if the dmabuf is self-imported. The
> current implementation does not take those cases into account and
> attaches the mapping directly to the panfrost_gem_object.
>
> Let's create a panfrost_gem_mapping struct and allow multiple mappings
> per BO.
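
The struct itself is in the snipped panfrost_gem.h hunk. As a rough
sketch of the design described above (field names assumed here, not
taken from the patch), it ties a GPU VA range and a refcount to a
(BO, MMU) pair:

	struct panfrost_gem_mapping {
		struct list_head node;		/* entry in bo->mappings.list */
		struct kref refcount;		/* held by handles and in-flight jobs */
		struct panfrost_gem_object *obj;
		struct drm_mm_node mmnode;	/* GPU VA range in this address space */
		struct panfrost_mmu *mmu;	/* identifies the per-FD address space */
		bool active;			/* currently mapped in the page tables? */
	};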
>
> The mappings are refcounted which helps solve another problem where
> mappings were torn down (GEM handle closed by userspace) while GPU
> jobs accessing those BOs were still in-flight. Jobs now keep a
> reference on the mappings they use.
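
Concretely, submit then resolves each BO to this FD's mapping and holds
a reference until the job completes. A sketch of that lifetime, using
panfrost_gem_mapping_get() from the hunk below and assuming a matching
panfrost_gem_mapping_put() plus a job->mappings array (names illustrative):

	/* At submit: look up this FD's mapping of the BO and take a ref. */
	job->mappings[i] = panfrost_gem_mapping_get(bo, priv);
	if (!job->mappings[i])
		return -EINVAL;

	/* At job cleanup: drop the refs; the last put tears down the VA range. */
	for (i = 0; i < job->bo_count; i++)
		panfrost_gem_mapping_put(job->mappings[i]);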
>
> v2 (robh):
> - Minor review comment clean-ups from Steven
> - Use list_is_singular helper
> - Just WARN if we add a mapping when madvise state is not WILLNEED.
> With that, drop the use of object_name_lock.
>
> Fixes: a5efb4c9a562 ("drm/panfrost: Restructure the GEM object creation")
> Fixes: 7282f7645d06 ("drm/panfrost: Implement per FD address spaces")
> Cc: <stable at vger.kernel.org>
> Signed-off-by: Boris Brezillon <boris.brezillon at collabora.com>
> Signed-off-by: Rob Herring <robh at kernel.org>
> ---
> I've hacked up the IGT prime_self_import test to run on panfrost, apart
> from the 2 tests which depend on i915 debugfs files (export-vs-gem_close-race,
> reimport-vs-gem_close-race). With this patch, they now pass.
>
> I'm not adding the test to IGT, since it's just a copy-and-paste of the
> original except for different wrappers for BO alloc and mmap. That
> should be fixed first IMO.
>
> Rob
>
> drivers/gpu/drm/panfrost/panfrost_drv.c | 91 +++++++++++--
> drivers/gpu/drm/panfrost/panfrost_gem.c | 123 +++++++++++++++---
> drivers/gpu/drm/panfrost/panfrost_gem.h | 41 +++++-
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 3 +-
> drivers/gpu/drm/panfrost/panfrost_job.c | 13 +-
> drivers/gpu/drm/panfrost/panfrost_job.h | 1 +
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 61 +++++----
> drivers/gpu/drm/panfrost/panfrost_mmu.h | 6 +-
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 34 +++--
> 9 files changed, 299 insertions(+), 74 deletions(-)
>
<snip>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
> index fd766b1395fb..3a7862e3e775 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
> @@ -29,6 +29,12 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
> list_del_init(&bo->base.madv_list);
> mutex_unlock(&pfdev->shrinker_lock);
>
> + /*
> + * If we still have mappings attached to the BO, there's a problem in
> + * our refcounting.
> + */
> + WARN_ON_ONCE(!list_empty(&bo->mappings.list));
> +
> if (bo->sgts) {
> int i;
> int n_sgt = bo->base.base.size / SZ_2M;
> @@ -46,6 +52,68 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
> drm_gem_shmem_free_object(obj);
> }
>
> +struct panfrost_gem_mapping *
> +panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
> + struct panfrost_file_priv *priv)
> +{
> + struct panfrost_gem_mapping *iter;
> +
> + mutex_lock(&bo->mappings.lock);
> + list_for_each_entry(iter, &bo->mappings.list, node) {
> + if (iter->mmu == &priv->mmu) {
> + kref_get(&iter->refcount);
> + break;
> + }
> + }
> + mutex_unlock(&bo->mappings.lock);
> +
> + return iter;

If the entry isn't found, then iter will equal
container_of(&bo->mappings.list, struct panfrost_gem_mapping, node),
but you actually want a NULL return in this case.

I also think the previous version, with a separate "mapping" variable
being returned, was clearer.
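
Something like this (untested) keeps the loop but returns NULL when no
mapping is found:

	struct panfrost_gem_mapping *
	panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
				 struct panfrost_file_priv *priv)
	{
		struct panfrost_gem_mapping *iter, *mapping = NULL;

		mutex_lock(&bo->mappings.lock);
		list_for_each_entry(iter, &bo->mappings.list, node) {
			if (iter->mmu == &priv->mmu) {
				kref_get(&iter->refcount);
				mapping = iter;
				break;
			}
		}
		mutex_unlock(&bo->mappings.lock);

		/* NULL if this FD has no mapping of the BO. */
		return mapping;
	}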
Thanks,
Steve