[PATCH RFT] drm/etnaviv: reap idle softpin mappings when necessary

Lucas Stach l.stach at pengutronix.de
Wed Mar 16 17:51:36 UTC 2022


Hi Guido,

On Wednesday, 2022-03-16 at 18:37 +0100, Guido Günther wrote:
> Hi Lucas,
> On Fri, Dec 17, 2021 at 09:59:36PM +0100, Lucas Stach wrote:
> > Right now the only point where softpin mappings get removed from the
> > MMU context is when the mapped GEM object is destroyed. However,
> > userspace might want to reuse that address space before the object
> > is destroyed, which is a valid usage, as long as all mappings in that
> > region of the address space are no longer used by any GPU jobs.
> > 
> > Implement reaping of idle MMU mappings that would otherwise
> > prevent the insertion of a softpin mapping.
> 
> Looking at current Linus tree and next it seems the patch never got
> submitted. Is there anything missing?

Yes, there is still an interaction between the GEM close path and this
reaping that I haven't yet gotten around to fully reasoning about. I'm
not confident committing this change without thinking it through, as it
might introduce a kernel memory corruption.

Regards,
Lucas

> Cheers,
>  -- Guido
> 
> > 
> > Signed-off-by: Lucas Stach <l.stach at pengutronix.de>
> > ---
> >  drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 39 +++++++++++++++++++++++++++
> >  1 file changed, 39 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
> > index 9fb1a2aadbcb..9111288b4062 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
> > @@ -219,8 +219,47 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context,
> >  static int etnaviv_iommu_insert_exact(struct etnaviv_iommu_context *context,
> >  		   struct drm_mm_node *node, size_t size, u64 va)
> >  {
> > +	struct etnaviv_vram_mapping *m, *n;
> > +	struct drm_mm_node *scan_node;
> > +	LIST_HEAD(scan_list);
> > +	int ret;
> > +
> >  	lockdep_assert_held(&context->lock);
> >  
> > +	ret = drm_mm_insert_node_in_range(&context->mm, node, size, 0, 0, va,
> > +					  va + size, DRM_MM_INSERT_LOWEST);
> > +	if (ret != -ENOSPC)
> > +		return ret;
> > +
> > +	/*
> > +	 * When we can't insert the node due to an existing mapping blocking
> > +	 * the address space, there are two possible reasons:
> > +	 * 1. Userspace genuinely messed up and tried to reuse address space
> > +	 * before the last job using this VMA has finished executing.
> > +	 * 2. The existing buffer mappings are idle, but the buffers are not
> > +	 * destroyed yet (likely due to being referenced by another context) in
> > +	 * which case the mappings will not be cleaned up and we must reap them
> > +	 * here to make space for the new mapping.
> > +	 */
> > +
> > +	drm_mm_for_each_node_in_range(scan_node, &context->mm, va, va + size) {
> > +		m = container_of(scan_node, struct etnaviv_vram_mapping,
> > +				 vram_node);
> > +
> > +		if (m->use)
> > +			return -ENOSPC;
> > +
> > +		list_add(&m->scan_node, &scan_list);
> > +	}
> > +
> > +	list_for_each_entry_safe(m, n, &scan_list, scan_node) {
> > +		etnaviv_iommu_remove_mapping(context, m);
> > +		etnaviv_iommu_context_put(m->context);
> > +		m->context = NULL;
> > +		list_del_init(&m->mmu_node);
> > +		list_del_init(&m->scan_node);
> > +	}
> > +
> >  	return drm_mm_insert_node_in_range(&context->mm, node, size, 0, 0, va,
> >  					   va + size, DRM_MM_INSERT_LOWEST);
> >  }
> > -- 
> > 2.31.1
> > 
