[PATCH 05/12] dma-buf: add explicit buffer pinning

Daniel Vetter daniel at ffwll.ch
Wed Apr 17 14:30:51 UTC 2019


On Wed, Apr 17, 2019 at 04:20:02PM +0200, Daniel Vetter wrote:
> On Tue, Apr 16, 2019 at 08:38:34PM +0200, Christian König wrote:
> > Add optional explicit pinning callbacks instead of implicitly assuming
> > that the exporter pins the buffer when a mapping is created.
> > 
> > Signed-off-by: Christian König <christian.koenig at amd.com>
> 
> Don't we need this together with the invalidate callback and the dynamic
> stuff? Also I'm assuming that pin/unpin is pretty much required for a
> dynamic bo, so could we look at these callbacks instead of the dynamic
> flag you add in patch 1?
> 
> I'm assuming the following rules hold:
> 
> no pin/unpin from exporter:
> 
> dma-buf is not dynamic and stays pinned for the duration of map/unmap. I'm
> not 100% sure whether everyone really wants the mapping to be cached for
> the entire attachment, though: only drm_prime does that, and that's not the
> only dma-buf importer.
> 
> pin/unpin calls are no-ops.
> 
> pin/unpin exist in the exporter, but the importer has not provided an
> invalidate callback:
> 
> We map at attach time, and we also have to pin at attach time, since the
> importer can't handle the buffer disappearing. We unmap/unpin at detach.

For this case we should have a WARN in pin/unpin, to make sure importers
don't do something stupid. One more thought below on pin/unpin.
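
To spell out the attach-time handling I'd expect for this case, roughly
(sketch only, not the actual patch: in this series the pin actually happens
inside dma_buf_map_attachment_locked, and the sgt caching member is made up):

/* Importers without an invalidate callback get mapped _and_ pinned at
 * attach time, so the backing storage can never move underneath them,
 * and get unmapped/unpinned again at detach time.
 */
static int dma_buf_attach_static_importer(struct dma_buf_attachment *attach)
{
	struct dma_buf *dmabuf = attach->dmabuf;
	struct sg_table *sgt;
	int ret;

	reservation_object_lock(dmabuf->resv, NULL);

	/* Pin first so the exporter can no longer move the backing storage. */
	ret = dma_buf_pin(dmabuf);
	if (ret)
		goto out_unlock;

	/* Cache the mapping for the entire lifetime of the attachment. */
	sgt = dma_buf_map_attachment_locked(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_unpin(dmabuf);
		ret = PTR_ERR(sgt);
		goto out_unlock;
	}

	attach->sgt = sgt;	/* made-up caching member, for illustration */

out_unlock:
	reservation_object_unlock(dmabuf->resv);
	return ret;
}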

> pin/unpin from exporter, invalidate from importer:
> 
> Full dynamic mapping. We assume the importer will do caching, attach
> fences as needed, and pin the underlying bo when it needs it permanently,
> without attaching fences (i.e. the scanout case).
> 
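Just to spell out what I mean by that last part, importer-side it would look
something like this (purely illustrative, the helper names are made up):

/* Command submission relies on the invalidate callback plus fences and
 * never pins. Scanout can't react to an invalidate, so it takes a real
 * pin for as long as the buffer is being displayed.
 */
static int importer_pin_for_scanout(struct dma_buf_attachment *attach)
{
	struct dma_buf *dmabuf = attach->dmabuf;
	int ret;

	reservation_object_lock(dmabuf->resv, NULL);
	ret = dma_buf_pin(dmabuf);
	reservation_object_unlock(dmabuf->resv);

	return ret;
}

static void importer_unpin_after_scanout(struct dma_buf_attachment *attach)
{
	struct dma_buf *dmabuf = attach->dmabuf;

	reservation_object_lock(dmabuf->resv, NULL);
	dma_buf_unpin(dmabuf);
	reservation_object_unlock(dmabuf->resv);
}
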
> Assuming I'm not terribly off with my understanding, I think it'd be best
> to introduce the entire new dma-buf API in the first patch and flesh it
> out in later ones, instead of spreading it over a few patches. Plus the
> above (maybe prettier) as a nice kerneldoc overview comment for how
> dynamic dma-buf is really supposed to work.
> -Daniel
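
Rough idea of that overview comment, wording very much a first draft:

/**
 * DOC: dynamic DMA-buf handling
 *
 * - Exporter without pin/unpin: the buffer is not dynamic and stays pinned
 *   for the duration of map/unmap, dma_buf_pin()/dma_buf_unpin() are no-ops.
 *
 * - Exporter with pin/unpin, importer without invalidate: the core maps and
 *   pins at attach time and unmaps/unpins at detach time, so the buffer
 *   never moves underneath the importer.
 *
 * - Exporter with pin/unpin, importer with invalidate: fully dynamic. The
 *   importer caches mappings and attaches fences as needed, and only pins
 *   the buffer when it needs it to stay put permanently (e.g. scanout).
 */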
> 
> > ---
> >  drivers/dma-buf/dma-buf.c | 39 +++++++++++++++++++++++++++++++++++++++
> >  include/linux/dma-buf.h   | 37 +++++++++++++++++++++++++++++++------
> >  2 files changed, 70 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> > index a3738fab3927..f23ff8355505 100644
> > --- a/drivers/dma-buf/dma-buf.c
> > +++ b/drivers/dma-buf/dma-buf.c
> > @@ -630,6 +630,41 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
> >  }
> >  EXPORT_SYMBOL_GPL(dma_buf_detach);
> >  
> > +/**
> > + * dma_buf_pin - Lock down the DMA-buf
> > + *
> > + * @dmabuf:	[in]	DMA-buf to lock down.
> > + *
> > + * Returns:
> > + * 0 on success, negative error code on failure.
> > + */
> > +int dma_buf_pin(struct dma_buf *dmabuf)

Hm, I think it'd be better to pin the attachment, not the underlying
buffer. The attachment is the thing the importer will have to pin, and it's
at attach/detach time that dma-buf needs to pin/unpin for importers that
don't understand dynamic buffer sharing.

Plus when we put that onto attachments, we can do a

	WARN_ON(!attach->invalidate);

sanity check. I think that would be good to have.
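
I.e. roughly this (sketch only, ops->pin would then need to take the
attachment as well):

int dma_buf_pin(struct dma_buf_attachment *attach)
{
	struct dma_buf *dmabuf = attach->dmabuf;
	int ret = 0;

	reservation_object_assert_held(dmabuf->resv);

	/* Importers without an invalidate callback are pinned by the core
	 * at attach time already, they have no business calling this.
	 */
	WARN_ON(!attach->invalidate);

	if (dmabuf->ops->pin)
		ret = dmabuf->ops->pin(attach);

	return ret;
}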
-Daniel

> > +{
> > +	int ret = 0;
> > +
> > +	reservation_object_assert_held(dmabuf->resv);
> > +
> > +	if (dmabuf->ops->pin)
> > +		ret = dmabuf->ops->pin(dmabuf);
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(dma_buf_pin);
> > +
> > +/**
> > + * dma_buf_unpin - Remove lock from DMA-buf
> > + *
> > + * @dmabuf:	[in]	DMA-buf to unlock.
> > + */
> > +void dma_buf_unpin(struct dma_buf *dmabuf)
> > +{
> > +	reservation_object_assert_held(dmabuf->resv);
> > +
> > +	if (dmabuf->ops->unpin)
> > +		dmabuf->ops->unpin(dmabuf);
> > +}
> > +EXPORT_SYMBOL_GPL(dma_buf_unpin);
> > +
> >  /**
> >   * dma_buf_map_attachment_locked - Maps the buffer into _device_ address space
> >   * with the reservation lock held. Is a wrapper for map_dma_buf() of the
> > @@ -666,6 +701,8 @@ dma_buf_map_attachment_locked(struct dma_buf_attachment *attach,
> >  	 */
> >  	if (attach->invalidate)
> >  		list_del(&attach->node);
> > +	else
> > +		dma_buf_pin(attach->dmabuf);
> >  	sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
> >  	if (attach->invalidate)
> >  		list_add(&attach->node, &attach->dmabuf->attachments);
> > @@ -735,6 +772,8 @@ void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach,
> >  
> >  	attach->dmabuf->ops->unmap_dma_buf(attach, sg_table,
> >  						direction);
> > +	if (!attach->invalidate)
> > +		dma_buf_unpin(attach->dmabuf);
> >  }
> >  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment_locked);
> >  
> > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> > index ece4638359a8..a615b74e5894 100644
> > --- a/include/linux/dma-buf.h
> > +++ b/include/linux/dma-buf.h
> > @@ -100,14 +100,40 @@ struct dma_buf_ops {
> >  	 */
> >  	void (*detach)(struct dma_buf *, struct dma_buf_attachment *);
> >  
> > +	/**
> > +	 * @pin:
> > +	 *
> > +	 * This is called by dma_buf_pin and lets the exporter know that an
> > +	 * importer assumes that the DMA-buf can't be invalidated any more.
> > +	 *
> > +	 * This is called with the dmabuf->resv object locked.
> > +	 *
> > +	 * This callback is optional.
> > +	 *
> > +	 * Returns:
> > +	 *
> > +	 * 0 on success, negative error code on failure.
> > +	 */
> > +	int (*pin)(struct dma_buf *);
> > +
> > +	/**
> > +	 * @unpin:
> > +	 *
> > +	 * This is called by dma_buf_unpin and lets the exporter know that an
> > +	 * importer doesn't need the DMA-buf to stay where it is any more.
> > +	 *
> > +	 * This is called with the dmabuf->resv object locked.
> > +	 *
> > +	 * This callback is optional.
> > +	 */
> > +	void (*unpin)(struct dma_buf *);
> > +
> >  	/**
> >  	 * @map_dma_buf:
> >  	 *
> >  	 * This is called by dma_buf_map_attachment() and is used to map a
> >  	 * shared &dma_buf into device address space, and it is mandatory. It
> > -	 * can only be called if @attach has been called successfully. This
> > -	 * essentially pins the DMA buffer into place, and it cannot be moved
> > -	 * any more
> > +	 * can only be called if @attach has been called successfully.
> >  	 *
> >  	 * This call may sleep, e.g. when the backing storage first needs to be
> >  	 * allocated, or moved to a location suitable for all currently attached
> > @@ -148,9 +174,6 @@ struct dma_buf_ops {
> >  	 *
> >  	 * This is called by dma_buf_unmap_attachment() and should unmap and
> >  	 * release the &sg_table allocated in @map_dma_buf, and it is mandatory.
> > -	 * It should also unpin the backing storage if this is the last mapping
> > -	 * of the DMA buffer, it the exporter supports backing storage
> > -	 * migration.
> >  	 *
> >  	 * This is always called with the dmabuf->resv object locked when
> >  	 * no_sgt_cache is true.
> > @@ -442,6 +465,8 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags);
> >  struct dma_buf *dma_buf_get(int fd);
> >  void dma_buf_put(struct dma_buf *dmabuf);
> >  
> > +int dma_buf_pin(struct dma_buf *dmabuf);
> > +void dma_buf_unpin(struct dma_buf *dmabuf);
> >  struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *,
> >  					       enum dma_data_direction);
> >  struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
> > -- 
> > 2.17.1
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

