[PATCH 5/7] drm/vgem: prime export support

Tomasz Stanislawski t.stanislaws at samsung.com
Mon Feb 27 07:43:02 PST 2012


Hi Ben and Sumit,

Please refer to comments below:

On 02/22/2012 08:29 PM, Ben Widawsky wrote:
> dma-buf export implementation. Heavily influenced by Dave Airlie's proof
> of concept work.
>
> Cc: Daniel Vetter<daniel.vetter at ffwll.ch>
> Cc: Dave Airlie<airlied at redhat.com>
> Signed-off-by: Ben Widawsky<ben at bwidawsk.net>
> ---
>   drivers/gpu/drm/vgem/Makefile       |    2 +-
>   drivers/gpu/drm/vgem/vgem_dma_buf.c |  128 +++++++++++++++++++++++++++++++++++
>   drivers/gpu/drm/vgem/vgem_drv.c     |    6 ++
>   drivers/gpu/drm/vgem/vgem_drv.h     |    7 ++
>   4 files changed, 142 insertions(+), 1 deletions(-)
>   create mode 100644 drivers/gpu/drm/vgem/vgem_dma_buf.c
>

[snip]

> +struct sg_table *vgem_gem_map_dma_buf(struct dma_buf_attachment *attachment,
> +                                      enum dma_data_direction dir)
> +{
> +	struct drm_vgem_gem_object *obj = attachment->dmabuf->priv;
> +	struct sg_table *sg;
> +	int ret;
> +
> +	ret = vgem_gem_get_pages(obj);
> +	if (ret) {
> +		vgem_gem_put_pages(obj);

Is it safe to call vgem_gem_put_pages() if vgem_gem_get_pages() failed (I 
mean ret < 0 was returned)?

> +		return NULL;
> +	}
> +
> +	BUG_ON(obj->pages == NULL);
> +
> +	sg = drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);

If drm_prime_pages_to_sg() fails, then you should call 
vgem_gem_put_pages() for cleanup (see the sketch after the quoted function).

> +	return sg;
> +}
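
For what it's worth, a minimal sketch of how the error paths could look, 
assuming vgem_gem_get_pages() cleans up after itself on failure and that 
drm_prime_pages_to_sg() returns NULL when it fails (both assumptions 
worth double-checking against the actual patches):

struct sg_table *vgem_gem_map_dma_buf(struct dma_buf_attachment *attachment,
				      enum dma_data_direction dir)
{
	struct drm_vgem_gem_object *obj = attachment->dmabuf->priv;
	struct sg_table *sg;

	/* If pinning the pages failed, nothing was pinned, so there is
	 * nothing to put back here. */
	if (vgem_gem_get_pages(obj))
		return NULL;

	BUG_ON(obj->pages == NULL);

	sg = drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);
	if (!sg) {
		/* Building the table failed, drop the pages pinned above. */
		vgem_gem_put_pages(obj);
		return NULL;
	}

	return sg;
}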

The DMABUF documentation says the following about the map_dma_buf callback:

"It is one of the buffer operations that must be implemented by the 
exporter. It should return the sg_table containing scatterlist for this 
buffer, mapped into caller's address space."

The scatterlist returned by drm_prime_pages_to_sg() is not mapped to any 
device. To be consistent with the DMABUF spec, the map_dma_buf callback 
should call dma_map_sg() for the caller's device, which is 
attachment->dev.
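
Roughly, that would mean doing something like the following after the 
drm_prime_pages_to_sg() call in the function above (only a sketch with 
minimal error handling; it assumes the importing device actually goes 
through the DMA API):

	/* Map the freshly built table for the importing device, i.e.
	 * attachment->dev, before handing it back to the caller. */
	if (!dma_map_sg(attachment->dev, sg->sgl, sg->nents, dir)) {
		sg_free_table(sg);
		kfree(sg);
		vgem_gem_put_pages(obj);
		return NULL;
	}

	return sg;

with a matching dma_unmap_sg() in vgem_gem_unmap_dma_buf() before the 
table is freed.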

I think we should either change the map_dma_buf implementation in the 
DRM drivers or drop the quoted sentence from the DMABUF spec.

I lean towards the second solution because, IMO, there is no reliable 
way for an exporter to map a scatterlist for the importer's device. 
dma_map_sg() could be used, but not all drivers make use of the DMA 
framework.

Sumit, what is your opinion about it?

> +
> +void vgem_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
> +			    struct sg_table *sg)
> +{
> +	sg_free_table(sg);
> +	kfree(sg);
> +}
> +
> +void vgem_gem_release(struct dma_buf *buf)
> +{
> +	struct drm_vgem_gem_object *obj = buf->priv;
> +
> +	BUG_ON(buf != obj->base.export_dma_buf);
> +
> +	obj->base.prime_fd = -1;
> +	obj->base.export_dma_buf = NULL;
> +	drm_gem_object_unreference_unlocked(&obj->base);
> +}
> +
> +struct dma_buf_ops vgem_dmabuf_ops = {
> +	.map_dma_buf = vgem_gem_map_dma_buf,
> +	.unmap_dma_buf = vgem_gem_unmap_dma_buf,
> +	.release = vgem_gem_release
> +};
> +

Regards,
Tomasz Stanislawski
