[PATCH v5 1/5] RDMA/umem: Support importing dma-buf as user memory region
Jason Gunthorpe
jgg at nvidia.com
Sat Oct 17 00:28:16 UTC 2020
On Thu, Oct 15, 2020 at 03:02:45PM -0700, Jianxin Xiong wrote:
> +struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
> + unsigned long addr, size_t size,
> + int dmabuf_fd, int access,
> + const struct ib_umem_dmabuf_ops *ops)
> +{
> + struct dma_buf *dmabuf;
> + struct ib_umem_dmabuf *umem_dmabuf;
> + struct ib_umem *umem;
> + unsigned long end;
> + long ret;
> +
> + if (check_add_overflow(addr, (unsigned long)size, &end))
> + return ERR_PTR(-EINVAL);
> +
> + if (unlikely(PAGE_ALIGN(end) < PAGE_SIZE))
> + return ERR_PTR(-EINVAL);
> +
> + if (unlikely(!ops || !ops->invalidate || !ops->update))
> + return ERR_PTR(-EINVAL);
> +
> + umem_dmabuf = kzalloc(sizeof(*umem_dmabuf), GFP_KERNEL);
> + if (!umem_dmabuf)
> + return ERR_PTR(-ENOMEM);
> +
> + umem_dmabuf->ops = ops;
> + INIT_WORK(&umem_dmabuf->work, ib_umem_dmabuf_work);
> +
> + umem = &umem_dmabuf->umem;
> + umem->ibdev = device;
> + umem->length = size;
> + umem->address = addr;
addr here is the offset within the dma-buf, but this code does nothing
with it.
dma_buf_map_attachment() gives a complete SGL for the entire dma-buf,
but offset/length select a subset.
You need to edit the SGLs to make them properly span the sub-range and
follow the peculiar rules for how SGLs in ib_umems have to be
constructed.
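To make the trimming concrete, here is a minimal user-space sketch of the arithmetic involved. struct sg_ent and trim_sgl() are hypothetical stand-ins for the kernel's struct scatterlist handling, not actual ib_umem code, and this models only the offset/length clipping, not ib_umem's additional alignment rules:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of a scatterlist entry: each entry
 * covers a contiguous DMA range of 'len' bytes.  Real code would walk
 * the struct sg_table returned by dma_buf_map_attachment(). */
struct sg_ent {
	unsigned long dma_addr;
	unsigned long len;
};

/* Trim a full-buffer SGL so it spans only [offset, offset + length).
 * Writes trimmed copies to 'out' and returns the number of entries
 * kept.  This models only the arithmetic; the kernel version must
 * also follow ib_umem's rules for how middle entries are laid out. */
static size_t trim_sgl(const struct sg_ent *in, size_t nents,
		       unsigned long offset, unsigned long length,
		       struct sg_ent *out)
{
	size_t kept = 0;
	unsigned long cur = 0;	/* byte offset into the dma-buf */

	for (size_t i = 0; i < nents && length; i++) {
		unsigned long ent_end = cur + in[i].len;
		unsigned long skip, take;

		if (ent_end <= offset) {	/* wholly before the range */
			cur = ent_end;
			continue;
		}
		skip = offset > cur ? offset - cur : 0;
		take = in[i].len - skip;
		if (take > length)
			take = length;

		out[kept].dma_addr = in[i].dma_addr + skip;
		out[kept].len = take;
		kept++;
		length -= take;
		cur = ent_end;
	}
	return kept;
}
```

For example, trimming a two-entry SGL to a range that starts halfway into the first entry shortens both the first and last kept entries while adjusting the first entry's DMA address by the skipped bytes.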
Who validates that the total DMA length of the SGL is exactly equal to
the requested length? That is really important too.
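A length check along the lines the reviewer asks for could look like the following. This is again a hypothetical user-space model, with sg_ent standing in for struct scatterlist, sketching only the sum-and-compare logic:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for a trimmed scatterlist entry. */
struct sg_ent {
	unsigned long dma_addr;
	unsigned long len;
};

/* Walk the trimmed SGL, summing the per-entry DMA lengths, and reject
 * the mapping if the total does not exactly match the umem length the
 * caller asked for. */
static int validate_sgl_length(const struct sg_ent *sg, size_t nents,
			       unsigned long umem_len)
{
	unsigned long total = 0;

	for (size_t i = 0; i < nents; i++)
		total += sg[i].len;

	return total == umem_len ? 0 : -EINVAL;
}
```

In the kernel this would run after the SGL has been trimmed to the sub-range, so that a dma-buf whose mapped extent disagrees with the requested umem length is rejected rather than silently truncated or over-mapped.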
Also, dma_buf_map_attachment() does not do the correct DMA mapping for
RDMA, e.g. it does not use ib_dma_map(). This is not a problem for mlx5,
but it is troublesome to put in the core code.
Jason