[PATCH v13 1/4] RDMA/umem: Support importing dma-buf as user memory region
Leon Romanovsky
leon at kernel.org
Tue Dec 8 19:02:52 UTC 2020
On Tue, Dec 08, 2020 at 06:13:20PM +0000, Xiong, Jianxin wrote:
> > -----Original Message-----
> > From: Leon Romanovsky <leon at kernel.org>
> > Sent: Monday, December 07, 2020 11:06 PM
> > To: Xiong, Jianxin <jianxin.xiong at intel.com>
> > Cc: linux-rdma at vger.kernel.org; dri-devel at lists.freedesktop.org; Doug Ledford <dledford at redhat.com>; Jason Gunthorpe <jgg at ziepe.ca>;
> > Sumit Semwal <sumit.semwal at linaro.org>; Christian Koenig <christian.koenig at amd.com>; Vetter, Daniel <daniel.vetter at intel.com>
> > Subject: Re: [PATCH v13 1/4] RDMA/umem: Support importing dma-buf as user memory region
> >
> > On Mon, Dec 07, 2020 at 02:15:50PM -0800, Jianxin Xiong wrote:
> > > Dma-buf is a standard cross-driver buffer sharing mechanism that can
> > > be used to support peer-to-peer access from RDMA devices.
> > >
> > > Device memory exported via dma-buf is associated with a file descriptor.
> > > This is passed to the user space as a property associated with the
> > > buffer allocation. When the buffer is registered as a memory region,
> > > the file descriptor is passed to the RDMA driver along with other
> > > parameters.
> > >
> > > Implement the common code for importing dma-buf object and mapping
> > > dma-buf pages.
> > >
> > > Signed-off-by: Jianxin Xiong <jianxin.xiong at intel.com>
> > > Reviewed-by: Sean Hefty <sean.hefty at intel.com>
> > > Acked-by: Michael J. Ruhl <michael.j.ruhl at intel.com>
> > > Acked-by: Christian Koenig <christian.koenig at amd.com>
> > > Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> > >
> > > Conflicts:
> > > include/rdma/ib_umem.h
> >
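For completeness, the fd described above is consumed on the userspace side by
the companion rdma-core series. A rough, untested sketch of that flow follows;
ibv_reg_dmabuf_mr() and its exact signature come from that proposal, not from
this kernel patch, so treat the details as assumptions:

	#include <infiniband/verbs.h>

	/*
	 * Hedged sketch: register a dma-buf fd (e.g. exported by a GPU
	 * driver) as an RDMA memory region via the proposed rdma-core verb.
	 */
	static struct ibv_mr *reg_dmabuf(struct ibv_pd *pd, int dmabuf_fd,
					 size_t len)
	{
		/* offset 0, iova 0: expose the whole dma-buf from its start */
		return ibv_reg_dmabuf_mr(pd, 0, len, 0, dmabuf_fd,
					 IBV_ACCESS_LOCAL_WRITE |
					 IBV_ACCESS_REMOTE_READ |
					 IBV_ACCESS_REMOTE_WRITE);
	}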
<...>
> > > +	/*
> > > +	 * Although the sg list is valid now, the content of the pages
> > > +	 * may not be up to date. Wait for the exporter to finish
> > > +	 * the migration.
> > > +	 */
> > > +	fence = dma_resv_get_excl(umem_dmabuf->attach->dmabuf->resv);
> > > +	if (fence)
> > > +		dma_fence_wait(fence, false);
> >
> > Any reason not to check the return value of dma_fence_wait()?
>
> This is called with the interruptible flag set to false, so normally it should only return 0.
> I see existing callers that check the result and others that don't. Maybe
> we can add a WARN_ON here?
I have no idea :), I just saw that other places check the returned value.
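If it helps, an untested sketch of the WARN_ON variant discussed above,
reusing the names from the quoted hunk:

	fence = dma_resv_get_excl(umem_dmabuf->attach->dmabuf->resv);
	if (fence) {
		long ret = dma_fence_wait(fence, false);

		/*
		 * With intr == false the wait cannot be interrupted, so a
		 * non-zero return is unexpected; flag it instead of
		 * silently ignoring it.
		 */
		WARN_ON(ret);
	}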
<...>
> > > +
> > > +struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
> > > +				   unsigned long offset, size_t size,
> > > +				   int fd, int access,
> > > +				   const struct dma_buf_attach_ops *ops)
> > > +{
> > > +	struct dma_buf *dmabuf;
> > > +	struct ib_umem_dmabuf *umem_dmabuf;
> > > +	struct ib_umem *umem;
> > > +	unsigned long end;
> > > +	long ret = -EINVAL;
> >
> > That is the wrong type for the returned value. One possible option is to declare "struct ib_umem *ret;" and set ret = ERR_PTR(-EINVAL) or
> > ret = ERR_CAST(dmabuf);
>
> At the point where the value is actually returned, ERR_PTR(ret) is used. I think we can change the
> variable name to "err" instead to avoid confusion.
The point is that "ret" should be declared as "struct ib_umem *", not as
"long", and ERR_CAST() should be used instead of a (void *) cast.
<...>
> > > +static inline struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
> > > +						 unsigned long offset,
> > > +						 size_t size, int fd,
> > > +						 int access,
> > > +						 struct dma_buf_attach_ops *ops)
> > > +{
> > > +	return ERR_PTR(-EINVAL);
> >
> > Probably it should be EOPNOTSUPP and not EINVAL.
>
> EINVAL is used here to be consistent with existing definitions in the same file.
ok
Thanks