[PATCH v9 4/5] RDMA/mlx5: Support dma-buf based userspace memory region
Jason Gunthorpe
jgg at ziepe.ca
Mon Nov 9 20:52:32 UTC 2020
On Mon, Nov 09, 2020 at 11:23:00AM -0800, Jianxin Xiong wrote:
> @@ -1291,8 +1303,11 @@ static struct mlx5_ib_mr *reg_create(struct ib_mr *ibmr, struct ib_pd *pd,
> int err;
> bool pg_cap = !!(MLX5_CAP_GEN(dev->mdev, pg));
>
> - page_size =
> - mlx5_umem_find_best_pgsz(umem, mkc, log_page_size, 0, iova);
> + if (umem->is_dmabuf)
> + page_size = ib_umem_find_best_pgsz(umem, PAGE_SIZE, iova);
> + else
> + page_size = mlx5_umem_find_best_pgsz(umem, mkc, log_page_size,
> + 0, iova);
Any place touching the sgl also has to hold the resv lock, and the sgl
might be NULL since an invalidation can come in at any time, eg before
we get here.

You can avoid those problems by ignoring the SGL and hard-wiring
PAGE_SIZE here.
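ie something like this untested sketch, keeping the dmabuf path off
the SGL entirely (names as in the v9 patch):

	if (umem->is_dmabuf) {
		/*
		 * Don't derive the page size from the SGL at creation
		 * time: it may be NULL, or invalidated at any moment,
		 * unless the dmabuf resv lock is held.
		 */
		page_size = PAGE_SIZE;
	} else {
		page_size = mlx5_umem_find_best_pgsz(umem, mkc,
						     log_page_size, 0,
						     iova);
	}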
> +static int pagefault_dmabuf_mr(struct mlx5_ib_mr *mr, size_t bcnt,
> + u32 *bytes_mapped, u32 flags)
> +{
> + struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(mr->umem);
> + u32 xlt_flags = 0;
> + int err;
> +
> + if (flags & MLX5_PF_FLAGS_ENABLE)
> + xlt_flags |= MLX5_IB_UPD_XLT_ENABLE;
> +
> + dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
> + err = ib_umem_dmabuf_map_pages(umem_dmabuf);
> + if (!err)
> + err = mlx5_ib_update_mr_pas(mr, xlt_flags);
This still has to call mlx5_umem_find_best_pgsz() each time the sgl
changes, to ensure the page size is still OK. Just checking that

   mlx5_umem_find_best_pgsz() >= PAGE_SIZE

and then throwing away the value is fine.
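Something along these lines inside pagefault_dmabuf_mr(), while the
resv lock is still held (untested sketch; I'm assuming mr->ibmr.iova
holds the user IOVA and reusing the mkc/log_page_size arguments from
reg_create()):

	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
	err = ib_umem_dmabuf_map_pages(umem_dmabuf);
	if (!err) {
		/*
		 * The SGL was just rebuilt, so revalidate it: the new
		 * layout must still support at least the PAGE_SIZE
		 * granularity the mkey was created with. The computed
		 * value is only checked, then discarded.
		 */
		if (mlx5_umem_find_best_pgsz(mr->umem, mkc,
					     log_page_size, 0,
					     mr->ibmr.iova) < PAGE_SIZE)
			err = -EINVAL;
		else
			err = mlx5_ib_update_mr_pas(mr, xlt_flags);
	}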
Jason