[PATCH v6 0/2] Add p2p via dmabuf to habanalabs
Oded Gabbay
ogabbay at kernel.org
Wed Sep 15 07:45:36 UTC 2021
On Tue, Sep 14, 2021 at 7:12 PM Jason Gunthorpe <jgg at ziepe.ca> wrote:
>
> On Tue, Sep 14, 2021 at 04:18:31PM +0200, Daniel Vetter wrote:
> > On Sun, Sep 12, 2021 at 07:53:07PM +0300, Oded Gabbay wrote:
> > > Hi,
> > > Re-sending this patch-set following the release of our user-space TPC
> > > compiler and runtime library.
> > >
> > > I would appreciate a review on this.
> >
> > I think the big open we have is the entire revoke discussion. Having the
> > option to let dma-bufs hang around which map to random local memory ranges,
> > without a clear ownership link and a way to kill them, sounds bad to me.
> >
> > I think there's a few options:
> > - We require revoke support. But I've heard rdma really doesn't like that,
> > I guess because taking out an MR while holding the dma_resv_lock would
> > be an inversion, so can't be done. Jason, can you recap what exactly the
> > hold-up was again that makes this a no-go?
>
> RDMA HW can't do revoke.
>
> So we have to exclude almost all the HW and several interesting use
> cases to enable a revoke operation.
>
> > - For non-revokable things like these dma-buf we'd keep a drm_master
> > reference around. This would prevent the next open to acquire
> > ownership rights, which at least prevents all the nasty potential
> > problems.
>
> This is what I generally would expect: the DMABUF FD and its DMA
> memory just float about until the unrevokable user releases them, which
> happens when the FD that is driving the import eventually gets closed.
This is exactly what we are doing in the driver. We make sure
everything stays valid until the unrevokable user releases it, and that
happens only when the dmabuf fd is closed.
In addition, the user can't close the device fd until they have released
the dmabuf fd, so there is no leakage between users.
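
To illustrate the lifetime model, here is a simplified sketch of the usual
dma-buf export pattern we rely on. The names (hl_dmabuf_priv, hl_ctx_get/put,
hl_export_dmabuf) are illustrative only, not the actual code from the patches:
the exporter takes a reference on the user context when creating the dma-buf
and drops it only in the .release callback, so the backing device memory
cannot go away while any importer still holds the fd.

    /*
     * Simplified sketch only; hl_dmabuf_priv, hl_ctx_get/put and
     * hl_export_dmabuf are illustrative names, not the actual
     * habanalabs code. The attach/map/unmap ops are omitted for brevity.
     */
    struct hl_dmabuf_priv {
            struct hl_ctx *ctx;     /* user context kept alive by the export */
            u64 device_addr;        /* device memory backing the buffer */
    };

    static void hl_dmabuf_release(struct dma_buf *dmabuf)
    {
            struct hl_dmabuf_priv *priv = dmabuf->priv;

            /*
             * Only when the last importer reference is gone does the
             * context (and the memory behind it) become releasable.
             */
            hl_ctx_put(priv->ctx);
            kfree(priv);
    }

    static const struct dma_buf_ops hl_dmabuf_ops = {
            /* .attach / .map_dma_buf / .unmap_dma_buf omitted for brevity */
            .release = hl_dmabuf_release,
    };

    static int hl_export_dmabuf(struct hl_ctx *ctx, u64 device_addr, u64 size)
    {
            DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
            struct hl_dmabuf_priv *priv;
            struct dma_buf *dmabuf;

            priv = kzalloc(sizeof(*priv), GFP_KERNEL);
            if (!priv)
                    return -ENOMEM;

            hl_ctx_get(ctx);                /* pin the context for the export */
            priv->ctx = ctx;
            priv->device_addr = device_addr;

            exp_info.ops = &hl_dmabuf_ops;
            exp_info.size = size;
            exp_info.priv = priv;

            dmabuf = dma_buf_export(&exp_info);
            if (IS_ERR(dmabuf)) {
                    hl_ctx_put(ctx);
                    kfree(priv);
                    return PTR_ERR(dmabuf);
            }

            return dma_buf_fd(dmabuf, O_CLOEXEC);
    }

The point is that closing the device fd does not by itself tear down the
export; the memory is released only once the dmabuf fd (and any import
driven by it) is finally closed.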
>
> I still don't think any of the complexity is needed, pinnable memory
> is a thing in Linux, just account for it in mlocked and that is
> enough.
>
> Jason
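
For context, the mlock-style accounting Jason mentions is roughly what the
RDMA umem code already does for long-term pins. A minimal sketch (function
names are illustrative, not part of these patches) would be:

    /*
     * Sketch of RLIMIT_MEMLOCK-style accounting for long-term pinned
     * memory, similar in spirit to what ib_umem_get() does. Function
     * names are illustrative only.
     */
    static int account_pinned_pages(struct mm_struct *mm, unsigned long npages)
    {
            unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
            unsigned long pinned;

            pinned = atomic64_add_return(npages, &mm->pinned_vm);
            if (pinned > lock_limit && !capable(CAP_IPC_LOCK)) {
                    atomic64_sub(npages, &mm->pinned_vm);
                    return -ENOMEM;
            }
            return 0;
    }

    static void unaccount_pinned_pages(struct mm_struct *mm, unsigned long npages)
    {
            atomic64_sub(npages, &mm->pinned_vm);
    }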