[Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support

Connor Abbott cwabbott0 at gmail.com
Mon May 19 22:23:00 UTC 2025


On Mon, May 19, 2025 at 5:51 PM Rob Clark <robdclark at gmail.com> wrote:
>
> On Mon, May 19, 2025 at 2:45 PM Dave Airlie <airlied at gmail.com> wrote:
> >
> > On Tue, 20 May 2025 at 07:25, Rob Clark <robdclark at gmail.com> wrote:
> > >
> > > On Mon, May 19, 2025 at 2:15 PM Dave Airlie <airlied at gmail.com> wrote:
> > > >
> > > > On Tue, 20 May 2025 at 03:54, Rob Clark <robdclark at gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark at chromium.org>
> > > > >
> > > > > Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> > > > > Memory[2] in the form of:
> > > > >
> > > > > 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
> > > > >    MAP_NULL/UNMAP commands
> > > > >
> > > > > 2. A new VM_BIND ioctl to allow submitting batches of one or more
> > > > >    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> > > > >
> > > > > I did not implement support for synchronous VM_BIND commands.  Since
> > > > > userspace could just immediately wait for the `SUBMIT` to complete, I don't
> > > > > think we need this extra complexity in the kernel.  Synchronous/immediate
> > > > > VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.
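> > > > >
> > > > > To give a rough feel for it, a VM_BIND submission is just an array of
> > > > > bind ops aimed at a VM_BIND submitqueue, something like the following
> > > > > (illustrative only; see the msm_drm.h changes in this series for the
> > > > > real struct and field names):
> > > > >
> > > > >    struct drm_msm_vm_bind_op ops[] = {         /* names illustrative */
> > > > >       { .op = MSM_SUBMIT_BO_OP_MAP,             /* map BO into the VM */
> > > > >         .handle = bo_handle, .iova = va, .range = size },
> > > > >       { .op = MSM_SUBMIT_BO_OP_UNMAP,           /* tear down a range  */
> > > > >         .iova = old_va, .range = old_size },
> > > > >    };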
> > > >
> > > > This seems suboptimal for Vulkan userspaces. Non-sparse binds are all
> > > > synchronous, so are you adding an extra ioctl to wait for them, or do
> > > > you manage these via a different mechanism?
> > >
> > > Normally it's just an extra in-fence for the SUBMIT ioctl to ensure
> > > the binds happen before cmd execution.
> > >
> > > When it comes to UAPI, it's easier to add something later than to
> > > take something away, so I don't see a problem adding synchronous binds
> > > later if that proves to be needed.  But I don't think it is.
> >
> > I'm not 100% sure that behaviour is conformant with the Vulkan spec.
> >
> > Two questions come to mind:
> > 1. Where is this out fence stored? Vulkan is explicit and makes no
> > guarantees about what other threads are doing, so it seems like you'd
> > need a lock in the Vulkan driver to store it, especially if multiple
> > threads bind memory.
>
> turnip protects dev->vm_bind_fence_fd with a u_rwlock

To add to that, the exact order in which the fence gets updated doesn't
really matter, because a Vulkan app can't use anything in a submit until
the turnip function that allocates + binds the BO has returned and the
Vulkan-level object has been handed back to the user. We just have to
make sure that the fence is "new enough" when we return the BO. It
doesn't matter if multiple threads are creating/destroying objects: the
thread doing the vkQueueSubmit() must have observed the creation of all
resources used in the submit and will therefore see a new enough fence.
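
In code terms the pattern is roughly this (a simplified sketch, not the
exact turnip source; the lock name and helpers are approximate, but
dev->vm_bind_fence_fd is the field mentioned above):

   /* bind path: runs whenever a BO is allocated + bound on the
    * VM_BIND queue */
   u_rwlock_wrlock(&dev->vm_bind_rwlock);
   if (dev->vm_bind_fence_fd >= 0)
      close(dev->vm_bind_fence_fd);
   /* every bind goes through the same queue, so the newest out-fence
    * also covers all earlier binds */
   dev->vm_bind_fence_fd = bind_out_fence_fd;
   u_rwlock_wrunlock(&dev->vm_bind_rwlock);

   /* submit path: vkQueueSubmit() passes the current fence as an
    * in-fence; it is at least as new as the binds of every resource
    * the submitting thread can observe */
   u_rwlock_rdlock(&dev->vm_bind_rwlock);
   in_fence_fd = os_dupfd_cloexec(dev->vm_bind_fence_fd);
   u_rwlock_rdunlock(&dev->vm_bind_rwlock);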

>
> > 2. If it's fine to lazily bind on the hw side, do you also handle the
> > case where something is bound and immediately freed? Where does the
> > fence go then? Do you wait for the fence before destroying things?
>
> right now turnip is just relying on the UNMAP/unbind going through the
> same queue.. but I guess it could also use vm_bind_fence_fd as an
> in-fence.
>
> BR,
> -R

Yeah, we always submit all non-sparse map/unmap on the same queue so
they're always synchronized wrt each other. We destroy the GEM object
right away after submitting the final unmap and rely on the kernel to
hold a reference to the BO in the unmap job.
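
Roughly (the helper name here is made up just for illustration):

   /* queue the final unmap on the same VM_BIND queue... */
   tu_vm_bind_queue_unmap(dev, bo->iova, bo->size);
   /* ...then drop our handle right away; the kernel's unmap job keeps
    * its own reference to the BO until the unmap has executed */
   drmCloseBufferHandle(dev->fd, bo->gem_handle);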

Connor

