[PATCH 2/2] drm/gpuvm: Add locking helpers

Danilo Krummrich dakr at redhat.com
Sat Jun 14 10:38:53 UTC 2025


On Fri, Jun 13, 2025 at 04:57:03PM -0700, Rob Clark wrote:
> For UNMAP/REMAP steps we could be needing to lock objects that are not
> explicitly listed in the VM_BIND ioctl in order to tear-down unmapped
> VAs.  These helpers handle locking/preparing the needed objects.

Yes, that's a common use-case. I think drivers typically iterate through their
drm_gpuva_ops to lock those objects.

I had a look at your link [1] and it seems that you keep a list of ops as
well, by calling vm_op_enqueue() with a new struct msm_vm_op from the
callbacks.

Please note that for exactly this case there is the op_alloc callback in
struct drm_gpuvm_ops, such that you can allocate a custom op type (i.e. struct
msm_vm_op) that embeds a struct drm_gpuva_op.
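
Roughly like this (just a sketch; the struct and function names are made up
for illustration, not taken from the msm driver):

/* Illustrative only: a driver-side op type embedding struct drm_gpuva_op,
 * allocated through the op_alloc/op_free callbacks of struct drm_gpuvm_ops.
 */
struct driver_vm_op {
	struct drm_gpuva_op base;
	struct list_head node;		/* hypothetical driver-side op list */
};

static struct drm_gpuva_op *driver_vm_op_alloc(void)
{
	struct driver_vm_op *op = kzalloc(sizeof(*op), GFP_KERNEL);

	return op ? &op->base : NULL;
}

static void driver_vm_op_free(struct drm_gpuva_op *op)
{
	kfree(container_of(op, struct driver_vm_op, base));
}

static const struct drm_gpuvm_ops driver_gpuvm_ops = {
	.op_alloc = driver_vm_op_alloc,
	.op_free  = driver_vm_op_free,
};

With that, the ops list created for you (e.g. by
drm_gpuvm_sm_map_ops_create()) already carries your per-op driver state.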

Given that, I think the proposed locking helpers here would make more sense to
operate on struct drm_gpuva_ops, rather than the callbacks.
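
I.e. something like this (sketch only, the helper name is made up), given an
ops list created via e.g. drm_gpuvm_sm_map_ops_create():

/* Sketch of a helper operating on a previously created struct drm_gpuva_ops,
 * locking every GEM object the ops touch.
 */
static int example_exec_lock_gpuva_ops(struct drm_exec *exec,
				       struct drm_gpuva_ops *ops)
{
	struct drm_gpuva_op *op;
	int ret;

	drm_gpuva_for_each_op(op, ops) {
		struct drm_gem_object *obj = NULL;

		switch (op->op) {
		case DRM_GPUVA_OP_MAP:
			obj = op->map.gem.obj;
			break;
		case DRM_GPUVA_OP_REMAP:
			obj = op->remap.unmap->va->gem.obj;
			break;
		case DRM_GPUVA_OP_UNMAP:
			obj = op->unmap.va->gem.obj;
			break;
		default:
			break;
		}

		if (!obj)
			continue;

		ret = drm_exec_lock_obj(exec, obj);
		if (ret)
			return ret;
	}

	return 0;
}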

Besides that, a few comments below.

--

One additional unrelated note: please don't use BUG_ON() in your default op
switch case. Hitting this case in a driver does not justify panicking the
whole kernel. BUG() should only be used if the machine really hits an
unrecoverable state. Please prefer a WARN_ON() splat instead.
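
I.e. in your default case, something like (the error code is up to you):

	default:
		/* A driver bug is no reason to take the machine down. */
		WARN_ON(1);
		return -EINVAL;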

[1] https://gitlab.freedesktop.org/robclark/msm/-/blob/sparse-newer/drivers/gpu/drm/msm/msm_gem_vma.c?ref_type=heads#L1150

> Note that these functions do not strictly require the VM changes to be
> applied before the next drm_gpuvm_sm_map_lock()/_unmap_lock() call.  In
> the case that VM changes from an earlier drm_gpuvm_sm_map()/_unmap()
> call result in a differing sequence of steps when the VM changes are
> actually applied, it will be the same set of GEM objects involved, so
> the locking is still correct.

I'm not sure about this part; how can we be sure that's the case?

> Signed-off-by: Rob Clark <robin.clark at oss.qualcomm.com>
> ---
>  drivers/gpu/drm/drm_gpuvm.c | 81 +++++++++++++++++++++++++++++++++++++
>  include/drm/drm_gpuvm.h     |  8 ++++
>  2 files changed, 89 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index 0ca717130541..197066dd390b 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -2390,6 +2390,87 @@ drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
>  }
>  EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
>  
> +static int
> +drm_gpuva_sm_step_lock(struct drm_gpuva_op *op, void *priv)
> +{
> +	struct drm_exec *exec = priv;
> +
> +	switch (op->op) {
> +	case DRM_GPUVA_OP_REMAP:
> +		if (op->remap.unmap->va->gem.obj)
> +			return drm_exec_lock_obj(exec, op->remap.unmap->va->gem.obj);
> +		return 0;
> +	case DRM_GPUVA_OP_UNMAP:
> +		if (op->unmap.va->gem.obj)
> +			return drm_exec_lock_obj(exec, op->unmap.va->gem.obj);
> +		return 0;
> +	default:
> +		return 0;
> +	}
> +}
> +
> +static const struct drm_gpuvm_ops lock_ops = {
> +	.sm_step_map = drm_gpuva_sm_step_lock,
> +	.sm_step_remap = drm_gpuva_sm_step_lock,
> +	.sm_step_unmap = drm_gpuva_sm_step_lock,
> +};
> +
> +/**
> + * drm_gpuvm_sm_map_lock() - locks the objects touched by a drm_gpuvm_sm_map()

I think we should name this drm_gpuvm_sm_map_exec_lock() since it only makes
sense to call this from some drm_exec loop.

> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> + * @exec: the &drm_exec locking context
> + * @num_fences: for newly mapped objects, the # of fences to reserve
> + * @req_addr: the start address of the range to unmap
> + * @req_range: the range of the mappings to unmap
> + * @req_obj: the &drm_gem_object to map
> + * @req_offset: the offset within the &drm_gem_object
> + *
> + * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
> + * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
> + * will be newly mapped.
> + *
> + * Returns: 0 on success or a negative error code
> + */
> +int
> +drm_gpuvm_sm_map_lock(struct drm_gpuvm *gpuvm,
> +		      struct drm_exec *exec, unsigned int num_fences,
> +		      u64 req_addr, u64 req_range,
> +		      struct drm_gem_object *req_obj, u64 req_offset)
> +{
> +	if (req_obj) {
> +		int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
> +		if (ret)
> +			return ret;
> +	}

Let's move this to drm_gpuva_sm_step_lock().
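
Rough sketch of what that could look like; passing num_fences through a small
context struct is just one way to do it, the names are illustrative:

struct lock_op_ctx {
	struct drm_exec *exec;
	unsigned int num_fences;
};

static int
drm_gpuva_sm_step_lock(struct drm_gpuva_op *op, void *priv)
{
	struct lock_op_ctx *ctx = priv;

	switch (op->op) {
	case DRM_GPUVA_OP_MAP:
		if (op->map.gem.obj)
			return drm_exec_prepare_obj(ctx->exec, op->map.gem.obj,
						    ctx->num_fences);
		return 0;
	case DRM_GPUVA_OP_REMAP:
		if (op->remap.unmap->va->gem.obj)
			return drm_exec_lock_obj(ctx->exec,
						 op->remap.unmap->va->gem.obj);
		return 0;
	case DRM_GPUVA_OP_UNMAP:
		if (op->unmap.va->gem.obj)
			return drm_exec_lock_obj(ctx->exec, op->unmap.va->gem.obj);
		return 0;
	default:
		return 0;
	}
}

The map helper would then pass a pointer to such a context as priv instead of
the bare exec pointer.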

> +
> +	return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
> +				  req_addr, req_range,
> +				  req_obj, req_offset);
> +
> +}
> +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_lock);


