[PATCH 1/3] dma_resv: prime lockdep annotations

Koenig, Christian <Christian.Koenig at amd.com>
Tue Aug 20 14:56:36 UTC 2019


On 20.08.19 16:53, Daniel Vetter wrote:
> Full audit of everyone:
>
> - i915, radeon, amdgpu should be clean per their maintainers.
>
> - vram helpers should be fine: they don't do command submission, so
>    they have no business holding struct_mutex while doing
>    copy_*_user. But I haven't checked them all.
>
> - panfrost seems to take dma_resv_lock only in panfrost_job_push(),
>    which looks clean.
>
> - v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl();
>    all copying from/to userspace happens in v3d_lookup_bos(), which
>    is outside of the critical section.
>
> - vmwgfx has a bunch of ioctls that do their own copy_*_user:
>    - vmw_execbuf_process: First this does some copies in
>      vmw_execbuf_cmdbuf() and also in vmw_execbuf_process() itself.
>      Then comes the usual ttm reserve/validate sequence, then actual
>      submission/fencing, then unreserving, and finally some more
>      copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons
>      of details, but it all looks safe.
>    - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to
>      be seen; it seems to only create a fence and copy it out.
>    - a pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to
>      be found there.
>    Summary: vmwgfx seems to be fine too.
>
> - virtio: There's virtio_gpu_execbuffer_ioctl, which does all the
>    copying from userspace before even looking up objects through their
>    handles, so safe. Plus the getparam/getcaps ioctls, also both safe.
>
> - qxl only has qxl_execbuffer_ioctl, which calls into
>    qxl_process_single_command. There's a lovely comment before the
>    __copy_from_user_inatomic that the slowpath should be copied from
>    i915, but I guess that never happened (the usual inatomic-copy-
>    plus-slowpath pattern is sketched below, after this list). Try not
>    to be unlucky and get your CS data evicted between when it's
>    written and when the kernel tries to read it. The only other
>    copy_from_user is for relocs, but those are done before
>    qxl_release_reserve_list(), which seems to be the only thing
>    reserving buffers (in the ttm/dma_resv sense) in that code. So it
>    looks safe.
>
> - A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl
>    in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks
>    this everywhere and needs to be fixed up (the bad vs. good
>    ordering is sketched right below).
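
To make that concrete, here is a minimal sketch of the bad vs. good
ordering the audit checks for; the names (bo, cmds, user_cmds, size)
are made up for illustration and not taken from any driver:

	/* Bad: copy_*_user can fault, the fault path takes mmap_sem,
	 * and fault handling can recurse into reclaim, which may in
	 * turn want dma_resv locks. Faulting with a dma_resv lock held
	 * inverts exactly the ordering this patch primes.
	 */
	dma_resv_lock(bo->resv, NULL);
	ret = copy_from_user(cmds, user_cmds, size);	/* don't */
	dma_resv_unlock(bo->resv);

	/* Good: do all userspace copies up front, then reserve and
	 * submit without touching user memory in the critical section.
	 */
	if (copy_from_user(cmds, user_cmds, size))
		return -EFAULT;
	dma_resv_lock(bo->resv, NULL);
	/* ... validate and submit, no copy_*_user in here ... */
	dma_resv_unlock(bo->resv);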
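And for the qxl slowpath mentioned above, the usual shape of the fix
(roughly what that comment suggests copying from i915) is to attempt a
non-faulting copy while the buffers are reserved and fall back to a
faulting copy with the reservation dropped; again with made-up names:

	pagefault_disable();
	unfinished = __copy_from_user_inatomic(dst, user_ptr, size);
	pagefault_enable();
	if (unfinished) {
		/* Slowpath: drop the reservation, do the faulting copy
		 * into a temporary buffer, then relock and revalidate
		 * before consuming it.
		 */
		dma_resv_unlock(bo->resv);
		if (copy_from_user(tmp, user_ptr, size))
			return -EFAULT;
		dma_resv_lock(bo->resv, NULL);
		/* ... redo validation, then use tmp ... */
	}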
>
> Cc: Alex Deucher <alexander.deucher at amd.com>
> Cc: Christian König <christian.koenig at amd.com>
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Thomas Zimmermann <tzimmermann at suse.de>
> Cc: Rob Herring <robh at kernel.org>
> Cc: Tomeu Vizoso <tomeu.vizoso at collabora.com>
> Cc: Eric Anholt <eric at anholt.net>
> Cc: Dave Airlie <airlied at redhat.com>
> Cc: Gerd Hoffmann <kraxel at redhat.com>
> Cc: Ben Skeggs <bskeggs at redhat.com>
> Cc: "VMware Graphics" <linux-graphics-maintainer at vmware.com>
> Cc: Thomas Hellstrom <thellstrom at vmware.com>
> Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>

Reviewed-by: Christian König <christian.koenig at amd.com>

> ---
>   drivers/dma-buf/dma-resv.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 42a8f3f11681..3edca10d3faf 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -34,6 +34,7 @@
>   
>   #include <linux/dma-resv.h>
>   #include <linux/export.h>
> +#include <linux/sched/mm.h>
>   
>   /**
>    * DOC: Reservation Object Overview
> @@ -107,6 +108,17 @@ void dma_resv_init(struct dma_resv *obj)
>   			&reservation_seqcount_class);
>   	RCU_INIT_POINTER(obj->fence, NULL);
>   	RCU_INIT_POINTER(obj->fence_excl, NULL);
> +
> +	if (IS_ENABLED(CONFIG_LOCKDEP)) {
> +		if (current->mm)
> +			down_read(&current->mm->mmap_sem);
> +		ww_mutex_lock(&obj->lock, NULL);
> +		fs_reclaim_acquire(GFP_KERNEL);
> +		fs_reclaim_release(GFP_KERNEL);
> +		ww_mutex_unlock(&obj->lock);
> +		if (current->mm)
> +			up_read(&current->mm->mmap_sem);
> +	}
>   }
>   EXPORT_SYMBOL(dma_resv_init);
>   
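
The nice part is that the init-time dance above records the ordering
mmap_sem -> dma_resv -> fs_reclaim exactly once, so lockdep can flag
an inversion even on runs that never get anywhere near the real
deadlock. For example, something like this (illustrative only) will
now produce a splat the first time it runs:

	/* copy_from_user() can fault, and the fault handler takes
	 * mmap_sem. That records dma_resv -> mmap_sem, which together
	 * with the primed mmap_sem -> dma_resv is a circular
	 * dependency.
	 */
	dma_resv_lock(resv, NULL);
	ret = copy_from_user(buf, user_ptr, size);
	dma_resv_unlock(resv);

The fs_reclaim_acquire/release pair in turn records
dma_resv -> fs_reclaim, i.e. GFP_KERNEL allocations while holding a
dma_resv lock stay legal, but a shrinker that blocks on
dma_resv_lock() will be flagged.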
