[Intel-gfx] [PATCH v2] drm/i915/userptr: Probe vma range before gup
Chris Wilson
chris at chris-wilson.co.uk
Fri Dec 15 10:06:41 UTC 2017
Quoting Chris Wilson (2017-12-15 09:53:45)
> We want to exclude any GGTT objects from being present on our internal
> lists to avoid the deadlock we may run into with our requirement for
> struct_mutex during invalidate. However, if gup_fast fails, we put
> the userptr onto the workqueue and mark it as active, so that we
> remember to serialise the worker upon mmu_invalidate.
>
> v2: Hold mmap_sem to prevent modifications to the mm while we probe and
> add ourselves to the interval-tree for notification.
>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104209
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> Cc: Michał Winiarski <michal.winiarski at intel.com>
> ---
> drivers/gpu/drm/i915/i915_gem_userptr.c | 52 ++++++++++++++++++++++++++++++---
> 1 file changed, 48 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> index 382a77a1097e..71971020562a 100644
> --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> @@ -598,6 +598,37 @@ __i915_gem_userptr_get_pages_schedule(struct drm_i915_gem_object *obj)
> return ERR_PTR(-EAGAIN);
> }
>
> +static int
> +probe_range(struct mm_struct *mm, unsigned long addr, unsigned long len)
> +{
> + const unsigned long end = addr + len;
> + struct vm_area_struct *vma;
> + int ret = -EFAULT;
> +
> + for (vma = find_vma(mm, addr); vma; vma = vma->vm_next) {
> + if (vma->vm_start > addr)
> + break;
> +
> + /*
> + * Exclude any VMA that is not backed solely by struct page, i.e.
> + * IO regions that include our own GGTT mmaps. We cannot handle
> + * such ranges, as we may encounter deadlocks around our
> + * struct_mutex on mmu_invalidate_range.
> + */
> + if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
> + break;
> +
> + if (vma->vm_end >= end) {
> + ret = 0;
> + break;
> + }
> +
> + addr = vma->vm_end;
> + }
> +
> + return ret;
> +}
> +
> static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
> {
> const int num_pages = obj->base.size >> PAGE_SHIFT;
> @@ -632,9 +663,17 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
> return -EAGAIN;
> }
>
> - pvec = NULL;
> - pinned = 0;
> + /* Quickly exclude any invalid VMA */
> + down_read(&mm->mmap_sem);
> + pinned = probe_range(mm, obj->userptr.ptr, obj->base.size);
> + if (pinned)
> + goto err_mmap_sem;
> +
> + pinned = __i915_gem_userptr_set_active(obj, true);
> + if (pinned)
> + goto err_mmap_sem;
>
> + pvec = NULL;
> if (mm == current->mm) {
> pvec = kvmalloc_array(num_pages, sizeof(struct page *),
> GFP_KERNEL |
> @@ -658,14 +697,19 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
> pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
> active = !IS_ERR(pages);
> }
> - if (active)
> - __i915_gem_userptr_set_active(obj, true);
> + if (!active)
> + __i915_gem_userptr_set_active(obj, false);
> + up_read(&mm->mmap_sem);
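For reference, the case this probe is meant to reject looks roughly like the
following from userspace. This is only an illustrative sketch (not part of the
patch; helper name made up, error handling elided, assumes an already-open i915
fd): we GTT-mmap one of our own objects and then wrap that CPU address in a
userptr.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

static uint32_t userptr_of_gtt_mmap(int fd, uint64_t size)
{
	struct drm_i915_gem_create create = { .size = size };
	struct drm_i915_gem_mmap_gtt mmap_gtt = { };
	struct drm_i915_gem_userptr userptr = { };
	void *ptr;

	/* Create a plain GEM object and map it through the GGTT aperture;
	 * the resulting vma is VM_PFNMAP, i.e. not backed by struct page.
	 */
	ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
	mmap_gtt.handle = create.handle;
	ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &mmap_gtt);
	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, mmap_gtt.offset);

	/* Point a userptr at that mapping. The ioctl itself still succeeds,
	 * but when the pages are first acquired (e.g. at execbuf) the new
	 * probe_range() sees VM_PFNMAP and returns -EFAULT, so the object
	 * never reaches the mmu-notifier lists where invalidation could
	 * deadlock on struct_mutex.
	 */
	userptr.user_ptr = (uintptr_t)ptr;
	userptr.user_size = size;
	ioctl(fd, DRM_IOCTL_I915_GEM_USERPTR, &userptr);

	return userptr.handle;
}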
Otoh, if we are holding mmap_sem all this time, we don't need to call
set_active twice as we should be serialized with mmu-invalidate.
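A rough, untested sketch (for discussion only, not what v2 does) of how the
tail of i915_gem_userptr_get_pages() might then look, keeping mmap_sem held
from the probe until the pages are pinned or the worker is queued, with a
single set_active on success:

	/* Probe and pin inside one mmap_sem read section */
	down_read(&mm->mmap_sem);

	ret = probe_range(mm, obj->userptr.ptr, obj->base.size);
	if (ret) {
		up_read(&mm->mmap_sem);
		return ret;
	}

	pvec = NULL;
	pinned = 0;
	if (mm == current->mm) {
		pvec = kvmalloc_array(num_pages, sizeof(struct page *),
				      GFP_KERNEL |
				      __GFP_NORETRY |
				      __GFP_NOWARN);
		if (pvec) /* defer to worker if malloc fails */
			pinned = __get_user_pages_fast(obj->userptr.ptr,
						       num_pages,
						       !obj->userptr.read_only,
						       pvec);
	}

	active = false;
	if (pinned < num_pages) {
		/* Slow path: the worker re-pins under its own mmap_sem. */
		pages = __i915_gem_userptr_get_pages_schedule(obj);
		active = pages == ERR_PTR(-EAGAIN);
	} else {
		pages = __i915_gem_userptr_alloc_pages(obj, pvec, num_pages);
		active = !IS_ERR(pages);
	}

	/* One set_active call, still under mmap_sem, so there is no race
	 * with the mmu-notifier to unwind on the error paths.
	 */
	if (active)
		__i915_gem_userptr_set_active(obj, true);

	up_read(&mm->mmap_sem);

	/* release_pages/kvfree error unwind as before */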
-Chris