[Intel-gfx] [PATCH 04/25] drm/i915/userptr: Avoid struct_mutex recursion for mmu_invalidate_range_start

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Tue Nov 6 14:11:02 UTC 2018


On 06/11/2018 13:02, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-11-06 12:41:53)
>>
>> On 02/11/2018 16:12, Chris Wilson wrote:
>>> Since commit 93065ac753e4 ("mm, oom: distinguish blockable mode for mmu
>>> notifiers") we have been able to report failure from
>>> mmu_invalidate_range_start which allows us to use a trylock on the
>>> struct_mutex to avoid potential recursion and report -EBUSY instead.
>>> Furthermore, this allows us to pull the work into the main callback and
>>> avoid the sleight-of-hand in using a workqueue to avoid lockdep.
>>>
>>> However, not all paths to mmu_invalidate_range_start are prepared to
>>> handle failure, so instead of reporting the recursion, deal with it.
>>
>> Please add a paragraph on how the patch deals with it.
> 
> Same as before, we cry into dmesg about it.

A few words about the async cancel handling in put_pages, since that is 
the approach taken, would be enough.

>>> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108375
>>> References: 93065ac753e4 ("mm, oom: distinguish blockable mode for mmu notifiers")
>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>> ---
>>>    drivers/gpu/drm/i915/i915_drv.h         |   4 +-
>>>    drivers/gpu/drm/i915/i915_gem.c         |  18 +-
>>>    drivers/gpu/drm/i915/i915_gem_object.h  |   7 +
>>>    drivers/gpu/drm/i915/i915_gem_userptr.c | 217 +++++++++++-------------
>>>    4 files changed, 120 insertions(+), 126 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
>>> index 2a88a7eb871b..1056b12c3bc8 100644
>>> --- a/drivers/gpu/drm/i915/i915_drv.h
>>> +++ b/drivers/gpu/drm/i915/i915_drv.h
>>> @@ -3073,8 +3073,8 @@ enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock */
>>>        I915_MM_SHRINKER
>>>    };
>>>    
>>> -void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>>> -                              enum i915_mm_subclass subclass);
>>> +int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>>> +                             enum i915_mm_subclass subclass);
>>>    void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj);
>>>    
>>>    enum i915_map_type {
>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>> index 93d09282710d..9a8af9454a53 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>> @@ -2429,8 +2429,8 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
>>>        struct sg_table *pages;
>>>    
>>>        pages = fetch_and_zero(&obj->mm.pages);
>>> -     if (!pages)
>>> -             return NULL;
>>> +     if (IS_ERR_OR_NULL(pages))
>>> +             return pages;
>>>    
>>>        spin_lock(&i915->mm.obj_lock);
>>>        list_del(&obj->mm.link);
>>> @@ -2454,17 +2454,16 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
>>>        return pages;
>>>    }
>>>    
>>> -void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>>> -                              enum i915_mm_subclass subclass)
>>> +int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>>> +                             enum i915_mm_subclass subclass)
>>>    {
>>>        struct sg_table *pages;
>>> +     int ret = -EBUSY;
>>>    
>>>        if (i915_gem_object_has_pinned_pages(obj))
>>> -             return;
>>> +             return -EBUSY;
>>
>> Could return ret since you just loaded it with -EBUSY but not sure if it
>> makes any difference.
>>
>>>    
>>>        GEM_BUG_ON(obj->bind_count);
>>> -     if (!i915_gem_object_has_pages(obj))
>>> -             return;
>>>    
>>>        /* May be called by shrinker from within get_pages() (on another bo) */
>>>        mutex_lock_nested(&obj->mm.lock, subclass);
>>> @@ -2477,11 +2476,16 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
>>>         * lists early.
>>>         */
>>>        pages = __i915_gem_object_unset_pages(obj);
>>> +     if (!pages && !i915_gem_object_needs_async_cancel(obj))
>>
>> Would async_put_pages be better?
> 
> async_put_pages comes later. There's a large overlap here with the async
> get_pages work which has a lot of the same dilemmas with how to handle
> the cancellation of that work. This is a nasty midlayer hack, hopefully
> temporary, to avoid having to handle NULL pages in all the callbacks.

Ok.
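
For my own benefit, the resulting flow in __i915_gem_object_put_pages() 
then reads roughly like this - paraphrasing the hunk being quoted here, 
with the intent spelled out, rather than the literal patch:

	pages = __i915_gem_object_unset_pages(obj);

	/* Backends flagged ASYNC_CANCEL (userptr) must always see their
	 * ->put_pages() callback, even with no pages installed, so that
	 * they can cancel an outstanding worker; everyone else gets an
	 * error cookie so the callback is skipped when there is nothing
	 * to do.
	 */
	if (!pages && !i915_gem_object_needs_async_cancel(obj))
		pages = ERR_PTR(-EINVAL);
	if (!IS_ERR(pages))
		obj->ops->put_pages(obj, pages);

	ret = 0;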

>>> +             pages = ERR_PTR(-EINVAL);
>>>        if (!IS_ERR(pages))
>>>                obj->ops->put_pages(obj, pages);
>>>    
>>> +     ret = 0;
>>
>> Funnily enough, no callers seem to be bothered with the return code from
>> this function.
> 
> Since they followed the call with a warn if it didn't do the job. Now we
> have a path where we can push that warn to the caller.
> 
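Right, and the notifier path further down can then act on that return 
code instead of a local WARN - roughly (a sketch of the hunks below, not 
the literal patch):

	ret = i915_gem_object_unbind(obj);
	if (ret == 0)
		ret = __i915_gem_object_put_pages(obj, I915_MM_SHRINKER);
	/* ret is now propagated to (or cried about on behalf of) the
	 * callers of invalidate_range_start */
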
>>>    unlock:
>>>        mutex_unlock(&obj->mm.lock);
>>> +
>>> +     return ret;
>>>    }
>>>    
>>>    bool i915_sg_trim(struct sg_table *orig_st)
>>> diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
>>> index a6dd7c46de0d..49ce797173b5 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem_object.h
>>> +++ b/drivers/gpu/drm/i915/i915_gem_object.h
>>> @@ -56,6 +56,7 @@ struct drm_i915_gem_object_ops {
>>>    #define I915_GEM_OBJECT_HAS_STRUCT_PAGE     BIT(0)
>>>    #define I915_GEM_OBJECT_IS_SHRINKABLE       BIT(1)
>>>    #define I915_GEM_OBJECT_IS_PROXY    BIT(2)
>>> +#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(3)
>>
>> Somewhere we need a comment blurb explaining what async_cancel means.
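
Based on your explanation above, maybe something along these lines for 
the blurb (just a suggestion, wording up to you):

	/*
	 * I915_GEM_OBJECT_ASYNC_CANCEL: pages may be acquired by an async
	 * worker (e.g. userptr's get_pages worker), so ->put_pages() must
	 * be called even when no pages are installed, in order to cancel
	 * any such outstanding work.
	 */
	#define I915_GEM_OBJECT_ASYNC_CANCEL	BIT(3)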
>>
>>>    
>>>        /* Interface between the GEM object and its backing storage.
>>>         * get_pages() is called once prior to the use of the associated set
>>> @@ -386,6 +387,12 @@ i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
>>>        return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY;
>>>    }
>>>    
>>> +static inline bool
>>> +i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
>>> +{
>>> +     return obj->ops->flags & I915_GEM_OBJECT_ASYNC_CANCEL;
>>> +}
>>> +
>>>    static inline bool
>>>    i915_gem_object_is_active(const struct drm_i915_gem_object *obj)
>>>    {
>>> diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
>>> index 2c9b284036d1..ab5ae426e27b 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
>>> @@ -50,79 +50,84 @@ struct i915_mmu_notifier {
>>>        struct hlist_node node;
>>>        struct mmu_notifier mn;
>>>        struct rb_root_cached objects;
>>> -     struct workqueue_struct *wq;
>>> +     struct i915_mm_struct *mm;
>>>    };
>>>    
>>>    struct i915_mmu_object {
>>>        struct i915_mmu_notifier *mn;
>>>        struct drm_i915_gem_object *obj;
>>>        struct interval_tree_node it;
>>> -     struct list_head link;
>>> -     struct work_struct work;
>>> -     bool attached;
>>>    };
>>>    
>>> -static void cancel_userptr(struct work_struct *work)
>>> -{
>>> -     struct i915_mmu_object *mo = container_of(work, typeof(*mo), work);
>>> -     struct drm_i915_gem_object *obj = mo->obj;
>>> -     struct work_struct *active;
>>> -
>>> -     /* Cancel any active worker and force us to re-evaluate gup */
>>> -     mutex_lock(&obj->mm.lock);
>>> -     active = fetch_and_zero(&obj->userptr.work);
>>> -     mutex_unlock(&obj->mm.lock);
>>> -     if (active)
>>> -             goto out;
>>> -
>>> -     i915_gem_object_wait(obj, I915_WAIT_ALL, MAX_SCHEDULE_TIMEOUT, NULL);
>>> -
>>> -     mutex_lock(&obj->base.dev->struct_mutex);
>>> -
>>> -     /* We are inside a kthread context and can't be interrupted */
>>> -     if (i915_gem_object_unbind(obj) == 0)
>>> -             __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
>>> -     WARN_ONCE(i915_gem_object_has_pages(obj),
>>> -               "Failed to release pages: bind_count=%d, pages_pin_count=%d, pin_global=%d\n",
>>> -               obj->bind_count,
>>> -               atomic_read(&obj->mm.pages_pin_count),
>>> -               obj->pin_global);
>>> -
>>> -     mutex_unlock(&obj->base.dev->struct_mutex);
>>> -
>>> -out:
>>> -     i915_gem_object_put(obj);
>>> -}
>>> -
>>>    static void add_object(struct i915_mmu_object *mo)
>>>    {
>>> -     if (mo->attached)
>>> +     if (!RB_EMPTY_NODE(&mo->it.rb))
>>>                return;
>>>    
>>>        interval_tree_insert(&mo->it, &mo->mn->objects);
>>> -     mo->attached = true;
>>>    }
>>>    
>>>    static void del_object(struct i915_mmu_object *mo)
>>>    {
>>> -     if (!mo->attached)
>>> +     if (RB_EMPTY_NODE(&mo->it.rb))
>>>                return;
>>>    
>>>        interval_tree_remove(&mo->it, &mo->mn->objects);
>>> -     mo->attached = false;
>>> +     RB_CLEAR_NODE(&mo->it.rb);
>>> +}
>>> +
>>> +static void
>>> +__i915_gem_userptr_set_active(struct drm_i915_gem_object *obj, bool value)
>>> +{
>>> +     struct i915_mmu_object *mo = obj->userptr.mmu_object;
>>> +
>>> +     /*
>>> +      * During mm_invalidate_range we need to cancel any userptr that
>>> +      * overlaps the range being invalidated. Doing so requires the
>>> +      * struct_mutex, and that risks recursion. In order to cause
>>> +      * recursion, the user must alias the userptr address space with
>>> +      * a GTT mmapping (possible with a MAP_FIXED) - then when we have
>>> +      * to invalidate that mmaping, mm_invalidate_range is called with
>>> +      * the userptr address *and* the struct_mutex held.  To prevent that
>>> +      * we set a flag under the i915_mmu_notifier spinlock to indicate
>>> +      * whether this object is valid.
>>> +      */
>>> +     if (!mo)
>>> +             return;
>>> +
>>> +     spin_lock(&mo->mn->lock);
>>> +     if (value)
>>> +             add_object(mo);
>>> +     else
>>> +             del_object(mo);
>>> +     spin_unlock(&mo->mn->lock);
>>> +}
>>> +
>>> +static struct mutex *__i915_mutex_lock_recursive(struct mutex *m)
>>> +{
>>> +     switch (mutex_trylock_recursive(m)) {
>>> +     default:
>>> +     case MUTEX_TRYLOCK_FAILED:
>>> +             mutex_lock(m);
> 
> I haven't yet seen a reason to, but this is probably best to be marked
> up as a nested lock just like the shrinker. So far, I haven't seen a
> caller from inside fs_reclaim, but the try_to_migrate (deadlock) paths are
> similar enough.

On struct mutex? How will that work?
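
Do you mean something like this in __i915_mutex_lock_recursive (a purely 
hypothetical sketch, just to check I understand the suggestion):

	case MUTEX_TRYLOCK_FAILED:
		mutex_lock_nested(m, SINGLE_DEPTH_NESTING);
		/* fall through */
	case MUTEX_TRYLOCK_SUCCESS:
		return m;

I don't see which subclass annotation lockdep would be happy with on 
struct_mutex here, hence the question.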

> 
>>> +     case MUTEX_TRYLOCK_SUCCESS:
>>> +             return m;
>>> +
>>> +     case MUTEX_TRYLOCK_RECURSIVE:
>>> +     case MUTEX_TRYLOCK_RECURSIVE:
>>> +             return ERR_PTR(-EEXIST);
>>> +     }
>>> +}
>>>    
>>>    static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>> -                                                    struct mm_struct *mm,
>>> -                                                    unsigned long start,
>>> -                                                    unsigned long end,
>>> -                                                    bool blockable)
>>> +                                                   struct mm_struct *mm,
>>> +                                                   unsigned long start,
>>> +                                                   unsigned long end,
>>> +                                                   bool blockable)
>>>    {
>>>        struct i915_mmu_notifier *mn =
>>>                container_of(_mn, struct i915_mmu_notifier, mn);
>>> -     struct i915_mmu_object *mo;
>>>        struct interval_tree_node *it;
>>> -     LIST_HEAD(cancelled);
>>> +     struct mutex *unlock = NULL;
>>> +     int ret = 0;
>>>    
>>>        if (RB_EMPTY_ROOT(&mn->objects.rb_root))
>>>                return 0;
>>> @@ -133,11 +138,15 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>>        spin_lock(&mn->lock);
>>>        it = interval_tree_iter_first(&mn->objects, start, end);
>>>        while (it) {
>>> +             struct drm_i915_gem_object *obj;
>>> +
>>>                if (!blockable) {
>>> -                     spin_unlock(&mn->lock);
>>> -                     return -EAGAIN;
>>> +                     ret = -EAGAIN;
>>> +                     break;
>>>                }
>>> -             /* The mmu_object is released late when destroying the
>>> +
>>> +             /*
>>> +              * The mmu_object is released late when destroying the
>>>                 * GEM object so it is entirely possible to gain a
>>>                 * reference on an object in the process of being freed
>>>                 * since our serialisation is via the spinlock and not
>>> @@ -146,21 +155,33 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
>>>                 * use-after-free we only acquire a reference on the
>>>                 * object if it is not in the process of being destroyed.
>>>                 */
>>> -             mo = container_of(it, struct i915_mmu_object, it);
>>> -             if (kref_get_unless_zero(&mo->obj->base.refcount))
>>> -                     queue_work(mn->wq, &mo->work);
>>> -
>>> -             list_add(&mo->link, &cancelled);
>>> -             it = interval_tree_iter_next(it, start, end);
>>> +             obj = container_of(it, struct i915_mmu_object, it)->obj;
>>> +             if (!kref_get_unless_zero(&obj->base.refcount)) {
>>> +                     it = interval_tree_iter_next(it, start, end);
>>> +                     continue;
>>> +             }
>>> +             spin_unlock(&mn->lock);
>>> +
>>> +             if (!unlock)
>>> +                     unlock = __i915_mutex_lock_recursive(&mn->mm->i915->drm.struct_mutex);
>>> +             ret = i915_gem_object_unbind(obj);
>>> +             if (ret == 0)
>>> +                     ret = __i915_gem_object_put_pages(obj, I915_MM_SHRINKER);
>>
>> When the gup worker has been scheduled but not yet executed, we'll have
>> the object in the interval tree with pages == -EINVAL.
>> __i915_gem_object_put_pages will therefore skip the call to the userptr
>> put_pages, which would otherwise cancel the worker. So it seems
>> cancelling the worker will be missed under these circumstances.
>>
>> I think at least, if I haven't lost myself in the flows here..
> 
> We don't set obj->mm.pages = -EINVAL on scheduling. It remains unset
> until after the worker completes so that we don't report the pages
> beforehand. Any error set in mm.pages remains set for further callers of
> get_pages to report; until we clear the object in a put_pages.

True, my bad.

>> I am also worried about the lock dropping a few lines above. It's
>> definitely correct to restart every time, but can we get in a situation
>> where we never exit the loop?
> 
> That would require a user with the mmap_sem for writing as we hold the
> mmap_sem for reading, afaict.
> 
>> Maybe not even because new objects are
>> coming along, but because for some reason we wouldn't be deleting all
>> objects - since AFAICT we can only exit the loop once all intervals
>> have been deleted or an error has happened on some object.
> 
> If we cannot remove an object, we report an error.
> 
>> Maybe
>> kref_get_unless_zero failing? That could last for an RCU grace period
>> (long), and this loop would essentially busy-loop for that time.
> 
> The 0-ref objects are skipped; which itself is dubious but one can hand
> wave that the shadow PTE will not be used after this call to
> invalidate_range.

True, the patch is not dropping the lock in that case. So eventually, if 
such objects are all that is left, we exit the loop.

Okay, I am happy with this aspect now. But overall I am unsure whether 
this fixes things for us. We removed the deadlock chain via the 
workqueue, but we now lock the mutex directly. So why will it not be the 
same problem, just with a shorter lockdep chain?

Regards,

Tvrtko
