[PATCH drm-misc-next v7 5/7] drm/gpuvm: track/lock/validate external/evicted objects

Danilo Krummrich dakr at redhat.com
Tue Oct 31 16:41:37 UTC 2023


On 10/31/23 12:34, Thomas Hellström wrote:
> On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote:
>> Currently the DRM GPUVM offers common infrastructure to track GPU VA
>> allocations and mappings, generically connect GPU VA mappings to their
>> backing buffers and perform more complex mapping operations on the
>> GPU VA space.
>>
>> However, there are more design patterns commonly used by drivers, which
>> can potentially be generalized in order to make the DRM GPUVM represent
>> a basis for GPU-VM implementations. In this context, this patch aims
>> at generalizing the following elements.
>>
>> 1) Provide a common dma-resv for GEM objects not being used outside of
>>     this GPU-VM.
>>
>> 2) Provide tracking of external GEM objects (GEM objects which are
>>     shared with other GPU-VMs).
>>
>> 3) Provide functions to efficiently lock all GEM objects' dma-resv the
>>     GPU-VM contains mappings of.
>>
>> 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
>>     of, such that validation of evicted GEM objects is accelerated.
>>
>> 5) Provide some convenience functions for common patterns.
>>
>> Big thanks to Boris Brezillon for his help to figure out locking for
>> drivers updating the GPU VA space within the fence signalling path.
>>
>> Suggested-by: Matthew Brost <matthew.brost at intel.com>
>> Signed-off-by: Danilo Krummrich <dakr at redhat.com>
> 
> The checkpatch.pl warning still persists:
> WARNING: ENOTSUPP is not a SUSV4 error code, prefer EOPNOTSUPP
> #627: FILE: drivers/gpu/drm/drm_gpuvm.c:1347:
> +		return -ENOTSUPP;

Hm, I thought I changed this one. Seems like it slipped through. Gonna
fix that.
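
I.e. the fix for the next revision is simply:

	-		return -ENOTSUPP;
	+		return -EOPNOTSUPP;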

> 
>> ---
>>   drivers/gpu/drm/drm_gpuvm.c | 633 ++++++++++++++++++++++++++++++++++++
>>   include/drm/drm_gpuvm.h     | 250 ++++++++++++++
>>   2 files changed, 883 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>> index 7f4f5919f84c..01cbeb98755a 100644
>> --- a/drivers/gpu/drm/drm_gpuvm.c
>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>> @@ -82,6 +82,21 @@
>>    * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
>>    * particular combination. If not existent a new instance is created and linked
>>    * to the &drm_gem_object.
>> + *
>> + * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used
>> + * as entry for the &drm_gpuvm's lists of external and evicted objects. Those
>> + * lists are maintained in order to accelerate locking of dma-resv locks and
>> + * validation of evicted objects bound in a &drm_gpuvm. For instance, all
>> + * &drm_gem_object's &dma_resv of a given &drm_gpuvm can be locked by calling
>> + * drm_gpuvm_exec_lock(). Once locked drivers can call drm_gpuvm_validate() in
>> + * order to validate all evicted &drm_gem_objects. It is also possible to lock
>> + * additional &drm_gem_objects by providing the corresponding parameters to
>> + * drm_gpuvm_exec_lock() as well as open code the &drm_exec loop while making
>> + * use of helper functions such as drm_gpuvm_prepare_range() or
>> + * drm_gpuvm_prepare_objects().
>> + *
>> + * Every bound &drm_gem_object is treated as external object when its &dma_resv
>> + * structure is different than the &drm_gpuvm's common &dma_resv structure.
>>    */
>>   
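
Side note, since this paragraph is all prose: the described pattern
boils down to roughly the sketch below. This assumes the struct
drm_gpuvm_exec layout and the signatures from this series; the function
name is made up and error handling is shortened.

	static int example_lock_and_validate(struct drm_gpuvm *gpuvm)
	{
		struct drm_gpuvm_exec vm_exec = {
			.vm = gpuvm,
			.flags = DRM_EXEC_INTERRUPTIBLE_WAIT,
			.num_fences = 1,
		};
		int ret;

		/* Lock the GPUVM's common dma-resv plus the dma-resv of
		 * all external objects.
		 */
		ret = drm_gpuvm_exec_lock(&vm_exec);
		if (ret)
			return ret;

		/* Re-validate all evicted GEM objects bound in this GPUVM. */
		ret = drm_gpuvm_validate(gpuvm, &vm_exec.exec);

		/* ... on success, fence installation would go here ... */

		drm_gpuvm_exec_unlock(&vm_exec);
		return ret;
	}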
>>   /**
>> @@ -429,6 +444,20 @@
>>    * Subsequent calls to drm_gpuvm_bo_obtain() for the same &drm_gpuvm and
>>    * &drm_gem_object must be able to observe previous creations and destructions
>>    * of &drm_gpuvm_bos in order to keep instances unique.
>> + *
>> + * The &drm_gpuvm's lists for keeping track of external and evicted objects are
>> + * protected against concurrent insertion / removal and iteration internally.
>> + *
>> + * However, drivers still need to protect concurrent calls to functions
>> + * iterating those lists, namely drm_gpuvm_prepare_objects() and
>> + * drm_gpuvm_validate().
>> + *
>> + * Alternatively, drivers can set the &DRM_GPUVM_RESV_PROTECTED flag to indicate
>> + * that the corresponding &dma_resv locks are held in order to protect the
>> + * lists. If &DRM_GPUVM_RESV_PROTECTED is set, internal locking is disabled and
>> + * the corresponding lockdep checks are enabled. This is an optimization for
>> + * drivers which are capable of taking the corresponding &dma_resv locks and
>> + * hence do not require internal locking.
>>    */
>>   
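
Side note: a driver opting into DRM_GPUVM_RESV_PROTECTED would do so at
init time, roughly as below. This is a sketch only; the VA range values
and the ops name are made up, and the drm_gpuvm_init() signature is the
one from this series.

	/* The driver guarantees the relevant dma-resv locks are held
	 * whenever the extobj / evict lists are iterated, hence GPUVM
	 * skips its internal spinlocks and relies on lockdep asserts
	 * on the resv locks instead.
	 */
	drm_gpuvm_init(gpuvm, "example-vm", DRM_GPUVM_RESV_PROTECTED,
		       drm, r_obj, 0, 1ULL << 48, 0, 0,
		       &example_gpuvm_ops);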
>>   /**
>> @@ -641,6 +670,201 @@
>>    *     }
>>    */
>>   
>> +/**
>> + * get_next_vm_bo_from_list() - get the next vm_bo element
>> + * @__gpuvm: the &drm_gpuvm
>> + * @__list_name: the name of the list we're iterating on
>> + * @__local_list: a pointer to the local list used to store already iterated items
>> + * @__prev_vm_bo: the previous element we got from get_next_vm_bo_from_list()
>> + *
>> + * This helper is here to provide lockless list iteration. Lockless as in, the
>> + * iterator releases the lock immediately after picking the first element from
>> + * the list, so list insertion and deletion can happen concurrently.
>> + *
>> + * Elements popped from the original list are kept in a local list, so removal
>> + * and is_empty checks can still happen while we're iterating the list.
>> + */
>> +#define get_next_vm_bo_from_list(__gpuvm, __list_name, __local_list, __prev_vm_bo)	\
>> +	({										\
>> +		struct drm_gpuvm_bo *__vm_bo = NULL;					\
>> +											\
>> +		drm_gpuvm_bo_put(__prev_vm_bo);						\
>> +											\
>> +		spin_lock(&(__gpuvm)->__list_name.lock);				\
>> +		if (!(__gpuvm)->__list_name.local_list)					\
>> +			(__gpuvm)->__list_name.local_list = __local_list;		\
>> +		else									\
>> +			drm_WARN_ON((__gpuvm)->drm,					\
>> +				    (__gpuvm)->__list_name.local_list != __local_list);	\
>> +											\
>> +		while (!list_empty(&(__gpuvm)->__list_name.list)) {			\
>> +			__vm_bo = list_first_entry(&(__gpuvm)->__list_name.list,	\
>> +						   struct drm_gpuvm_bo,			\
>> +						   list.entry.__list_name);		\
>> +			if (kref_get_unless_zero(&__vm_bo->kref)) {			\
>> +				list_move_tail(&(__vm_bo)->list.entry.__list_name,	\
>> +					       __local_list);				\
>> +				break;							\
>> +			} else {							\
>> +				list_del_init(&(__vm_bo)->list.entry.__list_name);	\
>> +				__vm_bo = NULL;						\
>> +			}								\
>> +		}									\
>> +		spin_unlock(&(__gpuvm)->__list_name.lock);				\
>> +											\
>> +		__vm_bo;								\
>> +	})
>> +
>> +/**
>> + * for_each_vm_bo_in_list() - internal vm_bo list iterator
>> + * @__gpuvm: the &drm_gpuvm
>> + * @__list_name: the name of the list we're iterating on
>> + * @__local_list: a pointer to the local list used to store already iterated items
>> + * @__vm_bo: the struct drm_gpuvm_bo to assign in each iteration step
>> + *
>> + * This helper is here to provide lockless list iteration. Lockless as in, the
>> + * iterator releases the lock immediately after picking the first element from
>> + * the list, hence list insertion and deletion can happen concurrently.
>> + *
>> + * It is not allowed to re-assign the vm_bo pointer from inside this loop.
>> + *
>> + * Typical use:
>> + *
>> + *	struct drm_gpuvm_bo *vm_bo;
>> + *	LIST_HEAD(my_local_list);
>> + *
>> + *	ret = 0;
>> + *	for_each_vm_bo_in_list(gpuvm, <list_name>, &my_local_list, vm_bo) {
>> + *		ret = do_something_with_vm_bo(..., vm_bo);
>> + *		if (ret)
>> + *			break;
>> + *	}
>> + *	// Drop ref in case we break out of the loop.
>> + *	drm_gpuvm_bo_put(vm_bo);
>> + *	restore_vm_bo_list(gpuvm, <list_name>, &my_local_list);
>> + *
>> + * Only used for internal list iterations, not meant to be exposed to the outside
>> + * world.
>> + */
>> +#define for_each_vm_bo_in_list(__gpuvm, __list_name, __local_list, __vm_bo)	\
>> +	for (__vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name,		\
>> +						__local_list, NULL);		\
>> +	     __vm_bo;								\
>> +	     __vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name,		\
>> +						__local_list, __vm_bo))
>> +
>> +static void
>> +__restore_vm_bo_list(struct drm_gpuvm *gpuvm, spinlock_t *lock,
>> +		     struct list_head *list, struct list_head **local_list)
>> +{
>> +	/* Merge back the two lists, moving local list elements to the
>> +        * head to preserve previous ordering, in case it matters.
>> +        */
>> +       spin_lock(lock);
>> +       if (*local_list) {
>> +               list_splice(*local_list, list);
>> +               *local_list = NULL;
>> +       }
>> +       spin_unlock(lock);
>> +}
>> +
>> +/**
>> + * restore_vm_bo_list() - move vm_bo elements back to their original list
>> + * @__gpuvm: the &drm_gpuvm
>> + * @__list_name: the name of the list we're iterating on
>> + *
>> + * When we're done iterating a vm_bo list, we should call restore_vm_bo_list()
>> + * to restore the original state and let new iterations take place.
>> + */
>> +#define restore_vm_bo_list(__gpuvm, __list_name)			\
>> +	__restore_vm_bo_list((__gpuvm), &(__gpuvm)->__list_name.lock,	\
>> +			     &(__gpuvm)->__list_name.list,		\
>> +			     &(__gpuvm)->__list_name.local_list)
>> +
>> +static void
>> +cond_spin_lock(spinlock_t *lock, bool cond)
>> +{
>> +       if (cond)
>> +               spin_lock(lock);
>> +}
>> +
>> +static void
>> +cond_spin_unlock(spinlock_t *lock, bool cond)
>> +{
>> +       if (cond)
>> +               spin_unlock(lock);
>> +}
>> +
>> +static void
>> +__drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock,
>> +			struct list_head *entry, struct list_head *list)
>> +{
>> +       cond_spin_lock(lock, !!lock);
>> +       if (list_empty(entry))
>> +               list_add_tail(entry, list);
>> +       cond_spin_unlock(lock, !!lock);
>> +}
>> +
>> +/**
>> + * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
>> + * @__vm_bo: the &drm_gpuvm_bo
>> + * @__list_name: the name of the list to insert into
>> + * @__lock: whether to lock with the internal spinlock
>> + *
>> + * Inserts the given @__vm_bo into the list specified by @__list_name.
>> + */
>> +#define drm_gpuvm_bo_list_add(__vm_bo, __list_name, __lock)			\
>> +	__drm_gpuvm_bo_list_add((__vm_bo)->vm,					\
>> +				__lock ? &(__vm_bo)->vm->__list_name.lock :	\
>> +					 NULL,					\
>> +				&(__vm_bo)->list.entry.__list_name,		\
>> +				&(__vm_bo)->vm->__list_name.list)
>> +
>> +static void
>> +__drm_gpuvm_bo_list_del(struct drm_gpuvm *gpuvm, spinlock_t *lock,
>> +                       struct list_head *entry, bool init)
>> +{
>> +       cond_spin_lock(lock, !!lock);
>> +       if (init) {
>> +               if (!list_empty(entry))
>> +                       list_del_init(entry);
>> +       } else {
>> +               list_del(entry);
>> +       }
>> +       cond_spin_unlock(lock, !!lock);
>> +}
>> +
>> +/**
>> + * drm_gpuvm_bo_list_del_init() - remove a vm_bo from the given list
>> + * @__vm_bo: the &drm_gpuvm_bo
>> + * @__list_name: the name of the list to remove from
>> + * @__lock: whether to lock with the internal spinlock
>> + *
>> + * Removes the given @__vm_bo from the list specified by @__list_name.
>> + */
>> +#define drm_gpuvm_bo_list_del_init(__vm_bo, __list_name, __lock)		\
>> +	__drm_gpuvm_bo_list_del((__vm_bo)->vm,					\
>> +				__lock ? &(__vm_bo)->vm->__list_name.lock :	\
>> +					 NULL,					\
>> +				&(__vm_bo)->list.entry.__list_name,		\
>> +				true)
>> +
>> +/**
>> + * drm_gpuvm_bo_list_del() - remove a vm_bo from the given list
>> + * @__vm_bo: the &drm_gpuvm_bo
>> + * @__list_name: the name of the list to remove from
>> + * @__lock: whether to lock with the internal spinlock
>> + *
>> + * Removes the given @__vm_bo from the list specified by @__list_name.
>> + */
>> +#define drm_gpuvm_bo_list_del(__vm_bo, __list_name, __lock)			\
>> +	__drm_gpuvm_bo_list_del((__vm_bo)->vm,					\
>> +				__lock ? &(__vm_bo)->vm->__list_name.lock :	\
>> +					 NULL,					\
>> +				&(__vm_bo)->list.entry.__list_name,		\
>> +				false)
>> +
>>   #define to_drm_gpuva(__node)	container_of((__node), struct drm_gpuva, rb.node)
>>   
>>   #define GPUVA_START(node) ((node)->va.addr)
>> @@ -763,6 +987,12 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>>          gpuvm->rb.tree = RB_ROOT_CACHED;
>>          INIT_LIST_HEAD(&gpuvm->rb.list);
>>   
>> +       INIT_LIST_HEAD(&gpuvm->extobj.list);
>> +       spin_lock_init(&gpuvm->extobj.lock);
>> +
>> +       INIT_LIST_HEAD(&gpuvm->evict.list);
>> +       spin_lock_init(&gpuvm->evict.lock);
>> +
>>          gpuvm->name = name ? name : "unknown";
>>          gpuvm->flags = flags;
>>          gpuvm->ops = ops;
>> @@ -805,10 +1035,352 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>>          drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>>                   "GPUVA tree is not empty, potentially leaking
>> memory.\n");
>>   
>> +       drm_WARN(gpuvm->drm, !list_empty(&gpuvm->extobj.list),
>> +                "Extobj list should be empty.\n");
>> +       drm_WARN(gpuvm->drm, !list_empty(&gpuvm->evict.list),
>> +                "Evict list should be empty.\n");
>> +
>>          drm_gem_object_put(gpuvm->r_obj);
>>   }
>>   EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>>   
>> +static int
>> +__drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
>> +                           struct drm_exec *exec,
>> +                           unsigned int num_fences)
>> +{
>> +       struct drm_gpuvm_bo *vm_bo;
>> +       LIST_HEAD(extobjs);
>> +       int ret = 0;
>> +
>> +       for_each_vm_bo_in_list(gpuvm, extobj, &extobjs, vm_bo) {
>> +               ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences);
>> +               if (ret)
>> +                       break;
>> +       }
>> +       /* Drop ref in case we break out of the loop. */
>> +       drm_gpuvm_bo_put(vm_bo);
>> +       restore_vm_bo_list(gpuvm, extobj);
>> +
>> +       return ret;
>> +}
>> +
>> +static int
>> +drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
>> +                                struct drm_exec *exec,
>> +                                unsigned int num_fences)
>> +{
>> +       struct drm_gpuvm_bo *vm_bo;
>> +       int ret = 0;
>> +
>> +       drm_gpuvm_resv_assert_held(gpuvm);
>> +       list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
>> +               ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences);
>> +               if (ret)
>> +                       break;
>> +
>> +               if (vm_bo->evicted)
>> +                       drm_gpuvm_bo_list_add(vm_bo, evict, false);
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>> +/**
>> + * drm_gpuvm_prepare_objects() - prepare all associated BOs
>> + * @gpuvm: the &drm_gpuvm
>> + * @exec: the &drm_exec locking context
>> + * @num_fences: the amount of &dma_fences to reserve
>> + *
>> + * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
>> + * &drm_gpuvm contains mappings of.
>> + *
>> + * Using this function directly, it is the driver's responsibility to call
>> + * drm_exec_init() and drm_exec_fini() accordingly.
>> + *
>> + * Note: This function is safe against concurrent insertion and removal of
>> + * external objects, however it is not safe against concurrent usage itself.
>> + *
>> + * Drivers need to make sure to protect this case with either an outer VM lock
>> + * or by calling drm_gpuvm_prepare_vm() before this function within the
>> + * drm_exec_until_all_locked() loop, such that the GPUVM's dma-resv lock ensures
>> + * mutual exclusion.
>> + *
>> + * Returns: 0 on success, negative error code on failure.
> 
> s/Returns:/Return:/g
> 
> Otherwise LGTM.

Sounds like you want to offer your RB? :)
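
On a related note, since the doc above only describes it in prose: the
open-coded variant using drm_gpuvm_prepare_vm() boils down to roughly
the following sketch (gpuvm and num_fences are the caller's; error
handling shortened):

	struct drm_exec exec;
	int ret;

	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);

	drm_exec_until_all_locked(&exec) {
		/* Taking the VM's common dma-resv first serializes
		 * concurrent callers of drm_gpuvm_prepare_objects().
		 */
		ret = drm_gpuvm_prepare_vm(gpuvm, &exec, num_fences);
		drm_exec_retry_on_contention(&exec);
		if (ret)
			goto err_fini;

		ret = drm_gpuvm_prepare_objects(gpuvm, &exec, num_fences);
		drm_exec_retry_on_contention(&exec);
		if (ret)
			goto err_fini;
	}

	/* All relevant dma-resv locks are held; evicted BOs can be
	 * validated now.
	 */
	ret = drm_gpuvm_validate(gpuvm, &exec);
	if (ret)
		goto err_fini;

	/* ... job submission, fence installation ... */

	err_fini:
		drm_exec_fini(&exec);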

> 
> /Thomas
> 
> 


