[Intel-gfx] [PATCH 13/21] drm/i915: Pull i915_vma_pin under the vm->mutex
Chris Wilson
chris at chris-wilson.co.uk
Thu Sep 19 14:05:56 UTC 2019
Quoting Tvrtko Ursulin (2019-09-19 14:37:01)
>
> On 17/09/2019 19:56, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-09-17 13:37:55)
> >> On 02/09/2019 05:02, Chris Wilson wrote:
> >>> +void i915_active_set_exclusive(struct i915_active *ref, struct dma_fence *f)
> >>> +{
> >>> +        GEM_BUG_ON(i915_active_is_idle(ref));
> >>> +
> >>> +        dma_fence_get(f);
> >>> +
> >>> +        rcu_read_lock();
> >>> +        if (rcu_access_pointer(ref->excl)) {
> >>> +                struct dma_fence *old;
> >>> +
> >>> +                old = dma_fence_get_rcu_safe(&ref->excl);
> >>> +                if (old) {
> >>> +                        if (dma_fence_remove_callback(old, &ref->excl_cb))
> >>> +                                atomic_dec(&ref->count);
> >>> +                        dma_fence_put(old);
> >>> +                }
> >>
> >> Put in some commentary describing the business done with ref->excl and
> >> the callbacks. Or does this go away later?
> >
> > void i915_active_set_exclusive(struct i915_active *ref, struct dma_fence *f)
> > {
> >         GEM_BUG_ON(i915_active_is_idle(ref));
> >
> >         mutex_acquire(&ref->mutex.dep_map, 0, 0, _THIS_IP_);
> >         if (!__i915_active_fence_set(&ref->excl, f))
> >                 atomic_inc(&ref->count);
> >         mutex_release(&ref->mutex.dep_map, 0, _THIS_IP_);
> > }
>
> What is this?
What it becomes in a couple of patches' time, where the entire
i915_active is built out of the same i915_active_fence (wrapping the
dma_fence_cb); this exclusive slot then becomes more clearly a
preallocated version of one of the nodes (and not a special case, except
that its order is given by the caller and not our mutex, which is a bit
of a nuisance, I admit).
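
To sketch the shape it ends up with (hand-waving, not the exact layout
that lands in the tree):

struct i915_active_fence {
        struct dma_fence __rcu *fence;  /* fence currently being tracked */
        struct dma_fence_cb cb;         /* fired on signal, retires the slot */
};

struct i915_active {
        atomic_t count;                 /* number of busy slots */
        struct mutex mutex;
        struct i915_active_fence excl;  /* the preallocated exclusive slot */
        struct rb_root tree;            /* the other slots, one per timeline */
};

The exclusive slot is then set via the same __i915_active_fence_set() as
every other node; only the serialisation differs.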
> >>> @@ -109,20 +121,31 @@ int i915_gem_object_unbind(struct drm_i915_gem_object *obj,
> >>>          LIST_HEAD(still_in_list);
> >>>          int ret = 0;
> >>>
> >>> -        lockdep_assert_held(&obj->base.dev->struct_mutex);
> >>> -
> >>>          spin_lock(&obj->vma.lock);
> >>>          while (!ret && (vma = list_first_entry_or_null(&obj->vma.list,
> >>>                                                         struct i915_vma,
> >>>                                                         obj_link))) {
> >>> +                struct i915_address_space *vm = vma->vm;
> >>> +
> >>> +                ret = -EBUSY;
> >>> +                if (!i915_vm_tryopen(vm))
> >>> +                        break;
> >>
> >> This is some race between different paths of vm closing/destruction and
> >> vma cleanup? We need a comment about it somewhere... here, I guess, is a
> >> good place.
> >
> > 4 different paths, all fighting; each coordinates by preventing the
> > other 2 destruction paths from running while it runs, and by spinlocks
> > to guard against the other.
> >
> >> And why break and not continue? Will there be a second call coming from
> >> somewhere when it fails?
> >
> > It failed to unbind the object, so it would end up in a busy spin
> > preventing the actual vm release from removing the vma.
>
> But it could proceed to unbind what can be unbound. Otherwise, is it
> guaranteed to be called again when it fails, so that eventually it will
> clean up everything?
We gain nothing by doing so, except unbinding things that were in use. We
have to completely unbind the object for the callers to make progress.
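
(The tryopen itself is just the familiar get-unless-zero pattern; a
sketch, assuming an atomic open count on the vm:)

static bool i915_vm_tryopen(struct i915_address_space *vm)
{
        /*
         * Take a new open reference only if the vm has not already
         * begun destruction, cf. kref_get_unless_zero(); the loser
         * backs off rather than resurrect a dying vm.
         */
        return atomic_add_unless(&vm->open, 1, 0);
}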
> >>> +        work->vma = vma;
> >>> +        work->cache_level = cache_level;
> >>> +        work->flags = bind_flags | I915_VMA_ALLOC;
> >>> +
> >>> +        /*
> >>> +         * Note we only want to chain up to the migration fence on
> >>> +         * the pages (not the object itself). As we don't track that,
> >>> +         * yet, we have to use the exclusive fence instead.
> >>> +         *
> >>> +         * Also note that we do not want to track the async vma as
> >>> +         * part of the obj->resv->excl_fence as it only affects
> >>> +         * execution and not content or object's backing store lifetime.
> >>> +         */
> >>> +        GEM_BUG_ON(i915_active_has_exclusive(&vma->active));
> >>> +        i915_active_set_exclusive(&vma->active, &work->base.dma);
> >>
> >> Oh right, dma_fence_work, since it's not a worker but a callback on
> >> signalling the fence... From what context does it get called (holding
> >> any locks?), and which locks can/will the callback take?
> >
> > dma_fence_work has a dma_fence_cb that schedules a work_struct which
> > executes our function. (And it also has a dma-fence through which we can
> > signal the completion/error of that function.)
>
> And this is to avoid some allocation somewhere?
The fence-work generally is there to schedule some work that has to wait
for a set of fences and wants to signal a fence upon completion. We are
using it here to do the allocation after we drop the mutex. (Which means
we have to protect that work itself from the shrinker, which is fun.)
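
Roughly (a simplified sketch; the real struct carries a couple more
fields for chaining):

struct dma_fence_work {
        struct dma_fence dma;     /* signalled when our function completes */
        struct dma_fence_cb cb;   /* hooked onto the fences we wait for */
        struct work_struct work;  /* runs ops->work() in process context */
        const struct dma_fence_work_ops *ops;
};

/*
 * 1. chain cb onto every input fence;
 * 2. when the last input signals, queue work;
 * 3. ops->work() runs with no caller-held locks;
 * 4. signal dma with the result (or propagate the error).
 */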
> >>> +static bool try_fast_pin(struct i915_vma *vma, unsigned int flags)
> >>>  {
> >>> -        const unsigned int bound = atomic_read(&vma->flags);
> >>> -        int ret;
> >>> +        unsigned int bound;
> >>> +        bool pinned = true;
> >>>
> >>> -        lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
> >>> -        GEM_BUG_ON((flags & (PIN_GLOBAL | PIN_USER)) == 0);
> >>> -        GEM_BUG_ON((flags & PIN_GLOBAL) && !i915_vma_is_ggtt(vma));
> >>> +        bound = atomic_read(&vma->flags);
> >>> +        do {
> >>> +                if (unlikely(flags & ~bound))
> >>> +                        return false;
> >>
> >> What is this checking for?
> >
> > If the requested binding (flags) is already available (bound).
> >
> >>> -        if (WARN_ON(bound & I915_VMA_PIN_OVERFLOW)) {
> >>> -                ret = -EBUSY;
> >>> -                goto err_unpin;
> >>> +        if (unlikely(bound & (I915_VMA_OVERFLOW | I915_VMA_ERROR)))
> >>> +                return false;
> >>
> >> What will the caller try to do in these cases?
> >
> > Not do the early return.
>
> How it will recover from error or overflow was the question I was aiming
> at. But that looks to have been from before I figured out what you really
> mean by fast in this context. Fast/slow terminology is not the clearest
> here, but I have no better suggestions at the moment.
Any hard condition is punted to the caller.
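
i.e. the fast path only succeeds when nothing needs doing beyond bumping
the pin count; sketching the whole loop (assuming the pin count occupies
the low bits of vma->flags):

        bound = atomic_read(&vma->flags);
        do {
                /*
                 * e.g. flags = PIN_USER | PIN_GLOBAL with only PIN_USER
                 * bound: flags & ~bound == PIN_GLOBAL, so a binding is
                 * missing and we must take the slow path to create it.
                 */
                if (unlikely(flags & ~bound))
                        return false;

                /* any hard condition (error, counter overflow) -> slow path */
                if (unlikely(bound & (I915_VMA_OVERFLOW | I915_VMA_ERROR)))
                        return false;
        } while (!atomic_try_cmpxchg(&vma->flags, &bound, bound + 1));

        return true;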
> >>> +        if (err)
> >>> +                goto err_unlock;
> >>> +
> >>> +        if (!(bound & I915_VMA_BIND_MASK)) {
> >>> +                err = i915_vma_insert(vma, size, alignment, flags);
> >>> +                if (err)
> >>> +                        goto err_active;
> >>> +
> >>> +                if (i915_is_ggtt(vma->vm))
> >>> +                        __i915_vma_set_map_and_fenceable(vma);
> >>> +        }
> >>> +
> >>> +        GEM_BUG_ON(!vma->pages);
> >>> +        err = i915_vma_bind(vma,
> >>> +                            vma->obj ? vma->obj->cache_level : 0,
> >>> +                            flags, work);
> >>> +        if (err)
> >>> +                goto err_remove;
> >>> +
> >>> +        /* There should only be at most 2 active bindings (user, global) */
> >>
> >> s/bindings/pins/? I thought one vma is one binding by definition... so
> >> I am confused.
> >
> > Oh no, that would be far too simple. On gen6-7 & bsw, the vma is aliased.
> > One vma, two bindings (PIN_USER -> ggtt->alias, PIN_GLOBAL -> ggtt).
>
> Okay I am not familiar with that and that's a problem when reviewing this.
>
> What fundamentally happens when someone tries to PIN_USER and so far the
> vma had PIN_GLOBAL (or vice versa) in the aliasing ggtt world? I see that
> we go to the "slow" path. Does it just increase the pin count?
We use the same vma location (so no need to do i915_vma_insert) but we
need to bind it into the new GTT (i.e. update all the PTEs in that GTT).
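
Schematically (a sketch of the aliasing bind callback; the pte_flags
computation from the object is elided):

static int aliasing_gtt_bind_vma(struct i915_vma *vma,
                                 enum i915_cache_level cache_level,
                                 u32 flags)
{
        u32 pte_flags = 0;

        /* Same drm_mm node (same GTT offset), two sets of PTEs. */
        if (flags & I915_VMA_LOCAL_BIND) {      /* PIN_USER */
                struct i915_ppgtt *alias = i915_vm_to_ggtt(vma->vm)->alias;

                alias->vm.insert_entries(&alias->vm, vma,
                                         cache_level, pte_flags);
        }

        if (flags & I915_VMA_GLOBAL_BIND)       /* PIN_GLOBAL */
                vma->vm->insert_entries(vma->vm, vma,
                                        cache_level, pte_flags);

        return 0;
}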
> >> In general, even where I did not explicitly ask for a comment, places
> >> where things weren't clear to me should be considered to need one
> >> added, and probably more.
> >
> > How else do I know what isn't clear to you... ;)
>
> Good, when can I expect a more annotated version?
Soon?
-Chris