[Intel-gfx] [PATCH 17/37] drm/i915: Track vma activity per fence.context, not per engine
Chris Wilson
chris at chris-wilson.co.uk
Fri Jun 29 15:34:41 UTC 2018
Quoting Chris Wilson (2018-06-29 16:03:31)
> Quoting Tvrtko Ursulin (2018-06-29 15:54:02)
> >
> > On 29/06/2018 08:53, Chris Wilson wrote:
> > > In the next patch, we will want to be able to use more flexible request
> > > timelines that can hop between engines. From the vma pov, we can no
> > > longer rely on the binding of this request to an engine, and so cannot
> > > ensure that different requests are ordered through a per-engine
> > > timeline; we must therefore track activity across all timelines. (We
> > > track activity on the vma itself to prevent unbinding from HW before
> > > the HW has finished accessing it.)
> > >
> > > For now, let's just ignore the potential issue with trying to use 64b
> > > indices with radixtrees on 32b machines; it's unlikely to be a problem
> > > in practice...
>
> > > +static struct i915_gem_active *lookup_active(struct i915_vma *vma, u64 idx)
> > > +{
> > > +        struct i915_vma_active *active;
> > > +        int err;
> > > +
> > > +        /*
> > > +         * XXX Note that the radix_tree uses unsigned longs for its indices,
> > > +         * a problem for us on i386 with 32bit longs. However, the likelihood
> > > +         * of 2 timelines being used on the same VMA aliasing is minimal,
> > > +         * and further reduced by the fact that both timelines must be active
> > > +         * simultaneously to confuse us.
> > > +         */
> > > +        active = radix_tree_lookup(&vma->active_rt, idx);
> > > +        if (likely(active)) {
> > > +                GEM_BUG_ON(i915_gem_active_isset(&active->base) &&
> > > +                           idx != i915_gem_active_peek(&active->base,
> > > +                                                       &vma->vm->i915->drm.struct_mutex)->fence.context);
> > > +                return &active->base;
> > > +        }
> > > +
> > > +        active = kmalloc(sizeof(*active), GFP_KERNEL);
> > > +        if (unlikely(!active))
> > > +                return ERR_PTR(-ENOMEM);
> > > +
> > > +        init_request_active(&active->base, i915_vma_retire);
> > > +        active->vma = vma;
> > > +
> > > +        err = radix_tree_insert(&vma->active_rt, idx, active);
> > > +        if (unlikely(err)) {
> > > +                kfree(active);
> > > +                return ERR_PTR(err);
> > > +        }
> > > +
> > > +        return &active->base;
> > > +}
> > > +
> > > +int i915_vma_move_to_active(struct i915_vma *vma,
> > > +                            struct i915_request *rq,
> > > +                            unsigned int flags)
> > > +{
> > > +        struct drm_i915_gem_object *obj = vma->obj;
> > > +        struct i915_gem_active *active;
> > > +
> > > +        lockdep_assert_held(&rq->i915->drm.struct_mutex);
> > > +        GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
> > > +
> > > +        active = lookup_active(vma, rq->fence.context);
> >
> > Never mind the radix tree, but fence.context is u64 as well. And the
> > assigned values are continuously incrementing, so once >4G of contexts
> > have been created and destroyed, aliasing is guaranteed with the kernel
> > context, or any old one.
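
To make the truncation concrete, here is a standalone sketch (plain
userspace C, not kernel code; the context values are invented, and
uint32_t stands in for the 32-bit unsigned long index that
radix_tree_lookup()/radix_tree_insert() take on i386):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t kernel_ctx = 1;                        /* allocated at driver load */
        uint64_t later_ctx = kernel_ctx + (1ull << 32); /* after >4G contexts */

        /* The index the radix tree would actually use on a 32-bit kernel. */
        uint32_t idx_a = (uint32_t)kernel_ctx;
        uint32_t idx_b = (uint32_t)later_ctx;

        /* Prints "aliased: 1" - both contexts land in the same slot. */
        printf("aliased: %d\n", idx_a == idx_b);
        return 0;
}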
>
> As I said, I don't think it matters: you would need to alias two active
> fences for anything to go wrong, and there is a GEM_BUG_ON there for you
> to hit if that ever happens. In the next patch, it becomes ever harder
> to hit.
>
> The alternative is yet another custom radixtree, and there is nothing
> preventing us substituting one radixtree implementation for another here.
An alternative to the radixtree would be an rbtree. With the cache, that
shouldn't be too bad.
-Chris
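
For what the rbtree-plus-cache alternative could look like, here is a
rough sketch (illustrative only, not the posted patch: lookup_active_rb(),
the cut-down struct i915_vma_active and the caller-supplied cache pointer
are assumptions made for this example). Keying on the full u64 fence
context avoids the truncation discussed above, and the one-entry cache
keeps the common case of back-to-back requests on the same timeline cheap:

#include <linux/err.h>
#include <linux/rbtree.h>
#include <linux/slab.h>

struct i915_vma_active {
        struct rb_node node;
        u64 timeline;           /* rq->fence.context */
        /* the real thing would also embed the i915_gem_active */
};

static struct i915_vma_active *
lookup_active_rb(struct rb_root *tree, struct i915_vma_active **cache, u64 idx)
{
        struct rb_node **p = &tree->rb_node, *parent = NULL;
        struct i915_vma_active *active;

        /* Fast path: repeated requests from the same timeline hit the cache. */
        active = *cache;
        if (active && active->timeline == idx)
                return active;

        while (*p) {
                parent = *p;
                active = rb_entry(parent, struct i915_vma_active, node);
                if (active->timeline == idx)
                        goto out;

                if (idx < active->timeline)
                        p = &parent->rb_left;
                else
                        p = &parent->rb_right;
        }

        active = kmalloc(sizeof(*active), GFP_KERNEL);
        if (!active)
                return ERR_PTR(-ENOMEM);

        active->timeline = idx;
        rb_link_node(&active->node, parent, p);
        rb_insert_color(&active->node, tree);
out:
        *cache = active;
        return active;
}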