[Intel-gfx] [PATCH 25/33] drm/i915: Move object close under its own lock
Chris Wilson
chris at chris-wilson.co.uk
Wed May 22 14:47:32 UTC 2019
Quoting Mika Kuoppala (2019-05-22 15:32:34)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
> > -static void free_engines_rcu(struct work_struct *wrk)
> > +static void free_engines_rcu(struct rcu_head *rcu)
> > {
> > - struct i915_gem_engines *e =
> > - container_of(wrk, struct i915_gem_engines, rcu.work);
> > - struct drm_i915_private *i915 = e->i915;
> > -
> > - mutex_lock(&i915->drm.struct_mutex);
> > - free_engines(e);
> > - mutex_unlock(&i915->drm.struct_mutex);
> > + free_engines(container_of(rcu, struct i915_gem_engines, rcu));
> > }
> > @@ -1666,8 +1678,7 @@ set_engines(struct i915_gem_context *ctx,
> > rcu_swap_protected(ctx->engines, set.engines, 1);
> > mutex_unlock(&ctx->engines_mutex);
> >
> > - INIT_RCU_WORK(&set.engines->rcu, free_engines_rcu);
> > - queue_rcu_work(system_wq, &set.engines->rcu);
> > + call_rcu(&set.engines->rcu, free_engines_rcu);
>
> Why can we omit the queue now?
We only required the worker in order to acquire struct_mutex. With
that requirement removed, we can do the kref_put and the intel_context
destroy directly from softirq context...
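
As a minimal sketch of the difference (hypothetical engines_example
type, not the i915 code): queue_rcu_work() bounces the callback to
process context via a workqueue, which is only needed if the callback
must sleep (e.g. to take a mutex); call_rcu() runs the callback from
softirq context after the grace period, so once no sleeping lock is
required the workqueue indirection can be dropped.

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical stand-in for i915_gem_engines, for illustration. */
struct engines_example {
	struct rcu_head rcu;
	/* ... engine pointers ... */
};

/* Runs from softirq context after a grace period: must not sleep,
 * so no mutexes may be taken here.
 */
static void engines_example_free(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct engines_example, rcu));
}

static void engines_example_release(struct engines_example *e)
{
	/* No sleeping lock needed any more: plain call_rcu() suffices,
	 * no INIT_RCU_WORK()/queue_rcu_work() round-trip required.
	 */
	call_rcu(&e->rcu, engines_example_free);
}
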
> > diff --git a/drivers/gpu/drm/i915/i915_timeline.c b/drivers/gpu/drm/i915/i915_timeline.c
> > index 5fbea0892f33..000e1a9b6750 100644
> > --- a/drivers/gpu/drm/i915/i915_timeline.c
> > +++ b/drivers/gpu/drm/i915/i915_timeline.c
> > @@ -61,7 +61,7 @@ hwsp_alloc(struct i915_timeline *timeline, unsigned int *cacheline)
> >
> > BUILD_BUG_ON(BITS_PER_TYPE(u64) * CACHELINE_BYTES > PAGE_SIZE);
> >
> > - spin_lock(&gt->hwsp_lock);
> > + spin_lock_irq(&gt->hwsp_lock);
>
> Why do we need this?
Because we can now reach the other hwsp_lock callsites from softirq
context (via the RCU callback). We might be able to get away with
spin_lock_bh() -- I may have overreacted to the lockdep warning.
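
A minimal sketch of the hazard (hypothetical example_lock, not the
i915 code): once a lock is also taken from softirq context, the
process-context side must at least disable bottom halves while
holding it, otherwise a softirq can fire on the same CPU, spin on the
already held lock and deadlock.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* hypothetical lock */

/* Process context: disable interrupts (spin_lock_bh() would disable
 * only softirqs) while holding a lock that a softirq may also take,
 * otherwise a softirq arriving on this CPU spins forever on the lock
 * we already hold.
 */
static void example_process_path(void)
{
	spin_lock_irq(&example_lock);
	/* ... update shared state ... */
	spin_unlock_irq(&example_lock);
}

/* Softirq context (e.g. an RCU callback): softirqs do not nest on
 * the same CPU, so the plain lock is sufficient here.
 */
static void example_softirq_path(void)
{
	spin_lock(&example_lock);
	/* ... update shared state ... */
	spin_unlock(&example_lock);
}
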
-Chris