[Intel-gfx] [PATCH 29/38] drm/i915: Move over to intel_context_lookup()

Chris Wilson chris at chris-wilson.co.uk
Tue Mar 5 17:10:25 UTC 2019


Quoting Tvrtko Ursulin (2019-03-05 17:01:09)
> 
> On 01/03/2019 14:03, Chris Wilson wrote:
> > @@ -249,12 +249,8 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
> >   
> >       kfree(ctx->engines);
> >   
> > -     for (n = 0; n < ARRAY_SIZE(ctx->__engine); n++) {
> > -             struct intel_context *ce = &ctx->__engine[n];
> > -
> > -             if (ce->ops)
> > -                     ce->ops->destroy(ce);
> > -     }
> 
> /*
>   * No need to take lock since no one can access the context at this
>   * point.
>   */
> 
> ?

I hope that's self-evident from this being the free function. Hmm, perhaps
not if we move to a HW-context-centric viewpoint. Maybe worth revisiting
in the future when the GEM context withers away.
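
If we do want it spelled out, a one-liner over the unlocked walk would
be cheap; a rough sketch against this patch's rbtree (iterator names
made up):

	struct intel_context *it, *n;

	/*
	 * No need for ctx->hw_contexts_lock here: this is the final
	 * reference to the GEM context, so no one else can be
	 * inserting into or walking the rbtree concurrently.
	 */
	rbtree_postorder_for_each_entry_safe(it, n, &ctx->hw_contexts, node)
		it->ops->destroy(it);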

> > @@ -1839,13 +1817,15 @@ static int get_sseu(struct i915_gem_context *ctx,
> >       if (!engine)
> >               return -EINVAL;
> >   
> > +     ce = intel_context_instance(ctx, engine);
> > +     if (IS_ERR(ce))
> > +             return PTR_ERR(ce);
> 
> Interesting! Nevermind..

Wait for the next patch, that one is just for you :)

> > +void __intel_context_remove(struct intel_context *ce)
> > +{
> > +     struct i915_gem_context *ctx = ce->gem_context;
> > +
> > +     spin_lock(&ctx->hw_contexts_lock);
> > +     rb_erase(&ce->node, &ctx->hw_contexts);
> > +     spin_unlock(&ctx->hw_contexts_lock);
> > +}
> 
> Gets used in a later patch?

For the embedded context within virtual_engine.
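
Roughly along these lines (hypothetical sketch; virtual_engine and its
destroy hook only appear in the later series):

	struct virtual_engine {
		struct intel_engine_cs base;
		struct intel_context context; /* embedded, not kzalloc'ed */
	};

	static void virtual_context_destroy(struct intel_context *ce)
	{
		struct virtual_engine *ve =
			container_of(ce, struct virtual_engine, context);

		/* Unlink from ctx->hw_contexts, but do not kfree(ce) */
		__intel_context_remove(ce);

		kfree(ve);
	}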

> > +struct intel_context *
> > +intel_context_instance(struct i915_gem_context *ctx,
> > +                    struct intel_engine_cs *engine)
> > +{
> > +     struct intel_context *ce, *pos;
> > +
> > +     ce = intel_context_lookup(ctx, engine);
> > +     if (likely(ce))
> > +             return ce;
> > +
> > +     ce = kzalloc(sizeof(*ce), GFP_KERNEL);
> 
> At some point you'll move this to global slab?

Can do so now; I didn't think these would be frequent enough to justify
a slab. We expect a few hundred on a system, whereas we expect
thousands, even tens of thousands, of requests and objects.

Still, slabs have the secondary benefit of making allocation patterns
easier to debug.
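
If we do it now, the conversion is mechanical; a sketch (cache name and
init hook are inventions):

	static struct kmem_cache *slab_ce;

	int intel_context_slab_init(void)
	{
		slab_ce = KMEM_CACHE(intel_context, SLAB_HWCACHE_ALIGN);
		return slab_ce ? 0 : -ENOMEM;
	}

	/* in intel_context_instance(), replacing the kzalloc */
	ce = kmem_cache_zalloc(slab_ce, GFP_KERNEL);

	/* and the matching free */
	kmem_cache_free(slab_ce, ce);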

> > diff --git a/drivers/gpu/drm/i915/intel_guc_ads.c b/drivers/gpu/drm/i915/intel_guc_ads.c
> > index f0db62887f50..da220561ac41 100644
> > --- a/drivers/gpu/drm/i915/intel_guc_ads.c
> > +++ b/drivers/gpu/drm/i915/intel_guc_ads.c
> > @@ -121,8 +121,8 @@ int intel_guc_ads_create(struct intel_guc *guc)
> >        * to find it. Note that we have to skip our header (1 page),
> >        * because our GuC shared data is there.
> >        */
> > -     kernel_ctx_vma = to_intel_context(dev_priv->kernel_context,
> > -                                       dev_priv->engine[RCS])->state;
> > +     kernel_ctx_vma = intel_context_lookup(dev_priv->kernel_context,
> > +                                           dev_priv->engine[RCS])->state;
> >       blob->ads.golden_context_lrca =
> >               intel_guc_ggtt_offset(guc, kernel_ctx_vma) + skipped_offset;
> 
> Scary moment but kernel context is perma pinned, ok.

Scary moment, but it's the GuC.
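
The lookup can't fail for the kernel context (it is pinned for the
driver's lifetime), but we could document that with an assert; a sketch:

	struct intel_context *ce =
		intel_context_lookup(dev_priv->kernel_context,
				     dev_priv->engine[RCS]);

	/* The kernel context is perma-pinned, so the lookup cannot fail */
	GEM_BUG_ON(!ce || !ce->state);
	kernel_ctx_vma = ce->state;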

> > diff --git a/drivers/gpu/drm/i915/intel_guc_submission.c b/drivers/gpu/drm/i915/intel_guc_submission.c
> > index 119d3326bb5e..4935e0555135 100644
> > --- a/drivers/gpu/drm/i915/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/intel_guc_submission.c
> > @@ -382,7 +382,7 @@ static void guc_stage_desc_init(struct intel_guc_client *client)
> >       desc->db_id = client->doorbell_id;
> >   
> >       for_each_engine_masked(engine, dev_priv, client->engines, tmp) {
> > -             struct intel_context *ce = to_intel_context(ctx, engine);
> > +             struct intel_context *ce = intel_context_lookup(ctx, engine);
> 
> This one seems to be handling !ce->state a bit below. So it feels you 
> have to add the !ce test as well.

I'm not allowed to explode here, shame.
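
Something along these lines then, folding the NULL test into the
existing check (sketch, body elided):

	for_each_engine_masked(engine, dev_priv, client->engines, tmp) {
		struct intel_context *ce = intel_context_lookup(ctx, engine);

		if (!ce || !ce->state)
			break; /* XXX: continue? */

		/* ... fill in desc->lrc[engine->guc_id] from ce ... */
	}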
-Chris

