[Intel-gfx] [PATCH 21/29] drm/i915: Switch back to an array of logical per-engine HW contexts

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Apr 11 13:05:19 UTC 2019


On 10/04/2019 17:18, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-04-10 16:32:18)
>>
>> On 08/04/2019 10:17, Chris Wilson wrote:
>>> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>>> index 3f794bc71958..0df3c5238c04 100644
>>> --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>>> +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
>>> @@ -716,7 +716,7 @@ static int pin_context(struct i915_gem_context *ctx,
>>>        struct intel_context *ce;
>>>        int err;
>>>    
>>> -     ce = intel_context_instance(ctx, engine);
>>> +     ce = i915_gem_context_get_engine(ctx, engine->id);
>>
>> Wouldn't staying with intel_context_instance help to reduce the churn?
> 
> But it takes the GEM context :|
> 
> intel_context_lookup() ? But it won't be part of gt/intel_context.h
> And I'd like to have 'get' in there for consistency (although other
> object lookup functions return a new reference, so that may not be a
> solid argument).
> 
> It has annoyed me that this does require the GEM context, but that's the
> nature of the beast.

Leave it as is then.
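
For my own notes, the get variant I read this as, under the locking 
this patch adds (rough sketch, untested; the unlock helper name is my 
guess):

static inline struct intel_context *
i915_gem_context_get_engine(struct i915_gem_context *ctx, unsigned int idx)
{
	struct i915_gem_engines *e;
	struct intel_context *ce = ERR_PTR(-EINVAL);

	e = i915_gem_context_engine_list_lock(ctx);
	if (idx < e->num_engines && e->engines[idx])
		ce = intel_context_get(e->engines[idx]);
	i915_gem_context_engine_list_unlock(ctx); /* assumed counterpart */

	return ce;
}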

>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>> index 50266e87c225..21b4a04c424b 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>> @@ -4312,8 +4312,9 @@ int i915_gem_init_hw(struct drm_i915_private *dev_priv)
>>>    
>>>    static int __intel_engines_record_defaults(struct drm_i915_private *i915)
>>>    {
>>> -     struct i915_gem_context *ctx;
>>>        struct intel_engine_cs *engine;
>>> +     struct i915_gem_context *ctx;
>>> +     struct i915_gem_engines *e;
>>>        enum intel_engine_id id;
>>>        int err = 0;
>>>    
>>> @@ -4330,18 +4331,26 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
>>>        if (IS_ERR(ctx))
>>>                return PTR_ERR(ctx);
>>>    
>>> +     e = i915_gem_context_engine_list_lock(ctx);
>>> +
>>>        for_each_engine(engine, i915, id) {
>>
>> Do we need an i915 version of for_each_context_engine? If all call sites
>> will be doing this "lock ctx" -> "walk physical engines" -> "lookup in
>> context" then it seems a bit disconnected.
> 
> It's rare, a couple of odd cases imo.

Ok, can re-evaluate at the end.

>>> +             struct intel_context *ce = e->engines[id];
>>
>> How will indexing by engine->id work for the engine map?
> 
> It doesn't, that's the point of these being the odd cases. :)
> 
>>>                struct i915_request *rq;
>>>    
>>> -             rq = i915_request_alloc(engine, ctx);
>>> +             err = intel_context_pin(ce);
>>> +             if (err)
>>> +                     goto err_active;
>>> +
>>> +             rq = i915_request_create(ce);
>>> +             intel_context_unpin(ce);
>>
>> Kind of verbose, no? Do you want to add some
>> middle-ground-between-request-alloc-and-create helper?
> 
> There are two callers of i915_request_create() in this style.
> (Execbuf has a style all of its own.)
> 
> At the moment I don't have a good name for the happy middle ground, so
> I'm willing to pay the price for forcing control over the pin to the
> user.
> 
> ...
> 
> Does it really make any difference for the perma-pinned kernel contexts?
> Actually no...
> 
> Hmm. The fundamental objective was being able to pass ce and avoid
> struct_mutex -- but we already avoid struct_mutex for pinning today as
> the context is already pinned.
> 
> The downside is that it adds redundant steps to execbuf, and
> __i915_request_create() is already taken... And I hope you would say no
> if I suggest ____i915_request_create :)
> 
>>> +static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
>>> +{
>>> +     struct intel_engine_cs *engine;
>>> +     struct i915_gem_engines *e;
>>> +     enum intel_engine_id id;
>>> +
>>> +     e = kzalloc(struct_size(e, engines, I915_NUM_ENGINES), GFP_KERNEL);
>>> +     if (!e)
>>> +             return ERR_PTR(-ENOMEM);
>>> +
>>> +     e->i915 = ctx->i915;
>>> +     for_each_engine(engine, ctx->i915, id) {
>>> +             struct intel_context *ce;
>>>    
>>> +             ce = intel_context_create(ctx, engine);
>>> +             if (IS_ERR(ce)) {
>>> +                     free_engines_n(e, id);
>>
>> I dislike the piggy-backing of n into e. How about:
>>
>> __free_engines(e, n)
>> {
>>          ...
>> }
>>
>> free_engines(e)
>> {
>>          __free_engines(e, e->num_engines);
>> }
>>
>> ?
> 
> Ok.
>   
>> Or you could even do e->num_engines++ in the above loop and just have
>> one free_engines.
> 
> I thought it was cleaner to avoid having multiple counters for the same
> loop. free_engines_n() ended up with 5 users.
> 
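To be explicit, the split I suggested, spelled out (rough sketch; the 
NULL check is there since the array can be sparse):

static void __free_engines(struct i915_gem_engines *e, unsigned int n)
{
	while (n--) {
		if (!e->engines[n])
			continue;

		intel_context_put(e->engines[n]);
	}
	kfree(e);
}

static void free_engines(struct i915_gem_engines *e)
{
	__free_engines(e, e->num_engines);
}
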
>>> +                     return ERR_CAST(ce);
>>> +             }
>>> +
>>> +             e->engines[id] = ce;
>>
>> Each context would have a sparse array of engines, on most platforms.
>> Would it be workable to instead create a compact array per context, and
>> just have a per-device translation table of idx to engine->id? Or vice
>> versa; I can't figure out right off the bat which one you would need.
> 
> As sparse as we do today. I am working under the assumption that going
> forwards, the default map would be the oddity. But that is better than
> the sparse fixed array[] we previously embedded into the GEM context, so
> overall I think it is still an improvement for old platforms.

True.

> We can trim the num_engines, i.e. we need only allocate [rcs0] on gen3
> as any id>rcs0 will become -EINVAL.
> 
> One plan I did have was to remove the sparse engine->id (since that
> should be an internal id), but then we end up with not being able to use
> static constants for our VCSx etc.

Best leave that for now.

>> I guess in this scheme you would need some translation as well to
>> support customised engine maps... I'll see if I spot that later in
>> the patch.
> 
> In the scheme in this patch, we just replace ctx->engines[] for the
> custom map with separate intel_contexts for each instance of a specific
> engine.
>
>>> -     rbtree_postorder_for_each_entry_safe(ce, next, &ctx->hw_contexts, node) {
>>> -             struct intel_engine_cs *engine = ce->engine;
>>> +     e = i915_gem_context_engine_list_lock(ctx);
>>> +     for (i = 0; i < e->num_engines; i++) {
>>> +             struct intel_context *ce = e->engines[i];
>>>                struct i915_request *rq;
>>>    
>>> -             if (!(engine->mask & engines))
>>> +             if (!ce || !(ce->engine->mask & engines))
>>
>> Definitely looks like a case for for_each_context_engine.
> 
> for_each_context_engine(ce, e, i)
> for_each_context_masked(ce, e, mask, i)
> 
> Hmm. I do regret the for_each...() macros that require temporaries.
> 
> We can't stuff the lock in there, without a bit of work...
> 
> gem_for_each_context_engine(iter, ctx) {
> 	struct intel_context *ce = iter.context;
> 
> with
> for (init_iter(&iter, ctx);; next_iter(&iter))
> 
> Just to hide the locking. Probably worth it.

Don't go too wild! :) We got used to the temporaries, so we could also 
clean them all up later if bored.
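
FWIW the plain variant you listed would already read fine to me, e.g. 
(sketch, untested):

#define for_each_context_engine(ce, e, i) \
	for ((i) = 0; (i) < (e)->num_engines; (i)++) \
		if (!((ce) = (e)->engines[(i)])) {} else

which also skips the NULL slots, like the open coded loop above.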

>>> +                     continue;
>>> +
>>> +             if (!intel_context_is_pinned(ce))
>>>                        continue;
>>>    
>>>                if (I915_SELFTEST_ONLY(context_barrier_inject_fault &
>>> -                                    engine->mask)) {
>>> +                                    ce->engine->mask)) {
>>>                        err = -ENXIO;
>>>                        break;
>>>                }
>>>    
>>> -             rq = i915_request_alloc(engine, ctx);
>>> +             err = intel_context_pin(ce);
>>> +             if (err)
>>> +                     break;
>>> +
>>> +             rq = i915_request_create(ce);
>>> +             intel_context_unpin(ce);
>>
>> Yep, as mentioned somewhere above. I definitely think another helper
>> would help code base readability. Even if called unimaginatively as
>> __i915_request_add.
> 
> It is just these two (based on a quick grep).
> 
> Maybe intel_context_create_request() ?
> 
> Hmm, but isn't that what i915_request_create() is. Quiet!

Why not __i915_request_add? i915_request_add would then just be the 
wedged check calling __i915_request_add, no? (Going from memory... might 
not be reliable.)
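
Whichever name wins, the helper itself would be tiny (sketch, using the 
name you floated and the pin/create/unpin sequence from the patch):

struct i915_request *
intel_context_create_request(struct intel_context *ce)
{
	struct i915_request *rq;
	int err;

	err = intel_context_pin(ce);
	if (err)
		return ERR_PTR(err);

	rq = i915_request_create(ce);
	intel_context_unpin(ce);

	return rq;
}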

>>> +struct i915_gem_engines {
>>> +     struct rcu_work rcu;
>>> +     struct drm_i915_private *i915;
>>> +     unsigned long num_engines;
>>
>> unsigned int?
> 
> Pointer before, array of pointers after, left an unsigned long hole.
> 
> I was just filling the space.

Well, long makes me think there's a reason int is not enough, which in 
this case there isn't. So I would still go for an int, hole or no hole. 
There is nothing to be gained by filling the space, and it could even be 
worse if some instructions expand to longer opcodes. :)
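
i.e. I would just write (sketch; the flexible array is implied by the 
struct_size() above):

struct i915_gem_engines {
	struct rcu_work rcu;
	struct drm_i915_private *i915;
	unsigned int num_engines; /* any hole before engines[] is harmless */
	struct intel_context *engines[];
};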

Regards,

Tvrtko

