[Intel-gfx] [PATCH 03/20] drm/i915/gem: Don't drop the timeline lock during execbuf

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Jul 9 10:52:19 UTC 2020


On 08/07/2020 19:08, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-07-08 17:54:51)
>>
>> On 06/07/2020 07:19, Chris Wilson wrote:
>>> @@ -662,18 +692,22 @@ static int eb_reserve(struct i915_execbuffer *eb)
>>>         * room for the earlier objects *unless* we need to defragment.
>>>         */
>>>    
>>> -     if (mutex_lock_interruptible(&eb->i915->drm.struct_mutex))
>>> -             return -EINTR;
>>> -
>>>        pass = 0;
>>>        do {
>>> +             int err = 0;
>>> +
>>> +             if (mutex_lock_interruptible(&eb->i915->drm.struct_mutex))
>>> +                     return -EINTR;
>>
>> Recently you explained to me why we still use struct_mutex here, so
>> maybe, while moving the code, document that in a comment.
> 
> Part of the work here is to eliminate the need for the struct_mutex,
> which will be replaced by not dropping the vm->mutex while binding
> multiple vma.
> 
> It's the interaction with the waits to flush other vm users when under
> pressure that is the most annoying part. This area is not
> straightforward, and at least deserves some comments so that the
> thinking behind it can be fixed.
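
For reference, the direction Chris describes (holding vm->mutex across all
the bindings instead of taking struct_mutex per pass) would look roughly
like this. A minimal sketch only; the helper and member names here are
illustrative, not code from the series:

	static int bind_all(struct i915_address_space *vm,
			    struct list_head *unbound)
	{
		struct i915_vma *vma;
		int err;

		/* Hold vm->mutex across *all* the bindings... */
		err = mutex_lock_interruptible(&vm->mutex);
		if (err)
			return err;

		list_for_each_entry(vma, unbound, exec_link) {
			err = bind_vma(vma); /* hypothetical helper */
			if (err)
				break;
		}

		/* ...and only drop it once every vma is bound. */
		mutex_unlock(&vm->mutex);
		return err;
	}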
> 
>>> +static struct i915_request *
>>> +nested_request_create(struct intel_context *ce)
>>> +{
>>> +     struct i915_request *rq;
>>> +
>>> +     /* XXX This only works once; replace with shared timeline */
>>
>> "Once" as in an attempt to use the same local intel_context from
>> another eb would upset lockdep? I don't think that's a problem.
> 
> "Once" as in this is the only time we can do this nested locking between
> engines of the same context in the whole driver, or else lockdep would
> have been right to complain. [i.e. if we ever do the reserve nesting, we
> are screwed.]
> 
> Fwiw, I have posted patches that will eliminate the need for a nested
> timeline here :)

In this series or just on the mailing list?

> 
>>> +     mutex_lock_nested(&ce->timeline->mutex, SINGLE_DEPTH_NESTING);
>>> +     intel_context_enter(ce);
> 
> 
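To illustrate the annotation under discussion: both timeline mutexes share
a lock class, so the inner acquisition has to be marked with
SINGLE_DEPTH_NESTING, and lockdep then trusts us that this single level of
nesting is deadlock-free. A sketch of the pattern (ce_outer/ce_inner are
placeholders, not names from the patch):

	/* Outer lock: the context the execbuf is submitting to. */
	mutex_lock(&ce_outer->timeline->mutex);

	/*
	 * Inner lock: same lock class, so annotate the single level of
	 * nesting. If any other path ever took these classes in the
	 * opposite order, that would be a real deadlock that lockdep
	 * could no longer see -- hence "this only works once".
	 */
	mutex_lock_nested(&ce_inner->timeline->mutex, SINGLE_DEPTH_NESTING);

	/* ... build and submit the nested request ... */

	mutex_unlock(&ce_inner->timeline->mutex);
	mutex_unlock(&ce_outer->timeline->mutex);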
>>>    static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
>>>    {
>>>        struct intel_timeline *tl;
>>> @@ -2087,9 +2174,7 @@ static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
>>>        intel_context_enter(ce);
>>>        rq = eb_throttle(ce);
>>>    
>>> -     intel_context_timeline_unlock(tl);
>>> -
>>> -     if (rq) {
>>> +     while (rq) {
>>>                bool nonblock = eb->file->filp->f_flags & O_NONBLOCK;
>>>                long timeout;
>>>    
>>> @@ -2097,23 +2182,34 @@ static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
>>>                if (nonblock)
>>>                        timeout = 0;
>>>    
>>> +             mutex_unlock(&tl->mutex);
>>
>> "Don't drop the timeline lock during execbuf"? Is the "during execbuf"
>> actually a smaller subset
> 
> We are before execbuf in my book :)
> 
> This throttles the hog before we start, and reserves enough space in
> the ring (we make sure there's a page, or thereabouts) to build a batch
> without interruption.
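
Putting the quoted hunks together, the throttle ends up shaped roughly like
this: the timeline lock is now dropped only around the actual wait and
immediately retaken, rather than being released for the rest of execbuf. A
sketch reconstructed from the fragments above, not the verbatim patch:

	mutex_lock(&tl->mutex);
	intel_context_enter(ce);
	rq = eb_throttle(ce);

	while (rq) {
		bool nonblock = eb->file->filp->f_flags & O_NONBLOCK;
		long timeout = MAX_SCHEDULE_TIMEOUT;

		if (nonblock)
			timeout = 0;

		/* Drop the lock only for the duration of the wait... */
		mutex_unlock(&tl->mutex);

		timeout = i915_request_wait(rq,
					    I915_WAIT_INTERRUPTIBLE,
					    timeout);
		i915_request_put(rq);

		/* ...and retake it before checking for another hog. */
		mutex_lock(&tl->mutex);
		if (timeout < 0)
			break; /* interrupted, or would have blocked */

		rq = eb_throttle(ce);
	}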

Ok. :)

Regards,

Tvrtko

