[Intel-gfx] [PATCH 7/7] drm/i915/gem: Acquire all vma/objects under reservation_ww_class

Christian König ckoenig.leichtzumerken at gmail.com
Thu Jun 25 08:11:35 UTC 2020


Am 24.06.20 um 22:18 schrieb Chris Wilson:
> Quoting Dave Airlie (2020-06-24 20:04:02)
>> On Wed, 24 Jun 2020 at 07:19, Chris Wilson <chris at chris-wilson.co.uk> wrote:
>>> Quoting Dave Airlie (2020-06-23 22:01:24)
>>>> On Tue, 23 Jun 2020 at 20:03, Chris Wilson <chris at chris-wilson.co.uk> wrote:
>>>>> Quoting Thomas Hellström (Intel) (2020-06-23 10:33:20)
>>>>>> Hi, Chris!
>>>>>>
>>>>>> On 6/22/20 11:59 AM, Chris Wilson wrote:
>>>>>>> In order to actually handle eviction and what not, we need to process
>>>>>>> all the objects together under a common lock, reservation_ww_class. As
>>>>>>> such, do a memory reservation pass after looking up the object/vma,
>>>>>>> which then feeds into the rest of execbuf [relocation, cmdparsing,
>>>>>>> flushing and ofc execution].
>>>>>>>
>>>>>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>>>>>> ---
>>>>>>>    .../gpu/drm/i915/gem/i915_gem_execbuffer.c    | 91 ++++++++++++++-----
>>>>>>>    1 file changed, 70 insertions(+), 21 deletions(-)
>>>>>>>
>>>>>> Which tree is this against? The series doesn't apply cleanly against
>>>>>> drm-tip?
>>>>> It's continuing on from the scheduler patches, the bug fixes and the
>>>>> iris-deferred-fence work. I thought throwing all of those old patches
>>>>> into the pile would have been distracting.
>>>>>
>>>>>> ...
>>>>>>
>>>>>>> +static int eb_reserve_mm(struct i915_execbuffer *eb)
>>>>>>> +{
>>>>>>> +     const u64 idx = eb->context->timeline->fence_context;
>>>>>>> +     struct ww_acquire_ctx acquire;
>>>>>>> +     struct eb_vma *ev;
>>>>>>> +     int err;
>>>>>>> +
>>>>>>> +     eb->mm_fence = __dma_fence_create_proxy(0, 0);
>>>>>>> +     if (!eb->mm_fence)
>>>>>>> +             return -ENOMEM;
>>>>>> Where are the proxy fence functions defined?
>>>>> In dma-fence-proxy.c ;)
>>>> The dma-fence-proxy that Christian NAKed before?
>>> I do not have an email from Christian about dma-fence-proxy in the last
>>> 3 years it has been on the list.
>> https://lore.kernel.org/dri-devel/aeb0373d-0583-d922-3b73-93668c27d177@amd.com/
> Darn, I skimmed the thread title and thought it was just about the
> timelines.
>
>> I'm assuming this was about patch 8 there, which to me looks like proxy
>> fences, but maybe my threading is off reading that.
> The deadlocks are easy to resolve. Either the fence is signaled normally
> by userspace; or they create a deadlock, which is rejected by checking
> the dag and the fence signaled with an error (and the work cancelled,
> the error propagated back to userspace if they kept the output fence
> around); or userspace forgets entirely about the fence they were waiting
> on, in which case it is signaled by closing the syncobjs [sadly not in
> error though, I was hoping to report EPIPE] on process termination.

And exactly that concept is still a big NAK.

The kernel memory management depends on dma_fences being guaranteed to 
signal from the moment they exist.

Just imagine the splat Daniel's dependency annotation patches would 
produce if you did something like this and correctly annotated the 
signaling code path.

Proxy fences, especially when they depend on userspace for signaling, 
are an absolute NO-GO.

Regards,
Christian.
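
For illustration, a minimal sketch (not from any of the patches; 
demo_signal_path is hypothetical) of the annotation pattern in Daniel's 
series: dma_fence_begin_signalling()/dma_fence_end_signalling() mark the 
critical section that must complete for a fence to signal, so lockdep 
can cross-check every dependency taken inside it.

#include <linux/dma-fence.h>

static void demo_signal_path(struct dma_fence *fence)
{
        bool cookie;

        /*
         * Everything between begin/end is on the path to signaling
         * @fence: lockdep records each lock taken here against the
         * fence-signaling pseudo-lock, so waiting on userspace (or
         * taking any lock that is elsewhere held while waiting on a
         * fence) inside this section produces a splat.
         */
        cookie = dma_fence_begin_signalling();

        /* ... produce the result the fence advertises ... */

        dma_fence_signal(fence);
        dma_fence_end_signalling(cookie);
}

A proxy fence whose signaling waits on userspace has no bounded section 
like this at all, which is exactly the failure mode the annotations are 
meant to catch.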

>
> https://patchwork.freedesktop.org/patch/372759/?series=78762&rev=1
> We can always attach the dag resolver such that we resolve the deadlock
> for any importer and so only ever present a normal monotonic fence.
> That would make it illegal to wait on an external fence imported into
> that syncobj (as that would be outside of our dag). An option would
> be whether or not to forcibly time out slow userspace. But the simplicity
> of reusing the existing functionality to move intrabatch scheduling into
> iris is compelling. [In contrast, no one has yet finished the timeline
> patches to the point where they stopped throwing errors in igt, and we
> would then still have to write patches for nonblocking wait-for-submit :[
>
> The use here is trivial, chiefly a convenience to flesh out this
> argument and see if we can reduce the lock duration within submission
> [from the entirety of submission to ideally just reservation] by holding
> a fence for the submission process itself. And that boils down to at what
> point someone else can start to wait on that fence, and whether or not we
> can avoid any direct/indirect waits ourselves after that point and before
> completing submission. [Usual rules about not being allowed to wait on a
> resource while holding contendable resources, but with the nuance of
> what/when exactly that resource becomes contendable.] The lock contention
> is quite real, as at the moment it is devolving into a global lock, with
> the amusing side effect that it then turns out to be quicker to wrap the
> entire thing in struct_mutex.
> -Chris
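
For reference, the acquire-all pattern named in the subject line is the 
standard ww_mutex dance on each object's reservation lock. A minimal, 
self-contained sketch (not the i915 code; demo_obj, demo_lock_all and 
the list layout are illustrative) of taking every lock in one 
ww_acquire_ctx under reservation_ww_class, backing off and retrying on 
-EDEADLK:

#include <linux/dma-resv.h>
#include <linux/list.h>

/* Hypothetical container for anything whose reservation must be held. */
struct demo_obj {
        struct dma_resv resv;
        struct list_head link;
};

static int demo_lock_all(struct list_head *objects,
                         struct ww_acquire_ctx *ctx)
{
        struct demo_obj *obj, *slow = NULL, *contended = NULL;
        int err;

        ww_acquire_init(ctx, &reservation_ww_class);
retry:
        list_for_each_entry(obj, objects, link) {
                if (obj == slow) {
                        slow = NULL; /* already locked in the slowpath */
                        continue;
                }

                err = dma_resv_lock(&obj->resv, ctx);
                if (err) {
                        contended = obj;
                        goto unwind;
                }
        }

        ww_acquire_done(ctx); /* all locks held, none will be added */
        return 0;

unwind:
        /* Drop everything taken on this pass, newest first. */
        list_for_each_entry_continue_reverse(obj, objects, link)
                dma_resv_unlock(&obj->resv);
        if (slow)
                dma_resv_unlock(&slow->resv);

        if (err == -EDEADLK) {
                /* Lost a seqno race: sleep on the contended lock, retry. */
                dma_resv_lock_slow(&contended->resv, ctx);
                slow = contended;
                goto retry;
        }

        ww_acquire_fini(ctx);
        return err;
}

Only once every reservation is held does eviction, binding and fence 
installation proceed, which is the point of doing a separate reservation 
pass after the object/vma lookup.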


