[Intel-gfx] [PATCH 07/27] drm/i915: Squash repeated awaits on the same fence
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Wed Apr 26 10:54:08 UTC 2017
On 26/04/2017 11:38, Chris Wilson wrote:
> On Wed, Apr 26, 2017 at 11:20:16AM +0100, Tvrtko Ursulin wrote:
>>
>> On 19/04/2017 10:41, Chris Wilson wrote:
>>> Track the latest fence waited upon on each context, and only add a new
>>> asynchronous wait if the new fence is more recent than the recorded
>>> fence for that context. This requires us to filter out unordered
>>> timelines, which are noted by DMA_FENCE_NO_CONTEXT. However, in the
>>> absence of a universal identifier, we have to use our own
>>> i915->mm.unordered_timeline token.
>>>
>>> v2: Throw around the debug crutches
>>> v3: Inline the likely case of the pre-allocation cache being full.
>>> v4: Drop the pre-allocation support, we can lose the most recent fence
>>> in case of allocation failure -- it just means we may emit more awaits
>>> than strictly necessary but will not break.
>>> v5: Trim allocation size for leaf nodes, they only need an array of u32
>>> not pointers.
>>>
>>> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
>>> ---
>>> drivers/gpu/drm/i915/i915_gem_request.c | 67 +++---
>>> drivers/gpu/drm/i915/i915_gem_timeline.c | 260 +++++++++++++++++++++
>>> drivers/gpu/drm/i915/i915_gem_timeline.h | 14 ++
>>> drivers/gpu/drm/i915/selftests/i915_gem_timeline.c | 123 ++++++++++
>>> .../gpu/drm/i915/selftests/i915_mock_selftests.h | 1 +
>>> 5 files changed, 438 insertions(+), 27 deletions(-)
>>> create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_timeline.c
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
>>> index 97c07986b7c1..fb6c31ba3ef9 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem_request.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
>>> @@ -730,9 +730,7 @@ int
>>> i915_gem_request_await_dma_fence(struct drm_i915_gem_request *req,
>>> struct dma_fence *fence)
>>> {
>>> - struct dma_fence_array *array;
>>> int ret;
>>> - int i;
>>>
>>> if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> return 0;
>>> @@ -744,39 +742,54 @@ i915_gem_request_await_dma_fence(struct drm_i915_gem_request *req,
>>> if (fence->context == req->fence.context)
>>> return 0;
>>>
>>> - if (dma_fence_is_i915(fence))
>>> - return i915_gem_request_await_request(req, to_request(fence));
>>> + /* Squash repeated waits to the same timelines, picking the latest */
>>> + if (fence->context != req->i915->mm.unordered_timeline &&
>>> + intel_timeline_sync_get(req->timeline,
>>> + fence->context, fence->seqno))
>>
>> Function name is non-intuitive to me. It doesn't seem to get
>> anything, but is more like query? Since it ends up with
>> i915_seqno_passed, maybe intel_timeline_sync_is_newer/older ? (give
>> or take)
>
> _get was chosen as the partner for _set, which seemed to make sense.
> Keep intel_timeline_sync_set() and replace _get with
> intel_timeline_sync_passed() ?
> intel_timeline_sync_is_later() ?
Both are better in my opinion. _get just makes it sound like it is
returning something from the object, which it is not. So whichever you
prefer.
>>> diff --git a/drivers/gpu/drm/i915/i915_gem_timeline.c b/drivers/gpu/drm/i915/i915_gem_timeline.c
>>> index b596ca7ee058..f2b734dda895 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem_timeline.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem_timeline.c
>>> @@ -24,6 +24,254 @@
>>>
>>> #include "i915_drv.h"
>>>
>>> +#define NSYNC 16
>>> +#define SHIFT ilog2(NSYNC)
>>> +#define MASK (NSYNC - 1)
>>> +
>>> +/* struct intel_timeline_sync is a layer of a radixtree that maps a u64 fence
>>> + * context id to the last u32 fence seqno waited upon from that context.
>>> + * Unlike lib/radixtree it uses a parent pointer that allows traversal back to
>>> + * the root. This allows us to access the whole tree via a single pointer
>>> + * to the most recently used layer. We expect fence contexts to be dense
>>> + * and most reuse to be on the same i915_gem_context but on neighbouring
>>> + * engines (i.e. on adjacent contexts) and reuse the same leaf, a very
>>> + * effective lookup cache. If the new lookup is not on the same leaf, we
>>> + * expect it to be on the neighbouring branch.
>>> + *
>>> + * A leaf holds an array of u32 seqno, and has height 0. The bitmap field
>>> + * allows us to store whether a particular seqno is valid (i.e. allows us
>>> + * to distinguish unset from 0).
>>> + *
>>> + * A branch holds an array of layer pointers, and has height > 0, and always
>>> + * has at least 2 layers (either branches or leaves) below it.
>>> + *
>>> + */
>>
>> @_@ :)
>>
>> Ok, so a map of u64 to u32. We can't use IDR or radixtree directly
>> because of u64 keys. :( How about a hash table? It would be much
>> simpler to review. :) Seriously, if it would perform close enough it
>> would be a much much simpler implementation.
>
> You want a resizable hashtable. rht is appallingly slow, so you want a
> custom resizeable ht. They are not as simple as this codewise ;)
> (Plus a compressed radixtree is part of my plan for scalability
> improvements for struct reservation_object.)
Why resizable? I was thinking of a normal one. If at any given time we
have an active set of contexts, or at least lookups are, as you say
below, to neighbouring contexts, that would mean lookups land in
different hash buckets. And would we really expect many collisions, and
so longer lists in each bucket, for the typical working set? So maybe
NUM_ENGINES * some typical-load constant number of buckets would not be
that bad?
> This is designed around the idea that most lookups are to neighbouring
> contexts (i.e. same i915_gem_context, different engines) and so are on
> the same leaf and so cached. (A goal here is to be cheaper than the cost
> of repetitions along the fence signaling. They are indirect costs that
> show up in a couple of places, but are reasonably cheap. Offsetting the
> cost is the benefit of moving it off the signal->exec path.)
>
> Plus radixtree scrapped the idr lookup cache, which is a negative for
> most of our code :( Fortunately for execbuf, we do have a bypass planned.
I trust the data structure is great, but I would like to understand
whether something simpler could perhaps get us 99% of the performance
(or some such number).
>>> +static int igt_seqmap(void *arg)
>>> +{
>>> + struct drm_i915_private *i915 = arg;
>>> + const struct {
>>> + const char *name;
>>> + u32 seqno;
>>> + bool expected;
>>> + bool set;
>>> + } pass[] = {
>>> + { "unset", 0, false, false },
>>> + { "new", 0, false, true },
>>> + { "0a", 0, true, true },
>>> + { "1a", 1, false, true },
>>> + { "1b", 1, true, true },
>>> + { "0b", 0, true, false },
>>> + { "2a", 2, false, true },
>>> + { "4", 4, false, true },
>>> + { "INT_MAX", INT_MAX, false, true },
>>> + { "INT_MAX-1", INT_MAX-1, true, false },
>>> + { "INT_MAX+1", (u32)INT_MAX+1, false, true },
>>> + { "INT_MAX", INT_MAX, true, false },
>>> + { "UINT_MAX", UINT_MAX, false, true },
>>> + { "wrap", 0, false, true },
>>> + { "unwrap", UINT_MAX, true, false },
>>> + {},
>>> + }, *p;
>>> + struct intel_timeline *tl;
>>> + int order, offset;
>>> + int ret;
>>> +
>>> + tl = &i915->gt.global_timeline.engine[RCS];
>>
>> Unless I am missing something, it looks like you could get away with
>> a lighter solution of implementing a mock_timeline instead of the
>> whole mock_gem_device. I think it would be preferable.
>
> Fine, I was just using a familiar pattern.
If it is more than a few lines (I thought it wouldn't be) to add a mock
timeline then don't bother. I just thought unit tests should preferably
stay as lean as possible.
Regards,
Tvrtko