[Intel-gfx] [PATCH v9] drm/i915: Squash repeated awaits on the same fence

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Apr 27 16:47:32 UTC 2017


On 27/04/2017 12:48, Chris Wilson wrote:
> Track the latest fence waited upon on each context, and only add a new
> asynchronous wait if the new fence is more recent than the recorded
> fence for that context. This requires us to filter out unordered
> timelines, which are noted by DMA_FENCE_NO_CONTEXT. However, in the
> absence of a universal identifier, we have to use our own
> i915->mm.unordered_timeline token.
>
> v2: Throw around the debug crutches
> v3: Inline the likely case of the pre-allocation cache being full.
> v4: Drop the pre-allocation support, we can lose the most recent fence
> in case of allocation failure -- it just means we may emit more awaits
> than strictly necessary but will not break.
> v5: Trim allocation size for leaf nodes, they only need an array of u32
> not pointers.
> v6: Create mock_timeline to tidy selftest writing
> v7: s/intel_timeline_sync_get/intel_timeline_sync_is_later/ (Tvrtko)
> v8: Prune the stale sync points when we idle.
> v9: Include a small benchmark in the kselftests
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem.c                    |   1 +
>  drivers/gpu/drm/i915/i915_gem_request.c            |  11 +
>  drivers/gpu/drm/i915/i915_gem_timeline.c           | 314 +++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_gem_timeline.h           |  15 +
>  drivers/gpu/drm/i915/selftests/i915_gem_timeline.c | 225 +++++++++++++++
>  .../gpu/drm/i915/selftests/i915_mock_selftests.h   |   1 +
>  drivers/gpu/drm/i915/selftests/mock_timeline.c     |  52 ++++
>  drivers/gpu/drm/i915/selftests/mock_timeline.h     |  33 +++
>  8 files changed, 652 insertions(+)
>  create mode 100644 drivers/gpu/drm/i915/selftests/i915_gem_timeline.c
>  create mode 100644 drivers/gpu/drm/i915/selftests/mock_timeline.c
>  create mode 100644 drivers/gpu/drm/i915/selftests/mock_timeline.h
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index c1fa3c103f38..f886ef492036 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -3214,6 +3214,7 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  		intel_engine_disarm_breadcrumbs(engine);
>  		i915_gem_batch_pool_fini(&engine->batch_pool);
>  	}
> +	i915_gem_timelines_mark_idle(dev_priv);
>
>  	GEM_BUG_ON(!dev_priv->gt.awake);
>  	dev_priv->gt.awake = false;
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 5fa4e52ded06..d9f76665bc6b 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -772,6 +772,12 @@ i915_gem_request_await_dma_fence(struct drm_i915_gem_request *req,
>  		if (fence->context == req->fence.context)
>  			continue;
>
> +		/* Squash repeated waits to the same timelines */
> +		if (fence->context != req->i915->mm.unordered_timeline &&
> +		    intel_timeline_sync_is_later(req->timeline,
> +						 fence->context, fence->seqno))
> +			continue;

Wrong base?

> +
>  		if (dma_fence_is_i915(fence))
>  			ret = i915_gem_request_await_request(req,
>  							     to_request(fence));
> @@ -781,6 +787,11 @@ i915_gem_request_await_dma_fence(struct drm_i915_gem_request *req,
>  							    GFP_KERNEL);
>  		if (ret < 0)
>  			return ret;
> +
> +		/* Record the latest fence on each timeline */
> +		if (fence->context != req->i915->mm.unordered_timeline)
> +			intel_timeline_sync_set(req->timeline,
> +						fence->context, fence->seqno);
>  	} while (--nchild);
>
>  	return 0;
> diff --git a/drivers/gpu/drm/i915/i915_gem_timeline.c b/drivers/gpu/drm/i915/i915_gem_timeline.c
> index b596ca7ee058..967c53a53a92 100644
> --- a/drivers/gpu/drm/i915/i915_gem_timeline.c
> +++ b/drivers/gpu/drm/i915/i915_gem_timeline.c
> @@ -24,6 +24,276 @@
>
>  #include "i915_drv.h"
>
> +#define NSYNC 16
> +#define SHIFT ilog2(NSYNC)
> +#define MASK (NSYNC - 1)
> +
> +/* struct intel_timeline_sync is a layer of a radixtree that maps a u64 fence
> + * context id to the last u32 fence seqno waited upon from that context.
> + * Unlike lib/radixtree it uses a parent pointer that allows traversal back to
> + * the root. This allows us to access the whole tree via a single pointer
> + * to the most recently used layer. We expect fence contexts to be dense
> + * and most reuse to be on the same i915_gem_context but on neighbouring
> + * engines (i.e. on adjacent contexts) and reuse the same leaf, a very
> + * effective lookup cache. If the new lookup is not on the same leaf, we
> + * expect it to be on the neighbouring branch.
> + *
> + * A leaf holds an array of u32 seqno, and has height 0. The bitmap field
> + * allows us to store whether a particular seqno is valid (i.e. allows us
> + * to distinguish unset from 0).
> + *
> + * A branch holds an array of layer pointers, and has height > 0, and always
> + * has at least 2 layers (either branches or leaves) below it.
> + */
> +struct intel_timeline_sync {
> +	u64 prefix;
> +	unsigned int height;
> +	unsigned int bitmap;

u16 would be enough for the bitmap since NSYNC == 16? To no benefit
though. Maybe just add a
BUILD_BUG_ON(sizeof(p->bitmap) * BITS_PER_BYTE >= NSYNC) somewhere?

> +	struct intel_timeline_sync *parent;
> +	/* union {
> +	 *	u32 seqno;
> +	 *	struct intel_timeline_sync *child;
> +	 * } slot[NSYNC];
> +	 */

Put a note saying this comment describes what is stored immediately 
after the struct intel_timeline_sync itself.

Would "union { ... } slot[0];" work as a marker and have any benefit to 
the readability of the code below?

You could save some bytes (64 I think) for the leaf nodes if you did 
something like:

	union {
		u32 seqno[NSYNC];
		struct intel_timeline_sync *child[NSYNC];
	};

Although I think it conflicts with the slot marker idea. Hm, no actually 
it doesn't. You could have both union members as simply markers.

	union {
		u32 seqno[];
		struct intel_timeline_sync *child[];
	};

Again, not sure yet if it would make that much better readability.

> +};
> +
> +static inline u32 *__sync_seqno(struct intel_timeline_sync *p)
> +{
> +	GEM_BUG_ON(p->height);
> +	return (u32 *)(p + 1);
> +}
> +
> +static inline struct intel_timeline_sync **
> +__sync_child(struct intel_timeline_sync *p)
> +{
> +	GEM_BUG_ON(!p->height);
> +	return (struct intel_timeline_sync **)(p + 1);
> +}
> +
> +static inline unsigned int
> +__sync_idx(const struct intel_timeline_sync *p, u64 id)
> +{
> +	return (id >> p->height) & MASK;
> +}
> +
> +static void __sync_free(struct intel_timeline_sync *p)
> +{
> +	if (p->height) {
> +		unsigned int i;
> +
> +		while ((i = ffs(p->bitmap))) {
> +			p->bitmap &= ~0u << i;
> +			__sync_free(__sync_child(p)[i - 1]);

The maximum height is 64 for this tree, so there is no danger of stack 
overflow here?

> +		}
> +	}
> +
> +	kfree(p);
> +}
> +
> +static void sync_free(struct intel_timeline_sync *sync)
> +{
> +	if (!sync)
> +		return;
> +
> +	while (sync->parent)
> +		sync = sync->parent;
> +
> +	__sync_free(sync);
> +}
> +
> +/** intel_timeline_sync_is_later -- compare against the last known sync point
> + * @tl - the @intel_timeline
> + * @id - the context id (other timeline) we are synchronising to
> + * @seqno - the sequence number along the other timeline
> + *
> + * If we have already synchronised this @tl with another (@id) then we can
> + * omit any repeated or earlier synchronisation requests. If the two timelines
> + * are already coupled, we can also omit the dependency between the two as that
> + * is already known via the timeline.
> + *
> + * Returns true if the two timelines are already synchronised wrt to @seqno,
> + * false if not and the synchronisation must be emitted.
> + */
> +bool intel_timeline_sync_is_later(struct intel_timeline *tl, u64 id, u32 seqno)
> +{
> +	struct intel_timeline_sync *p;
> +	unsigned int idx;
> +
> +	p = tl->sync;
> +	if (!p)
> +		return false;
> +
> +	if (likely((id >> SHIFT) == p->prefix))
> +		goto found;
> +
> +	/* First climb the tree back to a parent branch */
> +	do {
> +		p = p->parent;
> +		if (!p)
> +			return false;
> +
> +		if ((id >> p->height >> SHIFT) == p->prefix)

Worth having "id >> p->height >> SHIFT" as a macro for better readability?

> +			break;
> +	} while (1);
> +
> +	/* And then descend again until we find our leaf */
> +	do {
> +		if (!p->height)
> +			break;
> +
> +		p = __sync_child(p)[__sync_idx(p, id)];
> +		if (!p)
> +			return false;
> +
> +		if ((id >> p->height >> SHIFT) != p->prefix)
> +			return false;

Is this possible, or should it be a GEM_BUG_ON? Maybe I am not 
understanding it, but I thought it would mean the __sync_child slot had 
an unexpected prefix in it?

> +	} while (1);
> +
> +	tl->sync = p;
> +found:
> +	idx = id & MASK;
> +	if (!(p->bitmap & BIT(idx)))
> +		return false;
> +
> +	return i915_seqno_passed(__sync_seqno(p)[idx], seqno);
> +}
> +
> +static noinline int
> +__intel_timeline_sync_set(struct intel_timeline *tl, u64 id, u32 seqno)
> +{
> +	struct intel_timeline_sync *p = tl->sync;
> +	unsigned int idx;
> +
> +	if (!p) {
> +		p = kzalloc(sizeof(*p) + NSYNC * sizeof(seqno), GFP_KERNEL);
> +		if (unlikely(!p))
> +			return -ENOMEM;
> +
> +		p->prefix = id >> SHIFT;
> +		goto found;
> +	}
> +
> +	/* Climb back up the tree until we find a common prefix */
> +	do {
> +		if (!p->parent)
> +			break;
> +
> +		p = p->parent;
> +
> +		if ((id >> p->height >> SHIFT) == p->prefix)
> +			break;
> +	} while (1);

__climb_back_to_prefix(p, id) as a helper since it is used in the lookup 
as well?

> +
> +	/* No shortcut, we have to descend the tree to find the right layer
> +	 * containing this fence.
> +	 *
> +	 * Each layer in the tree holds 16 (NSYNC) pointers, either fences
> +	 * or lower layers. Leaf nodes (height = 0) contain the fences, all
> +	 * other nodes (height > 0) are internal layers that point to a lower
> +	 * node. Each internal layer has at least 2 descendants.
> +	 *
> +	 * Starting at the top, we check whether the current prefix matches. If
> +	 * it doesn't, we have gone past our layer and need to insert a join
> +	 * into the tree, and a new leaf node as a descendent as well as the
> +	 * original layer.
> +	 *
> +	 * The matching prefix means we are still following the right branch
> +	 * of the tree. If it has height 0, we have found our leaf and just
> +	 * need to replace the fence slot with ourselves. If the height is
> +	 * not zero, our slot contains the next layer in the tree (unless
> +	 * it is empty, in which case we can add ourselves as a new leaf).
> +	 * As we descend the tree, the prefix grows (and the height decreases).
> +	 */
> +	do {
> +		struct intel_timeline_sync *next;
> +
> +		if ((id >> p->height >> SHIFT) != p->prefix) {
> +			/* insert a join above the current layer */
> +			next = kzalloc(sizeof(*next) + NSYNC * sizeof(next),
> +				       GFP_KERNEL);
> +			if (unlikely(!next))
> +				return -ENOMEM;
> +
> +			next->height = ALIGN(fls64((id >> p->height >> SHIFT) ^ p->prefix),
> +					    SHIFT) + p->height;

Got lost here - what is the xor-ing accomplishing? And what is height 
then - not just the depth relative to the bottom of the tree?

> +			next->prefix = id >> next->height >> SHIFT;
> +
> +			if (p->parent)
> +				__sync_child(p->parent)[__sync_idx(p->parent, id)] = next;
> +			next->parent = p->parent;
> +
> +			idx = p->prefix >> (next->height - p->height - SHIFT) & MASK;
> +			__sync_child(next)[idx] = p;
> +			next->bitmap |= BIT(idx);
> +			p->parent = next;
> +
> +			/* ascend to the join */
> +			p = next;
> +		} else {
> +			if (!p->height)
> +				break;
> +		}
> +
> +		/* descend into the next layer */
> +		GEM_BUG_ON(!p->height);
> +		idx = __sync_idx(p, id);
> +		next = __sync_child(p)[idx];
> +		if (unlikely(!next)) {
> +			next = kzalloc(sizeof(*next) + NSYNC * sizeof(seqno),
> +				       GFP_KERNEL);
> +			if (unlikely(!next))
> +				return -ENOMEM;
> +
> +			__sync_child(p)[idx] = next;
> +			p->bitmap |= BIT(idx);
> +			next->parent = p;
> +			next->prefix = id >> SHIFT;
> +
> +			p = next;
> +			break;
> +		}
> +
> +		p = next;
> +	} while (1);
> +
> +found:
> +	GEM_BUG_ON(p->height);
> +	GEM_BUG_ON(p->prefix != id >> SHIFT);
> +	tl->sync = p;
> +	idx = id & MASK;
> +	__sync_seqno(p)[idx] = seqno;
> +	p->bitmap |= BIT(idx);
> +	return 0;
> +}
> +
> +/** intel_timeline_sync_set -- mark the most recent syncpoint between contexts
> + * @tl - the @intel_timeline
> + * @id - the context id (other timeline) we have synchronised to
> + * @seqno - the sequence number along the other timeline
> + *
> + * When we synchronise this @tl with another (@id), we also know that we have
> + * synchronised with all previous seqno along that timeline. If we then have
> + * a request to synchronise with the same seqno or older, we can omit it,
> + * see intel_timeline_sync_is_later()
> + */
> +int intel_timeline_sync_set(struct intel_timeline *tl, u64 id, u32 seqno)
> +{
> +	struct intel_timeline_sync *p = tl->sync;
> +
> +	/* We expect to be called in sequence following an _is_later(id), which
> +	 * should have preloaded the tl->sync hint for us.
> +	 */
> +	if (likely(p && (id >> SHIFT) == p->prefix)) {
> +		unsigned int idx = id & MASK;
> +
> +		__sync_seqno(p)[idx] = seqno;
> +		p->bitmap |= BIT(idx);
> +		return 0;
> +	}
> +
> +	return __intel_timeline_sync_set(tl, id, seqno);

Could pass in p and set tl->sync = p at this level. That would decouple 
the algorithm from the timeline better. With equivalent treatment for 
the query, and renaming of struct intel_timeline_sync, algorithm would 
be ready for moving out of drm/i915/ :)

> +}
> +
>  static int __i915_gem_timeline_init(struct drm_i915_private *i915,
>  				    struct i915_gem_timeline *timeline,
>  				    const char *name,
> @@ -35,6 +305,12 @@ static int __i915_gem_timeline_init(struct drm_i915_private *i915,
>
>  	lockdep_assert_held(&i915->drm.struct_mutex);
>
> +	/* Ideally we want a set of engines on a single leaf as we expect
> +	 * to mostly be tracking synchronisation between engines.
> +	 */
> +	BUILD_BUG_ON(NSYNC < I915_NUM_ENGINES);
> +	BUILD_BUG_ON(NSYNC > BITS_PER_BYTE * sizeof(timeline->engine[0].sync->bitmap));

Ta-da! :)

> +
>  	timeline->i915 = i915;
>  	timeline->name = kstrdup(name ?: "[kernel]", GFP_KERNEL);
>  	if (!timeline->name)
> @@ -81,6 +357,37 @@ int i915_gem_timeline_init__global(struct drm_i915_private *i915)
>  					&class, "&global_timeline->lock");
>  }
>
> +/** i915_gem_timelines_mark_idle -- called when the driver idles
> + * @i915 - the drm_i915_private device
> + *
> + * When the driver is completely idle, we know that all of our sync points
> + * have been signaled and our tracking is then entirely redundant. Any request
> + * to wait upon an older sync point will be completed instantly as we know
> + * the fence is signaled and therefore we will not even look them up in the
> + * sync point map.
> + */
> +void i915_gem_timelines_mark_idle(struct drm_i915_private *i915)
> +{
> +	struct i915_gem_timeline *timeline;
> +	int i;
> +
> +	lockdep_assert_held(&i915->drm.struct_mutex);
> +
> +	list_for_each_entry(timeline, &i915->gt.timelines, link) {
> +		for (i = 0; i < ARRAY_SIZE(timeline->engine); i++) {
> +			struct intel_timeline *tl = &timeline->engine[i];
> +
> +			/* All known fences are completed so we can scrap
> +			 * the current sync point tracking and start afresh,
> +			 * any attempt to wait upon a previous sync point
> +			 * will be skipped as the fence was signaled.
> +			 */
> +			sync_free(tl->sync);
> +			tl->sync = NULL;
> +		}
> +	}
> +}
> +
>  void i915_gem_timeline_fini(struct i915_gem_timeline *timeline)
>  {
>  	int i;
> @@ -91,8 +398,15 @@ void i915_gem_timeline_fini(struct i915_gem_timeline *timeline)
>  		struct intel_timeline *tl = &timeline->engine[i];
>
>  		GEM_BUG_ON(!list_empty(&tl->requests));
> +
> +		sync_free(tl->sync);
>  	}
>
>  	list_del(&timeline->link);
>  	kfree(timeline->name);
>  }
> +
> +#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
> +#include "selftests/mock_timeline.c"
> +#include "selftests/i915_gem_timeline.c"
> +#endif
> diff --git a/drivers/gpu/drm/i915/i915_gem_timeline.h b/drivers/gpu/drm/i915/i915_gem_timeline.h
> index 6c53e14cab2a..e16a62bc21e6 100644
> --- a/drivers/gpu/drm/i915/i915_gem_timeline.h
> +++ b/drivers/gpu/drm/i915/i915_gem_timeline.h
> @@ -26,10 +26,13 @@
>  #define I915_GEM_TIMELINE_H
>
>  #include <linux/list.h>
> +#include <linux/radix-tree.h>

What is used from it?

>
> +#include "i915_utils.h"
>  #include "i915_gem_request.h"
>
>  struct i915_gem_timeline;
> +struct intel_timeline_sync;
>
>  struct intel_timeline {
>  	u64 fence_context;
> @@ -55,6 +58,14 @@ struct intel_timeline {
>  	 * struct_mutex.
>  	 */
>  	struct i915_gem_active last_request;
> +
> +	/* We track the most recent seqno that we wait on in every context so
> +	 * that we only have to emit a new await and dependency on a more
> +	 * recent sync point. As the contexts may execute out-of-order, we
> +	 * have to track each individually and cannot rely on an absolute
> +	 * global_seqno.
> +	 */
> +	struct intel_timeline_sync *sync;
>  	u32 sync_seqno[I915_NUM_ENGINES];
>
>  	struct i915_gem_timeline *common;
> @@ -73,6 +84,10 @@ int i915_gem_timeline_init(struct drm_i915_private *i915,
>  			   struct i915_gem_timeline *tl,
>  			   const char *name);
>  int i915_gem_timeline_init__global(struct drm_i915_private *i915);
> +void i915_gem_timelines_mark_idle(struct drm_i915_private *i915);
>  void i915_gem_timeline_fini(struct i915_gem_timeline *tl);
>
> +int intel_timeline_sync_set(struct intel_timeline *tl, u64 id, u32 seqno);
> +bool intel_timeline_sync_is_later(struct intel_timeline *tl, u64 id, u32 seqno);
> +
>  #endif
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_timeline.c b/drivers/gpu/drm/i915/selftests/i915_gem_timeline.c
> new file mode 100644
> index 000000000000..66b4c24b0c26
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_timeline.c
> @@ -0,0 +1,225 @@
> +/*
> + * Copyright © 2017 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include <linux/random.h>
> +
> +#include "../i915_selftest.h"
> +#include "mock_gem_device.h"
> +#include "mock_timeline.h"
> +
> +static int igt_sync(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	const struct {
> +		const char *name;
> +		u32 seqno;
> +		bool expected;
> +		bool set;
> +	} pass[] = {
> +		{ "unset", 0, false, false },
> +		{ "new", 0, false, true },
> +		{ "0a", 0, true, true },
> +		{ "1a", 1, false, true },
> +		{ "1b", 1, true, true },
> +		{ "0b", 0, true, false },
> +		{ "2a", 2, false, true },
> +		{ "4", 4, false, true },
> +		{ "INT_MAX", INT_MAX, false, true },
> +		{ "INT_MAX-1", INT_MAX-1, true, false },
> +		{ "INT_MAX+1", (u32)INT_MAX+1, false, true },
> +		{ "INT_MAX", INT_MAX, true, false },
> +		{ "UINT_MAX", UINT_MAX, false, true },
> +		{ "wrap", 0, false, true },
> +		{ "unwrap", UINT_MAX, true, false },
> +		{},
> +	}, *p;
> +	struct i915_gem_timeline *timeline;
> +	struct intel_timeline *tl;
> +	int order, offset;
> +	int ret;
> +
> +	timeline = mock_timeline(i915);

Hey-ho.. when I suggested mock_timeline I did not realize we need the 
mock_device anyway. :( For struct_mutex I guess? Oh well, that was a 
useless suggestion.. :(

> +	if (!timeline)
> +		return -ENOMEM;
> +
> +	tl = &timeline->engine[RCS];
> +	for (p = pass; p->name; p++) {
> +		for (order = 1; order < 64; order++) {
> +			for (offset = -1; offset <= (order > 1); offset++) {
> +				u64 ctx = BIT_ULL(order) + offset;
> +
> +				if (intel_timeline_sync_is_later
> +				    (tl, ctx, p->seqno) != p->expected) {

Unusual formatting.

> +					pr_err("1: %s(ctx=%llu, seqno=%u) expected passed %s but failed\n",
> +					       p->name, ctx, p->seqno, yesno(p->expected));
> +					ret = -EINVAL;
> +					goto out;
> +				}
> +
> +				if (p->set) {
> +					ret = intel_timeline_sync_set(tl, ctx, p->seqno);
> +					if (ret)
> +						goto out;
> +				}
> +			}
> +		}
> +	}
> +
> +	tl = &timeline->engine[BCS];
> +	for (order = 1; order < 64; order++) {
> +		for (offset = -1; offset <= (order > 1); offset++) {
> +			u64 ctx = BIT_ULL(order) + offset;
> +
> +			for (p = pass; p->name; p++) {
> +				if (intel_timeline_sync_is_later
> +				    (tl, ctx, p->seqno) != p->expected) {
> +					pr_err("2: %s(ctx=%llu, seqno=%u) expected passed %s but failed\n",
> +					       p->name, ctx, p->seqno, yesno(p->expected));
> +					ret = -EINVAL;
> +					goto out;
> +				}
> +
> +				if (p->set) {
> +					ret = intel_timeline_sync_set(tl, ctx, p->seqno);
> +					if (ret)
> +						goto out;
> +				}
> +			}
> +		}
> +	}
> +
> +out:
> +	mock_timeline_destroy(timeline);
> +	return ret;
> +}
> +
> +static u64 prandom_u64_state(struct rnd_state *rnd)
> +{
> +	u64 x;
> +
> +	x = prandom_u32_state(rnd);
> +	x <<= 32;
> +	x |= prandom_u32_state(rnd);
> +
> +	return x;
> +}
> +
> +static unsigned int random_engine(struct rnd_state *rnd)
> +{
> +	return ((u64)prandom_u32_state(rnd) * I915_NUM_ENGINES) >> 32;
> +}
> +
> +static int bench_sync(void *arg)
> +{
> +	struct drm_i915_private *i915 = arg;
> +	struct rnd_state prng;
> +	struct i915_gem_timeline *timeline;
> +	struct intel_timeline *tl;
> +	unsigned long end_time, count;
> +	ktime_t kt;
> +	int ret;
> +
> +	timeline = mock_timeline(i915);
> +	if (!timeline)
> +		return -ENOMEM;
> +
> +	prandom_seed_state(&prng, i915_selftest.random_seed);
> +	tl = &timeline->engine[RCS];
> +
> +	count = 0;
> +	kt = -ktime_get();
> +	end_time = jiffies + HZ/10;
> +	do {
> +		u64 id = prandom_u64_state(&prng);
> +
> +		intel_timeline_sync_set(tl, id, 0);
> +		count++;
> +	} while (!time_after(jiffies, end_time));
> +	kt = ktime_add(ktime_get(), kt);

Why not ktime_sub? I don't know if ktime_t is signed or not.

> +
> +	pr_info("%s: %lu random insertions, %lluns/insert\n",
> +		__func__, count, (long long)div64_ul(ktime_to_ns(kt), count));
> +
> +	prandom_seed_state(&prng, i915_selftest.random_seed);
> +
> +	end_time = count;
> +	kt = -ktime_get();
> +	while (end_time--) {

This is a new pattern for me - why not simply go by time in every test? 
You have to be sure lookups are not a bazillion times faster than 
insertions like this.

> +		u64 id = prandom_u64_state(&prng);
> +
> +		if (!intel_timeline_sync_is_later(tl, id, 0)) {
> +			pr_err("Lookup of %llu failed\n", id);
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +	}
> +	kt = ktime_add(ktime_get(), kt);
> +
> +	pr_info("%s: %lu random lookups, %lluns/lookup\n",
> +		__func__, count, (long long)div64_ul(ktime_to_ns(kt), count));
> +
> +	prandom_seed_state(&prng, i915_selftest.random_seed);
> +	tl = &timeline->engine[BCS];
> +
> +	count = 0;
> +	kt = -ktime_get();
> +	end_time = jiffies + HZ/10;
> +	do {
> +		u32 id = random_engine(&prng);
> +		u32 seqno = prandom_u32_state(&prng);
> +
> +		if (!intel_timeline_sync_is_later(tl, id, seqno))
> +			intel_timeline_sync_set(tl, id, seqno);
> +
> +		count++;
> +	} while (!time_after(jiffies, end_time));
> +	kt = ktime_add(ktime_get(), kt);
> +
> +	pr_info("%s: %lu repeated insert/lookups, %lluns/op\n",
> +		__func__, count, (long long)div64_ul(ktime_to_ns(kt), count));
> +
> +	ret = 0;
> +out:
> +	mock_timeline_destroy(timeline);
> +	return ret;
> +}
> +
> +int i915_gem_timeline_mock_selftests(void)
> +{
> +	static const struct i915_subtest tests[] = {
> +		SUBTEST(igt_sync),
> +		SUBTEST(bench_sync),
> +	};
> +	struct drm_i915_private *i915;
> +	int err;
> +
> +	i915 = mock_gem_device();
> +	if (!i915)
> +		return -ENOMEM;
> +
> +	err = i915_subtests(tests, i915);
> +	drm_dev_unref(&i915->drm);
> +
> +	return err;
> +}
> diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> index be9a9ebf5692..8d0f50c25df8 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> +++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
> @@ -12,6 +12,7 @@ selftest(sanitycheck, i915_mock_sanitycheck) /* keep first (igt selfcheck) */
>  selftest(scatterlist, scatterlist_mock_selftests)
>  selftest(uncore, intel_uncore_mock_selftests)
>  selftest(breadcrumbs, intel_breadcrumbs_mock_selftests)
> +selftest(timelines, i915_gem_timeline_mock_selftests)
>  selftest(requests, i915_gem_request_mock_selftests)
>  selftest(objects, i915_gem_object_mock_selftests)
>  selftest(dmabuf, i915_gem_dmabuf_mock_selftests)
> diff --git a/drivers/gpu/drm/i915/selftests/mock_timeline.c b/drivers/gpu/drm/i915/selftests/mock_timeline.c
> new file mode 100644
> index 000000000000..e8d62f5f6ed3
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/mock_timeline.c
> @@ -0,0 +1,52 @@
> +/*
> + * Copyright © 2017 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#include "mock_timeline.h"
> +
> +struct i915_gem_timeline *
> +mock_timeline(struct drm_i915_private *i915)
> +{
> +	struct i915_gem_timeline *timeline;
> +
> +	timeline = kzalloc(sizeof(*timeline), GFP_KERNEL);
> +	if (!timeline)
> +		return NULL;
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	i915_gem_timeline_init(i915, timeline, "mock");
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	return timeline;
> +}
> +
> +void mock_timeline_destroy(struct i915_gem_timeline *timeline)
> +{
> +	struct drm_i915_private *i915 = timeline->i915;
> +
> +	mutex_lock(&i915->drm.struct_mutex);
> +	i915_gem_timeline_fini(timeline);
> +	mutex_unlock(&i915->drm.struct_mutex);
> +
> +	kfree(timeline);
> +}
> diff --git a/drivers/gpu/drm/i915/selftests/mock_timeline.h b/drivers/gpu/drm/i915/selftests/mock_timeline.h
> new file mode 100644
> index 000000000000..b33dcd2151ef
> --- /dev/null
> +++ b/drivers/gpu/drm/i915/selftests/mock_timeline.h
> @@ -0,0 +1,33 @@
> +/*
> + * Copyright © 2017 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __MOCK_TIMELINE__
> +#define __MOCK_TIMELINE__
> +
> +#include "../i915_gem_timeline.h"
> +
> +struct i915_gem_timeline *mock_timeline(struct drm_i915_private *i915);
> +void mock_timeline_destroy(struct i915_gem_timeline *timeline);
> +
> +#endif /* !__MOCK_TIMELINE__ */
>

I'll have another pass tomorrow. Hopefully with some helpful replies on 
my question I will be able to digest it.

Regards,

Tvrtko
