[Intel-gfx] [PATCH 09/14] drm/i915/gem: Teach execbuf how to wait on future syncobj

Chris Wilson chris at chris-wilson.co.uk
Wed May 6 07:48:41 UTC 2020


Quoting Chris Wilson (2020-05-05 22:52:09)
> +bool i915_sched_node_verify_dag(struct i915_sched_node *waiter,
> +                               struct i915_sched_node *signaler)
> +{
> +       struct i915_dependency *dep, *p;
> +       struct i915_dependency stack;
> +       bool result = false;
> +       LIST_HEAD(dfs);
> +
> +       if (list_empty(&waiter->waiters_list))
> +               return true;
> +
> +       spin_lock_irq(&schedule_lock);
> +
> +       stack.signaler = signaler;
> +       list_add(&stack.dfs_link, &dfs);
> +
> +       list_for_each_entry(dep, &dfs, dfs_link) {
> +               struct i915_sched_node *node = dep->signaler;
> +
> +               if (node_signaled(node))
> +                       continue;
> +
> +               list_for_each_entry(p, &node->signalers_list, signal_link) {
> +                       if (p->signaler == waiter)
> +                               goto out;
> +
> +                       if (list_empty(&p->dfs_link))
> +                               list_add_tail(&p->dfs_link, &dfs);
> +               }
> +       }

Food for thought: with the timeline tracking we have a means to see the
latest sync points, so we would only need to compare the edges between
timelines rather than walk the whole graph.

We need to kill this global serialisation, not just here but for
rescheduling as well. But the only alternative to using dfs_link would
be a local temporary iterator, which does not yet appeal.

There must be a good way of doing concurrent iterative DAG traversals
with no memory allocations...
-Chris

