[Intel-gfx] [PATCH 09/31] drm/i915: Replace priolist rbtree with a skiplist
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Tue Feb 9 16:11:04 UTC 2021
On 08/02/2021 16:19, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2021-02-08 15:23:17)
>>
>> On 08/02/2021 10:52, Chris Wilson wrote:
>>> static struct list_head *
>>> lookup_priolist(struct i915_sched *se, int prio)
>>> {
>>> -	struct i915_priolist *p;
>>> -	struct rb_node **parent, *rb;
>>> -	bool first = true;
>>> +	struct i915_priolist *update[I915_PRIOLIST_HEIGHT];
>>> +	struct i915_priolist_root *const root = &se->queue;
>>> +	struct i915_priolist *pl, *tmp;
>>> +	int lvl;
>>>
>>> 	lockdep_assert_held(&se->lock);
>>> -	assert_priolists(se);
>>> -
>>> 	if (unlikely(se->no_priolist))
>>> 		prio = I915_PRIORITY_NORMAL;
>>>
>>> +	for_each_priolist(pl, root) { /* recycle any empty elements before us */
>>> +		if (pl->priority <= prio || !list_empty(&pl->requests))
>>> +			break;
>>
>> The "less-than" part of the less-or-equal condition keeps confusing me
>> as a break criterion. If the premise is cleaning up, why break on the
>> first smaller prio? Is the idea to prune all empty lists up to, but not
>> including, the lookup prio?
>
> Just parcelling up the work. If we tidy up all the unused nodes before
> us, we insert ourselves at the head of the list, and all the cheap
> checks to see if this is the first request, or to find the first
> request, are happy.
>
> It's not expected to find anything unused with the tweaks to tidy up
> empty elements as we move between i915_priolist.requests, but it seems
> sensible to keep this, as then it should just be checking the first
> i915_priolist and breaking out.
It's fine, for some reason I missed that the order is descending. Probably
thinking about deadlines already - I need to see how that works there.
But a comment indicating the order would be cool.
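Something along these lines above the walk would work for me (my wording,
from my reading that higher priority sorts first):

	/*
	 * The skiplist is sorted in descending priority order: the
	 * highest priority, i.e. the next request to execute, is
	 * always the first element, root->sentinel.next[0].
	 */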
>>> -void __i915_priolist_free(struct i915_priolist *p)
>>> +static void __remove_priolist(struct i915_sched *se, struct list_head *plist)
>>> {
>>> -	kmem_cache_free(global.slab_priorities, p);
>>> +	struct i915_priolist_root *root = &se->queue;
>>> +	struct i915_priolist *pl, *tmp;
>>> +	struct i915_priolist *old =
>>> +		container_of(plist, struct i915_priolist, requests);
>>> +	int prio = old->priority;
>>> +	int lvl;
>>> +
>>> +	lockdep_assert_held(&se->lock);
>>> +	GEM_BUG_ON(!list_empty(plist));
>>> +
>>> +	pl = &root->sentinel;
>>> +	lvl = pl->level;
>>> +	GEM_BUG_ON(lvl < 0);
>>> +
>>> +	if (prio != I915_PRIORITY_NORMAL)
>>> +		pl_push(old, &pl->requests);
>>> +
>>> +	do {
>>> +		while (tmp = pl->next[lvl], tmp->priority > prio)
>>> +			pl = tmp;
Ah okay, the walk is needed to find the predecessor at each level because
the list is singly linked - I suggest a comment here too.
Would a doubly linked list not be interesting?
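With a back pointer per level the unlink would need no predecessor walk
at all. A hypothetical sketch - prev[] does not exist in this patch, and
this ignores the sentinel level shrinking done below:

	/* unlink 'old' at every level it occupies, one step per level */
	for (lvl = 0; lvl <= old->level; lvl++) {
		old->prev[lvl]->next[lvl] = old->next[lvl];
		old->next[lvl]->prev[lvl] = old->prev[lvl];
	}

Though I suppose an extra pointer per node per level, plus maintaining it
on insert, may cost more than the occasional predecessor walk saves.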
>>> +		if (lvl <= old->level) {
>>> +			pl->next[lvl] = old->next[lvl];
>>> +			if (pl == &root->sentinel && old->next[lvl] == pl) {
>>> +				GEM_BUG_ON(pl->level != lvl);
>>> +				pl->level--;
>>> +			}
>>> +		}
>>> +	} while (--lvl >= 0);
>>> +	GEM_BUG_ON(tmp != old);
>>> +}
>
>>> +struct i915_priolist *__i915_sched_dequeue_next(struct i915_sched *se)
>>> +{
>>> +	struct i915_priolist * const s = &se->queue.sentinel;
>>> +	struct i915_priolist *pl = s->next[0];
>>> +	int lvl;
>>> +
>>> +	GEM_BUG_ON(!list_empty(&pl->requests));
>>> +	GEM_BUG_ON(pl == s);
>>> +
>>> +	/* Keep pl->next[0] valid for for_each_priolist iteration */
>>> +	if (pl->priority != I915_PRIORITY_NORMAL)
>>> +		pl_push(pl, &s->requests);
>>> +
>>> +	lvl = pl->level;
>>> +	GEM_BUG_ON(lvl < 0);
>>> +	do {
>>> +		s->next[lvl] = pl->next[lvl];
>>> +		if (pl->next[lvl] == s) {
>>> +			GEM_BUG_ON(s->level != lvl);
>>> +			s->level--;
>>> +		}
>>> +	} while (--lvl >= 0);
>>> +
>>> +	return pl->next[0];
>>> }
>>
>> If both __i915_sched_dequeue_next and __remove_priolist are removing an
>> empty list from the hierarchy, why can't they share some code?
>
> __remove_priolist does the general search and remove, whereas
> dequeue_next is trying to keep O(1) removal from the head. dequeue_next
> is meant to be called many, many more times than __remove_priolist.
Ok.
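For my own understanding, assuming for_each_priolist is roughly this (my
guess at the macro, not copied from the patch):

	#define for_each_priolist(pl, root) \
		for ((pl) = (root)->sentinel.next[0]; \
		     (pl) != &(root)->sentinel; \
		     (pl) = (pl)->next[0])

that explains both why dequeue_next must keep pl->next[0] valid and why
popping the head costs at most pl->level + 1 pointer fixups, with no
search.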
Regards,
Tvrtko