[PATCH 4/6] drm/ttm: use ttm_bo_reserve_slowpath_nolru in ttm_eu_reserve_buffers
Jerome Glisse
j.glisse at gmail.com
Wed Dec 5 17:36:44 PST 2012
On Fri, Nov 30, 2012 at 7:12 AM, Maarten Lankhorst
<m.b.lankhorst at gmail.com> wrote:
> This requires re-using the seqno: instead of spinning with a new seqno
> every time, we keep the current one, but still drop all the other
> reservations we hold. Only when we succeed do we try to take our other
> reservations back.
>
> This should increase fairness slightly.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com>
> ---
> drivers/gpu/drm/ttm/ttm_execbuf_util.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_execbuf_util.c b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> index c7d3236..c02b2b6 100644
> --- a/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_execbuf_util.c
> @@ -129,13 +129,17 @@ int ttm_eu_reserve_buffers(struct list_head *list)
> entry = list_first_entry(list, struct ttm_validate_buffer, head);
> glob = entry->bo->glob;
>
> -retry:
> spin_lock(&glob->lru_lock);
> val_seq = entry->bo->bdev->val_seq++;
>
> +retry:
After more thinking while going over the patches to send comments, I
believe this one is bad, really bad. Here is the bad case I see: we have
a list of two bos, A & B. We succeed in reserving A, setting seq_valid
to true and val_seq to our sequence number. We fail on B, take the slow
path, and unreserve A. Some other CPU reserves A, but before it writes
either seq_valid or val_seq, we wake up, reserve B, and retry reserving
A. In reserve_nolru we see A as reserved, enter the while loop, see
seq_valid set, test val_seq against our same old val_seq, get ==, and
return -EDEADLK, which is definitely not what we want to do. If we
increment val_seq on retry this case can never happen. So what exactly
does not incrementing it buy us?
I think the read and write memory barriers I was talking about on patch
1 should prevent any such thing from happening, but I need to sleep on
that and have nightmares about all the things that can go wrong.
> list_for_each_entry(entry, list, head) {
> struct ttm_buffer_object *bo = entry->bo;
>
> + /* already slowpath reserved? */
> + if (entry->reserved)
> + continue;
> +
> ret = ttm_bo_reserve_nolru(bo, true, true, true, val_seq);
> switch (ret) {
> case 0:
> @@ -157,9 +161,15 @@ retry:
> ttm_eu_backoff_reservation_locked(list);
> spin_unlock(&glob->lru_lock);
> ttm_eu_list_ref_sub(list);
> - ret = ttm_bo_wait_unreserved(bo, true);
> + ret = ttm_bo_reserve_slowpath_nolru(bo, true, val_seq);
> if (unlikely(ret != 0))
> return ret;
> + spin_lock(&glob->lru_lock);
> + entry->reserved = true;
> + if (unlikely(atomic_read(&bo->cpu_writers) > 0)) {
> + ret = -EBUSY;
> + goto err;
> + }
> goto retry;
> default:
> goto err;
> --
> 1.8.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel