[PATCH 4/4] dma-buf: nuke reservation_object seq number

Chris Wilson chris at chris-wilson.co.uk
Wed Aug 14 17:22:53 UTC 2019


Quoting Chris Wilson (2019-08-14 18:06:18)
> Quoting Chris Wilson (2019-08-14 17:42:48)
> > Quoting Daniel Vetter (2019-08-14 16:39:08)
> > > > > > +       } while (rcu_access_pointer(obj->fence_excl) != *excl);
> > > 
> > > What if someone is real fast (like really real fast) and recycles the
> > > exclusive fence so you read the same pointer twice, but everything else
> > > changed? reused fence pointer is a lot more likely than seqlock wrapping
> > > around.
> > 
> > It's an exclusive fence. If it is replaced, it must be later than all
> > the shared fences (and dependent on them directly or indirectly), and
> > so still a consistent snapshot.
> 
> An extension of that argument says we don't even need to loop,
> 
> *list = rcu_dereference(obj->fence);
> *shared_count = *list ? (*list)->shared_count : 0;
> smp_rmb();
> *excl = rcu_dereference(obj->fence_excl);
> 
> Gives a consistent snapshot. It doesn't matter whether the fence_excl is
> read before or after the shared_list -- if it's after, it's a superset of
> all the fences; if it's before, we have a correct list of shared fences
> plus the earlier fence_excl.

The problem is that the whole point of the loop is that we do need to
re-check the fences after the full memory barrier.

Thinking back to the rationale beaten out for dma_fence_get_excl_rcu_safe():

we don't have a full memory barrier here, so this snapshot cannot be used
safely in light of fence reallocation.
-Chris


More information about the dri-devel mailing list