[PATCH] dma-buf: fix reservation_object_wait_timeout_rcu to wait correctly v2
Deucher, Alexander
Alexander.Deucher at amd.com
Mon Jul 31 15:39:29 UTC 2017
> -----Original Message-----
> From: Christian König [mailto:deathsimple at vodafone.de]
> Sent: Monday, July 31, 2017 10:13 AM
> To: linux-media at vger.kernel.org; dri-devel at lists.freedesktop.org; linaro-mm-sig at lists.linaro.org; Zhou, David(ChunMing); Deucher, Alexander
> Subject: Re: [PATCH] dma-buf: fix reservation_object_wait_timeout_rcu to wait correctly v2
>
> Ping, what do you guys think of that?
Seems reasonable to me.
Reviewed-by: Alex Deucher <alexander.deucher at amd.com>
>
> Am 25.07.2017 um 15:35 schrieb Christian König:
> > From: Christian König <christian.koenig at amd.com>
> >
> > With hardware resets in mind it is possible that all shared fences are
> > signaled, but the exclusive one isn't. Fix waiting for everything in this
> > situation.
> >
> > v2: make sure we always wait for the exclusive fence
> >
> > Signed-off-by: Christian König <christian.koenig at amd.com>
> > ---
> > drivers/dma-buf/reservation.c | 33 +++++++++++++++------------------
> > 1 file changed, 15 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
> > index 393817e..9d4316d 100644
> > --- a/drivers/dma-buf/reservation.c
> > +++ b/drivers/dma-buf/reservation.c
> > @@ -373,12 +373,25 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
> >  	long ret = timeout ? timeout : 1;
> >  
> >  retry:
> > -	fence = NULL;
> >  	shared_count = 0;
> >  	seq = read_seqcount_begin(&obj->seq);
> >  	rcu_read_lock();
> >  
> > -	if (wait_all) {
> > +	fence = rcu_dereference(obj->fence_excl);
> > +	if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> > +		if (!dma_fence_get_rcu(fence))
> > +			goto unlock_retry;
> > +
> > +		if (dma_fence_is_signaled(fence)) {
> > +			dma_fence_put(fence);
> > +			fence = NULL;
> > +		}
> > +
> > +	} else {
> > +		fence = NULL;
> > +	}
> > +
> > +	if (!fence && wait_all) {
> >  		struct reservation_object_list *fobj =
> >  					rcu_dereference(obj->fence);
> >  
> > @@ -405,22 +418,6 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
> >  		}
> >  	}
> >  
> > -	if (!shared_count) {
> > -		struct dma_fence *fence_excl = rcu_dereference(obj->fence_excl);
> > -
> > -		if (fence_excl &&
> > -		    !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
> > -			      &fence_excl->flags)) {
> > -			if (!dma_fence_get_rcu(fence_excl))
> > -				goto unlock_retry;
> > -
> > -			if (dma_fence_is_signaled(fence_excl))
> > -				dma_fence_put(fence_excl);
> > -			else
> > -				fence = fence_excl;
> > -		}
> > -	}
> > -
> >  	rcu_read_unlock();
> >  	if (fence) {
> >  		if (read_seqcount_retry(&obj->seq, seq)) {
>
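
For readers skimming the thread, here is a rough, userspace-style C sketch of the wait ordering the quoted patch establishes: the exclusive fence is always examined first, and the shared fences are only walked when the exclusive fence is absent or already signaled and the caller asked to wait for all fences. The struct and function names below (sketch_fence, sketch_resv, sketch_next_unsignaled) are hypothetical stand-ins, not the kernel dma_fence/reservation API, and RCU, seqcount retries and fence reference counting are deliberately left out.

/* Hypothetical stand-ins for the kernel types; not the real dma_fence API. */
struct sketch_fence {
	int signaled;				/* non-zero once the fence has signaled */
};

struct sketch_resv {
	struct sketch_fence *excl;		/* exclusive fence, may be NULL */
	struct sketch_fence **shared;		/* array of shared fences */
	unsigned int shared_count;
};

/*
 * Return the next fence that still has to be waited on, or NULL when the
 * wait is complete.  Mirrors the post-patch ordering: the exclusive fence
 * is always checked first; shared fences are only considered once the
 * exclusive fence is signaled (or absent) and wait_all was requested.
 */
static struct sketch_fence *
sketch_next_unsignaled(const struct sketch_resv *obj, int wait_all)
{
	unsigned int i;

	if (obj->excl && !obj->excl->signaled)
		return obj->excl;

	if (!wait_all)
		return NULL;

	for (i = 0; i < obj->shared_count; ++i)
		if (!obj->shared[i]->signaled)
			return obj->shared[i];

	return NULL;
}

With the pre-patch ordering, a wait_all wait on an object with a non-empty shared fence list never looked at the exclusive fence at all, so a hardware reset that signals only the shared fences let the wait return while the exclusive fence was still pending; the exclusive-first ordering sketched above is what closes that gap.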