[PATCH 18/24] dma-buf: add enum dma_resv_usage v3
Christian König
ckoenig.leichtzumerken at gmail.com
Thu Mar 3 13:13:15 UTC 2022
Am 02.03.22 um 18:55 schrieb Jason Ekstrand:
>
> On Wed, Dec 22, 2021 at 4:00 PM Daniel Vetter <daniel at ffwll.ch> wrote:
>
> On Tue, Dec 07, 2021 at 01:34:05PM +0100, Christian König wrote:
> > This change adds the dma_resv_usage enum and allows us to specify
> > why a dma_resv object is queried for its containing fences.
> >
> > In addition to that, a dma_resv_usage_rw() helper function is added
> > to aid retrieving the fences for a read or write userspace submission.
> >
> > This is then deployed to the different query functions of the dma_resv
> > object and all of their users. Where the write parameter was previously
> > true we now use DMA_RESV_USAGE_WRITE, and DMA_RESV_USAGE_READ otherwise.
> >
> > v2: add KERNEL/OTHER in separate patch
> > v3: some kerneldoc suggestions by Daniel
> >
> > Signed-off-by: Christian König <christian.koenig at amd.com>
>
> Just commenting on the kerneldoc here.
>
> > diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> > index ecb2ff606bac..33a17db89fb4 100644
> > --- a/drivers/dma-buf/dma-resv.c
> > +++ b/drivers/dma-buf/dma-resv.c
> > @@ -408,7 +408,7 @@ static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor)
> > cursor->seq = read_seqcount_begin(&cursor->obj->seq);
> > cursor->index = -1;
> > cursor->shared_count = 0;
> > - if (cursor->all_fences) {
> > + if (cursor->usage >= DMA_RESV_USAGE_READ) {
>
>
> If we're going to do this....
>
> > cursor->fences = dma_resv_shared_list(cursor->obj);
> > if (cursor->fences)
> > cursor->shared_count = cursor->fences->shared_count;
>
>
> > diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> > index 40ac9d486f8f..d96d8ca9af56 100644
> > --- a/include/linux/dma-resv.h
> > +++ b/include/linux/dma-resv.h
> > @@ -49,6 +49,49 @@ extern struct ww_class reservation_ww_class;
> >
> > struct dma_resv_list;
> >
> > +/**
> > + * enum dma_resv_usage - how the fences from a dma_resv obj are used
> > + *
>
>
> We probably want a note in here about the ordering of this enum. I'm
> not even sure that comparing enum values is good or that all values
> will have a strict ordering that can be useful. It would definitely
> make me nervous if anything outside dma-resv.c is doing comparisons on
> these values.
Exactly that is added in the follow-up patch which adds
DMA_RESV_USAGE_KERNEL.
I originally just had everything in one single patch.
Christian.
>
> --Jason
>
> > + * This enum describes the different use cases for a dma_resv object and
> > + * controls which fences are returned when queried.
>
> We need to link here to both dma_buf.resv and from there to here.
>
> Also we had a fair amount of text in the old dma_resv fields which
> should probably be included here.
>
> > + */
> > +enum dma_resv_usage {
> > + /**
> > + * @DMA_RESV_USAGE_WRITE: Implicit write synchronization.
> > + *
> > + * This should only be used for userspace command submissions
> > + * which add an implicit write dependency.
> > + */
> > + DMA_RESV_USAGE_WRITE,
> > +
> > + /**
> > + * @DMA_RESV_USAGE_READ: Implicit read synchronization.
> > + *
> > + * This should only be used for userspace command submissions
> > + * which add an implicit read dependency.
>
> I think the above would benefit from at least a link each to &dma_buf.resv
> for further discussion.
>
> Plus the READ flag needs a huge warning that in general it does _not_
> guarantee that there are no writes possible, nor that any writes can be
> assumed to be mistakes and dropped (on buffer moves e.g.).
>
> Drivers can only make further assumptions for driver-internal dma_resv
> objects (e.g. on vm/pagetables) or when the fences are all fences of the
> same driver (e.g. the special sync rules amd has that take the fence
> owner into account).
>
> We have this documented in the dma_buf.resv rules, but since it came up
> again in a discussion with Thomas H. somewhere, it's better to hammer
> this in a few more times. Specifically, ignoring READ fences for buffer
> moves (well, for the copy job; memory freeing still has to wait for all
> of them) is in general a correctness bug.
>
> Maybe include a big warning that really the difference between READ and
> WRITE should only matter for implicit sync, and _not_ for anything else
> the kernel does.
>
> I'm assuming the actual replacement is all mechanical, so I skipped
> that one for now, that's for next year :-)
> -Daniel
>
> > + */
> > + DMA_RESV_USAGE_READ,
> > +};
> > +
> > +/**
> > + * dma_resv_usage_rw - helper for implicit sync
> > + * @write: true if we create a new implicit sync write
> > + *
> > + * This returns the implicit synchronization usage for write or read
> > + * accesses, see enum dma_resv_usage.
> > + */
> > +static inline enum dma_resv_usage dma_resv_usage_rw(bool write)
> > +{
> > + /* This looks confusing at first sight, but is indeed correct.
> > + *
> > + * The rationale is that new write operations need to wait for
> > + * the existing read and write operations to finish.
> > + * But a new read operation only needs to wait for the existing
> > + * write operations to finish.
> > + */
> > + return write ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;
> > +}
> > +
> > /**
> > * struct dma_resv - a reservation object manages fences for a buffer
> > *
> > @@ -147,8 +190,8 @@ struct dma_resv_iter {
> > /** @obj: The dma_resv object we iterate over */
> > struct dma_resv *obj;
> >
> > - /** @all_fences: If all fences should be returned */
> > - bool all_fences;
> > + /** @usage: Controls which fences are returned */
> > + enum dma_resv_usage usage;
> >
> > /** @fence: the currently handled fence */
> > struct dma_fence *fence;
> > @@ -178,14 +221,14 @@ struct dma_fence *dma_resv_iter_next(struct dma_resv_iter *cursor);
> > * dma_resv_iter_begin - initialize a dma_resv_iter object
> > * @cursor: The dma_resv_iter object to initialize
> > * @obj: The dma_resv object which we want to iterate over
> > - * @all_fences: If all fences should be returned or just the exclusive one
> > + * @usage: controls which fences to include, see enum dma_resv_usage.
> > */
> > static inline void dma_resv_iter_begin(struct dma_resv_iter *cursor,
> > struct dma_resv *obj,
> > - bool all_fences)
> > + enum dma_resv_usage usage)
> > {
> > cursor->obj = obj;
> > - cursor->all_fences = all_fences;
> > + cursor->usage = usage;
> > cursor->fence = NULL;
> > }
> >
> > @@ -242,7 +285,7 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
> > * dma_resv_for_each_fence - fence iterator
> > * @cursor: a struct dma_resv_iter pointer
> > * @obj: a dma_resv object pointer
> > - * @all_fences: true if all fences should be returned
> > + * @usage: controls which fences to return
> > * @fence: the current fence
> > *
> > * Iterate over the fences in a struct dma_resv object while holding the
> > @@ -251,8 +294,8 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
> > * valid as long as the lock is held and so no extra reference to the fence is
> > * taken.
> > */
> > -#define dma_resv_for_each_fence(cursor, obj, all_fences, fence) \
> > - for (dma_resv_iter_begin(cursor, obj, all_fences), \
> > +#define dma_resv_for_each_fence(cursor, obj, usage, fence) \
> > + for (dma_resv_iter_begin(cursor, obj, usage), \
> > fence = dma_resv_iter_first(cursor); fence; \
> > fence = dma_resv_iter_next(cursor))
> >
> > @@ -419,14 +462,14 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> > void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
> > struct dma_fence *fence);
> > void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> > -int dma_resv_get_fences(struct dma_resv *obj, bool write,
> > +int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage,
> > unsigned int *num_fences, struct dma_fence ***fences);
> > -int dma_resv_get_singleton(struct dma_resv *obj, bool write,
> > +int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage,
> > struct dma_fence **fence);
> > int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> > -long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
> > - unsigned long timeout);
> > -bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all);
> > +long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
> > + bool intr, unsigned long timeout);
> > +bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage);
> > void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq);
> >
> > #endif /* _LINUX_RESERVATION_H */
> > --
> > 2.25.1
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
>