<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
On 02.03.22 at 18:55, Jason Ekstrand wrote:<br>
<blockquote type="cite"
cite="mid:CAOFGe97sb3YeN-9a5t8XZKEupYD0Y_sXGNxhDOm-JJBzJd4mOA@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr"><br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Dec 22, 2021 at 4:00
PM Daniel Vetter <<a href="mailto:daniel@ffwll.ch"
moz-do-not-send="true" class="moz-txt-link-freetext">daniel@ffwll.ch</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">On Tue, Dec 07, 2021 at
01:34:05PM +0100, Christian König wrote:<br>
> This change adds the dma_resv_usage enum and allows us
to specify why a<br>
> dma_resv object is queried for its containing fences.<br>
> <br>
> Additional to that a dma_resv_usage_rw() helper
function is added to aid<br>
> retrieving the fences for a read or write userspace
submission.<br>
> <br>
> This is then deployed to the different query functions
of the dma_resv<br>
> object and all of their users. When the write parameter was previously<br>
> true we now use DMA_RESV_USAGE_WRITE, and DMA_RESV_USAGE_READ otherwise.<br>
> <br>
> v2: add KERNEL/OTHER in separate patch<br>
> v3: some kerneldoc suggestions by Daniel<br>
> <br>
> Signed-off-by: Christian König <<a
href="mailto:christian.koenig@amd.com" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">christian.koenig@amd.com</a>><br>
<br>
Just commenting on the kerneldoc here. <br>
<br>
</blockquote>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
> diff --git a/drivers/dma-buf/dma-resv.c
b/drivers/dma-buf/dma-resv.c<br>
> index ecb2ff606bac..33a17db89fb4 100644<br>
> --- a/drivers/dma-buf/dma-resv.c<br>
> +++ b/drivers/dma-buf/dma-resv.c<br>
> @@ -408,7 +408,7 @@ static void
dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor)<br>
> cursor->seq =
read_seqcount_begin(&cursor->obj->seq);<br>
> cursor->index = -1;<br>
> cursor->shared_count = 0;<br>
> - if (cursor->all_fences) {<br>
> + if (cursor->usage >= DMA_RESV_USAGE_READ)
{<br>
</blockquote>
<div><br>
</div>
<div>If we're going to do this...<br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
> cursor->fences =
dma_resv_shared_list(cursor->obj);<br>
> if (cursor->fences)<br>
> cursor->shared_count =
cursor->fences->shared_count; </blockquote>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"><br>
> diff --git a/include/linux/dma-resv.h
b/include/linux/dma-resv.h<br>
> index 40ac9d486f8f..d96d8ca9af56 100644<br>
> --- a/include/linux/dma-resv.h<br>
> +++ b/include/linux/dma-resv.h<br>
> @@ -49,6 +49,49 @@ extern struct ww_class
reservation_ww_class;<br>
> <br>
> struct dma_resv_list;<br>
> <br>
> +/**<br>
> + * enum dma_resv_usage - how the fences from a
dma_resv obj are used<br>
> + *<br>
</blockquote>
<div><br>
</div>
<div>We probably want a note in here about the ordering of
this enum. I'm not even sure that comparing enum values is a
good idea, or that all values will have a strict ordering that can
be useful. It would definitely make me nervous if anything
outside dma-resv.c is doing comparisons on these values.</div>
</div>
</div>
</blockquote>
<br>
Exactly that is added in the follow-up patch, which adds
DMA_RESV_USAGE_KERNEL.<br>
<br>
Originally I just had everything in a single patch.<br>
<br>
Christian.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe97sb3YeN-9a5t8XZKEupYD0Y_sXGNxhDOm-JJBzJd4mOA@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>--Jason<br>
</div>
<div> <br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
> + * This enum describes the different use cases for a
dma_resv object and<br>
> + * controls which fences are returned when queried.<br>
<br>
We need a link from here to &dma_buf.resv and from there back to
here.<br>
<br>
Also we had a fair amount of text in the old dma_resv fields
which should<br>
probably be included here.<br>
<br>
> + */<br>
> +enum dma_resv_usage {<br>
> + /**<br>
> + * @DMA_RESV_USAGE_WRITE: Implicit write
synchronization.<br>
> + *<br>
> + * This should only be used for userspace command
submissions which add<br>
> + * an implicit write dependency.<br>
> + */<br>
> + DMA_RESV_USAGE_WRITE,<br>
> +<br>
> + /**<br>
> + * @DMA_RESV_USAGE_READ: Implicit read
synchronization.<br>
> + *<br>
> + * This should only be used for userspace command
submissions which add<br>
> + * an implicit read dependency.<br>
<br>
I think the above would benefit from at least a link each to
&dma_buf.resv<br>
for further discussion.<br>
<br>
Plus the READ flag needs a huge warning that in general it
does _not_<br>
guarantee that there are no writes possible, nor that the writes can<br>
be assumed to be mistakes and dropped (on buffer moves, e.g.).<br>
<br>
Drivers can only make further assumptions for
driver-internal dma_resv<br>
objects (e.g. on vm/pagetables) or when the fences are all
fences of the<br>
same driver (e.g. the special sync rules amd has that takes
the fence<br>
owner into account).<br>
<br>
We have this documented in the dma_buf.resv rules, but since
it came up<br>
again in a discussion with Thomas H. somewhere, it's better
to hammer this<br>
in a few more times. Specifically, ignoring READ
fences for<br>
buffer moves (well, for the copy job; memory freeing still has to
wait for all<br>
of them) is in general a correctness bug.<br>
<br>
Maybe include a big warning that really the difference
between READ and<br>
WRITE should only matter for implicit sync, and _not_ for
anything else<br>
the kernel does.<br>
<br>
I'm assuming the actual replacement is all mechanical, so I
skipped that<br>
one for now; that's for next year :-)<br>
-Daniel<br>
<br>
> + */<br>
> + DMA_RESV_USAGE_READ,<br>
> +};<br>
> +<br>
> +/**<br>
> + * dma_resv_usage_rw - helper for implicit sync<br>
> + * @write: true if we create a new implicit sync write<br>
> + *<br>
> + * This returns the implicit synchronization usage for
write or read accesses,<br>
> + * see enum dma_resv_usage.<br>
> + */<br>
> +static inline enum dma_resv_usage
dma_resv_usage_rw(bool write)<br>
> +{<br>
> + /* This looks confusing at first sight, but is
indeed correct.<br>
> + *<br>
> + * The rationale is that new write operations
need to wait for the<br>
> + * existing read and write operations to finish.<br>
> + * But a new read operation only needs to wait
for the existing write<br>
> + * operations to finish.<br>
> + */<br>
> + return write ? DMA_RESV_USAGE_READ :
DMA_RESV_USAGE_WRITE;<br>
> +}<br>
> +<br>
> /**<br>
> * struct dma_resv - a reservation object manages
fences for a buffer<br>
> *<br>
> @@ -147,8 +190,8 @@ struct dma_resv_iter {<br>
> /** @obj: The dma_resv object we iterate over */<br>
> struct dma_resv *obj;<br>
> <br>
> - /** @all_fences: If all fences should be returned
*/<br>
> - bool all_fences;<br>
> + /** @usage: Controls which fences are returned */<br>
> + enum dma_resv_usage usage;<br>
> <br>
> /** @fence: the currently handled fence */<br>
> struct dma_fence *fence;<br>
> @@ -178,14 +221,14 @@ struct dma_fence
*dma_resv_iter_next(struct dma_resv_iter *cursor);<br>
> * dma_resv_iter_begin - initialize a dma_resv_iter
object<br>
> * @cursor: The dma_resv_iter object to initialize<br>
> * @obj: The dma_resv object which we want to iterate
over<br>
> - * @all_fences: If all fences should be returned or
just the exclusive one<br>
> + * @usage: controls which fences to include, see enum
dma_resv_usage.<br>
> */<br>
> static inline void dma_resv_iter_begin(struct
dma_resv_iter *cursor,<br>
> struct dma_resv
*obj,<br>
> - bool all_fences)<br>
> + enum
dma_resv_usage usage)<br>
> {<br>
> cursor->obj = obj;<br>
> - cursor->all_fences = all_fences;<br>
> + cursor->usage = usage;<br>
> cursor->fence = NULL;<br>
> }<br>
> <br>
> @@ -242,7 +285,7 @@ static inline bool
dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)<br>
> * dma_resv_for_each_fence - fence iterator<br>
> * @cursor: a struct dma_resv_iter pointer<br>
> * @obj: a dma_resv object pointer<br>
> - * @all_fences: true if all fences should be returned<br>
> + * @usage: controls which fences to return<br>
> * @fence: the current fence<br>
> *<br>
> * Iterate over the fences in a struct dma_resv object
while holding the<br>
> @@ -251,8 +294,8 @@ static inline bool
dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)<br>
> * valid as long as the lock is held and so no extra
reference to the fence is<br>
> * taken.<br>
> */<br>
> -#define dma_resv_for_each_fence(cursor, obj,
all_fences, fence) \<br>
> - for (dma_resv_iter_begin(cursor, obj,
all_fences), \<br>
> +#define dma_resv_for_each_fence(cursor, obj, usage,
fence) \<br>
> + for (dma_resv_iter_begin(cursor, obj, usage), \<br>
> fence = dma_resv_iter_first(cursor); fence;
\<br>
> fence = dma_resv_iter_next(cursor))<br>
> <br>
> @@ -419,14 +462,14 @@ void
dma_resv_add_shared_fence(struct dma_resv *obj, struct
dma_fence *fence);<br>
> void dma_resv_replace_fences(struct dma_resv *obj,
uint64_t context,<br>
> struct dma_fence *fence);<br>
> void dma_resv_add_excl_fence(struct dma_resv *obj,
struct dma_fence *fence);<br>
> -int dma_resv_get_fences(struct dma_resv *obj, bool
write,<br>
> +int dma_resv_get_fences(struct dma_resv *obj, enum
dma_resv_usage usage,<br>
> unsigned int *num_fences, struct
dma_fence ***fences);<br>
> -int dma_resv_get_singleton(struct dma_resv *obj, bool
write,<br>
> +int dma_resv_get_singleton(struct dma_resv *obj, enum
dma_resv_usage usage,<br>
> struct dma_fence **fence);<br>
> int dma_resv_copy_fences(struct dma_resv *dst, struct
dma_resv *src);<br>
> -long dma_resv_wait_timeout(struct dma_resv *obj, bool
wait_all, bool intr,<br>
> - unsigned long timeout);<br>
> -bool dma_resv_test_signaled(struct dma_resv *obj, bool
test_all);<br>
> +long dma_resv_wait_timeout(struct dma_resv *obj, enum
dma_resv_usage usage,<br>
> + bool intr, unsigned long
timeout);<br>
> +bool dma_resv_test_signaled(struct dma_resv *obj, enum
dma_resv_usage usage);<br>
> void dma_resv_describe(struct dma_resv *obj, struct
seq_file *seq);<br>
> <br>
> #endif /* _LINUX_RESERVATION_H */<br>
> -- <br>
> 2.25.1<br>
> <br>
<br>
-- <br>
Daniel Vetter<br>
Software Engineer, Intel Corporation<br>
<a href="http://blog.ffwll.ch" rel="noreferrer"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">http://blog.ffwll.ch</a><br>
</blockquote>
</div>
</div>
</blockquote>
<br>
</body>
</html>