<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
Am 02.03.22 um 19:11 schrieb Jason Ekstrand:<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Dec 22, 2021 at 4:05
PM Daniel Vetter <<a href="mailto:daniel@ffwll.ch"
moz-do-not-send="true" class="moz-txt-link-freetext">daniel@ffwll.ch</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">On Tue, Dec 07, 2021 at
01:34:07PM +0100, Christian König wrote:<br>
> Add an usage for kernel submissions. Waiting for those<br>
> are mandatory for dynamic DMA-bufs.<br>
> <br>
> Signed-off-by: Christian König <<a
href="mailto:christian.koenig@amd.com" target="_blank"
moz-do-not-send="true" class="moz-txt-link-freetext">christian.koenig@amd.com</a>><br>
<br>
Again just skipping to the doc bikeshedding, maybe with more
cc others<br>
help with some code review too.<br>
<br>
> EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);<br>
> diff --git a/include/linux/dma-resv.h
b/include/linux/dma-resv.h<br>
> index 4f3a6abf43c4..29d799991496 100644<br>
> --- a/include/linux/dma-resv.h<br>
> +++ b/include/linux/dma-resv.h<br>
> @@ -54,8 +54,30 @@ struct dma_resv_list;<br>
> *<br>
> * This enum describes the different use cases for a
dma_resv object and<br>
> * controls which fences are returned when queried.<br>
> + *<br>
> + * An important fact is that there is the order
KERNEL<WRITE<READ and<br>
> + * when the dma_resv object is asked for fences for
one use case the fences<br>
> + * for the lower use case are returned as well.<br>
> + *<br>
> + * For example when asking for WRITE fences then the
KERNEL fences are returned<br>
> + * as well. Similar when asked for READ fences then
both WRITE and KERNEL<br>
> + * fences are returned as well.<br>
> */<br>
> enum dma_resv_usage {<br>
> + /**<br>
> + * @DMA_RESV_USAGE_KERNEL: For in kernel memory
management only.<br>
> + *<br>
> + * This should only be used for things like
copying or clearing memory<br>
> + * with a DMA hardware engine for the purpose of
kernel memory<br>
> + * management.<br>
> + *<br>
> + * Drivers *always* need to wait for those
fences before accessing the<br>
</blockquote>
<div><br>
</div>
<div>super-nit: Your whitespace is wrong here.<br>
</div>
</div>
</div>
</blockquote>
<br>
Fixed, thanks.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
s/need to/must/ to stay with usual RFC wording. It's a hard
requirement or<br>
there's a security bug somewhere.<br>
</blockquote>
<div><br>
</div>
<div>Yeah, probably. I like *must* but that's because that's
what we use in the VK spec. Do whatever's usual for kernel
docs.</div>
</div>
</div>
</blockquote>
<br>
I agree, must sounds better and is already fixed.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>Not sure where to put this comment but I feel like the
way things are framed is a bit the wrong way around.
Specifically, I don't think we should be talking about what
fences you must wait on so much as what fences you can
safely skip. In the previous model, the exclusive fence had
to be waited on at all times and the shared fences could be
skipped unless you were doing something that would result in
a new exclusive fence.</div>
</div>
</div>
</blockquote>
<br>
Well, that's exactly what we unfortunately didn't do; as Daniel
explained, some drivers just ignored the exclusive fence sometimes.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div>In this new world of "it's just a bucket of fences", we
need to be very sure the waiting is happening on the right
things. It sounds (I could be wrong) like USAGE_KERNEL is
the new exclusive fence. If so, we need to make it
virtually impossible to ignore.</div>
</div>
</div>
</blockquote>
<br>
Yes, that's exactly the goal here.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>Sorry if that's a bit of a ramble. I think what I'm
saying is this: In whatever helpers or iterators we have,
be that get_singleton or iter_begin or whatever, we need to
be sure we specify things in terms of exclusion and not
inclusion. "Give me everything except implicit sync read
fences" rather than "give me implicit sync write fences".</div>
</div>
</div>
</blockquote>
<br>
Mhm, that's exactly what I tried to avoid. The basic idea here is
that the driver and memory management components tell the framework
what use case they have, and the framework returns the appropriate
fences for that.<br>
<br>
So when the use case is, for example, mmap()ing the buffer on the
CPU without any further sync, you only get the kernel fences.<br>
<br>
When the use case is adding a CS which is an implicit read, you get
all kernel fences plus all writers (see the function
dma_resv_usage_rw).<br>
<br>
When the use case is adding a CS which is an implicit write, you get
all kernel fences as well as the other writers and the readers.<br>
<br>
And finally, when you are the memory management code which wants to
move a buffer around, you get everything.<br>
<br>
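To illustrate, here is a rough sketch of how a driver would pick the
usage for each of those cases with the interfaces as they end up in
this series. Just a sketch, not real driver code; add_cs_dependency()
is a made-up placeholder for however the driver tracks its job
dependencies:<br>
<br>
#include <linux/dma-resv.h><br>
<br>
static void example_sync(struct dma_resv *obj, struct drm_sched_job *job)<br>
{<br>
    struct dma_resv_iter cursor;<br>
    struct dma_fence *fence;<br>
<br>
    /* mmap() on the CPU without further sync: kernel fences only. */<br>
    dma_resv_wait_timeout(obj, DMA_RESV_USAGE_KERNEL, true,<br>
                          MAX_SCHEDULE_TIMEOUT);<br>
<br>
    /* CS doing an implicit read: KERNEL plus all WRITE fences. */<br>
    dma_resv_for_each_fence(&cursor, obj, dma_resv_usage_rw(false), fence)<br>
        add_cs_dependency(job, fence); /* placeholder */<br>
<br>
    /* CS doing an implicit write: KERNEL, WRITE and READ fences. */<br>
    dma_resv_for_each_fence(&cursor, obj, dma_resv_usage_rw(true), fence)<br>
        add_cs_dependency(job, fence); /* placeholder */<br>
<br>
    /* Memory management moving the buffer: everything, i.e. the highest<br>
     * usage level (READ with this patch, BOOKKEEP with the follow up).<br>
     */<br>
    dma_resv_wait_timeout(obj, DMA_RESV_USAGE_READ, false,<br>
                          MAX_SCHEDULE_TIMEOUT);<br>
}<br>
<br>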
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div> If having a single, well-ordered enum is sufficient for
that, great. If we think we'll ever end up with something
other than a strict ordering, we may need to re-think a bit.</div>
</div>
</div>
</blockquote>
<br>
I actually started with a matrix which indicates when to sync with
what, but at least for now the well-ordered enum seems to get the job
done as well and is far less complex.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div>Concerning well-ordering... I'm a bit surprised to only
see three values here. I expected 4:</div>
<div><br>
</div>
<div> - kernel exclusive, used for memory moves and the like</div>
<div> - kernel shared, used for "I'm using this right now,
don't yank it out from under me" which may not have any
implicit sync implications whatsoever</div>
<div> - implicit sync write</div>
<div>
<div> - implicit sync read</div>
</div>
</div>
</div>
</blockquote>
<br>
See the follow-up patch which adds DMA_RESV_USAGE_BOOKKEEP. That's
the 4th one you are missing.<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div>If we had those four, I don't think the strict ordering
works anymore. From the POV of implicit sync, they would
look at the implicit sync read/write fences and maybe not
even kernel exclusive. From the POV of some doing a BO
move, they'd look at all of them. From the POV of holding
on to memory while Vulkan is using it, you want to set a
kernel shared fence but it doesn't need to interact with
implicit sync at all. Am I missing something obvious here?</div>
</div>
</div>
</blockquote>
<br>
Yeah, sounds like you didn't look at patch 21 :)<br>
<br>
My thinking is more or less exactly the same. The only difference is
that I've put the BOOKKEEP usage after the implicit read and write
usages. This way the strict ordering can be kept, since implicit
submissions won't ask for the BOOKKEEP usage.<br>
<br>
The order is then KERNEL<WRITE<READ<BOOKKEEP. See the final
documentation here as well:<br>
<br>
* An important fact is that there is the order
KERNEL<WRITE<READ<BOOKKEEP and<br>
* when the dma_resv object is asked for fences for one use case the
fences<br>
* for the lower use cases are returned as well.<br>
*<br>
* For example, when asking for WRITE fences the KERNEL fences are
returned<br>
* as well. Similarly, when asked for READ fences both the WRITE and
KERNEL<br>
* fences are returned.<br>
<br>
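In code form the enum then roughly looks like this once the BOOKKEEP
patch is applied (a sketch, with paraphrased comments):<br>
<br>
enum dma_resv_usage {<br>
    DMA_RESV_USAGE_KERNEL,   /* kernel memory management, e.g. moves/clears */<br>
    DMA_RESV_USAGE_WRITE,    /* implicit sync writers */<br>
    DMA_RESV_USAGE_READ,     /* implicit sync readers */<br>
    DMA_RESV_USAGE_BOOKKEEP  /* not part of implicit sync at all */<br>
};<br>
<br>
/* Asking for usage N returns all fences with usage <= N. That is why the<br>
 * strict ordering still works: implicit sync only ever asks for WRITE or<br>
 * READ and therefore never sees the BOOKKEEP fences.<br>
 */<br>
<br>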
Regards,<br>
Christian.<br>
<br>
<br>
<blockquote type="cite"
cite="mid:CAOFGe96E9fopQ3ipeQWWdnjc54j_gvhb3rrEQ-cbrHW-My035A@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div><br>
</div>
<div>--Jason</div>
<div><br>
</div>
<div> <br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
> + * resource protected by the dma_resv object. The
only exception for<br>
> + * that is when the resource is known to be
locked down in place by<br>
> + * pinning it previously.<br>
<br>
Is this true? This sounds more confusing than helpful,
because afaik in<br>
general our pin interfaces do not block for any kernel
fences. dma_buf_pin<br>
doesn't do that for sure. And I don't think ttm does that
either.<br>
<br>
I think the only safe thing here is to state that it's safe
if a) the<br>
resource is pinned down and b) the callers has previously
waited for the<br>
kernel fences.<br>
<br>
I also think we should put that wait for kernel fences into
dma_buf_pin(),<br>
but that's maybe a later patch.<br>
-Daniel<br>
<br>
<br>
<br>
> + */<br>
> + DMA_RESV_USAGE_KERNEL,<br>
> +<br>
> /**<br>
> * @DMA_RESV_USAGE_WRITE: Implicit write
synchronization.<br>
> *<br>
> -- <br>
> 2.25.1<br>
> <br>
<br>
-- <br>
Daniel Vetter<br>
Software Engineer, Intel Corporation<br>
<a href="http://blog.ffwll.ch" rel="noreferrer"
target="_blank" moz-do-not-send="true"
class="moz-txt-link-freetext">http://blog.ffwll.ch</a><br>
</blockquote>
</div>
</div>
</blockquote>
<br>
</body>
</html>