<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
Unfortunately, as I pointed out to Daniel as well, this won't work
100% reliably either.<br>
<br>
See, the signal on the ring buffer needs to be protected from
manipulation by userspace so that we can guarantee that the
hardware has really finished executing when it fires.<br>
<br>
Protecting memory by immediate page table updates is a good first
step, but unfortunately not sufficient (and we would need to
restructure large parts of the driver to make this happen).<br>
<br>
On older hardware we often had the situation that, for reliable
invalidation, we needed the guarantee that every previous operation
had finished executing. It's not so much of a problem when the next
operation has already started, since then we had the opportunity to
do things in between the last and the next operation. See cache
invalidation and VM switching, for example.<br>
<br>
In addition to that, it doesn't really buy us anything: writing the
ring buffer in userspace and then ringing the doorbell in the kernel
has the same overhead as doing everything in the kernel in the first
place.<br>
<br>
Christian.<br>
<br>
<div class="moz-cite-prefix">Am 04.05.21 um 05:11 schrieb Marek
Olšák:<br>
</div>
<blockquote type="cite"
cite="mid:CAAxE2A6NCTFsV6oH=AL=S=P1p0xYF0To8T_THpUO2ypdo0dyBw@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Proposal for a new CS ioctl, kernel pseudocode:</div>
<div><br>
</div>
<div>lock(&global_lock);</div>
<div>serial = get_next_serial(dev);</div>
<div>add_wait_command(ring, serial - 1);</div>
<div>add_exec_cmdbuf(ring, user_cmdbuf);</div>
<div>add_signal_command(ring, serial);</div>
<div>*ring->doorbell = FIRE;<br>
</div>
<div>unlock(&global_lock);</div>
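<div><br>
</div>
<div>Fleshed out slightly as C, just to make the locking explicit (a
sketch only; "ring", FIRE and the helpers are the same made-up names
as above):</div>
<div><br>
</div>
<pre>static DEFINE_MUTEX(global_lock);

static int cs_submit_ioctl(struct device *dev, struct ring *ring,
                           void __user *user_cmdbuf)
{
        u64 serial;

        mutex_lock(&global_lock);           /* no concurrency/preemption */
        serial = get_next_serial(dev);      /* monotonic per-device serial */
        add_wait_command(ring, serial - 1); /* wait for previous submission */
        add_exec_cmdbuf(ring, user_cmdbuf); /* run the user command buffer */
        add_signal_command(ring, serial);   /* then signal this serial */
        *ring->doorbell = FIRE;             /* kick the hw scheduler */
        mutex_unlock(&global_lock);
        return 0;
}</pre>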
<div><br>
</div>
<div>See? Just like userspace submit, but in the kernel without
concurrency/preemption. Is this now safe enough for dma_fence?<br>
</div>
<div><br>
</div>
<div>Marek<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, May 3, 2021 at 4:36 PM
Marek Olšák <<a href="mailto:maraeo@gmail.com"
moz-do-not-send="true">maraeo@gmail.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div>What about direct submit from the kernel where the
process still has write access to the GPU ring buffer but
doesn't use it? I think that solves your preemption
example, but leaves a potential backdoor for a process to
overwrite the signal commands, which shouldn't be a
problem since we are OK with timeouts.<br>
</div>
<div><br>
</div>
<div>Marek<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, May 3, 2021 at
11:23 AM Jason Ekstrand <<a
href="mailto:jason@jlekstrand.net" target="_blank"
moz-do-not-send="true">jason@jlekstrand.net</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">On Mon, May 3, 2021 at
10:16 AM Bas Nieuwenhuizen<br>
<<a href="mailto:bas@basnieuwenhuizen.nl"
target="_blank" moz-do-not-send="true">bas@basnieuwenhuizen.nl</a>>
wrote:<br>
><br>
> On Mon, May 3, 2021 at 5:00 PM Jason Ekstrand <<a
href="mailto:jason@jlekstrand.net" target="_blank"
moz-do-not-send="true">jason@jlekstrand.net</a>>
wrote:<br>
> ><br>
> > Sorry for the top-post but there's no good thing
to reply to here...<br>
> ><br>
> > One of the things pointed out to me recently by
Daniel Vetter that I<br>
> > didn't fully understand before is that dma_fence
has a very subtle<br>
> > second requirement beyond finite time
completion: Nothing required<br>
> > for signaling a dma-fence can allocate memory.
Why? Because the act<br>
> > of allocating memory may wait on your
dma-fence. This, as it turns<br>
> > out, is a massively more strict requirement than
finite time<br>
> > completion and, I think, throws out all of the
proposals we have so<br>
> > far.<br>
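> ><br>
> > Spelled out as code, the cycle is roughly this (a hypothetical<br>
> > sketch, not any real driver's signaling path):<br>
> ><br>
> >     /* path that must eventually signal job->fence */<br>
> >     static void signal_job_fence(struct job *job)<br>
> >     {<br>
> >             /* GFP_KERNEL may enter direct reclaim... */<br>
> >             struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);<br>
> ><br>
> >             /* ...and reclaim can evict buffers, which may<br>
> >              * dma_fence_wait() on pending fences -- possibly<br>
> >              * on job->fence itself, in which case we never<br>
> >              * get here: */<br>
> >             dma_fence_signal(job->fence);<br>
> >     }<br>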
> ><br>
> > Take, for instance, Marek's proposal for
userspace involvement with<br>
> > dma-fence by asking the kernel for a next serial
and the kernel<br>
> > trusting userspace to signal it. That doesn't
work at all if<br>
> > allocating memory to trigger a dma-fence can
blow up. There's simply<br>
> > no way for the kernel to trust userspace to not
do ANYTHING which<br>
> > might allocate memory. I don't even think
there's a way userspace can<br>
> > trust itself there. It also blows up my plan of
moving the fences to<br>
> > transition boundaries.<br>
> ><br>
> > Not sure where that leaves us.<br>
><br>
> Honestly the more I look at things I think
userspace-signalable fences<br>
> with a timeout sound like they are a valid solution
for these issues.<br>
> Especially since (as has been mentioned countless
times in this email<br>
> thread) userspace already has a lot of ways to cause timeouts
and/or<br>
> GPU hangs through the GPU work it submits.<br>
><br>
> Adding a timeout on the signaling side of a dma_fence
would ensure:<br>
><br>
> - The dma_fence signals in finite time<br>
> - If the timeout case does not allocate memory then
memory allocation<br>
> is not a blocker for signaling.<br>
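> <br>
> A hypothetical sketch of such a signaling-side timeout (the type,<br>
> field names and FENCE_TIMEOUT are made up):<br>
> <br>
>     static void fence_timeout_cb(struct timer_list *t)<br>
>     {<br>
>             struct tf *f = from_timer(f, t, timer);<br>
> <br>
>             /* force-signal so the fence completes in finite time;<br>
>              * note that this path allocates no memory */<br>
>             dma_fence_signal(&f->base);<br>
>     }<br>
> <br>
>     /* armed when the fence is created:<br>
>      * timer_setup(&f->timer, fence_timeout_cb, 0);<br>
>      * mod_timer(&f->timer, jiffies + FENCE_TIMEOUT); */<br>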
><br>
> Of course you lose the full dependency graph and we
need to make sure<br>
> garbage collection of fences works correctly when we
have cycles.<br>
> However, the latter sounds very doable and the former sounds like
it is<br>
> to some extent inevitable.<br>
><br>
> I feel like I'm missing some requirement here given
that we<br>
> immediately went to much more complicated things but
can't find it.<br>
> Thoughts?<br>
<br>
Timeouts are sufficient to protect the kernel but they
make the fences<br>
unpredictable and unreliable from a userspace PoV. One of
the big<br>
problems we face is that, once we expose a dma_fence to
userspace,<br>
we've allowed for some pretty crazy potential dependencies
that<br>
neither userspace nor the kernel can sort out. Say you
have marek's<br>
"next serial, please" proposal and a multi-threaded
application.<br>
Between the time you ask the kernel for a serial and get
a dma_fence<br>
and submit the work to signal that serial, your process
may get<br>
preempted, something else gets shoved in which allocates
memory, and then<br>
we end up blocking on that dma_fence. There's no way
userspace can<br>
predict and defend itself from that.<br>
<br>
So I think where that leaves us is that there is no safe
place to<br>
create a dma_fence except for inside the ioctl which
submits the work<br>
and only after any necessary memory has been allocated.
That's a<br>
pretty stiff requirement. We may still be able to
interact with<br>
userspace a bit more explicitly but I think it throws any
notion of<br>
userspace direct submit out the window.<br>
<br>
--Jason<br>
<br>
<br>
> - Bas<br>
> ><br>
> > --Jason<br>
> ><br>
> > On Mon, May 3, 2021 at 9:42 AM Alex Deucher <<a
href="mailto:alexdeucher@gmail.com" target="_blank"
moz-do-not-send="true">alexdeucher@gmail.com</a>>
wrote:<br>
> > ><br>
> > > On Sat, May 1, 2021 at 6:27 PM Marek Olšák
<<a href="mailto:maraeo@gmail.com" target="_blank"
moz-do-not-send="true">maraeo@gmail.com</a>> wrote:<br>
> > > ><br>
> > > > On Wed, Apr 28, 2021 at 5:07 AM Michel
Dänzer <<a href="mailto:michel@daenzer.net"
target="_blank" moz-do-not-send="true">michel@daenzer.net</a>>
wrote:<br>
> > > >><br>
> > > >> On 2021-04-28 8:59 a.m., Christian
König wrote:<br>
> > > >> > Hi Dave,<br>
> > > >> ><br>
> > > >> > Am 27.04.21 um 21:23 schrieb
Marek Olšák:<br>
> > > >> >> Supporting interop with
any device is always possible. It's a question of which drivers
we need to interoperate with and of updating them. We've
already found the path forward for amdgpu. We just need to
find out how many other drivers need to be updated and
evaluate the cost/benefit aspect.<br>
> > > >> >><br>
> > > >> >> Marek<br>
> > > >> >><br>
> > > >> >> On Tue, Apr 27, 2021 at
2:38 PM Dave Airlie <<a href="mailto:airlied@gmail.com"
target="_blank" moz-do-not-send="true">airlied@gmail.com</a>
<mailto:<a href="mailto:airlied@gmail.com"
target="_blank" moz-do-not-send="true">airlied@gmail.com</a>>>
wrote:<br>
> > > >> >><br>
> > > >> >> On Tue, 27 Apr 2021
at 22:06, Christian König<br>
> > > >> >> <<a
href="mailto:ckoenig.leichtzumerken@gmail.com"
target="_blank" moz-do-not-send="true">ckoenig.leichtzumerken@gmail.com</a>
<mailto:<a
href="mailto:ckoenig.leichtzumerken@gmail.com"
target="_blank" moz-do-not-send="true">ckoenig.leichtzumerken@gmail.com</a>>>
wrote:<br>
> > > >> >> ><br>
> > > >> >> > Correct, we
wouldn't have synchronization between devices with and
without user queues any more.<br>
> > > >> >> ><br>
> > > >> >> > That could only
be a problem for A+I Laptops.<br>
> > > >> >><br>
> > > >> >> Since I think you
mentioned you'd only be enabling this on newer<br>
> > > >> >> chipsets, won't it be
a problem for A+A where one A is a generation<br>
> > > >> >> behind the other?<br>
> > > >> >><br>
> > > >> ><br>
> > > >> > Crap, that is a good point as
well.<br>
> > > >> ><br>
> > > >> >><br>
> > > >> >> I'm not really liking
where this is going, btw; it seems like an<br>
> > > >> >> ill-thought-out concept.
If AMD is really going down the road of<br>
> > > >> >> designing hw that is
currently Linux-incompatible, you are going to<br>
> > > >> >> have to accept a big part
of the burden of bringing this support into<br>
> > > >> >> more than just amd drivers
for upcoming generations of gpu.<br>
> > > >> >><br>
> > > >> ><br>
> > > >> > Well we don't really like
that either, but we have no other option as far as I can
see.<br>
> > > >><br>
> > > >> I don't really understand what
"future hw may remove support for kernel queues" means
exactly. While the per-context queues can be mapped to
userspace directly, they don't *have* to be, do they? I.e.
the kernel driver should be able to either intercept
userspace access to the queues, or in the worst case do it
all itself, and provide the existing synchronization
semantics as needed?<br>
> > > >><br>
> > > >> Surely there are resource limits
for the per-context queues, so the kernel driver needs to
do some kind of virtualization / multiplexing anyway, or
we'll get sad user faces when there's no queue available
for <current hot game>.<br>
> > > >><br>
> > > >> I'm probably missing something
though, awaiting enlightenment. :)<br>
> > > ><br>
> > > ><br>
> > > > The hw interface for userspace is that
the ring buffer is mapped to the process address space
alongside a doorbell aperture (4K page) that isn't real
memory, but when the CPU writes into it, it tells the hw
scheduler that there are new GPU commands in the ring
buffer. Userspace inserts all the wait, draw, and signal
commands into the ring buffer and then "rings" the
doorbell. It's my understanding that the ring buffer and
the doorbell are always mapped in the same GPU address
space as the process, which makes it very difficult to
emulate the current protected ring buffers in the kernel.
The VMID of the ring buffer is also not changeable.<br>
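> > > ><br>
> > > > As an illustrative userspace sketch (the command encodings<br>
> > > > and names are invented here):<br>
> > > ><br>
> > > >     ring[head++] = WAIT_CMD(serial - 1);  /* wait commands */<br>
> > > >     ring[head++] = DRAW_CMD(ib_va);       /* draw commands */<br>
> > > >     ring[head++] = SIGNAL_CMD(serial);    /* signal commands */<br>
> > > >     *doorbell = head;  /* CPU write to the doorbell page tells<br>
> > > >                           the hw scheduler there is new work */<br>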
> > > ><br>
> > ><br>
> > > The doorbell does not have to be mapped
into the process's GPU virtual<br>
> > > address space. The CPU could write to it
directly. Mapping it into the<br>
> > > GPU's virtual address space, however, would allow
a device rather than<br>
> > > the CPU to kick off work.
E.g., the GPU could kick off<br>
> > > its own work, or multiple devices could
kick off work without CPU<br>
> > > involvement.<br>
> > ><br>
> > > Alex<br>
> > ><br>
> > ><br>
> > > > The hw scheduler doesn't do any
synchronization and it doesn't see any dependencies. It
only chooses which queue to execute, so it's really just a
simple queue manager handling the virtualization aspect
and not much else.<br>
> > > ><br>
> > > > Marek<br>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>