<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 2021-04-12 2:23 p.m., Christian
König wrote:<br>
</div>
<blockquote type="cite" cite="mid:2894bf97-8c39-6610-c479-b089c46513e7@amd.com">
Am 12.04.21 um 20:18 schrieb Andrey Grodzovsky:<br>
<blockquote type="cite" cite="mid:ecf465a2-d4fc-1cbf-a9d5-39c3844f23bb@amd.com">
<p>On 2021-04-12 2:05 p.m., Christian König wrote:<br>
</p>
<blockquote type="cite" cite="mid:a970101f-89f1-8bdf-51d9-4a4e5e0f9e9a@amd.com"> Am
12.04.21 um 20:01 schrieb Andrey Grodzovsky:<br>
<blockquote type="cite" cite="mid:aaa2b266-f091-dd9c-e49d-5e528decfbd7@amd.com">
<p>On 2021-04-12 1:44 p.m., Christian König wrote:<br>
</p>
<blockquote type="cite" cite="mid:cd94e02c-11c8-0198-ab70-0ceee54d437b@amd.com"> <br>
Am 12.04.21 um 19:27 schrieb Andrey Grodzovsky:<br>
<blockquote type="cite" cite="mid:80713dbe-411c-d79b-34ba-b67bc3a50dc5@amd.com">
<div class="moz-cite-prefix">On 2021-04-10 1:34 p.m.,
Christian König wrote:<br>
</div>
<blockquote type="cite" cite="mid:b6a24d3f-4fe6-c642-b478-36e386aa906d@gmail.com">Hi
Andrey, <br>
<br>
Am 09.04.21 um 20:18 schrieb Andrey Grodzovsky: <br>
<blockquote type="cite">[SNIP] <br>
<blockquote type="cite">
<blockquote type="cite"> <br>
If we use a list and a flag called
'emit_allowed' under a lock, such that in
amdgpu_fence_emit we lock the list, check the
flag and, if true, add the new HW fence to the
list and proceed with HW emission as normal,
otherwise return -ENODEV; and in amdgpu_pci_remove
we take the lock, set the flag to false, and then
iterate the list and force signal it. Won't this
prevent any new HW fence creation from then on,
from any place trying to do so? <br>
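<p>Roughly like this sketch (the lock, flag and list fields are hypothetical names, just to illustrate the idea):</p>
<pre>/* In amdgpu_fence_emit(), before touching the HW (sketch only): */
spin_lock(&ring->fence_drv.emit_lock);
if (!ring->fence_drv.emit_allowed) {
    spin_unlock(&ring->fence_drv.emit_lock);
    return -ENODEV;          /* device is being unplugged */
}
list_add_tail(&fence->node, &ring->fence_drv.emitted_list);
spin_unlock(&ring->fence_drv.emit_lock);
/* ... proceed with HW emission as normal ... */</pre>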
</blockquote>
<br>
Way too much overhead. The fence processing is
intentionally lock free to avoid cache line
bouncing because the IRQ can move from CPU to CPU.
<br>
<br>
We need something which doesn't affect the
processing of fences in the interrupt handler at
all. <br>
</blockquote>
<br>
<br>
As far as I can see in the code, amdgpu_fence_emit is
only called from task context. Also, we can skip
the list I proposed and just use
amdgpu_fence_driver_force_completion for each ring
to signal all created HW fences. <br>
</blockquote>
<br>
Ah, wait a second, this gave me another idea. <br>
<br>
See amdgpu_fence_driver_force_completion(): <br>
<br>
amdgpu_fence_write(ring, ring->fence_drv.sync_seq);
<br>
<br>
If we change that to something like: <br>
<br>
amdgpu_fence_write(ring, ring->fence_drv.sync_seq +
0x3FFFFFFF); <br>
<br>
Not only the currently submitted, but also the next
0x3FFFFFFF fences will be considered signaled. <br>
<br>
This basically solves our problem of making sure that
new fences are also signaled, without any additional
overhead whatsoever.</blockquote>
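<p>For reference, a minimal sketch of that change, assuming the current shape of amdgpu_fence_driver_force_completion():</p>
<pre>void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring)
{
    /* Instead of writing just sync_seq, jump ahead so that the next
     * 0x3FFFFFFF fences also read as signaled (seqno math wraps). */
    amdgpu_fence_write(ring, ring->fence_drv.sync_seq + 0x3FFFFFFF);
    amdgpu_fence_process(ring);
}</pre>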
<p><br>
</p>
<p>Problem with this is that the act of setting
sync_seq to some MAX value alone is not enough; you
actually have to call amdgpu_fence_process to iterate
and signal the fences currently stored in the
ring->fence_drv.fences array, and to guarantee that
once you are done signalling, no more HW fences will
be added to that array. I was thinking of doing
something like below:</p>
</blockquote>
<br>
Well, we could implement the is_signaled callback once
more, but I'm not sure if that is a good idea.<br>
</blockquote>
<p><br>
</p>
<p>This indeed could save the explicit signaling I am doing
below, but I also set an error code there, which might be
helpful to propagate to users.</p>
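<p>A minimal sketch of that callback idea, assuming a per-device unplug flag as in the pseudocode below (names are hypothetical, and the seqno check is simplified):</p>
<pre>static bool amdgpu_fence_is_signaled(struct dma_fence *f)
{
    struct amdgpu_ring *ring = to_amdgpu_fence(f)->ring;

    /* Hypothetical flag, set once in amdgpu_pci_remove(). The error
     * must be set before the fence is reported as signaled. */
    if (READ_ONCE(ring->adev->unplug)) {
        dma_fence_set_error(f, -ENODEV);
        return true;
    }

    return amdgpu_fence_read(ring) >= f->seqno;
}</pre>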
<p><br>
</p>
<blockquote type="cite" cite="mid:cd94e02c-11c8-0198-ab70-0ceee54d437b@amd.com"> <br>
<blockquote type="cite" cite="mid:80713dbe-411c-d79b-34ba-b67bc3a50dc5@amd.com">
<pre>amdgpu_fence_emit()
{
    dma_fence_init(fence);

    srcu_read_lock(amdgpu_unplug_srcu);

    if (!adev->unplug) {
        seq = ++ring->fence_drv.sync_seq;
        emit_fence(fence);

        <b>/* We can't wait forever as the HW might be gone at any point */
        dma_fence_wait_timeout(old_fence, false, msecs_to_jiffies(5000));</b></pre>
</blockquote>
<br>
You can pretty much ignore this wait here. It is only there as a
last resort so that we never overwrite the ring buffers.<br>
</blockquote>
<p><br>
</p>
<p>If the device is present, how can I ignore this?</p>
</blockquote>
</blockquote>
<p><br>
</p>
<p>I think you missed my question here <br>
</p>
</blockquote>
<br>
Sorry, I thought I answered that below.<br>
<br>
See, this is just the last resort so that we don't need to worry
about ring buffer overflows during testing.<br>
<br>
We should not get here in practice, and if we do get here,
generating a deadlock might actually be the best handling.<br>
<br>
The alternative would be to call BUG().<br>
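<p>That is, the last-resort wait could be bounded and escalated, roughly like this sketch (the 5 second timeout is arbitrary):</p>
<pre>/* Bounded last-resort wait before reusing a ring buffer slot. */
long r = dma_fence_wait_timeout(old_fence, false, msecs_to_jiffies(5000));
if (r <= 0)
    BUG(); /* timed out or failed: the ring buffer would be overwritten */</pre>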
<br>
<blockquote type="cite" cite="mid:ecf465a2-d4fc-1cbf-a9d5-39c3844f23bb@amd.com">
<p> </p>
<blockquote type="cite" cite="mid:a970101f-89f1-8bdf-51d9-4a4e5e0f9e9a@amd.com">
<blockquote type="cite" cite="mid:aaa2b266-f091-dd9c-e49d-5e528decfbd7@amd.com">
<p><br>
</p>
<blockquote type="cite" cite="mid:cd94e02c-11c8-0198-ab70-0ceee54d437b@amd.com"> <br>
But it should not have a timeout as far as I can see.<br>
</blockquote>
<p><br>
</p>
<p>Without a timeout on the wait, the whole approach falls apart, as I
can't call synchronize_srcu in this scope, because once the
device is physically gone the wait here will last forever.</p>
</blockquote>
<br>
Yeah, but this is intentional. The only alternative to avoid
corruption is to wait with a timeout and call BUG() if that
triggers. That isn't much better.<br>
<br>
<blockquote type="cite" cite="mid:aaa2b266-f091-dd9c-e49d-5e528decfbd7@amd.com">
<p><br>
</p>
<blockquote type="cite" cite="mid:cd94e02c-11c8-0198-ab70-0ceee54d437b@amd.com"> <br>
<blockquote type="cite" cite="mid:80713dbe-411c-d79b-34ba-b67bc3a50dc5@amd.com">
<pre>        ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask] = fence;
    } else {
        dma_fence_set_error(fence, -ENODEV);
        dma_fence_signal(fence);
    }

    srcu_read_unlock(amdgpu_unplug_srcu);
    return fence;
}

amdgpu_pci_remove()
{
    adev->unplug = true;
    synchronize_srcu(amdgpu_unplug_srcu);</pre>
</blockquote>
<br>
Well that is just duplicating what drm_dev_unplug() should
be doing on a different level.<br>
</blockquote>
<p><br>
</p>
<p>drm_dev_unplug has a much wider scope, covering everything
in the device, including 'flushing' in-flight IOCTLs; this
deals specifically with the issue of force signalling HW
fences.</p>
</blockquote>
<br>
Yeah, but it adds the same overhead as the device srcu.<br>
<br>
Christian.<br>
</blockquote>
<p><br>
</p>
<p>So what's the right approach? How do we guarantee that when
running amdgpu_fence_driver_force_completion we will signal
all the HW fences without racing against more fences being
inserted into that array?</p>
</blockquote>
<br>
Well, I would still say the best approach would be to insert this
between the front end and the backend, and not rely on signaling
fences while holding the device srcu.<br>
</blockquote>
<p><br>
</p>
<p>My question is: even now, when we run
amdgpu_fence_driver_fini_hw->amdgpu_fence_wait_empty or
amdgpu_fence_driver_fini_hw->amdgpu_fence_driver_force_completion,
what prevents a race with another fence being emitted and
inserted into the fence array at the same time? Looks like
nothing.</p>
<p> <br>
</p>
<blockquote type="cite" cite="mid:2894bf97-8c39-6610-c479-b089c46513e7@amd.com"> <br>
BTW: Could it be that the device SRCU protects more than one
device and we deadlock because of this?<br>
</blockquote>
<p><br>
</p>
<p>I haven't actually experienced any deadlock until now, but yes,
drm_unplug_srcu is defined as static in drm_drv.c, so in the
presence of multiple devices from the same or different drivers we
are in fact dependent on all of their critical sections, I guess.</p>
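<p>For reference, this is what drm_drv.c does today: a single static SRCU instance shared by every DRM device, so the synchronize_srcu in drm_dev_unplug waits out the drm_dev_enter/drm_dev_exit sections of all devices, not just the one being unplugged:</p>
<pre>/* drivers/gpu/drm/drm_drv.c (abridged) */
DEFINE_STATIC_SRCU(drm_unplug_srcu);

bool drm_dev_enter(struct drm_device *dev, int *idx)
{
    *idx = srcu_read_lock(&drm_unplug_srcu);

    if (dev->unplugged) {
        srcu_read_unlock(&drm_unplug_srcu, *idx);
        return false;
    }

    return true;
}</pre>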
<p>Andrey</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:2894bf97-8c39-6610-c479-b089c46513e7@amd.com"> <br>
Christian.<br>
<br>
<blockquote type="cite" cite="mid:ecf465a2-d4fc-1cbf-a9d5-39c3844f23bb@amd.com">
<p>Andrey<br>
</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:a970101f-89f1-8bdf-51d9-4a4e5e0f9e9a@amd.com"> <br>
<blockquote type="cite" cite="mid:aaa2b266-f091-dd9c-e49d-5e528decfbd7@amd.com">
<p>Andrey</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:cd94e02c-11c8-0198-ab70-0ceee54d437b@amd.com"> <br>
Christian.<br>
<br>
<blockquote type="cite" cite="mid:80713dbe-411c-d79b-34ba-b67bc3a50dc5@amd.com">
<pre>    /* Past this point no more fences are submitted to the HW ring,
     * and hence we can safely call force signal on all that are
     * currently there. Any subsequently created HW fences will be
     * returned signaled with an error code right away. */
    for_each_ring(adev)
        amdgpu_fence_process(ring);

    drm_dev_unplug(dev);
    stop_schedulers();
    cancel_sync(all timers and queued works);
    hw_fini();
    unmap_mmio();
}</pre>
<p><br>
</p>
<p>Andrey</p>
<p><br>
</p>
<blockquote type="cite" cite="mid:b6a24d3f-4fe6-c642-b478-36e386aa906d@gmail.com">
<br>
<br>
<blockquote type="cite"> <br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"> <br>
Alternatively grabbing the reset write side
and stopping and then restarting the scheduler
could work as well. <br>
<br>
Christian. <br>
</blockquote>
<br>
<br>
I didn't get the above, and I don't see why I
need to reuse the GPU reset rw_lock. I rely on
the SRCU unplug flag for unplug. Also, it's not
clear to me why we are focusing on the scheduler
threads; any code path generating HW fences
should be covered, so any code leading to
amdgpu_fence_emit needs to be taken into account,
such as direct IB submissions, VM flushes, etc.
<br>
</blockquote>
<br>
You need to work together with the reset lock
anyway, because a hotplug could run at the same time
as a reset. <br>
</blockquote>
<br>
<br>
Going my way, I indeed see now that I have to
take the reset write side lock during HW fence
signalling in order to protect against scheduler/HW
fence detachment and reattachment during scheduler
stop/restart. But if we go with your approach, then
calling drm_dev_unplug and scoping
amdgpu_job_timeout with drm_dev_enter/exit should be
enough to prevent any concurrent GPU resets during
unplug. In fact I already do it anyway - <a class="moz-txt-link-freetext" href="https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=ef0ea4dd29ef44d2649c5eda16c8f4869acc36b1" moz-do-not-send="true">https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=ef0ea4dd29ef44d2649c5eda16c8f4869acc36b1</a><br>
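<p>The scoping in question looks roughly like the following sketch (the exact shape of the amdgpu timeout path may differ; this just illustrates the drm_dev_enter/drm_dev_exit bracketing):</p>
<pre>static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
{
    struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
    int idx;

    /* If the device is gone, skip the reset entirely; while we are
     * inside this section, drm_dev_unplug() cannot complete. */
    if (!drm_dev_enter(adev_to_drm(ring->adev), &idx))
        return DRM_GPU_SCHED_STAT_ENODEV;

    /* ... normal timeout handling / GPU reset ... */

    drm_dev_exit(idx);
    return DRM_GPU_SCHED_STAT_NOMINAL;
}</pre>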
</blockquote>
<br>
Yes, good point as well. <br>
<br>
Christian. <br>
<br>
<blockquote type="cite"> <br>
Andrey <br>
<br>
<br>
<blockquote type="cite"> <br>
<br>
Christian. <br>
<br>
<blockquote type="cite"> <br>
Andrey <br>
<br>
<br>
<blockquote type="cite"> <br>
<blockquote type="cite"> <br>
Christian. <br>
<br>
<blockquote type="cite"> <br>
Andrey <br>
<br>
<br>
<blockquote type="cite"> <br>
<blockquote type="cite">Andrey <br>
<br>
<br>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</body>
</html>