<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">
<blockquote type="cite">
<p>What I mean is: should we remove the
          dma_fence_add/remove_callback logic from drm_sched_job_timedout
          and instead have each driver do it between scheduler
          deactivation and reactivation?</p>
</blockquote>
<br>
Yes, exactly. That's why I already have a revert for the patch
that removes this dance from drm_sched_job_timedout again.<br>
<br>
Christian.<br>
<br>
<br>
On 26.11.18 at 20:28, Grodzovsky, Andrey wrote:<br>
</div>
<blockquote type="cite"
cite="mid:71b1c9db-68ec-d4bf-d125-1aed69769fc1@amd.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p><br>
Actually, after looking again at drm_sched_job_timedout, from
which amdgpu_device_gpu_recover will be called, I see that we
already disconnect all the pending scheduler fences from the HW
fence, including the guilty job. I also see that in
drm_sched_job_timedout the job_list_lock is released before
calling sched->ops->timedout_job and only reacquired afterwards,
so new jobs can slip into ring_mirror_list in between.
<br>
</p>
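    <p>To illustrate, a minimal sketch of the locking pattern in
      question - paraphrased rather than the verbatim kernel source;
      the container_of line is an assumption about how sched is
      recovered from the work item:</p>
    <pre wrap="">/* Simplified sketch of drm_sched_job_timedout, not the verbatim source. */
static void drm_sched_job_timedout(struct work_struct *work)
{
	struct drm_gpu_scheduler *sched =
		container_of(work, struct drm_gpu_scheduler, work_tdr.work);
	struct drm_sched_job *job;

	spin_lock(&sched->job_list_lock);
	/* ... disconnect the pending scheduler fences from the HW fence
	 * and pick the offending job from ring_mirror_list ... */
	job = list_first_entry_or_null(&sched->ring_mirror_list,
				       struct drm_sched_job, node);
	spin_unlock(&sched->job_list_lock);

	/* Lock dropped here: newly submitted jobs can be added to
	 * ring_mirror_list while the driver handles the timeout. */
	if (job)
		sched->ops->timedout_job(job);

	spin_lock(&sched->job_list_lock);	/* only reacquired here */
	/* ... re-arm the timeout, etc. ... */
	spin_unlock(&sched->job_list_lock);
}
</pre>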
    <p>I will also end up going over the ring_mirror_list twice, once
      from amdgpu_device_post_asic_reset and later from
      drm_sched_job_timedout, which might cause double fence
      processing.<br>
</p>
    <p>Isn't it more correct to only do the disconnect from the HW
      fence after the schedulers have been stopped, and to connect it
      back before we restart the schedulers (as you pointed out here
      before)?
<br>
</p>
    <p>What I mean is: should we remove the
      dma_fence_add/remove_callback logic from drm_sched_job_timedout
      and instead have each driver do it between scheduler deactivation
      and reactivation? A rough sketch of such a flow follows below.</p>
<p>Andrey<br>
</p>
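    <p>To make the proposal concrete, here is a hedged sketch of such a
      driver-side flow. The helper name driver_gpu_recover, the per-job
      cb field and the driver_fence_cb callback are illustrative
      assumptions, not an existing API; dma_fence_add/remove_callback
      and kthread_park/unpark are the real kernel helpers:</p>
    <pre wrap="">/* Hedged sketch of the proposed per-driver flow; driver_gpu_recover,
 * the s_job->cb field and driver_fence_cb are assumptions. */
static void driver_gpu_recover(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *s_job;

	/* 1. Deactivate the scheduler, e.g. by parking its thread. */
	kthread_park(sched->thread);

	/* 2. Disconnect pending scheduler fences from the HW fence. */
	spin_lock(&sched->job_list_lock);
	list_for_each_entry(s_job, &sched->ring_mirror_list, node)
		dma_fence_remove_callback(s_job->s_fence->parent,
					  &s_job->cb);
	spin_unlock(&sched->job_list_lock);

	/* 3. Do the actual ASIC reset here. */

	/* 4. Connect the scheduler fences back to the HW fences. */
	spin_lock(&sched->job_list_lock);
	list_for_each_entry(s_job, &sched->ring_mirror_list, node)
		dma_fence_add_callback(s_job->s_fence->parent,
				       &s_job->cb, driver_fence_cb);
	spin_unlock(&sched->job_list_lock);

	/* 5. Reactivate the scheduler. */
	kthread_unpark(sched->thread);
}
</pre>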
<br>
<div class="moz-cite-prefix">On 11/22/2018 02:56 PM, Grodzovsky,
Andrey wrote:<br>
</div>
<blockquote type="cite"
cite="mid:329e176f-ab36-fc79-8646-484975ebb8c3@amd.com">
<blockquote type="cite" style="color: #000000;">
<blockquote type="cite" style="color: #000000;">
<blockquote type="cite" style="color: #000000;">
<pre wrap="">Additionally, I would try to improve the pre/middle/post handling
towards checking whether we made some progress in between.
In other words, we stop all schedulers in the pre handling and
disconnect the scheduler fences from the hardware fence like I did in
the patch "drm/sched: fix timeout handling v2".
Then, before we do the actual reset in the middle handling, we check if
the offending job has completed or at least made some progress in the
meantime.
</pre>
</blockquote>
<pre wrap="">I understand how to check whether the job completed - its fence has
already signaled - but how do I test if the job made 'at least some progress'?
</pre>
</blockquote>
<pre wrap="">Good question. Maybe we can somehow query the number of primitives or
pixels processed so far from the hardware and then compare again after a moment?
</pre>
</blockquote>
<pre wrap="">I will check on this later. In the meantime I will update the code
with the proposed per-hive locking, and I will add a check whether the
guilty job completed before the ASIC reset, skipping the reset if it did.
Andrey
</pre>
</blockquote>
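  <p>For that "guilty job completed" check, something along these
    lines should work - dma_fence_is_signaled() is the real dma-fence
    helper, while the surrounding variables are assumed context:</p>
  <pre wrap="">/* Before the ASIC reset: skip it if the guilty job already finished.
 * dma_fence_is_signaled() is the real helper; job/s_fence are assumed
 * context. */
if (job && job->s_fence && job->s_fence->parent &&
    dma_fence_is_signaled(job->s_fence->parent)) {
	/* The HW fence signaled in the meantime - no reset needed. */
	return;
}
/* ... otherwise proceed with the ASIC reset ... */
</pre>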
<br>
<br>
</blockquote>
<br>
</body>
</html>