<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <p><br>
    </p>
    <div class="moz-cite-prefix">On 2021-04-10 1:34 p.m., Christian
      König wrote:<br>
    </div>
    <blockquote type="cite" cite="mid:b6a24d3f-4fe6-c642-b478-36e386aa906d@gmail.com">Hi
      Andrey,
      <br>
      <br>
      On 2021-04-09 20:18, Andrey Grodzovsky wrote:
      <br>
      <blockquote type="cite">[SNIP]
        <br>
        <blockquote type="cite">
          <blockquote type="cite">
            <br>
            If we use a list and a flag called 'emit_allowed' under a
            lock, such that in amdgpu_fence_emit we lock the list, check
            the flag and, if true, add the new HW fence to the list and
            proceed to HW emission as normal, otherwise return with
            -ENODEV. In amdgpu_pci_remove we take the lock, set the flag
            to false, and then iterate the list and force signal them.
            Won't this prevent any new HW fence creation from then on
            from any place trying to do so?
            <br>
          </blockquote>
          <br>
          Way too much overhead. The fence processing is intentionally
          lock free to avoid cache line bouncing because the IRQ can
          move from CPU to CPU.
          <br>
          <br>
          We need something which, at the very least, doesn't affect the
          processing of fences in the interrupt handler at all.
          <br>
        </blockquote>
        <br>
        <br>
        As far as I can see in the code, amdgpu_fence_emit is only called
        from task context. Also, we can skip the list I proposed and
        just use amdgpu_fence_driver_force_completion for each ring to
        signal all created HW fences.
        <br>
      </blockquote>
      <br>
      Ah, wait a second, this gave me another idea.
      <br>
      <br>
      See amdgpu_fence_driver_force_completion():
      <br>
      <br>
      amdgpu_fence_write(ring, ring->fence_drv.sync_seq);
      <br>
      <br>
      If we change that to something like:
      <br>
      <br>
      amdgpu_fence_write(ring, ring->fence_drv.sync_seq +
      0x3FFFFFFF);
      <br>
      <br>
      Not only the currently submitted, but also the next 0x3FFFFFFF
      fences will be considered signaled.
      <br>
      <br>
      This basically solves our problem of making sure that new fences
      are also signaled, without any additional overhead whatsoever.</blockquote>
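    <p><br>
    </p>
    <p>Just to spell out the suggestion as I read it - a sketch only,
      assuming the current body of that helper:</p>
    <p>void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring)<br>
      {<br>
          /* Report being 0x3FFFFFFF seqnos ahead of what was actually<br>
           * emitted, so the fences already submitted and any fence<br>
           * emitted afterwards all compare as signaled */<br>
          amdgpu_fence_write(ring, ring->fence_drv.sync_seq + 0x3FFFFFFF);<br>
          amdgpu_fence_process(ring);<br>
      }</p>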
    <p><br>
    </p>
    <p>The problem with this is that the act of setting the sync_seq to
      some MAX value alone is not enough. You actually have to call
      amdgpu_fence_process to iterate and signal the fences currently
      stored in the ring->fence_drv.fences array, and to guarantee that
      once you are done signalling, no more HW fences will be added to
      that array. I was thinking of doing something like below:</p>
    <p>amdgpu_fence_emit()</p>
    <p>{</p>
    <p>    dma_fence_init(fence);<br>
    </p>
    <p>    srcu_read_lock(amdgpu_unplug_srcu)</p>
    <p>    if (!adev->unplug) {</p>
    <p>        seq = ++ring->fence_drv.sync_seq;<br>
              emit_fence(fence);</p>
    <p>        <b>/* We can't wait forever as the HW might be gone at
        any point */</b><b><br>
               dma_fence_wait_timeout(old_fence, 5S);</b><br>
              ring->fence_drv.fences[seq &
      ring->fence_drv.num_fences_mask] = fence;<br>
    </p>
    <p>    } else {</p>
    <p>        dma_fence_set_error(fence, -ENODEV);<br>
              dma_fence_signal(fence);<br>
    </p>
    <p>    }   <br>
    </p>
    <p>    srcu_read_unlock(amdgpu_unplug_srcu)<br>
          return fence;<br>
    </p>
    <p>}</p>
    <p>amdgpu_pci_remove <br>
    </p>
    <p>{</p>
    <p>    adev->unplug = true;<br>
          synchronize_srcu(amdgpu_unplug_srcu);</p>
    <p>    /* Past this point no more fences are submitted to the HW ring
      and hence we can safely force signal all that are currently
      there. <br>
           * Any subsequently created HW fences will be returned
      signaled with an error code right away. <br>
           */<br>
    </p>
    <p>    for_each_ring(adev)<br>
              amdgpu_fence_process(ring);</p>
    <p>    drm_dev_unplug(dev);<br>
          Stop schedulers<br>
          cancel_sync(all timers and queued works);<br>
          hw_fini<br>
          unmap_mmio<br>
    </p>
    <p>}</p>
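    <p><br>
    </p>
    <p>In more concrete C terms the remove side would look roughly like
      the sketch below - just an illustration, the adev->unplug flag and
      the amdgpu_unplug_srcu struct are the new names from the pseudo
      code above, not existing fields:</p>
    <p>DEFINE_STATIC_SRCU(amdgpu_unplug_srcu);<br>
      <br>
      static void amdgpu_pci_remove(struct pci_dev *pdev)<br>
      {<br>
          struct drm_device *dev = pci_get_drvdata(pdev);<br>
          struct amdgpu_device *adev = drm_to_adev(dev);<br>
          int i;<br>
      <br>
          /* Flip the flag first, then wait for every in-flight<br>
           * amdgpu_fence_emit() SRCU read section to finish */<br>
          adev->unplug = true;<br>
          synchronize_srcu(&amp;amdgpu_unplug_srcu);<br>
      <br>
          /* No new HW fences can be armed past this point, so force<br>
           * signal whatever is still in the per ring fence arrays */<br>
          for (i = 0; i &lt; AMDGPU_MAX_RINGS; i++) {<br>
              struct amdgpu_ring *ring = adev->rings[i];<br>
      <br>
              if (!ring || !ring->fence_drv.initialized)<br>
                  continue;<br>
              amdgpu_fence_process(ring);<br>
          }<br>
      <br>
          drm_dev_unplug(dev);<br>
          /* stop schedulers, cancel timers/queued works, hw_fini, unmap MMIO */<br>
      }</p>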
    <p><br>
    </p>
    <p>Andrey</p>
    <p><br>
    </p>
    <blockquote type="cite" cite="mid:b6a24d3f-4fe6-c642-b478-36e386aa906d@gmail.com">
      <br>
      <br>
      <blockquote type="cite">
        <br>
        <blockquote type="cite">
          <blockquote type="cite">
            <blockquote type="cite">
              <br>
              Alternatively grabbing the reset write side and stopping
              and then restarting the scheduler could work as well.
              <br>
              <br>
              Christian.
              <br>
            </blockquote>
            <br>
            <br>
            I didn't get the above, and I don't see why I need to reuse
            the GPU reset rw_lock. I rely on the SRCU unplug flag for
            unplug. Also, it's not clear to me why we are focusing on the
            scheduler threads; any code path that generates HW fences
            should be covered, so any code leading to amdgpu_fence_emit
            needs to be taken into account, such as direct IB
            submissions, VM flushes, etc.
            <br>
          </blockquote>
          <br>
          You need to work together with the reset lock anyway, because a
          hotplug could run at the same time as a reset.
          <br>
        </blockquote>
        <br>
        <br>
        Going my way, indeed I now see that I have to take the reset
        write side lock during HW fence signalling in order to protect
        against scheduler/HW fence detachment and reattachment during
        scheduler stop/restart. But if we go with your approach, then
        calling drm_dev_unplug and scoping amdgpu_job_timeout with
        drm_dev_enter/exit should be enough to prevent any concurrent
        GPU resets during unplug. In fact I already do it anyway -
<a class="moz-txt-link-freetext" href="https://nam11.safelinks.protection.outlook.com/?url=https:%2F%2Fcgit.freedesktop.org%2F~agrodzov%2Flinux%2Fcommit%2F%3Fh%3Ddrm-misc-next%26id%3Def0ea4dd29ef44d2649c5eda16c8f4869acc36b1&amp;data=04%7C01%7Candrey.grodzovsky%40amd.com%7Ceefa9c90ed8c405ec3b708d8fc46daaa%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637536728550884740%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=UiNaJE%2BH45iYmbwSDnMSKZS5z0iak0fNlbbfYqKS2Jo%3D&amp;reserved=0">https://nam11.safelinks.protection.outlook.com/?url=https:%2F%2Fcgit.freedesktop.org%2F~agrodzov%2Flinux%2Fcommit%2F%3Fh%3Ddrm-misc-next%26id%3Def0ea4dd29ef44d2649c5eda16c8f4869acc36b1&amp;data=04%7C01%7Candrey.grodzovsky%40amd.com%7Ceefa9c90ed8c405ec3b708d8fc46daaa%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637536728550884740%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=UiNaJE%2BH45iYmbwSDnMSKZS5z0iak0fNlbbfYqKS2Jo%3D&amp;reserved=0</a><br>
      </blockquote>
      <br>
      Yes, good point as well.
      <br>
      <br>
      Christian.
      <br>
      <br>
      <blockquote type="cite">
        <br>
        Andrey
        <br>
        <br>
        <br>
        <blockquote type="cite">
          <br>
          <br>
          Christian.
          <br>
          <br>
          <blockquote type="cite">
            <br>
            Andrey
            <br>
            <br>
            <br>
            <blockquote type="cite">
              <br>
              <blockquote type="cite">
                <br>
                Christian.
                <br>
                <br>
                <blockquote type="cite">
                  <br>
                  Andrey
                  <br>
                  <br>
                  <br>
                  <blockquote type="cite">
                    <br>
                    <blockquote type="cite">Andrey
                      <br>
                      <br>
                      <br>
                    </blockquote>
                    <br>
                  </blockquote>
                </blockquote>
                <br>
              </blockquote>
              <br>
            </blockquote>
          </blockquote>
          <br>
        </blockquote>
      </blockquote>
      <br>
    </blockquote>
  </body>
</html>