<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <br>
    <div class="moz-cite-prefix">On 2021-05-28 11:23 a.m., Christian
      König wrote:<br>
    </div>
    <blockquote type="cite" cite="mid:58d8ea74-9e74-540e-4845-55ac88746693@gmail.com">
      
      <br>
      <br>
      <div class="moz-cite-prefix">On 2021-05-27 4:05 p.m., philip
        yang wrote:<br>
      </div>
      <blockquote type="cite" cite="mid:9a71e303-f582-f658-f62c-dcda29c182d3@amd.com">
        <p><br>
        </p>
        <div class="moz-cite-prefix">On 2021-05-26 5:25 p.m., Felix
          Kuehling wrote:<br>
        </div>
        <blockquote type="cite" cite="mid:80a52ee7-0a94-0861-128e-ab23d209987e@amd.com">
          <pre class="moz-quote-pre" wrap="">On 2021-05-26 3:21 p.m., Eric Huang wrote:
</pre>
          <blockquote type="cite">
            <pre class="moz-quote-pre" wrap="">On 2021-05-25 3:16 p.m., Felix Kuehling wrote:
</pre>
            <blockquote type="cite">
              <pre class="moz-quote-pre" wrap="">Similar to a recent fix by Philip Yang 76e08b37d0aa ("drm/amdgpu: flush
TLB if valid PDE turns into PTE"), there needs to be a conditional TLB
flush after map, if any PDEs were unmapped and turned into PTEs in the
process. This is currently returned by amdgpu_vm_bo_update_mapping in
the "table_freed" parameter. This needs to be also returned by
amdgpu_vm_bo_update and reported back to KFD, so KFD can do the TLB
flush after map, if needed.
</pre>
            </blockquote>
            <pre class="moz-quote-pre" wrap="">I followed your suggestion, created another patch (attached) and
tested it. It doesn't seem to improve the latency when the memory size
is bigger than a huge page (2M), because the table_freed parameter
will always be true when the mapping is huge-page sized. I think
Philip's patch fixes the case of remapping memory from small pages to
a huge page in HMM, but it doesn't check whether the memory was
actually remapped, and unconditionally flushes TLBs when mapping a
huge page.
</pre>
          </blockquote>
          <pre class="moz-quote-pre" wrap="">That's unexpected. Turning an invalid PDE into a valid (huge) PTE should
not trigger a TLB flush.</pre>
        </blockquote>
        <p>table_freed will be true if the PDE was used by a previous
          mapping: unmapping the previous mapping clears the PTEs but
          leaves the PDE unchanged as P=0, V=1 (in memory and in the
          TLB); the huge-page mapping then turns the PDE into a PTE
          (P=1, V=1) in memory and frees the PTE page.</p>
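        <p>(The sequence above can be illustrated with a toy C model;
          the struct and helper names below are invented for
          illustration and are not the actual amdgpu code.)</p>

```c
#include <stdbool.h>

/* Toy model of one page-directory entry. P: entry maps a huge page
 * directly; V: entry is valid. Hypothetical names, not the real
 * amdgpu structures. */
struct entry {
    bool p;
    bool v;
    bool has_pte_page;  /* a PTE page is allocated under this PDE */
};

/* Unmapping clears the PTEs but leaves the PDE as P=0, V=1; the stale
 * translation may still sit in the TLB. */
static void unmap_range(struct entry *pde)
{
    pde->p = false;
    pde->v = true;
    /* has_pte_page stays true: the PTE page is not freed here */
}

/* Mapping a huge page turns the PDE into a PTE (P=1, V=1) and frees
 * the old PTE page, which is exactly when table_freed must be true. */
static bool map_huge_page(struct entry *pde)
{
    bool table_freed = pde->has_pte_page;

    pde->p = true;
    pde->v = true;
    pde->has_pte_page = false;
    return table_freed;
}
```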
      </blockquote>
      <br>
      I think there might be a little bug in your patch. See, we set
      params.table_freed to true when we call amdgpu_vm_free_pts(), but
      amdgpu_vm_free_pts() doesn't necessarily free anything.<br>
      <br>
      It can be that all subsequent page tables were never allocated
      before.<br>
      <br>
      Christian.<br>
    </blockquote>
    <br>
    After adding some prints in amdgpu_vm_update_ptes(), I see that when
    we map memory of size 2M (a huge page), the function allocates 9
    PTEs (2M == PAGE_SIZE << 9) until the check "if (frag >=
    parent_shift)" hits; then the cursor goes up one level to PDE0 and
    frees all 9 PTEs. That is why table_freed is always true when
    mapping memory bigger than 2M.<br>
    <br>
    I will add code to check whether the PDE entry is valid before
    amdgpu_vm_update_flags(), and set table_freed accordingly. That will
    precisely fix the page fault in the corner case Philip mentioned
    above.<br>
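    <br>
    A minimal sketch of that check (simplified, with assumed names; the
    real change belongs in amdgpu_vm_update_ptes() and would use the
    driver's own PDE accessors):<br>

```c
#include <stdbool.h>
#include <stdint.h>

/* Valid bit of a GPU page-table entry, as defined in amdgpu_vm.h. */
#define AMDGPU_PTE_VALID (1ULL << 0)

/* Report table_freed only when the PDE being overwritten was valid,
 * i.e. a previous mapping actually used page tables under it. A
 * freshly allocated (invalid) PDE does not require a TLB flush. */
static bool pde_was_in_use(uint64_t old_pde)
{
    return (old_pde & AMDGPU_PTE_VALID) != 0;
}
```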
    <br>
    Regards,<br>
    Eric<br>
    <br>
    <blockquote type="cite" cite="mid:58d8ea74-9e74-540e-4845-55ac88746693@gmail.com"> <br>
      <blockquote type="cite" cite="mid:9a71e303-f582-f658-f62c-dcda29c182d3@amd.com">
        <p>For example: the test maps 0x7ffe37401000, unmaps it, and
          then maps 0x7ffe3740000 as a 2MB huge page; table_freed will
          be true, meaning a TLB flush is needed after mapping the huge
          page.</p>
        <p>You can change the test so it doesn't unmap the previous
          mapping; then the 2MB huge page will get a new GPU virtual
          address. Or close KFD and open KFD again to create a new GPU
          VM.</p>
        <p>Regards,</p>
        <p>Philip<br>
        </p>
        <blockquote type="cite" cite="mid:80a52ee7-0a94-0861-128e-ab23d209987e@amd.com">
          <pre class="moz-quote-pre" wrap="">Regards,
  Felix


</pre>
          <blockquote type="cite">
            <blockquote type="cite">
              <pre class="moz-quote-pre" wrap="">kfd_flush_tlb probably needs a new parameter to determine the flush
type. The flush after map can be a "legacy" flush (type 0). The flush
after unmap must be a "heavy-weight" flush (type 2) to make sure we
don't evict cache lines into pages that we no longer own.

Finally, in the ticket I thought about possible optimizations using a
worker to minimize the impact of TLB flushes on unmap latency. That
could be a follow up commit.
</pre>
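              <p>(A sketch of what such a flush-type parameter could
                look like; the enum names below are assumptions based on
                the type numbers mentioned above, not the actual KFD
                interface.)</p>

```c
#include <stdbool.h>

/* Flush types as described above: legacy (0) after map, heavyweight
 * (2) after unmap. Names are assumptions. */
enum kfd_tlb_flush_type {
    TLB_FLUSH_LEGACY = 0,
    TLB_FLUSH_LIGHTWEIGHT = 1,
    TLB_FLUSH_HEAVYWEIGHT = 2,
};

/* Pick the flush type: after unmap we must use a heavyweight flush so
 * no cache lines are evicted into pages we no longer own. */
static enum kfd_tlb_flush_type flush_type_for(bool after_unmap)
{
    return after_unmap ? TLB_FLUSH_HEAVYWEIGHT : TLB_FLUSH_LEGACY;
}
```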
            </blockquote>
            <pre class="moz-quote-pre" wrap="">Using a worker is a good idea, but how do we guarantee it has finished
before the memory is remapped? If remapping depends on it, more
latency will be introduced into the map path.

Regards,
Eric
</pre>
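            <p>(One way to keep that guarantee cheap: the unmap path
              only marks a flush pending and kicks the worker, and the
              map path drains any pending flush before updating PTEs, so
              latency is paid only when a remap races with a recent
              unmap. A toy model, with invented names; in the kernel the
              drain step would be a flush_work() or completion on the
              flush worker.)</p>

```c
#include <stdbool.h>

/* Toy model of a deferred TLB flush; not the actual amdgpu/KFD code. */
struct toy_vm {
    bool flush_pending;
    int flushes_done;
};

/* Unmap defers the heavyweight flush instead of doing it inline. */
static void toy_unmap(struct toy_vm *vm)
{
    vm->flush_pending = true;
}

/* Map drains any pending flush first, guaranteeing the flush has
 * completed before the range is remapped. */
static void toy_map(struct toy_vm *vm)
{
    if (vm->flush_pending) {
        vm->flushes_done++;       /* the worker's flush completes here */
        vm->flush_pending = false;
    }
    /* ... then update the page tables ... */
}
```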
            <blockquote type="cite">
              <pre class="moz-quote-pre" wrap="">Regards,
   Felix


On 2021-05-25 1:53 p.m., Eric Huang wrote:
</pre>
              <blockquote type="cite">
                <pre class="moz-quote-pre" wrap="">It is to optimize memory allocation latency.

Signed-off-by: Eric Huang <a class="moz-txt-link-rfc2396E" href="mailto:jinhuieric.huang@amd.com" moz-do-not-send="true"><jinhuieric.huang@amd.com></a>

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 960913a35ee4..ab73741edb97 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -1657,20 +1657,6 @@ static int kfd_ioctl_map_memory_to_gpu(struct file *filep,
                 goto sync_memory_failed;
         }

-       /* Flush TLBs after waiting for the page table updates to complete */
-       for (i = 0; i < args->n_devices; i++) {
-               peer = kfd_device_by_id(devices_arr[i]);
-               if (WARN_ON_ONCE(!peer))
-                       continue;
-               peer_pdd = kfd_get_process_device_data(peer, p);
-               if (WARN_ON_ONCE(!peer_pdd))
-                       continue;
-               if (!amdgpu_read_lock(peer->ddev, true)) {
-                       kfd_flush_tlb(peer_pdd);
-                       amdgpu_read_unlock(peer->ddev);
-               }
-       }
-
         kfree(devices_arr);

         trace_kfd_map_memory_to_gpu_end(p,
@@ -1766,6 +1752,7 @@ static int kfd_ioctl_unmap_memory_from_gpu(struct file *filep,
                         amdgpu_read_unlock(peer->ddev);
                         goto unmap_memory_from_gpu_failed;
                 }
+               kfd_flush_tlb(peer_pdd);
                 amdgpu_read_unlock(peer->ddev);
                 args->n_success = i+1;
         }
_______________________________________________
amd-gfx mailing list
<a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org" moz-do-not-send="true">amd-gfx@lists.freedesktop.org</a>
<a class="moz-txt-link-freetext" href="https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=04%7C01%7Cjinhuieric.huang%40amd.com%7C3e83cc60e2484221222208d921ec7f94%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637578121911935770%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=jv2krgtmaNs8roI8dTB7eZuD7cjbpbN99%2F%2FJs8fbQpM%3D&reserved=0" originalsrc="https://lists.freedesktop.org/mailman/listinfo/amd-gfx" shash="huL8/ofQQTpEsyQWzaGxg/l8WSS2X6tLX2yN3DgHef5XsPBmEd4OYeab7abHDZGh78+25UM5lD6I+bw+tVq2NyLUJm30xppL14Vd74a8URO9dQKv+t2tYLEsgz5Q4bZbq6foSl2hP8SKBEGTz5pEXrUpgFRV8hbNd3zgn72jxZA=" moz-do-not-send="true">https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&amp;data=04%7C01%7Cphilip.yang%40amd.com%7C92ac3fbce9264fbcf40508d9208cc477%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637576611241705305%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=S8NSZRdXq%2B74tSSLkm2TYEVDr%2Fr%2BW%2FET7CJln7tbEQo%3D&amp;reserved=0</a>
</pre>
              </blockquote>
            </blockquote>
          </blockquote>
        </blockquote>
        <br>
      </blockquote>
      <br>
    </blockquote>
    <br>
  </body>
</html>