<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <p><br>
    </p>
    <div class="moz-cite-prefix">On 2023-09-26 16:43, Chen, Xiaogang
      wrote:<br>
    </div>
    <blockquote type="cite" cite="mid:1d6af500-2a17-0f95-3c86-024cdded0fa9@amd.com">
      <br>
      On 9/22/2023 4:37 PM, Philip Yang wrote:
      <br>
      <blockquote type="cite">Caution: This message originated from an
        External Source. Use proper caution when opening attachments,
        clicking links, or responding.
        <br>
        <br>
        <br>
        Otherwise kfd flush tlb does nothing if the vm update fence
        callback doesn't update vm->tlb_seq, and the H/W will generate
        the retry fault again.
        <br>
        <br>
        This works now only because the retry fault keeps coming:
        recovery will update the page table again after the
        AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING timeout and then flush the
        tlb.
        <br>
      </blockquote>
      <br>
      I think what this patch does is wait for the vm->last_update fence
      in the gpu page fault retry handler. I do not know what bug it
      tries to fix: h/w will keep generating retry faults as long as the
      vm page table is not set up correctly, whether or not the kfd
      driver waits for the fence. The vm page table will eventually be
      set up.
      <br>
    </blockquote>
    <p>This issue has been there for a while; I noticed it while
      implementing the granularity bitmap_mapped flag for mGPUs, to skip
      the retry fault if the prange is already mapped on the GPU. The
      retry fault keeps coming after updating the GPU page table, because
      restore_pages -> svm_range_validate_and_map doesn't wait for the vm
      update fence before kfd_flush_tlb.<br>
    </p>
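    <p>To illustrate, a simplified sketch of the ordering the fix relies
      on (paraphrased from the svm_range_map_to_gpus loop, error handling
      trimmed): kfd_flush_tlb only flushes if vm->tlb_seq has advanced,
      and tlb_seq is only advanced from the vm update fence callback, so
      the fence must be waited on before the flush.</p>
    <pre>        r = svm_range_map_to_gpu(pdd, prange, offset, npages, readonly,
                                 prange-&gt;dma_addr[gpuidx], bo_adev,
                                 &amp;fence, flush_tlb);
        if (r)
                break;

        if (fence) {
                /* wait so the fence callback bumps vm-&gt;tlb_seq */
                r = dma_fence_wait(fence, false);
                dma_fence_put(fence);
                fence = NULL;
                if (r)
                        break;
        }

        /* with tlb_seq updated, this flush is no longer a no-op */
        kfd_flush_tlb(pdd, TLB_FLUSH_LEGACY);</pre>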
    <p>It works today because we handle the same retry fault again after
      the AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING timeout, and kfd_flush_tlb
      actually flushes on that second pass.<br>
    </p>
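    <p>Roughly, the current restore_pages behaviour is the sketch below
      (paraphrased, not the exact code): a fault that arrives within the
      pending window is dropped as a duplicate, and once the window
      expires the same fault is handled again, and that second pass gets
      the flush.</p>
    <pre>        /* skip duplicate retry faults on a range we just restored */
        if (ktime_to_us(ktime_get()) - prange-&gt;validate_timestamp &lt;
            AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING)
                goto out_unlock_range;  /* drop it, already handled */

        /* second time around: map again, tlb flush takes effect now */
        r = svm_range_validate_and_map(mm, prange, gpuidx, false, false);</pre>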
    <p>The issue only exists when sdma is used to update the GPU page
      table; there is no fence to wait for when the cpu updates the GPU
      page table.</p>
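    <p>Put differently (hypothetical helper for illustration only, not
      driver code), whether a wait is even needed depends on the vm
      update backend:</p>
    <pre>/* Only the sdma backend (amdgpu_vm_sdma_funcs) hands back a fence for
 * the page table update; the cpu backend (amdgpu_vm_cpu_funcs) writes
 * PTEs synchronously, so there is nothing to wait on and no race.
 */
static bool svm_need_fence_wait(struct amdgpu_vm *vm, struct dma_fence *fence)
{
        if (vm-&gt;use_cpu_for_update)
                return false;           /* update already completed */
        return fence != NULL;           /* sdma job may still be in flight */
}</pre>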
    <p>There are several todo items to optimize this further:<br>
    </p>
    <p>A. After updating the GPU page table, only wait for the fence and
      flush the tlb when updating an existing mapping or when vm
      params.table_freed is set (this needs an amdgpu vm interface
      change).<br>
    </p>
    <p>B. Use a sync object to wait for the mGPU update fences (see the
      sketch after this list).</p>
    <p>C. Use multiple workers to handle restore_pages.<br>
    </p>
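    <p>For item B, one possible shape (a sketch only, assuming the usual
      amdgpu_sync helpers; variables taken from the svm_range_map_to_gpus
      scope): collect each GPU's update fence into one amdgpu_sync and
      wait once after the loop, instead of a dma_fence_wait per GPU.</p>
    <pre>        struct amdgpu_sync sync;

        amdgpu_sync_create(&amp;sync);
        for_each_set_bit(gpuidx, bitmap, MAX_GPU_INSTANCE) {
                /* per-gpuidx pdd/bo_adev lookup elided */
                r = svm_range_map_to_gpu(pdd, prange, offset, npages, readonly,
                                         prange-&gt;dma_addr[gpuidx], bo_adev,
                                         &amp;fence, flush_tlb);
                if (r)
                        break;
                if (fence) {
                        r = amdgpu_sync_fence(&amp;sync, fence);
                        dma_fence_put(fence);
                        fence = NULL;
                        if (r)
                                break;
                }
        }
        if (!r)
                r = amdgpu_sync_wait(&amp;sync, false); /* one wait for all GPUs */
        amdgpu_sync_free(&amp;sync);</pre>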
    <blockquote type="cite" cite="mid:1d6af500-2a17-0f95-3c86-024cdded0fa9@amd.com">
      <br>
      There is a consequence I saw: if we wait for the vm page table
      update fence, it delays the exit of the gpu page fault handler.
      Then more h/w interrupt vectors will be sent to the sw ring,
      potentially causing ring overflow.
      <br>
    </blockquote>
    <p>The retry CAM filter, or the sw filter, drops duplicate retry
      faults to prevent sw ring overflow.</p>
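    <p>For reference, a simplified sketch of that filtering at the
      interrupt handler (modeled on the existing amdgpu_gmc_filter_faults
      path, details trimmed): duplicates are dropped before they are
      queued, so a longer stay in the fault handler does not translate
      one-to-one into sw ring entries.</p>
    <pre>        /* drop retry faults recently seen for this addr/pasid instead
         * of forwarding them to the sw ih ring again
         */
        if (amdgpu_gmc_filter_faults(adev, entry-&gt;ih, addr, entry-&gt;pasid,
                                     entry-&gt;timestamp))
                return 1;       /* filtered, nothing queued */</pre>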
    <p>Regards,</p>
    <p>Philip<br>
    </p>
    <blockquote type="cite" cite="mid:1d6af500-2a17-0f95-3c86-024cdded0fa9@amd.com">
      <br>
      Regards
      <br>
      <br>
      Xiaogang
      <br>
      <br>
      <blockquote type="cite">Remove wait parameter in
        svm_range_validate_and_map because it is
        <br>
        always called with true.
        <br>
        <br>
        Signed-off-by: Philip Yang <a class="moz-txt-link-rfc2396E" href="mailto:Philip.Yang@amd.com"><Philip.Yang@amd.com></a>
        <br>
        ---
        <br>
          drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 15 +++++++--------
        <br>
          1 file changed, 7 insertions(+), 8 deletions(-)
        <br>
        <br>
        diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
        b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
        <br>
        index 70aa882636ab..61f4de1633a8 100644
        <br>
        --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
        <br>
        +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
        <br>
        @@ -1447,7 +1447,7 @@ svm_range_map_to_gpu(struct
        kfd_process_device *pdd, struct svm_range *prange,
        <br>
          static int
        <br>
          svm_range_map_to_gpus(struct svm_range *prange, unsigned long
        offset,
        <br>
                               unsigned long npages, bool readonly,
        <br>
        -                     unsigned long *bitmap, bool wait, bool
        flush_tlb)
        <br>
        +                     unsigned long *bitmap, bool flush_tlb)
        <br>
          {
        <br>
                 struct kfd_process_device *pdd;
        <br>
                 struct amdgpu_device *bo_adev = NULL;
        <br>
        @@ -1480,8 +1480,7 @@ svm_range_map_to_gpus(struct svm_range
        *prange, unsigned long offset,
        <br>
        <br>
                         r = svm_range_map_to_gpu(pdd, prange, offset,
        npages, readonly,
        <br>
                                                 
        prange->dma_addr[gpuidx],
        <br>
        -                                        bo_adev, wait ?
        &fence : NULL,
        <br>
        -                                        flush_tlb);
        <br>
        +                                        bo_adev, &fence,
        flush_tlb);
        <br>
                         if (r)
        <br>
                                 break;
        <br>
        <br>
        @@ -1605,7 +1604,7 @@ static void *kfd_svm_page_owner(struct
        kfd_process *p, int32_t gpuidx)
        <br>
           */
        <br>
          static int svm_range_validate_and_map(struct mm_struct *mm,
        <br>
                                               struct svm_range *prange,
        int32_t gpuidx,
        <br>
        -                                     bool intr, bool wait, bool
        flush_tlb)
        <br>
        +                                     bool intr, bool flush_tlb)
        <br>
          {
        <br>
                 struct svm_validate_context *ctx;
        <br>
                 unsigned long start, end, addr;
        <br>
        @@ -1729,7 +1728,7 @@ static int
        svm_range_validate_and_map(struct mm_struct *mm,
        <br>
        <br>
                         if (!r)
        <br>
                                 r = svm_range_map_to_gpus(prange,
        offset, npages, readonly,
        <br>
        -                                                
        ctx->bitmap, wait, flush_tlb);
        <br>
        +                                                
        ctx->bitmap, flush_tlb);
        <br>
        <br>
                         if (!r && next == end)
        <br>
                                 prange->mapped_to_gpu = true;
        <br>
        @@ -1823,7 +1822,7 @@ static void svm_range_restore_work(struct
        work_struct *work)
        <br>
                         mutex_lock(&prange->migrate_mutex);
        <br>
        <br>
                         r = svm_range_validate_and_map(mm, prange,
        MAX_GPU_INSTANCE,
        <br>
        -                                              false, true,
        false);
        <br>
        +                                              false, false);
        <br>
                         if (r)
        <br>
                                 pr_debug("failed %d to map 0x%lx to
        gpus\n", r,
        <br>
                                          prange->start);
        <br>
        @@ -3064,7 +3063,7 @@ svm_range_restore_pages(struct
        amdgpu_device *adev, unsigned int pasid,
        <br>
                         }
        <br>
                 }
        <br>
        <br>
        -       r = svm_range_validate_and_map(mm, prange, gpuidx,
        false, false, false);
        <br>
        +       r = svm_range_validate_and_map(mm, prange, gpuidx,
        false, false);
        <br>
                 if (r)
        <br>
                         pr_debug("failed %d to map svms 0x%p [0x%lx
        0x%lx] to gpus\n",
        <br>
                                  r, svms, prange->start,
        prange->last);
        <br>
        @@ -3603,7 +3602,7 @@ svm_range_set_attr(struct kfd_process *p,
        struct mm_struct *mm,
        <br>
                         flush_tlb = !migrated && update_mapping
        && prange->mapped_to_gpu;
        <br>
        <br>
                         r = svm_range_validate_and_map(mm, prange,
        MAX_GPU_INSTANCE,
        <br>
        -                                              true, true,
        flush_tlb);
        <br>
        +                                              true, flush_tlb);
        <br>
                         if (r)
        <br>
                                 pr_debug("failed %d to map svm
        range\n", r);
        <br>
        <br>
        --
        <br>
        2.35.1
        <br>
        <br>
      </blockquote>
    </blockquote>
  </body>
</html>