<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <div class="moz-cite-prefix">On 2024-01-29 05:06, Christian König
      wrote:<br>
    </div>
    <blockquote type="cite" cite="mid:e128767a-9980-4892-a8bc-9acc206dd84e@amd.com">Am
      26.01.24 um 20:47 schrieb Philip Yang:
      <br>
      <blockquote type="cite">This is to work around a bug in function
        drm_prime_pages_to_sg if length
        <br>
        of nr_pages >= 4GB, by doing the same check for max_segment
        and then
        <br>
        calling sg_alloc_table_from_pages_segment directly instead.
        <br>
        <br>
        This issue shows up on APU because VRAM is allocated as GTT
        memory. It
        <br>
        also fixes >=4GB GTT memory mapping for mGPUs with IOMMU
        isolation mode.
        <br>
      </blockquote>
      <br>
      Well that was talked about before and rejected. If we really want
      more than 4GiB in DMA-bufs we need to fix drm_prime_pages_to_sg()
      instead.
      <br>
    </blockquote>
    <p>I sent a patch to fix drm_prime_pages_to_sg(), but it was
      rejected.</p>
    <p>This issue happens on APUs because VRAM is allocated as GTT
      memory. We only reach this path when the IOMMU is in isolation
      mode; with the IOMMU off or in passthrough (pt) mode, multiple
      GPUs share the same DMA mapping.</p>
    <p>Even after a fix is accepted in drm, we still need this patch to
      work around the issue on older kernel versions.</p>
    <p>Regards,</p>
    <p>Philip  <br>
    </p>
    <blockquote type="cite" cite="mid:e128767a-9980-4892-a8bc-9acc206dd84e@amd.com">
      <br>
      Regards,
      <br>
      Christian.
      <br>
      <br>
      <blockquote type="cite">
        <br>
        Signed-off-by: Philip Yang <a class="moz-txt-link-rfc2396E" href="mailto:Philip.Yang@amd.com"><Philip.Yang@amd.com></a>
        <br>
        ---
        <br>
          drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 50
        ++++++++++++++-------
        <br>
          1 file changed, 34 insertions(+), 16 deletions(-)
        <br>
        <br>
        diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
        b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
        <br>
        index 055ba2ea4c12..a203633fd629 100644
        <br>
        --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
        <br>
        +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
        <br>
        @@ -171,18 +171,41 @@ static struct sg_table
        *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
        <br>
              }
        <br>
                switch (bo->tbo.resource->mem_type) {
        <br>
        -    case TTM_PL_TT:
        <br>
        -        sgt = drm_prime_pages_to_sg(obj->dev,
        <br>
        -                        bo->tbo.ttm->pages,
        <br>
        -                        bo->tbo.ttm->num_pages);
        <br>
        -        if (IS_ERR(sgt))
        <br>
        -            return sgt;
        <br>
        -
        <br>
        -        if (dma_map_sgtable(attach->dev, sgt, dir,
        <br>
        -                    DMA_ATTR_SKIP_CPU_SYNC))
        <br>
        -            goto error_free;
        <br>
        -        break;
        <br>
        +    case TTM_PL_TT: {
        <br>
        +        size_t max_segment = 0;
        <br>
        +        u64 num_pages;
        <br>
        +        int err;
        <br>
        +
        <br>
        +        sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
        <br>
        +        if (!sgt)
        <br>
        +            return ERR_PTR(-ENOMEM);
        <br>
        +
        <br>
        +        if (obj->dev)
        <br>
        +            max_segment =
        dma_max_mapping_size(obj->dev->dev);
        <br>
        +        if (max_segment == 0)
        <br>
        +            max_segment = UINT_MAX;
        <br>
        +
        <br>
        +        /*
        <br>
        +         * Use u64, otherwise if length of num_pages >= 4GB
        then size
        <br>
        +         * (num_pages << PAGE_SHIFT) becomes 0
        <br>
        +         */
        <br>
        +        num_pages = bo->tbo.ttm->num_pages;
        <br>
        +        err = sg_alloc_table_from_pages_segment(sgt,
        bo->tbo.ttm->pages,
        <br>
        +                            num_pages, 0,
        <br>
        +                            num_pages << PAGE_SHIFT,
        <br>
        +                            max_segment, GFP_KERNEL);
        <br>
        +        if (err) {
        <br>
        +            kfree(sgt);
        <br>
        +            return ERR_PTR(err);
        <br>
        +        }
        <br>
          +        if (dma_map_sgtable(attach->dev, sgt, dir,
        DMA_ATTR_SKIP_CPU_SYNC)) {
        <br>
        +            sg_free_table(sgt);
        <br>
        +            kfree(sgt);
        <br>
        +            return ERR_PTR(-EBUSY);
        <br>
        +        }
        <br>
        +        break;
        <br>
        +    }
        <br>
              case TTM_PL_VRAM:
        <br>
                  r = amdgpu_vram_mgr_alloc_sgt(adev,
        bo->tbo.resource, 0,
        <br>
                                    bo->tbo.base.size,
        attach->dev,
        <br>
        @@ -195,11 +218,6 @@ static struct sg_table
        *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
        <br>
              }
        <br>
                return sgt;
        <br>
        -
        <br>
        -error_free:
        <br>
        -    sg_free_table(sgt);
        <br>
        -    kfree(sgt);
        <br>
        -    return ERR_PTR(-EBUSY);
        <br>
          }
        <br>
            /**
        <br>
      </blockquote>
      <br>
    </blockquote>
  </body>
</html>