<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    On 05.11.24 19:35, Matthew Brost wrote:<br>
    <blockquote type="cite" cite="mid:ZyplgNBaZ93UUdxY@lstrano-desk.jf.intel.com">[SNIP]<span style="white-space: pre-wrap">
</span>
      <blockquote type="cite">
        <pre class="moz-quote-pre" wrap="">Well we spend quite some time removing single page mappings from device
drivers.

The only remaining use case of ttm_bo_kmap() with just one page is the
ttm_bo_vm_access_kmap() function and I was really hoping to make that one
TTM internal at some point.

</pre>
      </blockquote>
      <pre class="moz-quote-pre" wrap="">
This is still static, right? I suppose this does expose it to the outside
world in another place, though. I assume there is a reason we can't use
vmap in ttm_bo_vm_access?</pre>
    </blockquote>
    <br>
    Well no, the point is we don't want to.<br>
    <br>
    There is a huge push from upstream to avoid using kmap/vmap if
    possible.<br>
    <br>
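    Just to illustrate what that push means in practice (a rough sketch
    only, not part of the patch, and the helper name is made up): a short
    CPU access to a system-memory page should use a temporary local
    mapping that is dropped again right away, instead of setting up a
    persistent kmap()/vmap() of the whole object.<br>
    <br>
    <pre wrap="">#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch only: copy a few bytes to or from a single system-memory page
 * through a temporary, CPU-local mapping.  No long-lived kernel virtual
 * mapping of the whole buffer is ever created.
 */
static void copy_one_page(struct page *page, void *buf, size_t offset,
			  size_t bytes, bool write)
{
	void *ptr = kmap_local_page(page);	/* short-lived mapping */

	if (write)
		memcpy(ptr + offset, buf, bytes);
	else
		memcpy(buf, ptr + offset, bytes);

	kunmap_local(ptr);			/* dropped right after use */
}</pre>
    <br>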
    <span style="white-space: pre-wrap">
</span>
    <blockquote type="cite" cite="mid:ZyplgNBaZ93UUdxY@lstrano-desk.jf.intel.com">
      <blockquote type="cite">
        <blockquote type="cite">
          <blockquote type="cite">
            <pre class="moz-quote-pre" wrap="">You need a really good justification to bring that back.

</pre>
          </blockquote>
          <pre class="moz-quote-pre" wrap="">The use case is EuDebugger requires essentially the same functionality
as ptrace -> vm_access.
</pre>
        </blockquote>
        <pre class="moz-quote-pre" wrap="">
Then why don't you use ptrace in the first place?

</pre>
      </blockquote>
      <pre class="moz-quote-pre" wrap="">
I think the debugger speaks in terms of the GPU address space and thus
needs to access memory via the GPU VM -> BOs, userptrs.</pre>
    </blockquote>
    <br>
    Exactly that is strictly forbidden. You can't access userptrs
    through this.<br>
    <br>
    That's one of the major reasons why upstream has pushed back on
    using kmap so massively.<br>
    <br>
    Can you fully describe your use case? In other words, what exactly is
    your debugger trying to do?<br>
    <br>
    <span style="white-space: pre-wrap">
</span>
    <blockquote type="cite" cite="mid:ZyplgNBaZ93UUdxY@lstrano-desk.jf.intel.com">
      <blockquote type="cite">
        <blockquote type="cite">
          <pre class="moz-quote-pre" wrap="">TTM mapping non-contiguous VRAM doesn't work unless I'm blind. User BOs
which the EuDebugger accesses can be non-contiguous, hence the new
helper.
</pre>
        </blockquote>
        <pre class="moz-quote-pre" wrap="">
Then why don't you handle that inside the driver in the first place instead
of going through a TTM midlayer?

</pre>
      </blockquote>
      <pre class="moz-quote-pre" wrap="">
Well, common code always seems like a good idea to me. I can do this if
you insist, though.

What if I change my new helper ttm_bo_access to be based on vmap for
SYSTEM / TT? Honestly, that seems wasteful too for a temporary access
mapping.</pre>
    </blockquote>
    <br>
    Well, I think we need to take a step back. The major question is
    what your use case is, and whether that use case is valid or causes
    security concerns.<br>
    <br>
    For example, userptrs are imported anonymous pages the GPU has a DMA
    mapping for. Re-mapping them into a user address space for debugging,
    or even accessing them through the ptrace interface, is strictly
    forbidden.<br>
    <br>
    We have already had people trying to do exactly that, and it did not
    end well at all.<br>
    <br>
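    If it helps, a driver-side access path would roughly have to treat
    such objects like this (sketch only; bo_is_userptr() is a made-up
    placeholder, since TTM itself has no generic notion of userptr
    backing and the check is driver specific):<br>
    <br>
    <pre wrap="">#include <drm/ttm/ttm_bo.h>

/*
 * Sketch only: refuse imported anonymous pages before falling back to
 * the generic helper proposed in this patch.  bo_is_userptr() stands in
 * for whatever driver-specific check identifies userptr-backed objects.
 */
static int driver_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
			    void *buf, int len, int write)
{
	if (bo_is_userptr(bo))	/* never expose imported anonymous pages */
		return -EPERM;

	return ttm_bo_access(bo, offset, buf, len, write);
}</pre>
    <br>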
    Regards,<br>
    Christian.<br>
    <br>
    <blockquote type="cite" cite="mid:ZyplgNBaZ93UUdxY@lstrano-desk.jf.intel.com">
      <pre class="moz-quote-pre" wrap="">

With this, I strongly prefer the code as is.

Matt

</pre>
      <blockquote type="cite">
        <pre class="moz-quote-pre" wrap="">Regards,
Christian.

</pre>
        <blockquote type="cite">
          <pre class="moz-quote-pre" wrap="">
Matt

</pre>
          <blockquote type="cite">
            <pre class="moz-quote-pre" wrap="">Regards,
Christian.

</pre>
            <blockquote type="cite">
              <blockquote type="cite">
                <pre class="moz-quote-pre" wrap="">Matt

</pre>
                <blockquote type="cite">
                  <pre class="moz-quote-pre" wrap="">Reported-by: Christoph Manszewski<a class="moz-txt-link-rfc2396E" href="mailto:christoph.manszewski@intel.com"><christoph.manszewski@intel.com></a>
Suggested-by: Thomas Hellström<a class="moz-txt-link-rfc2396E" href="mailto:thomas.hellstrom@linux.intel.com"><thomas.hellstrom@linux.intel.com></a>
Signed-off-by: Matthew Brost<a class="moz-txt-link-rfc2396E" href="mailto:matthew.brost@intel.com"><matthew.brost@intel.com></a>
Tested-by: Mika Kuoppala<a class="moz-txt-link-rfc2396E" href="mailto:mika.kuoppala@linux.intel.com"><mika.kuoppala@linux.intel.com></a>
Reviewed-by: Matthew Auld<a class="moz-txt-link-rfc2396E" href="mailto:matthew.auld@intel.com"><matthew.auld@intel.com></a>
---
   drivers/gpu/drm/ttm/ttm_bo_util.c | 86 +++++++++++++++++++++++++++++++
   drivers/gpu/drm/ttm/ttm_bo_vm.c   | 65 +----------------------
   include/drm/ttm/ttm_bo.h          |  2 +
   3 files changed, 89 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index d939925efa81..77e760ea7193 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -919,3 +919,89 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
        return progress;
   }
+
+static int ttm_bo_access_kmap(struct ttm_buffer_object *bo,
+                             unsigned long offset,
+                             void *buf, int len, int write)
+{
+       unsigned long page = offset >> PAGE_SHIFT;
+       unsigned long bytes_left = len;
+       int ret;
+
+       /* Copy a page at a time, that way no extra virtual address
+        * mapping is needed
+        */
+       offset -= page << PAGE_SHIFT;
+       do {
+               unsigned long bytes = min(bytes_left, PAGE_SIZE - offset);
+               struct ttm_bo_kmap_obj map;
+               void *ptr;
+               bool is_iomem;
+
+               ret = ttm_bo_kmap(bo, page, 1, &map);
+               if (ret)
+                       return ret;
+
+               ptr = (void *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
+               WARN_ON_ONCE(is_iomem);
+               if (write)
+                       memcpy(ptr, buf, bytes);
+               else
+                       memcpy(buf, ptr, bytes);
+               ttm_bo_kunmap(&map);
+
+               page++;
+               buf += bytes;
+               bytes_left -= bytes;
+               offset = 0;
+       } while (bytes_left);
+
+       return len;
+}
+
+/**
+ * ttm_bo_access - Helper to access a buffer object
+ *
+ * @bo: ttm buffer object
+ * @offset: access offset into buffer object
+ * @buf: pointer to caller memory to read into or write from
+ * @len: length of access
+ * @write: write access
+ *
+ * Utility function to access a buffer object. Useful when buffer object cannot
+ * be easily mapped (non-contiguous, non-visible, etc...).
+ *
+ * Returns:
+ * @len if successful, negative error code on failure.
+ */
+int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
+                 void *buf, int len, int write)
+{
+       int ret;
+
+       if (len < 1 || (offset + len) > bo->base.size)
+               return -EIO;
+
+       ret = ttm_bo_reserve(bo, true, false, NULL);
+       if (ret)
+               return ret;
+
+       switch (bo->resource->mem_type) {
+       case TTM_PL_SYSTEM:
+               fallthrough;
+       case TTM_PL_TT:
+               ret = ttm_bo_access_kmap(bo, offset, buf, len, write);
+               break;
+       default:
+               if (bo->bdev->funcs->access_memory)
+                       ret = bo->bdev->funcs->access_memory
+                               (bo, offset, buf, len, write);
+               else
+                       ret = -EIO;
+       }
+
+       ttm_bo_unreserve(bo);
+
+       return ret;
+}
+EXPORT_SYMBOL(ttm_bo_access);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 2c699ed1963a..20b1e5f78684 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -366,45 +366,6 @@ void ttm_bo_vm_close(struct vm_area_struct *vma)
   }
   EXPORT_SYMBOL(ttm_bo_vm_close);
-static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
-                                unsigned long offset,
-                                uint8_t *buf, int len, int write)
-{
-       unsigned long page = offset >> PAGE_SHIFT;
-       unsigned long bytes_left = len;
-       int ret;
-
-       /* Copy a page at a time, that way no extra virtual address
-        * mapping is needed
-        */
-       offset -= page << PAGE_SHIFT;
-       do {
-               unsigned long bytes = min(bytes_left, PAGE_SIZE - offset);
-               struct ttm_bo_kmap_obj map;
-               void *ptr;
-               bool is_iomem;
-
-               ret = ttm_bo_kmap(bo, page, 1, &map);
-               if (ret)
-                       return ret;
-
-               ptr = (uint8_t *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
-               WARN_ON_ONCE(is_iomem);
-               if (write)
-                       memcpy(ptr, buf, bytes);
-               else
-                       memcpy(buf, ptr, bytes);
-               ttm_bo_kunmap(&map);
-
-               page++;
-               buf += bytes;
-               bytes_left -= bytes;
-               offset = 0;
-       } while (bytes_left);
-
-       return len;
-}
-
   int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
                     void *buf, int len, int write)
   {
@@ -412,32 +373,8 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
        unsigned long offset = (addr) - vma->vm_start +
                ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
                 << PAGE_SHIFT);
-       int ret;
-
-       if (len < 1 || (offset + len) > bo->base.size)
-               return -EIO;
-       ret = ttm_bo_reserve(bo, true, false, NULL);
-       if (ret)
-               return ret;
-
-       switch (bo->resource->mem_type) {
-       case TTM_PL_SYSTEM:
-               fallthrough;
-       case TTM_PL_TT:
-               ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
-               break;
-       default:
-               if (bo->bdev->funcs->access_memory)
-                       ret = bo->bdev->funcs->access_memory(
-                               bo, offset, buf, len, write);
-               else
-                       ret = -EIO;
-       }
-
-       ttm_bo_unreserve(bo);
-
-       return ret;
+       return ttm_bo_access(bo, offset, buf, len, write);
   }
   EXPORT_SYMBOL(ttm_bo_vm_access);
diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
index 5804408815be..8ea11cd8df39 100644
--- a/include/drm/ttm/ttm_bo.h
+++ b/include/drm/ttm/ttm_bo.h
@@ -421,6 +421,8 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo);
   int ttm_bo_evict_first(struct ttm_device *bdev,
                       struct ttm_resource_manager *man,
                       struct ttm_operation_ctx *ctx);
+int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
+                 void *buf, int len, int write);
   vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
                             struct vm_fault *vmf);
   vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
-- 
2.34.1

</pre>
                </blockquote>
              </blockquote>
            </blockquote>
          </blockquote>
        </blockquote>
      </blockquote>
    </blockquote>
    <br>
  </body>
</html>