[PATCH v6 2/8] drm/ttm: Add ttm_bo_access
Matthew Brost
matthew.brost at intel.com
Wed Nov 6 17:00:09 UTC 2024
On Wed, Nov 06, 2024 at 04:44:15PM +0100, Christian König wrote:
> Am 06.11.24 um 16:25 schrieb Matthew Brost:
> > [SNIP]
> > > Can you fully describe your use case? In other words what exactly is your
> > > debugger trying to do?
> > See above; I hope I've made this clearer.
>
> It at least sounds a little bit better.
>
> > Also, I'm not really an expert on Eudebug, as I haven't been involved in
> > the development aside from reviewing its interaction with the core of
> > Xe. Any further explanation would likely require me to loop in a
> > colleague.
>
> I think that could help since I don't have a clear picture of your use case.
>
>
> > > Well, I think we need to take a step back. The major question is what is
> > > your use case and is that use case valid or causes security concerns.
> > >
> > > For example userptrs are imported anonymous pages the GPU has a DMA mapping
> > > for. Re-mapping them into an user address space for debugging or even
> > > accessing them through the ptrace interface is strictly forbidden.
> > >
> > > We already had people trying to do exactly that and it ended not well at
> > > all.
> > >
> > Again, if we can focus on what this patch is doing—accessing a BO, not a
> > userptr—I think that will help progress here.
> >
> > To bring things together: "There is a huge push from upstream to avoid
> > using kmap/vmap if possible." How would you suggest accessing a BO then?
>
> Well that's the whole point: You should *not* access the BO on behalf of
> userspace in a peek/poke like interface.
>
This is not a generic interface that anyone can freely access. Opening it
is gated by the same permission checks ptrace uses; see [1] and [2].
[1] https://patchwork.freedesktop.org/patch/617470/?series=136572&rev=2
[2] https://patchwork.freedesktop.org/patch/617471/?series=136572&rev=2
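To make that concrete, the open path conceptually performs the same check
ptrace itself does before attaching. A minimal sketch, assuming a hypothetical
helper name and check mode (the exact open-time checks live in [1] and [2]):

	#include <linux/ptrace.h>

	/* Illustrative only: refuse to open the debug interface unless the
	 * caller would also be allowed to ptrace the target process.
	 */
	static int debugger_may_open(struct task_struct *target)
	{
		if (!ptrace_may_access(target, PTRACE_MODE_READ_REALCREDS))
			return -EACCES;

		return 0;
	}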
> > kmap/vmap are used everywhere in the DRM subsystem to access BOs, so I’m
> > failing to see the problem with adding a simple helper based on existing
> > code.
>
> What's possible and often done is to do kmap/vmap if you need to implement a
> CPU copy for scanout for example or for copying/validating command buffers.
> But that usually requires accessing the whole BO and has separate security
> checks.
>
> When you want to access only a few bytes of a BO that sounds massively like
> a peek/poke like interface and we have already rejected that more than once.
> There even used to be standardized GEM IOCTLs for that which have been
> removed by now.
>
> If you need to access BOs which are placed in not CPU accessible memory then
> implement the access callback for ptrace, see amdgpu_ttm_access_memory for
> an example how to do this.
>
Ptrace access is already implemented via vm_operations_struct.access →
ttm_bo_vm_access. This series moves the body of ttm_bo_vm_access into a
new helper, ttm_bo_access, without changing the code itself. That
function accesses the BO via kmap when it is placed in SYSTEM / TT,
which is existing behavior, and it is only reachable from user space
behind ptrace permissions.
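For reference, this is roughly how TTM-based drivers already expose that path
today, by plugging the TTM helpers into the vm_ops of their mmap (a sketch;
the ops struct name is made up):

	#include <drm/ttm/ttm_bo.h>

	static const struct vm_operations_struct my_ttm_vm_ops = {
		.fault = ttm_bo_vm_fault,
		.open = ttm_bo_vm_open,
		.close = ttm_bo_vm_close,
		/* ptrace peek/poke of the mapping lands here */
		.access = ttm_bo_vm_access,
	};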
In this series, we implement a function [3] similar to
amdgpu_ttm_access_memory for the TTM access_memory vfunc. What is still
missing is access to non-CPU-visible memory, similar to
amdgpu_ttm_access_memory_sdma; that will be addressed in a follow-up and
was omitted from this series given its complexity.
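For the CPU-visible part, such a callback is essentially an io memcpy at the
right aperture offset. A rough sketch, loosely modeled on the visible-VRAM
path of amdgpu_ttm_access_memory (the device and offset helpers here are
made up; the real implementation is [3]):

	#include <linux/io.h>

	static int my_ttm_access_memory(struct ttm_buffer_object *bo,
					unsigned long offset,
					void *buf, int len, int write)
	{
		struct my_device *dev = to_my_device(bo->bdev);
		/* Resolve the BO's placement to a CPU-visible BAR offset.
		 * Non-contiguous or non-CPU-visible VRAM needs more work
		 * (the follow-up mentioned above).
		 */
		void __iomem *vaddr = dev->vram_cpu_map +
				      my_bo_io_offset(bo) + offset;

		if (write)
			memcpy_toio(vaddr, buf, len);
		else
			memcpy_fromio(buf, vaddr, len);

		return len;
	}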
So this ends up looking more or less identical to AMD's ptrace
implementation, only addressed through the GPU address space. Again, I
fail to see what the problem is here. What am I missing?
Matt
[3] https://patchwork.freedesktop.org/patch/622520/?series=140200&rev=6
> Regards,
> Christian.
>
> >
> > Matt
> >
> > > Regards,
> > > Christian.
> > >
> > > > With this, I strongly prefer the code as is.
> > > >
> > > > Matt
> > > >
> > > > > Regards,
> > > > > Christian.
> > > > >
> > > > > > Matt
> > > > > >
> > > > > > > Regards,
> > > > > > > Christian.
> > > > > > >
> > > > > > > > > Matt
> > > > > > > > >
> > > > > > > > > > Reported-by: Christoph Manszewski<christoph.manszewski at intel.com>
> > > > > > > > > > Suggested-by: Thomas Hellström<thomas.hellstrom at linux.intel.com>
> > > > > > > > > > Signed-off-by: Matthew Brost<matthew.brost at intel.com>
> > > > > > > > > > Tested-by: Mika Kuoppala<mika.kuoppala at linux.intel.com>
> > > > > > > > > > Reviewed-by: Matthew Auld<matthew.auld at intel.com>
> > > > > > > > > > ---
> > > > > > > > > > drivers/gpu/drm/ttm/ttm_bo_util.c | 86 +++++++++++++++++++++++++++++++
> > > > > > > > > > drivers/gpu/drm/ttm/ttm_bo_vm.c | 65 +----------------------
> > > > > > > > > > include/drm/ttm/ttm_bo.h | 2 +
> > > > > > > > > > 3 files changed, 89 insertions(+), 64 deletions(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > > > > > > > > index d939925efa81..77e760ea7193 100644
> > > > > > > > > > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > > > > > > > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > > > > > > > > @@ -919,3 +919,89 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
> > > > > > > > > > return progress;
> > > > > > > > > > }
> > > > > > > > > > +
> > > > > > > > > > +static int ttm_bo_access_kmap(struct ttm_buffer_object *bo,
> > > > > > > > > > + unsigned long offset,
> > > > > > > > > > + void *buf, int len, int write)
> > > > > > > > > > +{
> > > > > > > > > > + unsigned long page = offset >> PAGE_SHIFT;
> > > > > > > > > > + unsigned long bytes_left = len;
> > > > > > > > > > + int ret;
> > > > > > > > > > +
> > > > > > > > > > + /* Copy a page at a time, that way no extra virtual address
> > > > > > > > > > + * mapping is needed
> > > > > > > > > > + */
> > > > > > > > > > + offset -= page << PAGE_SHIFT;
> > > > > > > > > > + do {
> > > > > > > > > > + unsigned long bytes = min(bytes_left, PAGE_SIZE - offset);
> > > > > > > > > > + struct ttm_bo_kmap_obj map;
> > > > > > > > > > + void *ptr;
> > > > > > > > > > + bool is_iomem;
> > > > > > > > > > +
> > > > > > > > > > + ret = ttm_bo_kmap(bo, page, 1, &map);
> > > > > > > > > > + if (ret)
> > > > > > > > > > + return ret;
> > > > > > > > > > +
> > > > > > > > > > + ptr = (void *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
> > > > > > > > > > + WARN_ON_ONCE(is_iomem);
> > > > > > > > > > + if (write)
> > > > > > > > > > + memcpy(ptr, buf, bytes);
> > > > > > > > > > + else
> > > > > > > > > > + memcpy(buf, ptr, bytes);
> > > > > > > > > > + ttm_bo_kunmap(&map);
> > > > > > > > > > +
> > > > > > > > > > + page++;
> > > > > > > > > > + buf += bytes;
> > > > > > > > > > + bytes_left -= bytes;
> > > > > > > > > > + offset = 0;
> > > > > > > > > > + } while (bytes_left);
> > > > > > > > > > +
> > > > > > > > > > + return len;
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +/**
> > > > > > > > > > + * ttm_bo_access - Helper to access a buffer object
> > > > > > > > > > + *
> > > > > > > > > > + * @bo: ttm buffer object
> > > > > > > > > > + * @offset: access offset into buffer object
> > > > > > > > > > + * @buf: pointer to caller memory to read into or write from
> > > > > > > > > > + * @len: length of access
> > > > > > > > > > + * @write: write access
> > > > > > > > > > + *
> > > > > > > > > > + * Utility function to access a buffer object. Useful when buffer object cannot
> > > > > > > > > > + * be easily mapped (non-contiguous, non-visible, etc...).
> > > > > > > > > > + *
> > > > > > > > > > + * Returns:
> > > > > > > > > > + * @len if successful, negative error code on failure.
> > > > > > > > > > + */
> > > > > > > > > > +int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
> > > > > > > > > > + void *buf, int len, int write)
> > > > > > > > > > +{
> > > > > > > > > > + int ret;
> > > > > > > > > > +
> > > > > > > > > > + if (len < 1 || (offset + len) > bo->base.size)
> > > > > > > > > > + return -EIO;
> > > > > > > > > > +
> > > > > > > > > > + ret = ttm_bo_reserve(bo, true, false, NULL);
> > > > > > > > > > + if (ret)
> > > > > > > > > > + return ret;
> > > > > > > > > > +
> > > > > > > > > > + switch (bo->resource->mem_type) {
> > > > > > > > > > + case TTM_PL_SYSTEM:
> > > > > > > > > > + fallthrough;
> > > > > > > > > > + case TTM_PL_TT:
> > > > > > > > > > + ret = ttm_bo_access_kmap(bo, offset, buf, len, write);
> > > > > > > > > > + break;
> > > > > > > > > > + default:
> > > > > > > > > > + if (bo->bdev->funcs->access_memory)
> > > > > > > > > > + ret = bo->bdev->funcs->access_memory
> > > > > > > > > > + (bo, offset, buf, len, write);
> > > > > > > > > > + else
> > > > > > > > > > + ret = -EIO;
> > > > > > > > > > + }
> > > > > > > > > > +
> > > > > > > > > > + ttm_bo_unreserve(bo);
> > > > > > > > > > +
> > > > > > > > > > + return ret;
> > > > > > > > > > +}
> > > > > > > > > > +EXPORT_SYMBOL(ttm_bo_access);
> > > > > > > > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> > > > > > > > > > index 2c699ed1963a..20b1e5f78684 100644
> > > > > > > > > > --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> > > > > > > > > > +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> > > > > > > > > > @@ -366,45 +366,6 @@ void ttm_bo_vm_close(struct vm_area_struct *vma)
> > > > > > > > > > }
> > > > > > > > > > EXPORT_SYMBOL(ttm_bo_vm_close);
> > > > > > > > > > -static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
> > > > > > > > > > - unsigned long offset,
> > > > > > > > > > - uint8_t *buf, int len, int write)
> > > > > > > > > > -{
> > > > > > > > > > - unsigned long page = offset >> PAGE_SHIFT;
> > > > > > > > > > - unsigned long bytes_left = len;
> > > > > > > > > > - int ret;
> > > > > > > > > > -
> > > > > > > > > > - /* Copy a page at a time, that way no extra virtual address
> > > > > > > > > > - * mapping is needed
> > > > > > > > > > - */
> > > > > > > > > > - offset -= page << PAGE_SHIFT;
> > > > > > > > > > - do {
> > > > > > > > > > - unsigned long bytes = min(bytes_left, PAGE_SIZE - offset);
> > > > > > > > > > - struct ttm_bo_kmap_obj map;
> > > > > > > > > > - void *ptr;
> > > > > > > > > > - bool is_iomem;
> > > > > > > > > > -
> > > > > > > > > > - ret = ttm_bo_kmap(bo, page, 1, &map);
> > > > > > > > > > - if (ret)
> > > > > > > > > > - return ret;
> > > > > > > > > > -
> > > > > > > > > > - ptr = (uint8_t *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
> > > > > > > > > > - WARN_ON_ONCE(is_iomem);
> > > > > > > > > > - if (write)
> > > > > > > > > > - memcpy(ptr, buf, bytes);
> > > > > > > > > > - else
> > > > > > > > > > - memcpy(buf, ptr, bytes);
> > > > > > > > > > - ttm_bo_kunmap(&map);
> > > > > > > > > > -
> > > > > > > > > > - page++;
> > > > > > > > > > - buf += bytes;
> > > > > > > > > > - bytes_left -= bytes;
> > > > > > > > > > - offset = 0;
> > > > > > > > > > - } while (bytes_left);
> > > > > > > > > > -
> > > > > > > > > > - return len;
> > > > > > > > > > -}
> > > > > > > > > > -
> > > > > > > > > > int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
> > > > > > > > > > void *buf, int len, int write)
> > > > > > > > > > {
> > > > > > > > > > @@ -412,32 +373,8 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
> > > > > > > > > > unsigned long offset = (addr) - vma->vm_start +
> > > > > > > > > > ((vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node))
> > > > > > > > > > << PAGE_SHIFT);
> > > > > > > > > > - int ret;
> > > > > > > > > > -
> > > > > > > > > > - if (len < 1 || (offset + len) > bo->base.size)
> > > > > > > > > > - return -EIO;
> > > > > > > > > > - ret = ttm_bo_reserve(bo, true, false, NULL);
> > > > > > > > > > - if (ret)
> > > > > > > > > > - return ret;
> > > > > > > > > > -
> > > > > > > > > > - switch (bo->resource->mem_type) {
> > > > > > > > > > - case TTM_PL_SYSTEM:
> > > > > > > > > > - fallthrough;
> > > > > > > > > > - case TTM_PL_TT:
> > > > > > > > > > - ret = ttm_bo_vm_access_kmap(bo, offset, buf, len, write);
> > > > > > > > > > - break;
> > > > > > > > > > - default:
> > > > > > > > > > - if (bo->bdev->funcs->access_memory)
> > > > > > > > > > - ret = bo->bdev->funcs->access_memory(
> > > > > > > > > > - bo, offset, buf, len, write);
> > > > > > > > > > - else
> > > > > > > > > > - ret = -EIO;
> > > > > > > > > > - }
> > > > > > > > > > -
> > > > > > > > > > - ttm_bo_unreserve(bo);
> > > > > > > > > > -
> > > > > > > > > > - return ret;
> > > > > > > > > > + return ttm_bo_access(bo, offset, buf, len, write);
> > > > > > > > > > }
> > > > > > > > > > EXPORT_SYMBOL(ttm_bo_vm_access);
> > > > > > > > > > diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
> > > > > > > > > > index 5804408815be..8ea11cd8df39 100644
> > > > > > > > > > --- a/include/drm/ttm/ttm_bo.h
> > > > > > > > > > +++ b/include/drm/ttm/ttm_bo.h
> > > > > > > > > > @@ -421,6 +421,8 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo);
> > > > > > > > > > int ttm_bo_evict_first(struct ttm_device *bdev,
> > > > > > > > > > struct ttm_resource_manager *man,
> > > > > > > > > > struct ttm_operation_ctx *ctx);
> > > > > > > > > > +int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
> > > > > > > > > > + void *buf, int len, int write);
> > > > > > > > > > vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
> > > > > > > > > > struct vm_fault *vmf);
> > > > > > > > > > vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
> > > > > > > > > > --
> > > > > > > > > > 2.34.1
> > > > > > > > > >