[PATCH v6 02/22] drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()
Thomas Hellström (Intel)
thomas_os at shipmail.org
Wed Jun 29 06:40:17 UTC 2022
On 5/27/22 01:50, Dmitry Osipenko wrote:
> Drivers that use drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
> handle imported dma-bufs properly, which results in mapping something
> other than the imported dma-buf. For example, on NVIDIA Tegra we get a hard
> lockup when userspace writes to the memory mapping of a dma-buf that was
> imported into Tegra's DRM GEM.
>
> To fix this bug, move mapping of imported dma-bufs to drm_gem_mmap_obj().
> Now mmapping of imported dma-bufs works properly for all DRM drivers.
Same comment about the Fixes: tag as in patch 1.
>
> Cc: stable at vger.kernel.org
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko at collabora.com>
> ---
> drivers/gpu/drm/drm_gem.c | 3 +++
> drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
> drivers/gpu/drm/tegra/gem.c | 4 ++++
> 3 files changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 86d670c71286..7c0b025508e4 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
> if (obj_size < vma->vm_end - vma->vm_start)
> return -EINVAL;
>
> + if (obj->import_attach)
> + return dma_buf_mmap(obj->dma_buf, vma, 0);
If we start enabling mmapping of imported dma-bufs on a majority of
drivers in this way, how do we ensure that user-space is not blindly
using the object mmap without calling DMA_BUF_IOCTL_SYNC, which is
required before and after CPU access of mmap'ed dma-bufs?
I was under the impression (admittedly without looking) that the few
drivers that actually called into dma_buf_mmap() had some private
user-mode driver code in place that ensured this happened.
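For reference, the user-space contract is roughly the following (a minimal
sketch against the uapi in <linux/dma-buf.h>; the fd, mapping and error
handling around it are assumed here, not taken from this patch):

    #include <linux/dma-buf.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Bracket CPU writes to an mmap'ed dma-buf with DMA_BUF_IOCTL_SYNC. */
    static int cpu_write_dmabuf(int dmabuf_fd, void *map, size_t size)
    {
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
            };
            int ret;

            /* Begin CPU access: lets the exporter invalidate caches etc. */
            ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
            if (ret)
                    return ret;

            memset(map, 0, size);   /* the actual CPU access */

            /* End CPU access: flush so the device sees coherent data. */
            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
            return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
    }

Without that bracket, writes may sit in CPU caches the device never sees
(or read back stale device data), which is the coherency issue the sync
ioctl exists to avoid.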
/Thomas
> +
> /* Take a ref for this mapping of the object, so that the fault
> * handler can dereference the mmap offset's pointer to the object.
> * This reference is cleaned up by the corresponding vm_close
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 8ad0e02991ca..6190f5018986 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -609,17 +609,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> */
> int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
> {
> - struct drm_gem_object *obj = &shmem->base;
> int ret;
>
> - if (obj->import_attach) {
> - /* Drop the reference drm_gem_mmap_obj() acquired.*/
> - drm_gem_object_put(obj);
> - vma->vm_private_data = NULL;
> -
> - return dma_buf_mmap(obj->dma_buf, vma, 0);
> - }
> -
> ret = drm_gem_shmem_get_pages(shmem);
> if (ret) {
> drm_gem_vm_close(vma);
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index 7c7dd84e6db8..f92aa20d63bb 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -564,6 +564,10 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
> {
> struct tegra_bo *bo = to_tegra_bo(gem);
>
> + /* imported dma-buf is mapped by drm_gem_mmap_obj() */
> + if (gem->import_attach)
> + return 0;
> +
> if (!bo->pages) {
> unsigned long vm_pgoff = vma->vm_pgoff;
> int err;