[RFC PATCH] mesa: Export BOs in RW mode

Boris Brezillon boris.brezillon at collabora.com
Wed Jul 3 14:33:51 UTC 2019


On Wed, 3 Jul 2019 15:13:25 +0100
Steven Price <steven.price at arm.com> wrote:

> On 03/07/2019 14:56, Boris Brezillon wrote:
> > On Wed, 3 Jul 2019 07:45:32 -0600
> > Rob Herring <robh+dt at kernel.org> wrote:
> >   
> >> On Wed, Jul 3, 2019 at 7:34 AM Boris Brezillon
> >> <boris.brezillon at collabora.com> wrote:  
> >>>
> >>> Exported BOs might be imported back, then mmap()-ed to be written
> >>> to. Most drivers handle that by mmap()-ing the GEM handle after it's
> >>> been imported, but, according to [1], this is illegal.    
> >>
> >> It's not illegal, but it is supposed to go through the dmabuf mmap
> >> functions.  
> > 
> > That's basically what I'm proposing here; I just didn't post the
> > patch that skips the GET_OFFSET step and does the mmap() on the
> > dmabuf FD instead of the DRM-node one, but I have it working for
> > panfrost.  
> 
> If we want to we could make the Panfrost kernel driver internally call
> dma_buf_mmap() so that mapping using the DRM-node "just works". This is
> indeed what the kbase driver does.

Well, userspace should at least skip DRM_IOCTL_PANFROST_MMAP_BO (or
ignore its return code), so calling mmap() on the dmabuf FD instead of
the DRM-node FD shouldn't be that hard.
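
To make that concrete, here is a rough userspace sketch of the
dmabuf-FD path (a minimal sketch only: map_bo_via_dmabuf() and the way
drm_fd/bo_handle/bo_size are obtained are made up for illustration,
this is not actual panfrost/Mesa code):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>

void *map_bo_via_dmabuf(int drm_fd, uint32_t bo_handle, size_t bo_size)
{
	int prime_fd;

	/* Export the GEM handle as a dma-buf FD; DRM_RDWR so the
	 * resulting mapping can be writable. */
	if (drmPrimeHandleToFD(drm_fd, bo_handle, DRM_CLOEXEC | DRM_RDWR,
			       &prime_fd))
		return MAP_FAILED;

	/* mmap() the dma-buf FD itself, so the exporter's mmap path is
	 * used instead of the DRM-node one. */
	return mmap(NULL, bo_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    prime_fd, 0);
}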

> 
> >> However, none of the drivers I've looked at (etnaviv, msm,
> >> v3d, vgem) do that. It probably works because it's the same driver
> >> doing the import and export or both drivers have essentially the same
> >> implementations.  
> > 
> > Yes, but maybe that's something we should start fixing if mmap()-ing
> > the dmabuf is the recommended solution.  
> 
> I'm open to options here. User space calling mmap() on the dma_buf file
> descriptor should always be safe (the exporter can do whatever is
> necessary to make it work). If that happens then the patches I posted
> close off the DRM node version which could be broken if the exporter
> needs to do anything to prepare the buffer for CPU access (e.g. cache
> maintenance).

Speaking of CPU <-> GPU syncs, I was wondering whether the
mmap(gem_handle) step provides any guarantee that would allow us to
skip the cache maintenance operations that are required when
mmap()-ing a dmabuf directly. Note that in both cases the dmabuf is
imported.
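
For reference, this is roughly the begin/end bracketing the dma-buf
uAPI documents for CPU access through such a mapping (sketch only,
reusing the prime_fd and CPU pointer from the snippet above; not
panfrost-specific):

#include <linux/dma-buf.h>
#include <string.h>
#include <sys/ioctl.h>

static int write_to_dmabuf(int prime_fd, void *cpu, const void *src,
			   size_t len)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	/* Begin CPU access: lets the exporter do any needed cache
	 * invalidation. */
	if (ioctl(prime_fd, DMA_BUF_IOCTL_SYNC, &sync))
		return -1;

	memcpy(cpu, src, len);

	/* End CPU access: lets the exporter flush caches back before
	 * the GPU touches the buffer again. */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	return ioctl(prime_fd, DMA_BUF_IOCTL_SYNC, &sync);
}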

> 
> Alternatively, if user space wants/needs to use the DRM node then we
> can take a look at what needs to change in the kernel. From a quick
> look at the code it seems we'd need to split drm_gem_mmap() into a
> helper so that it can return whether the exporter has handled
> everything or the caller needs to do some more work (e.g.
> drm_gem_shmem_mmap() needs to allocate backing pages). But because
> drm_gem_mmap() is used as the direct callback for some drivers we'd
> need to preserve the interface.
> 
> The below (completely untested) patch demonstrates the idea.
> 
> Steve
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index a8c4468f03d9..df661e24cadf 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1140,7 +1140,7 @@ EXPORT_SYMBOL(drm_gem_mmap_obj);
>   * If the caller is not granted access to the buffer object, the mmap will fail
>   * with EACCES. Please see the vma manager for more information.
>   */
> -int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +int drm_gem_mmap_helper(struct file *filp, struct vm_area_struct *vma)
>  {
>  	struct drm_file *priv = filp->private_data;
>  	struct drm_device *dev = priv->minor->dev;
> @@ -1189,6 +1189,11 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  		vma->vm_flags &= ~VM_MAYWRITE;
>  	}
> 
> +	if (obj->import_attach) {
> +		ret = dma_buf_mmap(obj->dma_buf, vma, 0);
> +		return ret ?: 1;
> +	}
> +
>  	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT,
>  			       vma);
> 
> @@ -1196,6 +1201,16 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> 
>  	return ret;
>  }
> +
> +int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> +	int ret;
> +
> +	ret = drm_gem_mmap_helper(filp, vma);
> +	if (ret == 1)
> +		return 0;
> +	return ret;
> +}
>  EXPORT_SYMBOL(drm_gem_mmap);
> 
>  void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 472ea5d81f82..b85d84e4d4a8 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -466,8 +466,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
>  	struct drm_gem_shmem_object *shmem;
>  	int ret;
> 
> -	ret = drm_gem_mmap(filp, vma);
> -	if (ret)
> +	ret = drm_gem_mmap_helper(filp, vma);
> +	if (ret == 1)
> +		return 0; /* Exporter handles the mapping */
> +	else if (ret)
>  		return ret;
> 
>  	shmem = to_drm_gem_shmem_obj(vma->vm_private_data);
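
For completeness, a hypothetical driver mmap callback built on top of
the proposed helper might look like this (my_driver_gem_mmap() is made
up; only drm_gem_mmap_helper() comes from the patch above):

static int my_driver_gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
	int ret;

	ret = drm_gem_mmap_helper(filp, vma);
	if (ret == 1)
		return 0;	/* the dma-buf exporter handled the mapping */
	if (ret)
		return ret;

	/* Non-imported object: any driver-specific work (e.g. setting
	 * up backing pages) would go here. */
	return 0;
}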


