[PATCH 1/8] drm/gem: Write down some rules for vmap usage

Daniel Vetter daniel at ffwll.ch
Mon Nov 30 15:30:48 UTC 2020


On Mon, Nov 30, 2020 at 01:04:26PM +0100, Thomas Zimmermann wrote:
> Mapping a GEM object's buffer into kernel address space prevents the
> buffer from being evicted from VRAM, which in turn may result in
> out-of-memory errors. It's therefore required to only vmap GEM BOs for
> short periods of time, unless the GEM implementation provides additional
> guarantees.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
> ---
>  drivers/gpu/drm/drm_prime.c |  6 ++++++
>  include/drm/drm_gem.h       | 16 ++++++++++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 7db55fce35d8..9c9ece9833e0 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -669,6 +669,12 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>   * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
>   * The kernel virtual address is returned in map.
>   *
> + * To prevent the GEM object from being relocated, callers must hold the GEM
> + * object's reservation lock from when calling this function until releasing the
> + * mapping. Holding onto a mapping and the associated reservation lock for an
> + * unbounded time may result in out-of-memory errors. Calls to drm_gem_dmabuf_vmap()
> + * should therefore be accompanied by a call to drm_gem_dmabuf_vunmap().
> + *
>   * Returns 0 on success or a negative errno code otherwise.

This is a dma-buf hook, which means just documenting the rules you'd like
to have here isn't enough. We need to roll this out at the dma-buf level,
and enforce it.
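
The rule the new documentation asks importers to follow would look
roughly like this on the caller side (sketch only, error handling and
headers trimmed):

        struct dma_buf_map map;
        int ret;

        dma_resv_lock(dmabuf->resv, NULL);

        ret = dma_buf_vmap(dmabuf, &map);
        if (!ret) {
                /* short-term access through map; don't sit on the lock
                 * and the mapping for a long time
                 */
                dma_buf_vunmap(dmabuf, &map);
        }

        dma_resv_unlock(dmabuf->resv);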

Enforce it = assert_lock_held
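
Concretely I'd expect something like this in dma_buf_vmap() (untested
sketch, dma_buf_vunmap() would need the same treatment):

int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
        ...
        if (WARN_ON(!dmabuf))
                return -EINVAL;

        /* Callers must hold dmabuf->resv for the entire lifetime of
         * the mapping, so lockdep can scream when they don't.
         */
        dma_resv_assert_held(dmabuf->resv);

        if (!dmabuf->ops->vmap)
                return -EINVAL;
        ...
}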

Roll out = review everyone. Because this goes through dma-buf it'll come
back through shmem helpers (and other helpers and other subsystems) to
any driver using vmap for gpu buffers. This includes the media
subsystem, and the media subsystem definitely doesn't cope with just
temporarily mapping buffers. So there we need to pin them, which I think
means we'll need two versions of dma_buf_vmap - one that's temporary and
requires that we hold the dma_resv lock, and one that requires the
buffer to be pinned.
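
Rough sketch of that split, function names are just placeholders:

/* Short-term mapping: the caller must hold dmabuf->resv from vmap to
 * vunmap, and the exporter is free to move the buffer again once the
 * lock is dropped.
 */
int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map);
void dma_buf_vunmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map);

/* Long-term mapping for callers that can't cope with the buffer moving
 * around (like the media subsystem): requires that the buffer is
 * pinned first, e.g. with dma_buf_pin().
 */
int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);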

That's what I meant when I said that this approach is very sprawling :-/
-Daniel

>   */
>  int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 5e6daa1c982f..7c34cd5ec261 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -137,7 +137,21 @@ struct drm_gem_object_funcs {
>  	 * Returns a virtual address for the buffer. Used by the
>  	 * drm_gem_dmabuf_vmap() helper.
>  	 *
> +	 * Notes to implementors:
> +	 *
> +	 * - Implementations must expect pairs of @vmap and @vunmap to be
> +	 *   called frequently and should optimize for this case.
> +	 *
> +	 * - Implementations may expect the caller to hold the GEM object's
> +	 *   reservation lock to protect against concurrent calls and relocation
> +	 *   of the GEM object.
> +	 *
> +	 * - Implementations may provide additional guarantees (e.g., working
> +	 *   without holding the reservation lock).
> +	 *
>  	 * This callback is optional.
> +	 *
> +	 * See also drm_gem_dmabuf_vmap().
>  	 */
>  	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
> @@ -148,6 +162,8 @@ struct drm_gem_object_funcs {
>  	 * drm_gem_dmabuf_vunmap() helper.
>  	 *
>  	 * This callback is optional.
> +	 *
> +	 * See also @vmap.
>  	 */
>  	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

