[PATCH v5 1/3] drm/shmem: add support for per object caching flags.

Guillaume Gardet Guillaume.Gardet at arm.com
Wed Feb 26 16:51:36 UTC 2020



> -----Original Message-----
> From: Gerd Hoffmann <kraxel at redhat.com>
> Sent: 26 February 2020 16:48
> To: dri-devel at lists.freedesktop.org
> Cc: tzimmermann at suse.de; gurchetansingh at chromium.org; olvaffe at gmail.com;
> Guillaume Gardet <Guillaume.Gardet at arm.com>; Gerd Hoffmann
> <kraxel at redhat.com>; stable at vger.kernel.org; Maarten Lankhorst
> <maarten.lankhorst at linux.intel.com>; Maxime Ripard <mripard at kernel.org>;
> David Airlie <airlied at linux.ie>; Daniel Vetter <daniel at ffwll.ch>; open list
> <linux-kernel at vger.kernel.org>
> Subject: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.
>
> Add a map_cached bool to drm_gem_shmem_object, to request cached mappings
> on a per-object basis.  Check the flag before adding writecombine to the pgprot bits.
>
> Cc: stable at vger.kernel.org
> Signed-off-by: Gerd Hoffmann <kraxel at redhat.com>

Tested-by: Guillaume Gardet <Guillaume.Gardet at arm.com>

> ---
>  include/drm/drm_gem_shmem_helper.h     |  5 +++++
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 15 +++++++++++----
>  2 files changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index e34a7b7f848a..294b2931c4cc 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -96,6 +96,11 @@ struct drm_gem_shmem_object {
>  	 * The address are un-mapped when the count reaches zero.
>  	 */
>  	unsigned int vmap_use_count;
> +
> +	/**
> +	 * @map_cached: map object cached (instead of using writecombine).
> +	 */
> +	bool map_cached;
>  };
>
>  #define to_drm_gem_shmem_obj(obj) \
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index a421a2eed48a..aad9324dcf4f 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -254,11 +254,16 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  	if (ret)
>  		goto err_zero_use;
>
> -	if (obj->import_attach)
> +	if (obj->import_attach) {
>  		shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
> -	else
> +	} else {
> +		pgprot_t prot = PAGE_KERNEL;
> +
> +		if (!shmem->map_cached)
> +			prot = pgprot_writecombine(prot);
>  		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> -				    VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> +				    VM_MAP, prot);
> +	}
>
>  if (!shmem->vaddr) {
>  		DRM_DEBUG_KMS("Failed to vmap pages\n");
> @@ -540,7 +545,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>  	}
>
>  	vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> -	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> +	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> +	if (!shmem->map_cached)
> +		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
>  	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>  	vma->vm_ops = &drm_gem_shmem_vm_ops;
>
> --
> 2.18.2


