[PATCH 1/2] drm/shmem: Use cached mappings by default
Daniel Vetter
daniel at ffwll.ch
Thu May 14 12:40:50 UTC 2020
On Wed, May 13, 2020 at 05:03:11PM +0200, Thomas Zimmermann wrote:
> SHMEM-buffer backing storage is allocated from system memory, which is
> typically cacheable. Currently, only virtio uses cacheable mappings; udl
> uses its own vmap/mmap implementation for cacheable mappings. Other
> drivers default to writecombine mappings.
I'm pretty sure this breaks all these drivers. A quick grep on a few
functions says this is used by lima, panfrost and v3d, and they definitely
need uncached/wc mappings afaiui. Or am I completely missing something?
-Daniel
>
> Use cached mappings by default. The exception is pages imported via
> dma-buf. DMA memory is usually not cached.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ++++--
> drivers/gpu/drm/virtio/virtgpu_object.c | 1 -
> include/drm/drm_gem_shmem_helper.h | 4 ++--
> 3 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index df31e5782eed1..1ce90325dfa31 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -259,7 +259,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> - if (!shmem->map_cached)
> + if (shmem->map_wc)
> prot = pgprot_writecombine(prot);
> shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> VM_MAP, prot);
> @@ -546,7 +546,7 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>
> vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> - if (!shmem->map_cached)
> + if (shmem->map_wc)
> vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> vma->vm_ops = &drm_gem_shmem_vm_ops;
>
> @@ -664,6 +664,8 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
> if (IS_ERR(shmem))
> return ERR_CAST(shmem);
>
> + shmem->map_wc = false; /* dma-buf mappings use writecombine */
> +
> shmem->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> if (!shmem->pages) {
> ret = -ENOMEM;
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 6ccbd01cd888c..80ba6b2b61668 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -132,7 +132,6 @@ struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
>
> dshmem = &shmem->base.base;
> dshmem->base.funcs = &virtio_gpu_shmem_funcs;
> - dshmem->map_cached = true;
> return &dshmem->base;
> }
>
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 294b2931c4cc0..a5bc082a77c48 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -98,9 +98,9 @@ struct drm_gem_shmem_object {
> unsigned int vmap_use_count;
>
> /**
> - * @map_cached: map object cached (instead of using writecombine).
> + * @map_wc: map object using writecombine (instead of cached).
> */
> - bool map_cached;
> + bool map_wc;
> };
>
> #define to_drm_gem_shmem_obj(obj) \
> --
> 2.26.2
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch