[PATCH 2/4] [RFC] drm/exynos: Mapping of gem objects uses dma_mmap_writecombine
Inki Dae
inki.dae at samsung.com
Sun Apr 15 19:21:14 PDT 2012
Hi,
This patch set is also based on an old version, and the feature below
has already been merged to mainline. Prathyush, from next time please
base patch sets for new features on the latest drm-next and bug fixes
on drm-fixes, and also include me as Maintainer.
Thanks,
Inki Dae.
> -----Original Message-----
> From: Prathyush [mailto:prathyush.k at samsung.com]
> Sent: Saturday, April 14, 2012 8:52 PM
> To: dri-devel at lists.freedesktop.org; linaro-mm-sig at lists.linaro.org
> Cc: inki.dae at samsung.com; subash.rp at samsung.com; prashanth.g at samsung.com;
> sunilm at samsung.com; prathyush.k at samsung.com
> Subject: [PATCH 2/4] [RFC] drm/exynos: Mapping of gem objects uses
> dma_mmap_writecombine
>
> GEM objects get mapped to user space in two ways - DIRECT and
> INDIRECT mapping. DIRECT mapping is done by calling an ioctl, which
> maps all the pages to user space with remap_pfn_range. INDIRECT
> mapping is done by calling 'mmap': the actual mapping happens when a
> page fault is generated and handled by the exynos_drm_gem_fault
> function, which maps only the required page. Both methods assume
> contiguous memory.
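>
> For illustration, a minimal sketch of this per-page fault path (a
> hypothetical snippet, not part of the patch; 'buffer' stands for the
> object's contiguous exynos_drm_gem_buf, 'vma' and 'vmf' for the fault
> context):
>
>         /* Old indirect path: on each fault, translate the faulting
>          * address into a page offset inside the contiguous buffer
>          * and insert just that one page. */
>         pgoff_t page_offset = ((unsigned long)vmf->virtual_address -
>                         vma->vm_start) >> PAGE_SHIFT;
>         unsigned long pfn = ((unsigned long)buffer->dma_addr >>
>                         PAGE_SHIFT) + page_offset;
>         int ret = vm_insert_mixed(vma,
>                         (unsigned long)vmf->virtual_address, pfn);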
>
> With this change, the mapping is done by dma_mmap_writecombine,
> which also supports mapping non-contiguous memory to user space.
>
> For DIRECT mapping this behaves like the previous approach. But when
> mapping on a page fault, dma_mmap_writecombine will map all the
> pages, not just the faulting page.
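>
> For illustration, a minimal sketch of the dma_mmap_writecombine call
> pattern (a hypothetical snippet; 'dev', 'vma' and 'buffer' stand for
> the device, target vma and gem buffer, and it assumes kvaddr/dma_addr
> came from one dma_alloc_writecombine allocation on that device):
>
>         /* Map the entire allocation into 'vma' in one call. The DMA
>          * mapping layer checks the requested length against the
>          * allocation and applies writecombine page protection. */
>         vma->vm_pgoff = 0;
>         ret = dma_mmap_writecombine(dev, vma, buffer->kvaddr,
>                         buffer->dma_addr, vma->vm_end - vma->vm_start);
>         if (ret)
>                 DRM_ERROR("dma_mmap_writecombine failed: %d\n", ret);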
>
> Signed-off-by: Prathyush K <prathyush.k at samsung.com>
> ---
> drivers/gpu/drm/exynos/exynos_drm_gem.c |   70 ++++++++++++++----------------
> 1 files changed, 33 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index 807143e..a57a83a 100755
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -200,40 +200,27 @@ static int exynos_drm_gem_mmap_buffer(struct file *filp,
>  {
>          struct drm_gem_object *obj = filp->private_data;
>          struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
> -        struct exynos_drm_gem_buf *buffer;
> -        unsigned long pfn, vm_size;
> -
> -        DRM_DEBUG_KMS("%s\n", __FILE__);
> +        void *kva;
> +        dma_addr_t dma_address;
> +        unsigned long ret;
>
> -        vma->vm_flags |= (VM_IO | VM_RESERVED);
> +        kva = exynos_gem_obj->buffer->kvaddr;
>
> -        /* in case of direct mapping, always having non-cachable attribute */
> -        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> -        vma->vm_file = filp;
> -
> -        vm_size = vma->vm_end - vma->vm_start;
> -        /*
> -         * a buffer contains information to physically continuous memory
> -         * allocated by user request or at framebuffer creation.
> -         */
> -        buffer = exynos_gem_obj->buffer;
> -
> -        /* check if user-requested size is valid. */
> -        if (vm_size > buffer->size)
> -                return -EINVAL;
> +        if (kva == NULL) {
> +                DRM_ERROR("No KVA Found\n");
> +                return -EAGAIN;
> +        }
>
> -        /*
> -         * get page frame number to physical memory to be mapped
> -         * to user space.
> -         */
> -        pfn = ((unsigned long)exynos_gem_obj->buffer->dma_addr) >> PAGE_SHIFT;
> +        dma_address = exynos_gem_obj->buffer->dma_addr;
> +        vma->vm_flags |= VM_DONTEXPAND | VM_RESERVED;
> +        vma->vm_pgoff = 0;
>
> -        DRM_DEBUG_KMS("pfn = 0x%lx\n", pfn);
> +        ret = dma_mmap_writecombine(obj->dev->dev, vma, kva,
> +                        dma_address, vma->vm_end - vma->vm_start);
>
> -        if (remap_pfn_range(vma, vma->vm_start, pfn, vm_size,
> -                        vma->vm_page_prot)) {
> -                DRM_ERROR("failed to remap pfn range.\n");
> -                return -EAGAIN;
> +        if (ret) {
> +                DRM_ERROR("Remapping memory failed, error: %ld\n", ret);
> +                return ret;
>          }
>
>          return 0;
> @@ -433,19 +420,29 @@ int exynos_drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>          struct drm_gem_object *obj = vma->vm_private_data;
>          struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
>          struct drm_device *dev = obj->dev;
> -        unsigned long pfn;
> -        pgoff_t page_offset;
> +        void *kva;
> +        dma_addr_t dma_address;
>          int ret;
>
> -        page_offset = ((unsigned long)vmf->virtual_address -
> -                        vma->vm_start) >> PAGE_SHIFT;
> +        kva = exynos_gem_obj->buffer->kvaddr;
> +
> +        if (kva == NULL) {
> +                DRM_ERROR("No KVA Found\n");
> +                return -EAGAIN;
> +        }
>
>          mutex_lock(&dev->struct_mutex);
>
> -        pfn = (((unsigned long)exynos_gem_obj->buffer->dma_addr) >>
> -                        PAGE_SHIFT) + page_offset;
>
> -        ret = vm_insert_mixed(vma, (unsigned long)vmf->virtual_address, pfn);
> +        dma_address = exynos_gem_obj->buffer->dma_addr;
> +        vma->vm_flags |= VM_DONTEXPAND | VM_RESERVED;
> +        vma->vm_pgoff = 0;
> +
> +        ret = dma_mmap_writecombine(obj->dev->dev, vma, kva,
> +                        dma_address, vma->vm_end - vma->vm_start);
> +
> +        if (ret)
> +                DRM_ERROR("Remapping memory failed, error: %d\n", ret);
>
>          mutex_unlock(&dev->struct_mutex);
>
> @@ -457,7 +454,6 @@ int exynos_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>          int ret;
>
>          DRM_DEBUG_KMS("%s\n", __FILE__);
> -
>          /* set vm_area_struct. */
>          ret = drm_gem_mmap(filp, vma);
>          if (ret < 0) {
> --
> 1.7.0.4
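
Usage from user space is unchanged by this patch: after obtaining an
mmap offset for the GEM object, a plain mmap() still triggers the fault
path, which now maps the whole buffer on the first access. A
hypothetical sketch (drm_fd, offset and size are placeholders):

        #include <stdio.h>
        #include <sys/mman.h>

        /* Map a GEM object from user space; 'offset' is the fake mmap
         * offset obtained for the object beforehand. */
        static void *map_gem(int drm_fd, off_t offset, size_t size)
        {
                void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, drm_fd, offset);
                if (ptr == MAP_FAILED)
                        perror("mmap");
                /* The first fault runs exynos_drm_gem_fault(), which now
                 * maps the whole buffer via dma_mmap_writecombine(). */
                return ptr;
        }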