[PATCH] drm/gem: add functions to get/put pages
Rob Clark
rob.clark at linaro.org
Mon Sep 26 11:18:40 PDT 2011
On Thu, Sep 15, 2011 at 5:47 PM, Rob Clark <rob.clark at linaro.org> wrote:
> +/**
> + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
> + * @obj: obj in question
> + * @gfpmask: gfp mask of requested pages
> + */
> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
> +{
Hmm, while working through tiled buffer support over the weekend, I
hit a case where I want to decouple the physical size (in terms of
pages) from the virtual size, which means I don't want to rely on the
same obj->size value both for mmap offset creation and for determining
the number of pages to allocate.
Since the patch for drm_gem_{get,put}_pages() doesn't seem to be in
drm-core-next yet, I think the more straightforward thing to do is to
add a size (or numpages) arg to the get/put_pages functions and
resubmit this patch.
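
Roughly, the revised prototypes would look something like this (a
hypothetical sketch, not the final patch; the npages parameter name is
just illustrative):

        struct page **drm_gem_get_pages(struct drm_gem_object *obj,
                        int npages, gfp_t gfpmask);
        void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
                        int npages, bool dirty, bool accessed);

That way the number of backing pages comes from the caller rather than
being derived from obj->size, and obj->size can keep describing the
(larger) virtual mmap space.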
BR,
-R
> +        struct inode *inode;
> +        struct address_space *mapping;
> +        struct page *p, **pages;
> +        int i, npages;
> +
> +        /* This is the shared memory object that backs the GEM resource */
> +        inode = obj->filp->f_path.dentry->d_inode;
> +        mapping = inode->i_mapping;
> +
> +        npages = obj->size >> PAGE_SHIFT;
> +
> +        pages = drm_malloc_ab(npages, sizeof(struct page *));
> +        if (pages == NULL)
> +                return ERR_PTR(-ENOMEM);
> +
> +        gfpmask |= mapping_gfp_mask(mapping);
> +
> +        for (i = 0; i < npages; i++) {
> +                p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
> +                if (IS_ERR(p))
> +                        goto fail;
> +                pages[i] = p;
> +
> +                /* There is a hypothetical issue with drivers that require
> +                 * buffer memory in the low 4GB: if the pages are unpinned
> +                 * and swapped out, they can end up swapped back in above
> +                 * 4GB.  And if the pages are already in memory,
> +                 * shmem_read_mapping_page_gfp() will ignore the gfpmask,
> +                 * even if the already in-memory page violates the mask.
> +                 *
> +                 * It is only a theoretical issue today, because none of
> +                 * the devices with this limitation can be populated with
> +                 * enough memory to trigger it.  But this BUG_ON() is here
> +                 * as a reminder, in case the problem with
> +                 * shmem_read_mapping_page_gfp() isn't solved by the time
> +                 * it does become a real issue.
> +                 *
> +                 * See this thread: http://lkml.org/lkml/2011/7/11/238
> +                 */
> +                BUG_ON((gfpmask & __GFP_DMA32) &&
> +                                (page_to_pfn(p) >= 0x00100000UL));
> +        }
> +
> +        return pages;
> +
> +fail:
> +        while (i--)
> +                page_cache_release(pages[i]);
> +        drm_free_large(pages);
> +        return ERR_CAST(p);
> +}
> +EXPORT_SYMBOL(drm_gem_get_pages);
> +
> +/**
> + * drm_gem_put_pages - helper to free backing pages for a GEM object
> + * @obj: obj in question
> + * @pages: pages to free
> + * @dirty: if true, the pages will be marked as dirty
> + * @accessed: if true, the pages will be marked as accessed
> + */
> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
> +                bool dirty, bool accessed)
> +{
> +        int i, npages;
> +
> +        npages = obj->size >> PAGE_SHIFT;
> +
> +        for (i = 0; i < npages; i++) {
> +                if (dirty)
> +                        set_page_dirty(pages[i]);
> +
> +                if (accessed)
> +                        mark_page_accessed(pages[i]);
> +
> +                /* Undo the reference we took when populating the table */
> +                page_cache_release(pages[i]);
> +        }
> +
> +        drm_free_large(pages);
> +}
> +EXPORT_SYMBOL(drm_gem_put_pages);
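
For reference, a driver would use the helpers as posted above (before
any size-arg change) along these lines.  A minimal hypothetical
sketch; example_gem_get_pages and the GPU-mapping step are made up for
illustration, not part of the patch:

        static int example_gem_get_pages(struct drm_gem_object *obj)
        {
                struct page **pages;

                /* pin the shmem backing pages; a driver that needs
                 * memory below 4GB would OR in __GFP_DMA32 here */
                pages = drm_gem_get_pages(obj, GFP_KERNEL);
                if (IS_ERR(pages))
                        return PTR_ERR(pages);

                /* ... map the pages into the GPU's MMU/IOMMU and use
                 * the buffer ... */

                /* drop the page references; dirty=true so the contents
                 * survive being swapped out and back in */
                drm_gem_put_pages(obj, pages, true, false);

                return 0;
        }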
>
> /**
> * drm_gem_free_mmap_offset - release a fake mmap offset for an object