[Intel-gfx] [CI] drm/i915: Implement dma_buf_ops->kmap
Chris Wilson
chris at chris-wilson.co.uk
Wed May 3 22:04:12 UTC 2017
On Wed, May 03, 2017 at 09:42:54PM +0100, Chris Wilson wrote:
> On Wed, May 03, 2017 at 09:25:17PM +0100, Chris Wilson wrote:
> > Since kmap allows us to block, we can pin the pages and use our normal
> > page lookup routine, making the implementation simple or, as some might
> > say, quick and dirty.
> >
> > Testcase: igt/drv_selftest/dmabuf
> > Testcase: igt/prime_rw
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > Reviewed-by: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> > ---
> > drivers/gpu/drm/i915/i915_gem_dmabuf.c | 24 ++++++
> > drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c | 104 +++++++++++++++++++++++
> > 2 files changed, 128 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> > index f225bf680b6d..6176e589cf09 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
> > @@ -122,12 +122,36 @@ static void i915_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf, unsigned long
> > }
> > static void *i915_gem_dmabuf_kmap(struct dma_buf *dma_buf, unsigned long page_num)
> > {
> > + struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
> > + struct page *page;
> > +
> > + if (page_num >= obj->base.size >> PAGE_SHIFT)
> > + return NULL;
> > +
> > + if (!i915_gem_object_has_struct_page(obj))
> > + return NULL;
> > +
> > + if (i915_gem_object_pin_pages(obj))
> > + return NULL;
> > +
> > + /* Synchronisation is left to the caller (via .begin_cpu_access()) */
> > + page = i915_gem_object_get_page(obj, page_num);
>
> Looking at this again, we can call unpin_pages here and rely on the
> page->mapcount to keep the struct page alive until it is unmapped.
>
> As we exclude all objects not backed by struct_page, that should give us
> the reassurance that we won't reuse the backing storage ourselves.
> Though I am thinking about what assurances dmabuf should give that it
> does not disappear whilst it is mapped.
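For concreteness, that suggestion would turn the tail of the quoted
function into something like this (illustrative only, not a tested
patch; 'vaddr' would be a new void * local):

	page = i915_gem_object_get_page(obj, page_num);
	vaddr = kmap(page);
	/* Drop our pin straight away and trust the kmap'ed page to
	 * stay put until the matching kunmap -- this is the leap of
	 * faith questioned below.
	 */
	i915_gem_object_unpin_pages(obj);
	return vaddr;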
Hmm, it is not page_mapcount for kmap(). kmap() relies on the caller not
discarding the page whilst it is mapped; do we trust the same to apply
to dmabuf->kmap()? If we keep the object's pages pinned (but not the
object itself) and the dmabuf is freed before dmabuf->kunmap(), we will
get a cryptic warning when freeing the object, proceed to discard the
struct page anyway, and the kmapped vaddr will continue to be scribbled
over. Anyway, it looks like v1 is safer, just not foolproof.
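For completeness, v1 pairs the pin taken in .kmap() with a release in
.kunmap(); the counterpart should look something along these lines (a
sketch of the obvious pairing, untested here):

static void i915_gem_dmabuf_kunmap(struct dma_buf *dma_buf,
				   unsigned long page_num, void *addr)
{
	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);

	/* Release the mapping and the pin taken in .kmap() */
	kunmap(virt_to_page(addr));
	i915_gem_object_unpin_pages(obj);
}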
-Chris
--
Chris Wilson, Intel Open Source Technology Centre