[Intel-gfx] [PATCH 5/6] drm/i915: Support for pread/pwrite from/to non shmem backed objects

Ankitprasad Sharma ankitprasad.r.sharma at intel.com
Thu Dec 10 21:22:05 PST 2015


On Thu, 2015-12-10 at 18:18 +0000, Dave Gordon wrote:
> On 10/12/15 11:12, Ankitprasad Sharma wrote:
> > On Wed, 2015-12-09 at 19:39 +0000, Dave Gordon wrote:
> >> On 09/12/15 16:15, Tvrtko Ursulin wrote:
> >>>
> >>> Hi,
> >>>
> >>> On 09/12/15 12:46, ankitprasad.r.sharma at intel.com wrote:
> >>>> From: Ankitprasad Sharma <ankitprasad.r.sharma at intel.com>
> >>>>
> >>>> This patch adds support for extending the pread/pwrite functionality
> >>>> for objects not backed by shmem. The access will be made through
> >>>> gtt interface. This will cover objects backed by stolen memory as well
> >>>> as other non-shmem backed objects.
> >>>>
> >>>> v2: Drop locks around slow_user_access, prefault the pages before
> >>>> access (Chris)
> >>>>
> >>>> v3: Rebased to the latest drm-intel-nightly (Ankit)
> >>>>
> >>>> v4: Moved page base & offset calculations outside the copy loop,
> >>>> corrected data types for size and offset variables, corrected if-else
> >>>> braces format (Tvrtko/kerneldocs)
> >>>>
> >>>> v5: Enabled pread/pwrite for all non-shmem backed objects including
> >>>> without tiling restrictions (Ankit)
> >>>>
> >>>> v6: Using pwrite_fast for non-shmem backed objects as well (Chris)
> >>>>
> >>>> v7: Updated commit message, Renamed i915_gem_gtt_read to i915_gem_gtt_copy,
> >>>> added pwrite slow path for non-shmem backed objects (Chris/Tvrtko)
> >>>>
> >>>> v8: Updated v7 commit message, mutex unlock around pwrite slow path for
> >>>> non-shmem backed objects (Tvrtko)
> >>>>
> >>>> Testcase: igt/gem_stolen
> >>>>
> >>>> Signed-off-by: Ankitprasad Sharma <ankitprasad.r.sharma at intel.com>
> >>>> ---
> >>>>    drivers/gpu/drm/i915/i915_gem.c | 151 +++++++++++++++++++++++++++++++++-------
> >>>>    1 file changed, 127 insertions(+), 24 deletions(-)
> >>>>
> >>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> >>>> index ed97de6..68ed67a 100644
> >>>> --- a/drivers/gpu/drm/i915/i915_gem.c
> >>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
> >>>> @@ -614,6 +614,99 @@ shmem_pread_slow(struct page *page, int shmem_page_offset, int page_length,
> >>>>        return ret ? - EFAULT : 0;
> >>>>    }
> >>>>
> >>>> +static inline uint64_t
> >>>> +slow_user_access(struct io_mapping *mapping,
> >>>> +         uint64_t page_base, int page_offset,
> >>>> +         char __user *user_data,
> >>>> +         int length, bool pwrite)
> >>>> +{
> >>>> +    void __iomem *vaddr_inatomic;
> >>>> +    void *vaddr;
> >>>> +    uint64_t unwritten;
> >>>> +
> >>>> +    vaddr_inatomic = io_mapping_map_wc(mapping, page_base);
> >>>> +    /* We can use the cpu mem copy function because this is X86. */
> >>>> +    vaddr = (void __force *)vaddr_inatomic + page_offset;
> >>>> +    if (pwrite)
> >>>> +        unwritten = __copy_from_user(vaddr, user_data, length);
> >>>> +    else
> >>>> +        unwritten = __copy_to_user(user_data, vaddr, length);
> >>>> +
> >>>> +    io_mapping_unmap(vaddr_inatomic);
> >>>> +    return unwritten;
> >>>> +}
> >>>> +
> >>>> +static int
> >>>> +i915_gem_gtt_copy(struct drm_device *dev,
> >>>> +           struct drm_i915_gem_object *obj, uint64_t size,
> >>>> +           uint64_t data_offset, uint64_t data_ptr)
> >>>> +{
> >>>> +    struct drm_i915_private *dev_priv = dev->dev_private;
> >>>> +    char __user *user_data;
> >>>> +    uint64_t remain;
> >>>> +    uint64_t offset, page_base;
> >>>> +    int page_offset, page_length, ret = 0;
> >>>> +
> >>>> +    ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE);
> >>>> +    if (ret)
> >>>> +        goto out;
> >>>> +
> >>>> +    ret = i915_gem_object_set_to_gtt_domain(obj, false);
> >>>> +    if (ret)
> >>>> +        goto out_unpin;
> >>>> +
> >>>> +    ret = i915_gem_object_put_fence(obj);
> >>>> +    if (ret)
> >>>> +        goto out_unpin;
> >>>> +
> >>>> +    user_data = to_user_ptr(data_ptr);
> >>>> +    remain = size;
> >>>> +    offset = i915_gem_obj_ggtt_offset(obj) + data_offset;
> >>>> +
> >>>> +    mutex_unlock(&dev->struct_mutex);
> >>>> +    if (likely(!i915.prefault_disable))
> >>>> +        ret = fault_in_multipages_writeable(user_data, remain);
> >>>> +
> >>>> +    /*
> >>>> +     * page_offset = offset within page
> >>>> +     * page_base = page offset within aperture
> >>>> +     */
> >>>> +    page_offset = offset_in_page(offset);
> >>>> +    page_base = offset & PAGE_MASK;
> >>>> +
> >>>> +    while (remain > 0) {
> >>>> +        /* page_length = bytes to copy for this page */
> >>>> +        page_length = remain;
> >>>> +        if ((page_offset + remain) > PAGE_SIZE)
> >>>> +            page_length = PAGE_SIZE - page_offset;
> >>>> +
> >>>> +        /* This is a slow read/write as it tries to read from
> >>>> +         * and write to user memory, which may result in page
> >>>> +         * faults
> >>>> +         */
> >>>> +        ret = slow_user_access(dev_priv->gtt.mappable, page_base,
> >>>> +                       page_offset, user_data,
> >>>> +                       page_length, false);
> >>>> +
> >>>> +        if (ret) {
> >>>> +            ret = -EFAULT;
> >>>> +            break;
> >>>> +        }
> >>>> +
> >>>> +        remain -= page_length;
> >>>> +        user_data += page_length;
> >>>> +        page_base += page_length;
> >>>> +        page_offset = 0;
> >>>> +    }
> >>>> +
> >>>> +    mutex_lock(&dev->struct_mutex);
> >>>> +
> >>>> +out_unpin:
> >>>> +    i915_gem_object_ggtt_unpin(obj);
> >>>> +out:
> >>>> +    return ret;
> >>>> +}
> >>>> +
> >>>>    static int
> >>>>    i915_gem_shmem_pread(struct drm_device *dev,
> >>>>                 struct drm_i915_gem_object *obj,
> >>>> @@ -737,17 +830,14 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
> >>>>            goto out;
> >>>>        }
> >>>>
> >>>> -    /* prime objects have no backing filp to GEM pread/pwrite
> >>>> -     * pages from.
> >>>> -     */
> >>>> -    if (!obj->base.filp) {
> >>>> -        ret = -EINVAL;
> >>>> -        goto out;
> >>>> -    }
> >>>> -
> >>>>        trace_i915_gem_object_pread(obj, args->offset, args->size);
> >>>>
> >>>> -    ret = i915_gem_shmem_pread(dev, obj, args, file);
> >>>> +    /* pread for non shmem backed objects */
> >>>> +    if (!obj->base.filp && obj->tiling_mode == I915_TILING_NONE)
> >>>> +        ret = i915_gem_gtt_copy(dev, obj, args->size,
> >>>> +                    args->offset, args->data_ptr);
> >>>> +    else
> >>>> +        ret = i915_gem_shmem_pread(dev, obj, args, file);
> >>>
> >>> Hm, it will end up calling i915_gem_shmem_pread for non-shmem backed
> >>> objects if tiling is set. Sounds wrong to me unless I am missing something?
> >>
> >> Which GEM objects have obj->base.filp set? Is it ONLY regular gtt-type
> >> objects? What about (phys, stolen, userptr, dmabuf, ...?) Which of these
> >> is the alternate path going to work with?
> > Only shmem backed objects have obj->base.filp set, filp pointing to the
> > shmem file. For all other non-shmem backed objects (stolen, userptr,
> > dmabuf) we use the alternate path.
> >
> > -Ankit
> 
> But 'phys' objects DO have 'filp' set. Which path is expected to work 
> for them?
Sorry, yes, phys objects also have filp set, so they won't take the
alternate path; they will go through the existing shmem pread path.
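
To spell out the resulting behaviour, here is a minimal sketch of the v8
pread dispatch (the helper name pread_dispatch is hypothetical; in the
patch the check lives inline in i915_gem_pread_ioctl()):

static int pread_dispatch(struct drm_device *dev,
			  struct drm_i915_gem_object *obj,
			  struct drm_i915_gem_pread *args,
			  struct drm_file *file)
{
	/* Non-shmem backed (no filp: stolen, userptr, dmabuf) and
	 * untiled: copy through the GTT mapping.
	 */
	if (!obj->base.filp && obj->tiling_mode == I915_TILING_NONE)
		return i915_gem_gtt_copy(dev, obj, args->size,
					 args->offset, args->data_ptr);

	/* Everything else, including phys objects (which do have filp
	 * set) and tiled non-shmem objects, falls back to the existing
	 * shmem pread path.
	 */
	return i915_gem_shmem_pread(dev, obj, args, file);
}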

> .Dave.
Thanks,
Ankit