[Intel-gfx] [PATCH 35/39] drm/i915: Pin pages before waiting

Matthew Auld matthew.william.auld at gmail.com
Fri Jun 14 19:53:26 UTC 2019


On Fri, 14 Jun 2019 at 08:11, Chris Wilson <chris at chris-wilson.co.uk> wrote:
>
> In order to allow for asynchronous gathering of pages tracked by the
> obj->resv, we take advantage of pinning the pages before doing waiting
> on the reservation, and where possible do an early wait before acquiring
> the object lock (with a follow-up locked wait to ensure we have
> exclusive access where necessary).
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c |   4 +-
>  drivers/gpu/drm/i915/gem/i915_gem_domain.c | 104 +++++++++++----------
>  drivers/gpu/drm/i915/i915_gem.c            |  22 +++--
>  3 files changed, 68 insertions(+), 62 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index a93e233cfaa9..84992d590da5 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -154,7 +154,7 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire
>         bool write = (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE);
>         int err;
>
> -       err = i915_gem_object_pin_pages(obj);
> +       err = i915_gem_object_pin_pages_async(obj);
>         if (err)
>                 return err;
>
> @@ -175,7 +175,7 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct
>         struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
>         int err;
>
> -       err = i915_gem_object_pin_pages(obj);
> +       err = i915_gem_object_pin_pages_async(obj);
>         if (err)
>                 return err;
>
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
> index f9044bbdd429..bda990113124 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
> @@ -49,17 +49,11 @@ i915_gem_object_set_to_wc_domain(struct drm_i915_gem_object *obj, bool write)
>
>         assert_object_held(obj);
>
> -       ret = i915_gem_object_wait(obj,
> -                                  I915_WAIT_INTERRUPTIBLE |
> -                                  (write ? I915_WAIT_ALL : 0),
> -                                  MAX_SCHEDULE_TIMEOUT);
> -       if (ret)
> -               return ret;
> -
>         if (obj->write_domain == I915_GEM_DOMAIN_WC)
>                 return 0;
>
> -       /* Flush and acquire obj->pages so that we are coherent through
> +       /*
> +        * Flush and acquire obj->pages so that we are coherent through
>          * direct access in memory with previous cached writes through
>          * shmemfs and that our cache domain tracking remains valid.
>          * For example, if the obj->filp was moved to swap without us
> @@ -67,10 +61,17 @@ i915_gem_object_set_to_wc_domain(struct drm_i915_gem_object *obj, bool write)
>          * continue to assume that the obj remained out of the CPU cached
>          * domain.
>          */
> -       ret = i915_gem_object_pin_pages(obj);
> +       ret = i915_gem_object_pin_pages_async(obj);
>         if (ret)
>                 return ret;
>
> +       ret = i915_gem_object_wait(obj,
> +                                  I915_WAIT_INTERRUPTIBLE |
> +                                  (write ? I915_WAIT_ALL : 0),
> +                                  MAX_SCHEDULE_TIMEOUT);
> +       if (ret)
> +               goto out_unpin;
> +

Do we somehow propagate a potential error from a worker to the
object_wait()? Or should we be looking at obj->mm.pages here?
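To make the concern concrete, here is a minimal standalone sketch (not the real i915 API; the struct, field names, and helpers are hypothetical) of the flow the patch sets up: pin first, then wait, with the wait being the point where a worker's failure would have to surface. If the async gather fails and the wait does not report it, the caller proceeds on unpopulated pages:

```c
#include <assert.h>

/*
 * Hypothetical model of the pin -> wait -> (error? unpin) ordering
 * from the patch. In the sketch the async worker stashes its status
 * in the object, and the subsequent wait propagates it; whether the
 * real object_wait() does so is exactly the question above.
 */
struct fake_obj {
	int pages_pinned; /* pin count taken before the wait */
	int worker_err;   /* error left behind by the async gather */
};

/* Pin first: queue the (pretend) async gather and take a pin. */
static int pin_pages_async(struct fake_obj *obj, int simulated_worker_err)
{
	obj->pages_pinned++;
	obj->worker_err = simulated_worker_err; /* worker "runs" later */
	return 0;
}

/* Wait after pinning: surface any error the worker recorded. */
static int object_wait(struct fake_obj *obj)
{
	return obj->worker_err; /* 0 on success, negative errno on failure */
}

static void unpin_pages(struct fake_obj *obj)
{
	obj->pages_pinned--;
}

/* Mirrors the reordered set_to_wc_domain flow, goto out_unpin included. */
static int set_domain_flow(struct fake_obj *obj, int simulated_worker_err)
{
	int err = pin_pages_async(obj, simulated_worker_err);
	if (err)
		return err;

	err = object_wait(obj);
	if (err)
		unpin_pages(obj); /* the out_unpin path */
	return err;
}
```

The point of the sketch is only that the wait must be the error-reporting step once pinning moves ahead of it; otherwise the pin succeeds, the wait succeeds, and the worker's failure is lost.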

