[Intel-gfx] [PATCH 3/4] drm/i915: Emit pipelined fence changes

Chris Wilson chris at chris-wilson.co.uk
Thu Jul 6 11:35:00 UTC 2017


Quoting Tvrtko Ursulin (2017-07-05 11:19:43)
> 
> On 03/07/2017 11:14, Chris Wilson wrote:
> > Many years ago, long before requests, we tried doing this. We never
> > quite got it right, but now with requests we have the tracking to do the
> > job properly!
> 
> Add a few words on the benefits in certain use cases/benchmarks.

A few percent in padman on Pineview. One presumes it would have a
greater impact on minimum framerates than on the average, but I didn't check.

> > One of the stall points for gen2/gen3 is the use of fence registers for
> > GPU operations. There are only a few available, and currently if we
> > exhaust the available fence register we must stall the GPU between
> > batches. By pipelining the fence register writes, we can avoid the stall
> > and continuously emit the batches. The challenge is remembering to wait
> > for those pipelined LRI before accessing the fence with the CPU, and
> > that is what our request tracking makes easy.
> > 
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > ---
> >   drivers/gpu/drm/i915/i915_gem.c            |   1 +
> >   drivers/gpu/drm/i915/i915_gem_execbuffer.c |  15 ++-
> >   drivers/gpu/drm/i915/i915_gem_fence_reg.c  | 193 +++++++++++++++++++++++++----
> >   drivers/gpu/drm/i915/i915_gem_fence_reg.h  |   1 +
> >   drivers/gpu/drm/i915/i915_vma.c            |   1 +
> >   drivers/gpu/drm/i915/i915_vma.h            |   4 +
> >   6 files changed, 186 insertions(+), 29 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 939a299260e9..5318e321ce50 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -4880,6 +4880,7 @@ i915_gem_load_init_fences(struct drm_i915_private *dev_priv)
> >               fence->i915 = dev_priv;
> >               fence->id = i;
> >               list_add_tail(&fence->link, &dev_priv->mm.fence_list);
> > +             init_request_active(&fence->pipelined, NULL);
> >       }
> >       i915_gem_restore_fences(dev_priv);
> >   
> > diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > index 22a9f5358322..19347ad0e7ac 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
> > @@ -365,11 +365,12 @@ eb_pin_vma(struct i915_execbuffer *eb,
> >               return;
> >   
> >       if (unlikely(entry->flags & EXEC_OBJECT_NEEDS_FENCE)) {
> > -             if (unlikely(i915_vma_pin_fence(vma))) {
> > +             if (unlikely(i915_vma_reserve_fence(vma))) {
> >                       i915_vma_unpin(vma);
> >                       return;
> >               }
> >   
> > +             entry->flags &= ~EXEC_OBJECT_ASYNC;
> >               if (vma->fence)
> >                       entry->flags |= __EXEC_OBJECT_HAS_FENCE;
> >       }
> > @@ -564,12 +565,13 @@ static int eb_reserve_vma(const struct i915_execbuffer *eb,
> >       GEM_BUG_ON(eb_vma_misplaced(entry, vma));
> >   
> >       if (unlikely(entry->flags & EXEC_OBJECT_NEEDS_FENCE)) {
> > -             err = i915_vma_pin_fence(vma);
> > +             err = i915_vma_reserve_fence(vma);
> >               if (unlikely(err)) {
> >                       i915_vma_unpin(vma);
> >                       return err;
> >               }
> >   
> > +             entry->flags &= ~EXEC_OBJECT_ASYNC;
> 
> A comment here would be good.

We rely on the serialisation between LRIs, which the user has no knowledge
of, and so we have to ignore any request to skip the required LRI.

> 
> >               if (vma->fence)
> >                       entry->flags |= __EXEC_OBJECT_HAS_FENCE;
> >       }
> > @@ -1848,6 +1850,12 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
> >               if (unlikely(obj->cache_dirty && !obj->cache_coherent))
> >                       i915_gem_clflush_object(obj, 0);
> >   
> > +             if (unlikely(entry->flags & EXEC_OBJECT_NEEDS_FENCE)) {
> > +                     err = i915_vma_emit_pipelined_fence(vma, eb->request);
> > +                     if (err)
> > +                             return err;
> > +             }
> > +
> >               err = i915_gem_request_await_object
> >                       (eb->request, obj, entry->flags & EXEC_OBJECT_WRITE);
> >               if (err)
> > @@ -1932,9 +1940,6 @@ void i915_vma_move_to_active(struct i915_vma *vma,
> >               obj->base.read_domains = 0;
> >       }
> >       obj->base.read_domains |= I915_GEM_GPU_DOMAINS;
> > -
> > -     if (flags & EXEC_OBJECT_NEEDS_FENCE)
> > -             i915_gem_active_set(&vma->last_fence, req);
> >   }
> >   
> >   static int i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
> > diff --git a/drivers/gpu/drm/i915/i915_gem_fence_reg.c b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
> > index 55ac7bc14fce..d0cd051c19fd 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_fence_reg.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
> > @@ -55,10 +55,9 @@
> >    * CPU ptes into GTT mmaps (not the GTT ptes themselves) as needed.
> >    */
> >   
> > -#define pipelined 0
> > -
> > -static void i965_write_fence_reg(struct drm_i915_fence_reg *fence,
> > -                              struct i915_vma *vma)
> > +static int i965_write_fence_reg(struct drm_i915_fence_reg *fence,
> > +                             struct i915_vma *vma,
> > +                             struct drm_i915_gem_request *pipelined)
> >   {
> >       i915_reg_t fence_reg_lo, fence_reg_hi;
> >       int fence_pitch_shift;
> > @@ -110,11 +109,30 @@ static void i965_write_fence_reg(struct drm_i915_fence_reg *fence,
> >               I915_WRITE(fence_reg_hi, upper_32_bits(val));
> >               I915_WRITE(fence_reg_lo, lower_32_bits(val));
> >               POSTING_READ(fence_reg_lo);
> > +     } else {
> > +             u32 *cs;
> > +
> > +             cs = intel_ring_begin(pipelined, 8);
> 
> Do we need to adjust the ring space reservation amount?

No. We don't reset the registers at the end of a request, but either
emit a new request or serialise with mmio from the CPU.

> > @@ -344,11 +397,16 @@ i915_vma_pin_fence(struct i915_vma *vma)
> >       assert_rpm_wakelock_held(vma->vm->i915);
> >   
> >       /* Just update our place in the LRU if our fence is getting reused. */
> > -     if (vma->fence) {
> > -             fence = vma->fence;
> > +     fence = vma->fence;
> > +     if (fence) {
> >               GEM_BUG_ON(fence->vma != vma);
> >               fence->pin_count++;
> >               if (!fence->dirty) {
> > +                     err = i915_gem_active_retire(&fence->pipelined,
> > +                                                  &fence->i915->drm.struct_mutex);
> > +                     if (err)
> > +                             goto err_unpin;
> 
> Comment explaining the need for this block. There is one CPU wait on the 
> fence_update path already. At least I couldn't easily figure it out. And 
> why also in the !dirty case specifically?

!dirty is a shortcut, but here we only need to wait for the LRI (to set
the fence register) itself and not for the subsequent usage of the fence.

It is strictly an earlier point than the fence->vma->last_fence request.
(Hmm, "strictly" is a bit of an exaggeration; it does assume the single
timeline of gen3.)

> > +int i915_vma_reserve_fence(struct i915_vma *vma)
> > +{
> > +     struct drm_i915_fence_reg *fence;
> > +
> > +     lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
> > +     GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
> > +     GEM_BUG_ON(!i915_vma_is_pinned(vma));
> > +
> > +     fence = vma->fence;
> > +     if (!fence) {
> > +             if (!i915_gem_object_is_tiled(vma->obj))
> > +                     return 0;
> > +
> > +             if (!i915_vma_is_map_and_fenceable(vma))
> > +                     return -EINVAL;
> > +
> > +             fence = fence_find(vma->vm->i915);
> > +             if (IS_ERR(fence))
> > +                     return PTR_ERR(fence);
> 
> If all fences are pinned via i915_gem_fault which I thought can happen 
> with the previous patch then here we error out instead of waiting?

i915_gem_fault doesn't hold its pin except for the duration of inserting
the CPU PTEs. It is expected that we do hit an error here (EDEADLK) on
exhaustion, i.e. a single batch using more fences than are available.
Currently, atomic kms has a huge regression in that it doesn't
report the fences that it is only temporarily holding.

> > +
> > +     if (!fence->dirty)
> > +             goto out;
> 
> Hm so unless "dirty" no actual fence management will happen. What is the 
> meaning of dirty? Has it changed? Does it needs a write up in the header 
> file?

It means the register no longer matches expectation and we need to
update it before use.
-Chris

