[PATCH v4 06/13] drm/xe/exec_queue: Prepare last fence for hw engine group resume context
Matthew Brost
matthew.brost at intel.com
Tue Aug 6 14:33:13 UTC 2024
On Tue, Aug 06, 2024 at 11:46:41AM +0200, Francois Dugast wrote:
> On Mon, Aug 05, 2024 at 08:25:19PM +0000, Matthew Brost wrote:
> > On Thu, Aug 01, 2024 at 02:56:47PM +0200, Francois Dugast wrote:
> > > Ensure we can safely take a ref of the exec queue's last fence from the
> > > context of resuming jobs from the hw engine group. The locking requirements
> > > differ from the general case, hence the introduction of
> > > xe_exec_queue_last_fence_{get,put}_for_resume().
> > >
> > > v2: Add kernel doc, rework the code to prevent code duplication
> > >
> > > Signed-off-by: Francois Dugast <francois.dugast at intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_exec_queue.c | 123 ++++++++++++++++++++++++++---
> > > drivers/gpu/drm/xe/xe_exec_queue.h | 3 +
> > > 2 files changed, 113 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > index 85da2a30a4a4..a860ba752786 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > @@ -815,8 +815,37 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
> > > return 0;
> > > }
> > >
> > > -static void xe_exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
> > > - struct xe_vm *vm)
> > > +/**
> > > + * exec_queue_last_fence_lockdep_assert() - Assert locking to access last fence
> > > + * @q: The exec queue
> > > + * @vm: The VM
> > > + *
> > > + * These are the locking requirements to access the last fence of this exec
> > > + * queue. This is the general and strictest version, to be used by default.
> > > + */
> > > +static void exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
> > > + struct xe_vm *vm)
> > > +{
> > > + if (q->flags & EXEC_QUEUE_FLAG_VM) {
> > > + lockdep_assert_held(&vm->lock);
> > > + } else {
> > > + xe_vm_assert_held(vm);
> > > + lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem);
> >
> > I think this (lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem))
> > is the only change needed here, then just use existing functions as is.
> > Yes, we won't have lockdep_assert_held_write in this layer for
> > resume but we'd have it in the resume worker so I think we are fine?
> > Does that sound reasonable?
>
> I was hoping to come up with complete explicit locking requirements here but
> in the end the multiple variants are difficult to read.
>
> Yes your suggestion makes sense for lockdep_assert_held_write, however beyond
> this assert another reason for adding separate *_for_resume variants was to
> skip xe_vm_assert_held(vm) above in the context of calling from the resume
> worker. Or did you mean to also grab the VM lock for each exec queue in the
> resume worker?
>
Yes, you are right, I missed that the lock checked by xe_vm_assert_held(vm)
will not be held in the resume path.
Hmm, how about keeping the updated exec_queue_last_fence_lockdep_assert with
'lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem);'.
Then still add '_for_resume' functions which don't call
'exec_queue_last_fence_lockdep_assert' but rather assert
'lockdep_assert_held_write(&q->hwe->hw_engine_group->mode_sem);'?
Can we make this work without bool arguments? That would be cleaner.
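To make that concrete, a rough sketch of the shape I have in mind is below
(untested, and __exec_queue_last_fence_get is a made-up name just to show the
split); each exported entry point does its own lockdep assert and they share
an unasserted body:

/* Strict locking requirements for the general case. */
static void exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
						 struct xe_vm *vm)
{
	if (q->flags & EXEC_QUEUE_FLAG_VM) {
		lockdep_assert_held(&vm->lock);
	} else {
		xe_vm_assert_held(vm);
		lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem);
	}
}

/*
 * Shared body, no locking asserts here; the callers below are responsible
 * for asserting the locks that match their context. Name is illustrative.
 */
static struct dma_fence *__exec_queue_last_fence_get(struct xe_exec_queue *q)
{
	struct dma_fence *fence;

	/* Drop an already signaled last fence before taking a new ref. */
	if (q->last_fence &&
	    test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &q->last_fence->flags))
		xe_exec_queue_last_fence_put_unlocked(q);

	fence = q->last_fence ? q->last_fence : dma_fence_get_stub();
	dma_fence_get(fence);
	return fence;
}

struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
					       struct xe_vm *vm)
{
	exec_queue_last_fence_lockdep_assert(q, vm);
	return __exec_queue_last_fence_get(q);
}

struct dma_fence *
xe_exec_queue_last_fence_get_for_resume(struct xe_exec_queue *q,
					struct xe_vm *vm)
{
	lockdep_assert_held_write(&q->hwe->hw_engine_group->mode_sem);
	return __exec_queue_last_fence_get(q);
}

The put and test_dep paths could be split the same way, so no bool parameters
are needed.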
> >
> > Note you will need this patch though [1] to stop kernel binds from
> > blowing up lockdep.
>
> Thanks for this patch.
>
I merged this one btw.
Matt
> Francois
>
> >
> > Matt
> >
> > [1] https://patchwork.freedesktop.org/series/136898/
> >
> > > + }
> > > +}
> > > +
> > > +/**
> > > + * exec_queue_last_fence_lockdep_assert_for_test_dep() - Assert locking only to
> > > + * test the last fence dep
> > > + * @q: The exec queue
> > > + * @vm: The VM
> > > + *
> > > + * This version of exec_queue_last_fence_lockdep_assert() does not require
> > > + * locking of the hw engine group semaphore. It is exclusively meant to be used
> > > + * from the context of xe_exec_queue_last_fence_test_dep().
> > > + */
> > > +static void exec_queue_last_fence_lockdep_assert_for_test_dep(struct xe_exec_queue *q,
> > > + struct xe_vm *vm)
> > > {
> > > if (q->flags & EXEC_QUEUE_FLAG_VM)
> > > lockdep_assert_held(&vm->lock);
> > > @@ -831,7 +860,22 @@ static void xe_exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
> > > */
> > > void xe_exec_queue_last_fence_put(struct xe_exec_queue *q, struct xe_vm *vm)
> > > {
> > > - xe_exec_queue_last_fence_lockdep_assert(q, vm);
> > > + exec_queue_last_fence_lockdep_assert(q, vm);
> > > +
> > > + xe_exec_queue_last_fence_put_unlocked(q);
> > > +}
> > > +
> > > +/**
> > > + * xe_exec_queue_last_fence_put_for_resume() - Drop ref to last fence
> > > + * @q: The exec queue
> > > + * @vm: The VM the engine does a bind or exec for
> > > + *
> > > + * Only safe to be called in the context of resuming the hw engine group's
> > > + * long-running exec queue, when the group semaphore is held.
> > > + */
> > > +void xe_exec_queue_last_fence_put_for_resume(struct xe_exec_queue *q, struct xe_vm *vm)
> > > +{
> > > + lockdep_assert_held_write(&q->hwe->hw_engine_group->mode_sem);
> > >
> > > xe_exec_queue_last_fence_put_unlocked(q);
> > > }
> > > @@ -851,30 +895,82 @@ void xe_exec_queue_last_fence_put_unlocked(struct xe_exec_queue *q)
> > > }
> > >
> > > /**
> > > - * xe_exec_queue_last_fence_get() - Get last fence
> > > + * exec_queue_last_fence_get() - Get last fence
> > > * @q: The exec queue
> > > * @vm: The VM the engine does a bind or exec for
> > > + * @only_for_test_dep: True if context is testing last fence dependency
> > > + * @only_for_resume: True if context is resuming the hw engine group's queues
> > > *
> > > - * Get last fence, takes a ref
> > > + * Get last fence, takes a ref, after checking locking depending on the
> > > + * context.
> > > *
> > > - * Returns: last fence if not signaled, dma fence stub if signaled
> > > + * This function is parameterized to cover various contexts and consequently
> > > + * various locking requirements from which the caller gets the last fence,
> > > + * while also avoiding code duplication.
> > > + *
> > > + * It is intentionally made static to hide those parameters externally and
> > > + * limit possible incorrect uses from the wrong context.
> > > */
> > > -struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
> > > - struct xe_vm *vm)
> > > +static struct dma_fence *exec_queue_last_fence_get(struct xe_exec_queue *q,
> > > + struct xe_vm *vm,
> > > + bool only_for_test_dep,
> > > + bool only_for_resume)
> > > {
> > > struct dma_fence *fence;
> > >
> > > - xe_exec_queue_last_fence_lockdep_assert(q, vm);
> > > + xe_assert(vm->xe, !(only_for_test_dep && only_for_resume));
> > > + if (only_for_resume)
> > > + lockdep_assert_held_write(&q->hwe->hw_engine_group->mode_sem);
> > > + else if (only_for_test_dep)
> > > + exec_queue_last_fence_lockdep_assert_for_test_dep(q, vm);
> > > + else
> > > + exec_queue_last_fence_lockdep_assert(q, vm);
> > >
> > > if (q->last_fence &&
> > > - test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &q->last_fence->flags))
> > > - xe_exec_queue_last_fence_put(q, vm);
> > > + test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &q->last_fence->flags)) {
> > > + if (only_for_resume)
> > > + xe_exec_queue_last_fence_put_for_resume(q, vm);
> > > + else
> > > + xe_exec_queue_last_fence_put(q, vm);
> > > + }
> > >
> > > fence = q->last_fence ? q->last_fence : dma_fence_get_stub();
> > > dma_fence_get(fence);
> > > return fence;
> > > }
> > >
> > > +/**
> > > + * xe_exec_queue_last_fence_get() - Get last fence
> > > + * @q: The exec queue
> > > + * @vm: The VM the engine does a bind or exec for
> > > + *
> > > + * Get last fence, takes a ref
> > > + *
> > > + * Returns: last fence if not signaled, dma fence stub if signaled
> > > + */
> > > +struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
> > > + struct xe_vm *vm)
> > > +{
> > > + return exec_queue_last_fence_get(q, vm, false, false);
> > > +}
> > > +
> > > +/**
> > > + * xe_exec_queue_last_fence_get_for_resume() - Get last fence
> > > + * @q: The exec queue
> > > + * @vm: The VM the engine does a bind or exec for
> > > + *
> > > + * Get last fence, takes a ref. Only safe to be called in the context of
> > > + * resuming the hw engine group's long-running exec queue, when the group
> > > + * semaphore is held.
> > > + *
> > > + * Returns: last fence if not signaled, dma fence stub if signaled
> > > + */
> > > +struct dma_fence *xe_exec_queue_last_fence_get_for_resume(struct xe_exec_queue *q,
> > > + struct xe_vm *vm)
> > > +{
> > > + return exec_queue_last_fence_get(q, vm, false, true);
> > > +}
> > > +
> > > /**
> > > * xe_exec_queue_last_fence_set() - Set last fence
> > > * @q: The exec queue
> > > @@ -887,7 +983,7 @@ struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
> > > void xe_exec_queue_last_fence_set(struct xe_exec_queue *q, struct xe_vm *vm,
> > > struct dma_fence *fence)
> > > {
> > > - xe_exec_queue_last_fence_lockdep_assert(q, vm);
> > > + exec_queue_last_fence_lockdep_assert(q, vm);
> > >
> > > xe_exec_queue_last_fence_put(q, vm);
> > > q->last_fence = dma_fence_get(fence);
> > > @@ -906,7 +1002,8 @@ int xe_exec_queue_last_fence_test_dep(struct xe_exec_queue *q, struct xe_vm *vm)
> > > struct dma_fence *fence;
> > > int err = 0;
> > >
> > > - fence = xe_exec_queue_last_fence_get(q, vm);
> > > + fence = exec_queue_last_fence_get(q, vm, true, false);
> > > +
> > > if (fence) {
> > > err = test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) ?
> > > 0 : -ETIME;
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> > > index ded77b0f3b90..fc9884a6ce85 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> > > @@ -71,8 +71,11 @@ enum xe_exec_queue_priority xe_exec_queue_device_get_max_priority(struct xe_devi
> > >
> > > void xe_exec_queue_last_fence_put(struct xe_exec_queue *e, struct xe_vm *vm);
> > > void xe_exec_queue_last_fence_put_unlocked(struct xe_exec_queue *e);
> > > +void xe_exec_queue_last_fence_put_for_resume(struct xe_exec_queue *e, struct xe_vm *vm);
> > > struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *e,
> > > struct xe_vm *vm);
> > > +struct dma_fence *xe_exec_queue_last_fence_get_for_resume(struct xe_exec_queue *e,
> > > + struct xe_vm *vm);
> > > void xe_exec_queue_last_fence_set(struct xe_exec_queue *e, struct xe_vm *vm,
> > > struct dma_fence *fence);
> > > int xe_exec_queue_last_fence_test_dep(struct xe_exec_queue *q,
> > > --
> > > 2.43.0
> > >