[PATCH 1/8] drm/xe: Move explicit CT lock in TLB invalidation sequence

Summers, Stuart stuart.summers at intel.com
Thu Aug 7 18:53:42 UTC 2025


On Thu, 2025-08-07 at 10:40 -0700, Matthew Brost wrote:
> On Thu, Aug 07, 2025 at 11:10:38AM -0600, Summers, Stuart wrote:
> > Hi Matt,
> > 
> > Any thoughts here?
> > 
> 
> A lock to protect the seqno assignment and the call to xe_guc_ct_send
> (or, in the end result, a backend op) is needed. The path from seqno
> assignment to issue needs to be atomic, or issues could get reordered,
> breaking how we signal fences. Of course we could fix that part, but I
> think a mutex here is the easiest way to go.

No, no, you're right here, sorry. Let me rework this and I'll repost.

Thanks,
Stuart

> 
> Matt 
> 
> > Thanks,
> > Stuart
> > 
> > On Wed, 2025-08-06 at 22:23 +0000, stuartsummers wrote:
> > > We already have a lock tracking the fences/sequence numbers
> > > here (pending_lock). And the GuC CT code already has an
> > > implicit version of this lock in the ct_send routine.
> > > Prepare the way for future optimizations in TLB invalidation
> > > flow by moving the mutex lock down into the GuC CT send
> > > routine rather than in the upper TLB invalidation layer.
> > > 
> > > Signed-off-by: stuartsummers <stuart.summers at intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 8 ++------
> > >  1 file changed, 2 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > index 02f0bb92d6e0..230f30161395 100644
> > > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > @@ -158,7 +158,6 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
> > >          * appear.
> > >          */
> > >  
> > > -       mutex_lock(&gt->uc.guc.ct.lock);
> > >         spin_lock_irq(&gt->tlb_invalidation.pending_lock);
> > >         cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
> > >         /*
> > > @@ -178,7 +177,6 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
> > >                                  &gt->tlb_invalidation.pending_fences, link)
> > >                 invalidation_fence_signal(gt_to_xe(gt), fence);
> > >         spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
> > > -       mutex_unlock(&gt->uc.guc.ct.lock);
> > >  }
> > >  
> > >  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
> > > @@ -211,13 +209,12 @@ static int send_tlb_invalidation(struct xe_guc *guc,
> > >          * need to be updated.
> > >          */
> > >  
> > > -       mutex_lock(&guc->ct.lock);
> > >         seqno = gt->tlb_invalidation.seqno;
> > >         fence->seqno = seqno;
> > >         trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
> > >         action[1] = seqno;
> > > -       ret = xe_guc_ct_send_locked(&guc->ct, action, len,
> > > -                                   G2H_LEN_DW_TLB_INVALIDATE, 1);
> > > +       ret = xe_guc_ct_send(&guc->ct, action, len,
> > > +                            G2H_LEN_DW_TLB_INVALIDATE, 1);
> > >         if (!ret) {
> > >                 spin_lock_irq(&gt->tlb_invalidation.pending_lock);
> > >                 /*
> > > @@ -248,7 +245,6 @@ static int send_tlb_invalidation(struct xe_guc *guc,
> > >                 if (!gt->tlb_invalidation.seqno)
> > >                         gt->tlb_invalidation.seqno = 1;
> > >         }
> > > -       mutex_unlock(&guc->ct.lock);
> > >         xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
> > >  
> > >         return ret;
> > 


