[PATCH 1/8] drm/xe: Move explicit CT lock in TLB invalidation sequence

Summers, Stuart stuart.summers at intel.com
Thu Aug 7 17:10:38 UTC 2025


Hi Matt,

Any thoughts here?

Thanks,
Stuart

On Wed, 2025-08-06 at 22:23 +0000, stuartsummers wrote:
> We already have a lock tracking the fences/sequence numbers
> here (pending_lock). And the GuC CT code already has an
> implicit version of this lock in the ct_send routine.
> Prepare the way for future optimizations in TLB invalidation
> flow by moving the mutex lock down into the GuC CT send
> routine rather than in the upper TLB invalidation layer.
> 
> Signed-off-by: stuartsummers <stuart.summers at intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> index 02f0bb92d6e0..230f30161395 100644
> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> @@ -158,7 +158,6 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
>          * appear.
>          */
>  
> -       mutex_lock(&gt->uc.guc.ct.lock);
>         spin_lock_irq(&gt->tlb_invalidation.pending_lock);
>         cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
>         /*
> @@ -178,7 +177,6 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
>                                  &gt->tlb_invalidation.pending_fences, link)
>                 invalidation_fence_signal(gt_to_xe(gt), fence);
>         spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
> -       mutex_unlock(&gt->uc.guc.ct.lock);
>  }
>  
>  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
> @@ -211,13 +209,12 @@ static int send_tlb_invalidation(struct xe_guc *guc,
>          * need to be updated.
>          */
>  
> -       mutex_lock(&guc->ct.lock);
>         seqno = gt->tlb_invalidation.seqno;
>         fence->seqno = seqno;
>         trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
>         action[1] = seqno;
> -       ret = xe_guc_ct_send_locked(&guc->ct, action, len,
> -                                   G2H_LEN_DW_TLB_INVALIDATE, 1);
> +       ret = xe_guc_ct_send(&guc->ct, action, len,
> +                            G2H_LEN_DW_TLB_INVALIDATE, 1);
>         if (!ret) {
>                 spin_lock_irq(&gt->tlb_invalidation.pending_lock);
>                 /*
> @@ -248,7 +245,6 @@ static int send_tlb_invalidation(struct xe_guc *guc,
>                 if (!gt->tlb_invalidation.seqno)
>                         gt->tlb_invalidation.seqno = 1;
>         }
> -       mutex_unlock(&guc->ct.lock);
>         xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
>  
>         return ret;


