[Intel-xe] [PATCH v6 08/11] drm/xe/tlb: also update seqno_recv during reset
Matthew Auld
matthew.auld at intel.com
Mon Jul 10 09:40:46 UTC 2023
We might have various kworkers waiting for TLB flushes to complete which
are not tracked with an explicit TLB fence. However, at this stage such a
completion will never arrive since the CT is already disabled, so make
sure we signal those waiters here under the assumption that we have
completed a full GT reset.
v2:
- We need to use seqno - 1 here. After acquiring ct->lock, the seqno is
  actually the next user's seqno and not the pending one (a standalone
  sketch of this follows the diffstat below).
Signed-off-by: Matthew Auld <matthew.auld at intel.com>
Cc: Matthew Brost <matthew.brost at intel.com>
Cc: José Roberto de Souza <jose.souza at intel.com>
Reviewed-by: Matthew Brost <matthew.brost at intel.com>
---
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 24 +++++++++++++++++++--
1 file changed, 22 insertions(+), 2 deletions(-)
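
For reference on the v2 note: below is a minimal, standalone sketch (not
driver code) of the seqno - 1 bookkeeping. It assumes seqnos are handed
out in the range [1, TLB_INVALIDATION_SEQNO_MAX) with 0 skipped on wrap,
which is why a next seqno of 1 means the last issued one was
TLB_INVALIDATION_SEQNO_MAX - 1; the MAX value used here is made up purely
for illustration.

#include <assert.h>
#include <stdio.h>

/* Illustrative value only; the real define lives in the xe driver. */
#define TLB_INVALIDATION_SEQNO_MAX 0x100000

/*
 * gt->tlb_invalidation.seqno is the *next* seqno to hand out, so the last
 * issued (possibly still pending) one is seqno - 1, with wrap handling.
 */
static int prev_seqno(int next_seqno)
{
	if (next_seqno == 1)
		return TLB_INVALIDATION_SEQNO_MAX - 1;
	return next_seqno - 1;
}

int main(void)
{
	assert(prev_seqno(2) == 1);
	/* Wrap case: a next seqno of 1 means the previous one was MAX - 1. */
	assert(prev_seqno(1) == TLB_INVALIDATION_SEQNO_MAX - 1);
	printf("previous of 1 is %#x\n", prev_seqno(1));
	return 0;
}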
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index de65b0b69679..be2388cb7205 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -86,13 +86,33 @@ invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence)
  *
  * Signal any pending invalidation fences, should be called during a GT reset
  */
- void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
+void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 {
 	struct xe_gt_tlb_invalidation_fence *fence, *next;
+	struct xe_guc *guc = &gt->uc.guc;
+	int pending_seqno;
 
+	/*
+	 * CT channel is already disabled at this point. No new TLB requests can
+	 * appear.
+	 */
+
+	mutex_lock(&gt->uc.guc.ct.lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
+	/*
+	 * We might have various kworkers waiting for TLB flushes to complete
+	 * which are not tracked with an explicit TLB fence, however at this
+	 * stage that will never happen since the CT is already disabled, so
+	 * make sure we signal them here under the assumption that we have
+	 * completed a full GT reset.
+	 */
+	if (gt->tlb_invalidation.seqno == 1)
+		pending_seqno = TLB_INVALIDATION_SEQNO_MAX - 1;
+	else
+		pending_seqno = gt->tlb_invalidation.seqno - 1;
+	WRITE_ONCE(gt->tlb_invalidation.seqno_recv, pending_seqno);
+	wake_up_all(&guc->ct.wq);
 
-	mutex_lock(&gt->uc.guc.ct.lock);
 	list_for_each_entry_safe(fence, next,
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(fence);
--
2.41.0