[PATCH] drm/xe: Fix drm-next merge
Lucas De Marchi
lucas.demarchi@intel.com
Mon Nov 4 18:31:04 UTC 2024
Fix the merge in commit c787c2901e2c ("Merge drm/drm-next into
drm-xe-next"), which duplicated the LNL G2H-timeout workaround in
guc_ct_send_recv(). That workaround needs to be done only once.
Fixes: c787c2901e2c ("Merge drm/drm-next into drm-xe-next")
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
drivers/gpu/drm/xe/xe_guc_ct.c | 19 -------------------
1 file changed, 19 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index a405b9218ad2d..550eeed43903b 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -1017,7 +1017,6 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
}
ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
-
if (!ret) {
LNL_FLUSH_WORK(&ct->g2h_worker);
if (g2h_fence.done) {
@@ -1027,24 +1026,6 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
}
}
- /*
- * Occasionally it is seen that the G2H worker starts running after a delay of more than
- * a second even after being queued and activated by the Linux workqueue subsystem. This
- * leads to G2H timeout error. The root cause of issue lies with scheduling latency of
- * Lunarlake Hybrid CPU. Issue dissappears if we disable Lunarlake atom cores from BIOS
- * and this is beyond xe kmd.
- *
- * TODO: Drop this change once workqueue scheduling delay issue is fixed on LNL Hybrid CPU.
- */
- if (!ret) {
- flush_work(&ct->g2h_worker);
- if (g2h_fence.done) {
- xe_gt_warn(gt, "G2H fence %u, action %04x, done\n",
- g2h_fence.seqno, action[0]);
- ret = 1;
- }
- }
-
/*
* Ensure we serialize with completion side to prevent UAF with fence going out of scope on
* the stack, since we have no clue if it will fire after the timeout before we can erase
--
2.47.0