[PATCH 1/2] drm/xe/guc_pc: Do not stop probe or resume if GuC PC fails
Rodrigo Vivi
rodrigo.vivi at intel.com
Tue Feb 11 20:00:32 UTC 2025
On Mon, Feb 10, 2025 at 05:04:17PM -0500, Cavitt, Jonathan wrote:
> -----Original Message-----
> From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of Rodrigo Vivi
> Sent: Monday, February 10, 2025 1:07 PM
> To: intel-xe at lists.freedesktop.org
> Cc: Vivi, Rodrigo <rodrigo.vivi at intel.com>; Belgaumkar, Vinay <vinay.belgaumkar at intel.com>
> Subject: [PATCH 1/2] drm/xe/guc_pc: Do not stop probe or resume if GuC PC fails
> >
> > In a rare situation where a thermal limit is hit during resume, GuC
> > can be slow and run into delays like this:
> >
> > xe 0000:00:02.0: [drm] GT1: excessive init time: 667ms! \
> > [status = 0x8002F034, timeouts = 0]
> > xe 0000:00:02.0: [drm] GT1: excessive init time: \
> > [freq = 100MHz (req = 800MHz), before = 100MHz, \
> > perf_limit_reasons = 0x1C001000]
> > xe 0000:00:02.0: [drm] *ERROR* GT1: GuC PC Start failed
> > ------------[ cut here ]------------
> > xe 0000:00:02.0: [drm] GT1: Failed to start GuC PC: -EIO
> >
> > If this happens, the whole probe or resume flow gets blocked and the
> > GPU cannot be used at all. However, the GPU is in fact still usable,
> > although the GT frequencies might be messed up.
> >
> > Let's report the error, but not block the flow. And instead of just
> > giving up and moving on, let's re-attempt the wait with a much longer
> > second timeout.
> >
> > Cc: Vinay Belgaumkar <vinay.belgaumkar at intel.com>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi at intel.com>
>
> Minor nit below, but you can safely ignore it:
> Reviewed-by: Jonathan Cavitt <jonathan.cavitt at intel.com>
>
> > ---
> > drivers/gpu/drm/xe/xe_guc_pc.c | 20 ++++++++++++--------
> > 1 file changed, 12 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
> > index 02409eedb914..aa58f9ddbf84 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_pc.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_pc.c
> > @@ -114,9 +114,10 @@ static struct iosys_map *pc_to_maps(struct xe_guc_pc *pc)
> > FIELD_PREP(HOST2GUC_PC_SLPC_REQUEST_MSG_1_EVENT_ARGC, count))
> >
> > static int wait_for_pc_state(struct xe_guc_pc *pc,
> > - enum slpc_global_state state)
> > + enum slpc_global_state state,
> > + int timeout_ms)
> > {
> > - int timeout_us = 5000; /* rought 5ms, but no need for precision */
> > + int timeout_us = 1000 * timeout_ms;
>
> NIT:
> AFAICT from the comment above, this wait lacks some form of precision. It might be worthwhile
> to keep the comment here remarking on the lack of precision (or how the precision is unimportant
> in this instance), but I won't block on it because it's not important.
yeap, I honestly don't know where that comment originally came from...
on i915 the same code uses a define:
#define SLPC_RESET_TIMEOUT_MS 5
which seems like a cleaner approach than all these hardcoded '5's
spread around...
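roughly what I have in mind (just a quick sketch; the extended-timeout
define name below is mine and not final):

#define SLPC_RESET_TIMEOUT_MS 5
#define SLPC_RESET_EXTENDED_TIMEOUT_MS 1000

	if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
			      SLPC_RESET_TIMEOUT_MS)) {
		xe_gt_warn(gt, "GuC PC Start taking longer than expected\n");
		if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
				      SLPC_RESET_EXTENDED_TIMEOUT_MS))
			xe_gt_err(gt, "GuC PC Start failed\n");
		/* Although GuC PC failed, do not block the usage of GPU */
		ret = 0;
		goto out;
	}
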
sending a v2...
> -Jonathan Cavitt
>
> > int slept, wait = 10;
> >
> > xe_device_assert_mem_access(pc_to_xe(pc));
> > @@ -165,7 +166,7 @@ static int pc_action_query_task_state(struct xe_guc_pc *pc)
> > };
> > int ret;
> >
> > - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING))
> > + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 5))
> > return -EAGAIN;
> >
> > /* Blocking here to ensure the results are ready before reading them */
> > @@ -188,7 +189,7 @@ static int pc_action_set_param(struct xe_guc_pc *pc, u8 id, u32 value)
> > };
> > int ret;
> >
> > - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING))
> > + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 5))
> > return -EAGAIN;
> >
> > ret = xe_guc_ct_send(ct, action, ARRAY_SIZE(action), 0, 0);
> > @@ -209,7 +210,7 @@ static int pc_action_unset_param(struct xe_guc_pc *pc, u8 id)
> > struct xe_guc_ct *ct = &pc_to_guc(pc)->ct;
> > int ret;
> >
> > - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING))
> > + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 5))
> > return -EAGAIN;
> >
> > ret = xe_guc_ct_send(ct, action, ARRAY_SIZE(action), 0, 0);
> > @@ -1033,9 +1034,12 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
> > if (ret)
> > goto out;
> >
> > - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) {
> > - xe_gt_err(gt, "GuC PC Start failed\n");
> > - ret = -EIO;
> > + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 5)) {
> > + xe_gt_warn(gt, "GuC PC Start taking longer than expected\n");
> > + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 1000))
> > + xe_gt_err(gt, "GuC PC Start failed\n");
> > + /* Although GuC PC failed, do not block the usage of GPU */
> > + ret = 0;
> > goto out;
> > }
> >
> > --
> > 2.48.1
> >
> >