<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    On 5/16/2022 00:59, Jani Nikula wrote:<br>
    <blockquote type="cite" cite="mid:874k1pj4bi.fsf@intel.com">
      <pre class="moz-quote-pre" wrap="">On Sat, 14 May 2022, Vinay Belgaumkar <a class="moz-txt-link-rfc2396E" href="mailto:vinay.belgaumkar@intel.com"><vinay.belgaumkar@intel.com></a> wrote:
</pre>
      <blockquote type="cite">
        <pre class="moz-quote-pre" wrap="">SLPC min/max frequency updates require H2G calls. We are seeing
timeouts when GuC channel is backed up and it is unable to respond
in a timely fashion causing warnings and affecting CI.

This is seen when waitboosting happens during a stress test.
this patch updates the waitboost path to use a non-blocking
H2G call instead, which returns as soon as the message is
successfully transmitted.

v2: Use drm_notice to report any errors that might occur while
sending the waitboost H2G request (Tvrtko)

Signed-off-by: Vinay Belgaumkar <a class="moz-txt-link-rfc2396E" href="mailto:vinay.belgaumkar@intel.com"><vinay.belgaumkar@intel.com></a>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 44 +++++++++++++++++----
 1 file changed, 36 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
index 1db833da42df..e5e869c96262 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
@@ -98,6 +98,30 @@ static u32 slpc_get_state(struct intel_guc_slpc *slpc)
        return data->header.global_state;
 }
 
+static int guc_action_slpc_set_param_nb(struct intel_guc *guc, u8 id, u32 value)
+{
+       u32 request[] = {
</pre>
      </blockquote>
      <pre class="moz-quote-pre" wrap="">
static const

</pre>
      <blockquote type="cite">
        <pre class="moz-quote-pre" wrap="">+          GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST,
+               SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2),
+               id,
+               value,
+       };
+       int ret;
+
+       ret = intel_guc_send_nb(guc, request, ARRAY_SIZE(request), 0);
+
+       return ret > 0 ? -EPROTO : ret;
+}
+
+static int slpc_set_param_nb(struct intel_guc_slpc *slpc, u8 id, u32 value)
+{
+       struct intel_guc *guc = slpc_to_guc(slpc);
+
+       GEM_BUG_ON(id >= SLPC_MAX_PARAM);
+
+       return guc_action_slpc_set_param_nb(guc, id, value);
+}
+
 static int guc_action_slpc_set_param(struct intel_guc *guc, u8 id, u32 value)
 {
        u32 request[] = {
</pre>
      </blockquote>
      <pre class="moz-quote-pre" wrap="">
Ditto here, and the whole gt/uc directory seems to have tons of these
u32 action/request array variables on stack, with the required
initialization, that could be in rodata.

Please fix all of them.

BR,
Jani.
</pre>
    </blockquote>
    But the only constant is the action code. Everything else is a
    parameter that will be different on each call.<br>
    <br>
    You mean something like this?<br>
    <blockquote>static const u32 template[] = {<br>
          action,<br>
      };<br>
      u32 *request = kmalloc_array(4, sizeof(*request), GFP_KERNEL);<br>
      memcpy(request, template, sizeof(template));<br>
      request[1] = param0;<br>
      request[2] = param1;<br>
      request[3] = param2;<br>
      ret = send(request);<br>
      kfree(request);<br>
      return ret;<br>
    </blockquote>
    <br>
    Not seeing how that would be an improvement. It's a lot more code, a
    lot less readable, and more prone to bugs due to incorrect structure
    sizes and/or params in the wrong place. The current version is easy
    to read and therefore to maintain, almost impossible to get wrong,
    and only puts a few words on the stack. I think the largest request
    is in the region of 15 words? I'm not seeing what the problem is.<br>
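    <br>
    Even if the constant part were pulled out into rodata, the full
    message still has to be assembled on the stack at run time. Very
    roughly (just an illustrative sketch, not a proposal; the
    "set_param_hdr" name is made up):<br>
    <blockquote>/* hypothetical rodata template holding the constant word */<br>
      static const u32 set_param_hdr[] = {<br>
          GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST,<br>
      };<br>
      <br>
      u32 request[4];<br>
      <br>
      memcpy(request, set_param_hdr, sizeof(set_param_hdr));<br>
      request[1] = SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2);<br>
      request[2] = id;<br>
      request[3] = value;<br>
      <br>
      ret = intel_guc_send_nb(guc, request, ARRAY_SIZE(request), 0);<br>
    </blockquote>
    All that buys is replacing one stack store with a memcpy, while the
    whole request still lives on the stack.<br>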
    <br>
    John.<br>
    <br>
    <br>
    <blockquote type="cite" cite="mid:874k1pj4bi.fsf@intel.com">
      <pre class="moz-quote-pre" wrap="">
</pre>
      <blockquote type="cite">
        <pre class="moz-quote-pre" wrap="">@@ -208,12 +232,10 @@ static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
         */
 
        with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
-               ret = slpc_set_param(slpc,
-                                    SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
-                                    freq);
-               if (ret)
-                       i915_probe_error(i915, "Unable to force min freq to %u: %d",
-                                        freq, ret);
+               /* Non-blocking request will avoid stalls */
+               ret = slpc_set_param_nb(slpc,
+                                       SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
+                                       freq);
        }
 
        return ret;
@@ -222,6 +244,8 @@ static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
 static void slpc_boost_work(struct work_struct *work)
 {
        struct intel_guc_slpc *slpc = container_of(work, typeof(*slpc), boost_work);
+       struct drm_i915_private *i915 = slpc_to_i915(slpc);
+       int err;
 
        /*
         * Raise min freq to boost. It's possible that
@@ -231,8 +255,12 @@ static void slpc_boost_work(struct work_struct *work)
         */
        mutex_lock(&slpc->lock);
        if (atomic_read(&slpc->num_waiters)) {
-               slpc_force_min_freq(slpc, slpc->boost_freq);
-               slpc->num_boosts++;
+               err = slpc_force_min_freq(slpc, slpc->boost_freq);
+               if (!err)
+                       slpc->num_boosts++;
+               else
+                       drm_notice(&i915->drm, "Failed to send waitboost request (%d)\n",
+                                  err);
        }
        mutex_unlock(&slpc->lock);
 }
</pre>
      </blockquote>
      <pre class="moz-quote-pre" wrap="">
</pre>
    </blockquote>
    <br>
  </body>
</html>