<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body>
<div dir="auto">Hi</div>
<div dir="auto"><br>
</div>
<div dir="auto"><br>
</div>
<div id="mail-editor-reference-message-container" dir="auto"><br>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" style="font-size: 11pt;"><strong>From:</strong> Sousa, Gustavo <gustavo.sousa@intel.com><br>
<strong>Sent:</strong> Thursday, 25 May 2023, 17:06<br>
<strong>To:</strong> Govindapillai, Vinod <vinod.govindapillai@intel.com>; intel-gfx@lists.freedesktop.org <intel-gfx@lists.freedesktop.org><br>
<strong>Cc:</strong> Syrjala, Ville <ville.syrjala@intel.com>; Lisovskiy, Stanislav <stanislav.lisovskiy@intel.com>; Kahola, Mika <mika.kahola@intel.com>; Saarinen, Jani <jani.saarinen@intel.com><br>
<strong>Subject:</strong> Re: [PATCH v8 7/7] drm/i915/mtl: Add support for PM DEMAND<br>
</div>
<br>
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><font size="2"><span style="font-size:11pt;">
<div class="PlainText" dir="auto">Hi, Vinod.<br>
<br>
Thanks for the new version. I decided to take one final look at the<br>
overall patch and found a few remaining issues. Sorry I didn't catch<br>
them before!<br>
<br>
Please, see my comments inline.<br>
<br>
Quoting Vinod Govindapillai (2023-05-24 20:03:42-03:00)<br>
>From: Mika Kahola <mika.kahola@intel.com><br>
><br>
>MTL introduces a new way to instruct the PUnit with<br>
>power and bandwidth requirements of DE. Add the functionality<br>
>to program the registers and handle waits using interrupts.<br>
>The current wait time for timeouts is programmed for 10 msecs to<br>
>factor in the worst case scenarios. Changes made to use REG_BIT<br>
>for a register that we touched(GEN8_DE_MISC_IER _MMIO).<br>
><br>
>Wa_14016740474 is added which applies to Xe_LPD+ display<br>
><br>
>v2: checkpatch warning fixes, simplify program pmdemand part<br>
><br>
>v3: update to dbufs and pipes values to pmdemand register(stan)<br>
> Removed the macro usage in update_pmdemand_values()<br>
><br>
>v4: move the pmdemand_pre_plane_update before cdclk update<br>
> pmdemand_needs_update included cdclk params comparisons<br>
> pmdemand_state NULL check (Gustavo)<br>
> pmdemand.o in sorted order in the makefile (Jani)<br>
> update pmdemand misc irq handler loop (Gustavo)<br>
> active phys bitmask and programming correction (Gustavo)<br>
><br>
>v5: simplify pmdemand_state structure<br>
> simplify methods to find active phys and max port clock<br>
> Timeout in case of previou pmdemand task pending (Gustavo)<br>
><br>
>v6: rebasing<br>
> updates to max_ddiclk calculations (Gustavo)<br>
> updates to active_phys count method (Gustavo)<br>
><br>
>v7: use two separate loop to iterate throug old and new<br>
> crtc states to calculate the active phys (Gustavo)<br>
><br>
>Bspec: 66451, 64636, 64602, 64603<br>
>Cc: Matt Atwood <matthew.s.atwood@intel.com><br>
>Cc: Matt Roper <matthew.d.roper@intel.com><br>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com><br>
>Cc: Gustavo Sousa <gustavo.sousa@intel.com><br>
>Signed-off-by: José Roberto de Souza <jose.souza@intel.com><br>
>Signed-off-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com><br>
>Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com><br>
>Signed-off-by: Mika Kahola <mika.kahola@intel.com><br>
>Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com><br>
>Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com><br>
>---<br>
> drivers/gpu/drm/i915/Makefile | 1 +<br>
> drivers/gpu/drm/i915/display/intel_display.c | 14 +<br>
> .../gpu/drm/i915/display/intel_display_core.h | 9 +<br>
> .../drm/i915/display/intel_display_driver.c | 7 +<br>
> .../gpu/drm/i915/display/intel_display_irq.c | 23 +-<br>
> .../drm/i915/display/intel_display_power.c | 8 +<br>
> drivers/gpu/drm/i915/display/intel_pmdemand.c | 560 ++++++++++++++++++<br>
> drivers/gpu/drm/i915/display/intel_pmdemand.h | 24 +<br>
> drivers/gpu/drm/i915/i915_reg.h | 36 +-<br>
> 9 files changed, 678 insertions(+), 4 deletions(-)<br>
> create mode 100644 drivers/gpu/drm/i915/display/intel_pmdemand.c<br>
> create mode 100644 drivers/gpu/drm/i915/display/intel_pmdemand.h<br>
><br>
>diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile<br>
>index dd9ca69f4998..358463d02a57 100644<br>
>--- a/drivers/gpu/drm/i915/Makefile<br>
>+++ b/drivers/gpu/drm/i915/Makefile<br>
>@@ -273,6 +273,7 @@ i915-y += \<br>
> display/intel_pch_display.o \<br>
> display/intel_pch_refclk.o \<br>
> display/intel_plane_initial.o \<br>
>+ display/intel_pmdemand.o \<br>
> display/intel_psr.o \<br>
> display/intel_quirks.o \<br>
> display/intel_sprite.o \<br>
>diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c<br>
>index 0490c6412ab5..b3bb2c607650 100644<br>
>--- a/drivers/gpu/drm/i915/display/intel_display.c<br>
>+++ b/drivers/gpu/drm/i915/display/intel_display.c<br>
>@@ -99,6 +99,7 @@<br>
> #include "intel_pcode.h"<br>
> #include "intel_pipe_crc.h"<br>
> #include "intel_plane_initial.h"<br>
>+#include "intel_pmdemand.h"<br>
> #include "intel_pps.h"<br>
> #include "intel_psr.h"<br>
> #include "intel_sdvo.h"<br>
>@@ -6343,6 +6344,10 @@ int intel_atomic_check(struct drm_device *dev,<br>
> return ret;<br>
> }<br>
> <br>
>+ ret = intel_pmdemand_atomic_check(state);<br>
>+ if (ret)<br>
>+ goto fail;<br>
>+<br>
> ret = intel_atomic_check_crtcs(state);<br>
> if (ret)<br>
> goto fail;<br>
>@@ -6988,6 +6993,14 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)<br>
> for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i)<br>
> crtc->config = new_crtc_state;<br>
> <br>
>+ /*<br>
>+ * In XE_LPD+ Pmdemand combines many parameters such as voltage index,<br>
>+ * plls, cdclk frequency, QGV point selection parameter etc. Voltage<br>
>+ * index, cdclk/ddiclk frequencies are supposed to be configured before<br>
>+ * the cdclk config is set.<br>
>+ */<br>
>+ intel_pmdemand_pre_plane_update(state);<br>
>+<br>
> if (state->modeset) {<br>
> drm_atomic_helper_update_legacy_modeset_state(dev, &state->base);<br>
> <br>
>@@ -7107,6 +7120,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)<br>
> intel_verify_planes(state);<br>
> <br>
> intel_sagv_post_plane_update(state);<br>
>+ intel_pmdemand_post_plane_update(state);<br>
> <br>
> drm_atomic_helper_commit_hw_done(&state->base);<br>
> <br>
>diff --git a/drivers/gpu/drm/i915/display/intel_display_core.h b/drivers/gpu/drm/i915/display/intel_display_core.h<br>
>index 9f66d734edf6..ae45b2c42eb1 100644<br>
>--- a/drivers/gpu/drm/i915/display/intel_display_core.h<br>
>+++ b/drivers/gpu/drm/i915/display/intel_display_core.h<br>
>@@ -345,6 +345,15 @@ struct intel_display {<br>
> struct intel_global_obj obj;<br>
> } dbuf;<br>
> <br>
>+ struct {<br>
>+ wait_queue_head_t waitqueue;<br>
>+<br>
>+ /* mutex to protect pmdemand programming sequence */<br>
>+ struct mutex lock;<br>
>+<br>
>+ struct intel_global_obj obj;<br>
>+ } pmdemand;<br>
>+<br>
> struct {<br>
> /*<br>
> * dkl.phy_lock protects against concurrent access of the<br>
>diff --git a/drivers/gpu/drm/i915/display/intel_display_driver.c b/drivers/gpu/drm/i915/display/intel_display_driver.c<br>
>index 60ce10fc7205..dc8de861339d 100644<br>
>--- a/drivers/gpu/drm/i915/display/intel_display_driver.c<br>
>+++ b/drivers/gpu/drm/i915/display/intel_display_driver.c<br>
>@@ -47,6 +47,7 @@<br>
> #include "intel_opregion.h"<br>
> #include "intel_overlay.h"<br>
> #include "intel_plane_initial.h"<br>
>+#include "intel_pmdemand.h"<br>
> #include "intel_pps.h"<br>
> #include "intel_quirks.h"<br>
> #include "intel_vga.h"<br>
>@@ -211,6 +212,8 @@ int intel_display_driver_probe_noirq(struct drm_i915_private *i915)<br>
> if (ret < 0)<br>
> goto cleanup_vga;<br>
> <br>
>+ intel_pmdemand_init_early(i915);<br>
>+<br>
> intel_power_domains_init_hw(i915, false);<br>
> <br>
> if (!HAS_DISPLAY(i915))<br>
>@@ -240,6 +243,10 @@ int intel_display_driver_probe_noirq(struct drm_i915_private *i915)<br>
> if (ret)<br>
> goto cleanup_vga_client_pw_domain_dmc;<br>
> <br>
>+ ret = intel_pmdemand_init(i915);<br>
>+ if (ret)<br>
>+ goto cleanup_vga_client_pw_domain_dmc;<br>
>+<br>
> init_llist_head(&i915->display.atomic_helper.free_list);<br>
> INIT_WORK(&i915->display.atomic_helper.free_work,<br>
> intel_atomic_helper_free_state_worker);<br>
>diff --git a/drivers/gpu/drm/i915/display/intel_display_irq.c b/drivers/gpu/drm/i915/display/intel_display_irq.c<br>
>index 3b2a287d2041..0b3739310f81 100644<br>
>--- a/drivers/gpu/drm/i915/display/intel_display_irq.c<br>
>+++ b/drivers/gpu/drm/i915/display/intel_display_irq.c<br>
>@@ -18,6 +18,7 @@<br>
> #include "intel_fifo_underrun.h"<br>
> #include "intel_gmbus.h"<br>
> #include "intel_hotplug_irq.h"<br>
>+#include "intel_pmdemand.h"<br>
> #include "intel_psr.h"<br>
> #include "intel_psr_regs.h"<br>
> <br>
>@@ -827,12 +828,27 @@ static u32 gen8_de_pipe_fault_mask(struct drm_i915_private *dev_priv)<br>
> return GEN8_DE_PIPE_IRQ_FAULT_ERRORS;<br>
> }<br>
> <br>
>+static void intel_pmdemand_irq_handler(struct drm_i915_private *dev_priv)<br>
>+{<br>
>+ wake_up_all(&dev_priv->display.pmdemand.waitqueue);<br>
>+}<br>
>+<br>
> static void<br>
> gen8_de_misc_irq_handler(struct drm_i915_private *dev_priv, u32 iir)<br>
> {<br>
> bool found = false;<br>
> <br>
>- if (iir & GEN8_DE_MISC_GSE) {<br>
>+ if (DISPLAY_VER(dev_priv) >= 14) {<br>
>+ if (iir & (XELPDP_PMDEMAND_RSP |<br>
>+ XELPDP_PMDEMAND_RSPTOUT_ERR)) {<br>
>+ if (iir & XELPDP_PMDEMAND_RSPTOUT_ERR)<br>
>+ drm_dbg(&dev_priv->drm,<br>
>+ "Error waiting for Punit PM Demand Response\n");<br>
>+<br>
>+ intel_pmdemand_irq_handler(dev_priv);<br>
>+ found = true;<br>
>+ }<br>
>+ } else if (iir & GEN8_DE_MISC_GSE) {<br>
> intel_opregion_asle_intr(dev_priv);<br>
> found = true;<br>
> }<br>
>@@ -1576,7 +1592,10 @@ void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)<br>
> if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv))<br>
> de_port_masked |= BXT_DE_PORT_GMBUS;<br>
> <br>
>- if (DISPLAY_VER(dev_priv) >= 11) {<br>
>+ if (DISPLAY_VER(dev_priv) >= 14) {<br>
>+ de_misc_masked |= XELPDP_PMDEMAND_RSPTOUT_ERR |<br>
>+ XELPDP_PMDEMAND_RSP;<br>
>+ } else if (DISPLAY_VER(dev_priv) >= 11) {<br>
> enum port port;<br>
> <br>
> if (intel_bios_is_dsi_present(dev_priv, &port))<br>
>diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c<br>
>index 6ed2ece89c3f..59de308234a6 100644<br>
>--- a/drivers/gpu/drm/i915/display/intel_display_power.c<br>
>+++ b/drivers/gpu/drm/i915/display/intel_display_power.c<br>
>@@ -20,6 +20,7 @@<br>
> #include "intel_mchbar_regs.h"<br>
> #include "intel_pch_refclk.h"<br>
> #include "intel_pcode.h"<br>
>+#include "intel_pmdemand.h"<br>
> #include "intel_pps_regs.h"<br>
> #include "intel_snps_phy.h"<br>
> #include "skl_watermark.h"<br>
>@@ -1085,6 +1086,10 @@ static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)<br>
> dev_priv->display.dbuf.enabled_slices =<br>
> intel_enabled_dbuf_slices_mask(dev_priv);<br>
> <br>
>+ if (DISPLAY_VER(dev_priv) >= 14)<br>
>+ intel_program_dbuf_pmdemand(dev_priv, BIT(DBUF_S1) |<br>
>+ dev_priv->display.dbuf.enabled_slices);<br>
>+<br>
> /*<br>
> * Just power up at least 1 slice, we will<br>
> * figure out later which slices we have and what we need.<br>
>@@ -1096,6 +1101,9 @@ static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)<br>
> static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)<br>
> {<br>
> gen9_dbuf_slices_update(dev_priv, 0);<br>
>+<br>
>+ if (DISPLAY_VER(dev_priv) >= 14)<br>
>+ intel_program_dbuf_pmdemand(dev_priv, 0);<br>
> }<br>
> <br>
> static void gen12_dbuf_slices_config(struct drm_i915_private *dev_priv)<br>
>diff --git a/drivers/gpu/drm/i915/display/intel_pmdemand.c b/drivers/gpu/drm/i915/display/intel_pmdemand.c<br>
>new file mode 100644<br>
>index 000000000000..01ec4e648de9<br>
>--- /dev/null<br>
>+++ b/drivers/gpu/drm/i915/display/intel_pmdemand.c<br>
>@@ -0,0 +1,560 @@<br>
>+// SPDX-License-Identifier: MIT<br>
>+/*<br>
>+ * Copyright © 2023 Intel Corporation<br>
>+ */<br>
>+<br>
>+#include <linux/bitops.h><br>
>+<br>
>+#include "i915_drv.h"<br>
>+#include "i915_reg.h"<br>
>+#include "intel_bw.h"<br>
>+#include "intel_cdclk.h"<br>
>+#include "intel_cx0_phy.h"<br>
>+#include "intel_de.h"<br>
>+#include "intel_display.h"<br>
>+#include "intel_display_trace.h"<br>
>+#include "intel_pmdemand.h"<br>
>+#include "skl_watermark.h"<br>
>+<br>
>+struct pmdemand_params {<br>
>+ u16 qclk_gv_bw;<br>
>+ u8 voltage_index;<br>
>+ u8 qclk_gv_index;<br>
>+ u8 active_pipes;<br>
>+ u8 dbufs;<br>
<br>
Hmm... Looks like this is not being used anymore.<br>
<br>
>+ /* Total number of non type C active phys from active_phys_mask */<br>
>+ u8 active_phys;<br>
>+ u16 cdclk_freq_mhz;<br>
>+ /* max from ddi_clocks[]*/<br>
>+ u16 ddiclk_max;<br>
>+ u8 scalers;<br>
>+};<br>
>+<br>
>+struct intel_pmdemand_state {<br>
>+ struct intel_global_state base;<br>
>+<br>
>+ /* Maintain a persistent list of port clocks across all crtcs */<br>
>+ int ddi_clocks[I915_MAX_PIPES];<br>
>+<br>
>+ /* Maintain a persistent list of non type C phys mask */<br>
>+ u16 active_phys_mask;<br>
>+<br>
>+ /* Parameters to be configured in the pmdemand registers */<br>
>+ struct pmdemand_params params;<br>
>+};<br>
>+<br>
>+#define to_intel_pmdemand_state(x) container_of((x), \<br>
>+ struct intel_pmdemand_state, \<br>
>+ base)<br>
>+static struct intel_global_state *<br>
>+intel_pmdemand_duplicate_state(struct intel_global_obj *obj)<br>
>+{<br>
>+ struct intel_pmdemand_state *pmdmnd_state;<br>
>+<br>
>+ pmdmnd_state = kmemdup(obj->state, sizeof(*pmdmnd_state), GFP_KERNEL);<br>
>+ if (!pmdmnd_state)<br>
>+ return NULL;<br>
>+<br>
>+ return &pmdmnd_state->base;<br>
>+}<br>
>+<br>
>+static void intel_pmdemand_destroy_state(struct intel_global_obj *obj,<br>
>+ struct intel_global_state *state)<br>
>+{<br>
>+ kfree(state);<br>
>+}<br>
>+<br>
>+static const struct intel_global_state_funcs intel_pmdemand_funcs = {<br>
>+ .atomic_duplicate_state = intel_pmdemand_duplicate_state,<br>
>+ .atomic_destroy_state = intel_pmdemand_destroy_state,<br>
>+};<br>
>+<br>
>+static struct intel_pmdemand_state *<br>
>+intel_atomic_get_pmdemand_state(struct intel_atomic_state *state)<br>
>+{<br>
>+ struct drm_i915_private *i915 = to_i915(state->base.dev);<br>
>+ struct intel_global_state *pmdemand_state =<br>
>+ intel_atomic_get_global_obj_state(state,<br>
>+ &i915->display.pmdemand.obj);<br>
>+<br>
>+ if (IS_ERR(pmdemand_state))<br>
>+ return ERR_CAST(pmdemand_state);<br>
>+<br>
>+ return to_intel_pmdemand_state(pmdemand_state);<br>
>+}<br>
>+<br>
>+static struct intel_pmdemand_state *<br>
>+intel_atomic_get_old_pmdemand_state(struct intel_atomic_state *state)<br>
>+{<br>
>+ struct drm_i915_private *i915 = to_i915(state->base.dev);<br>
>+ struct intel_global_state *pmdemand_state =<br>
>+ intel_atomic_get_old_global_obj_state(state,<br>
>+ &i915->display.pmdemand.obj);<br>
>+<br>
>+ if (!pmdemand_state)<br>
>+ return NULL;<br>
>+<br>
>+ return to_intel_pmdemand_state(pmdemand_state);<br>
>+}<br>
>+<br>
>+static struct intel_pmdemand_state *<br>
>+intel_atomic_get_new_pmdemand_state(struct intel_atomic_state *state)<br>
>+{<br>
>+ struct drm_i915_private *i915 = to_i915(state->base.dev);<br>
>+ struct intel_global_state *pmdemand_state =<br>
>+ intel_atomic_get_new_global_obj_state(state,<br>
>+ &i915->display.pmdemand.obj);<br>
>+<br>
>+ if (!pmdemand_state)<br>
>+ return NULL;<br>
>+<br>
>+ return to_intel_pmdemand_state(pmdemand_state);<br>
>+}<br>
>+<br>
>+int intel_pmdemand_init(struct drm_i915_private *i915)<br>
>+{<br>
>+ struct intel_pmdemand_state *pmdemand_state;<br>
>+<br>
>+ pmdemand_state = kzalloc(sizeof(*pmdemand_state), GFP_KERNEL);<br>
>+ if (!pmdemand_state)<br>
>+ return -ENOMEM;<br>
>+<br>
>+ intel_atomic_global_obj_init(i915, &i915->display.pmdemand.obj,<br>
>+ &pmdemand_state->base,<br>
>+ &intel_pmdemand_funcs);<br>
>+<br>
>+ if (IS_MTL_DISPLAY_STEP(i915, STEP_A0, STEP_C0))<br>
>+ /* Wa_14016740474 */<br>
>+ intel_de_rmw(i915, XELPD_CHICKEN_DCPR_3, 0, DMD_RSP_TIMEOUT_DISABLE);<br>
>+<br>
>+ return 0;<br>
>+}<br>
>+<br>
>+void intel_pmdemand_init_early(struct drm_i915_private *i915)<br>
>+{<br>
>+ mutex_init(&i915->display.pmdemand.lock);<br>
>+ init_waitqueue_head(&i915->display.pmdemand.waitqueue);<br>
>+}<br>
>+<br>
>+static void pmdemand_update_max_ddiclk(struct intel_atomic_state *state,<br>
>+ struct intel_pmdemand_state *pmd_state)<br>
>+{<br>
>+ int max_ddiclk = 0;<br>
>+ struct intel_crtc *crtc;<br>
>+ int i;<br>
>+ const struct intel_crtc_state *new_crtc_state;<br>
>+<br>
>+ for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i)<br>
>+ pmd_state->ddi_clocks[crtc->pipe] = new_crtc_state->port_clock;<br>
>+<br>
>+ for (i = 0; i < ARRAY_SIZE(pmd_state->ddi_clocks); i++)<br>
>+ max_ddiclk = max(pmd_state->ddi_clocks[i], max_ddiclk);<br>
>+<br>
>+ pmd_state->params.ddiclk_max = DIV_ROUND_UP(max_ddiclk, 1000);<br>
>+}<br>
>+<br>
>+static struct intel_encoder *<br>
>+pmdemand_get_crtc_old_encoder(const struct intel_atomic_state *state,<br>
>+ const struct intel_crtc_state *crtc_state)<br>
>+{<br>
>+ const struct drm_connector_state *connector_state;<br>
>+ const struct drm_connector *connector;<br>
>+ struct intel_encoder *encoder = NULL;<br>
>+ struct intel_crtc *master_crtc;<br>
>+ int i;<br>
>+<br>
>+ master_crtc = intel_master_crtc(crtc_state);<br>
>+<br>
>+ for_each_old_connector_in_state(&state->base, connector, connector_state, i) {<br>
>+ if (connector_state->crtc != &master_crtc->base)<br>
>+ continue;<br>
>+<br>
>+ encoder = to_intel_encoder(connector_state->best_encoder);<br>
>+ }<br>
>+<br>
>+ return encoder;<br>
>+}<br>
>+<br>
>+static void<br>
>+pmdemand_update_active_non_tc_phys(struct drm_i915_private *i915,<br>
>+ const struct intel_atomic_state *state,<br>
>+ struct intel_pmdemand_state *pmd_state)<br>
>+{<br>
>+ struct intel_crtc *crtc;<br>
>+ struct intel_encoder *encoder;<br>
>+ int i;<br>
>+ const struct intel_crtc_state *new_crtc_state, *old_crtc_state;<br>
>+ enum phy phy;<br>
>+<br>
<br>
Since we are probably doing a re-spin of this patch, maybe it is worth<br>
adding a short comment here explaining why we do the 2 loops? Just to<br>
make sure we don't get future patches trying to simplify this into a<br>
single loop.<br>
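Something along these lines, for example (wording entirely up to you):<br>
<br>
        /*<br>
         * Clear the bits for all PHYs being released by CRTCs in the<br>
         * old state first, and only then set the bits for PHYs used by<br>
         * CRTCs active in the new state. With a single loop, a PHY<br>
         * moving from one CRTC to another in the same commit could end<br>
         * up with its bit cleared after having been set, depending on<br>
         * the iteration order.<br>
         */<br>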
<br>
>+ for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,<br>
>+ new_crtc_state, i) {<br>
>+ if (!intel_crtc_needs_modeset(new_crtc_state))<br>
>+ continue;<br>
>+<br>
>+ if (!old_crtc_state->hw.active)<br>
>+ continue;<br>
>+<br>
>+ encoder = pmdemand_get_crtc_old_encoder(state, old_crtc_state);<br>
>+ if (!encoder)<br>
>+ continue;<br>
>+<br>
>+ phy = intel_port_to_phy(i915, encoder->port);<br>
>+<br>
>+ pmd_state->active_phys_mask &= ~BIT(phy);<br>
>+ }<br>
>+<br>
>+ for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,<br>
>+ new_crtc_state, i) {<br>
>+ if (!intel_crtc_needs_modeset(new_crtc_state))<br>
>+ continue;<br>
>+<br>
>+ if (!new_crtc_state->hw.active)<br>
>+ continue;<br>
>+<br>
>+ encoder = intel_get_crtc_new_encoder(state, new_crtc_state);<br>
>+ if (!encoder)<br>
>+ continue;<br>
>+<br>
>+ phy = intel_port_to_phy(i915, encoder->port);<br>
>+<br>
>+ if (intel_phy_is_tc(i915, phy))<br>
>+ continue;<br>
>+<br>
>+ pmd_state->active_phys_mask |= BIT(phy);<br>
>+ }<br>
>+<br>
>+ pmd_state->params.active_phys = hweight16(pmd_state->active_phys_mask);<br>
>+}<br>
>+<br>
>+static bool pmdemand_needs_update(struct intel_atomic_state *state)<br>
>+{<br>
>+ bool states_checked = false;<br>
>+ struct intel_crtc *crtc;<br>
>+ int i;<br>
>+ const struct intel_crtc_state *new_crtc_state, *old_crtc_state;<br>
>+<br>
>+ for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,<br>
>+ new_crtc_state, i) {<br>
>+ const struct intel_bw_state *new_bw_state, *old_bw_state;<br>
>+ const struct intel_cdclk_state *new_cdclk_state;<br>
>+ const struct intel_cdclk_state *old_cdclk_state;<br>
>+ const struct intel_dbuf_state *new_dbuf_state, *old_dbuf_state;<br>
>+<br>
>+ if (old_crtc_state->port_clock != new_crtc_state->port_clock)<br>
>+ return true;<br>
>+<br>
>+ /*<br>
>+ * For the below settings once through the loop is enough.<br>
>+ * Some pmdemand_atomic_check calls might trigger read lock not<br>
>+ * taken assert if these following checks are kept outside this<br>
>+ * loop.<br>
>+ */<br>
>+ if (states_checked)<br>
>+ continue;<br>
>+<br>
>+ new_bw_state = intel_atomic_get_new_bw_state(state);<br>
>+ old_bw_state = intel_atomic_get_old_bw_state(state);<br>
>+ if (new_bw_state && new_bw_state->qgv_point_peakbw !=<br>
>+ old_bw_state->qgv_point_peakbw)<br>
>+ return true;<br>
>+<br>
>+ new_dbuf_state = intel_atomic_get_new_dbuf_state(state);<br>
>+ old_dbuf_state = intel_atomic_get_old_dbuf_state(state);<br>
>+ if (new_dbuf_state && new_dbuf_state->active_pipes !=<br>
>+ old_dbuf_state->active_pipes)<br>
>+ return true;<br>
>+<br>
>+ new_cdclk_state = intel_atomic_get_new_cdclk_state(state);<br>
>+ old_cdclk_state = intel_atomic_get_old_cdclk_state(state);<br>
>+ if (new_cdclk_state &&<br>
>+ (new_cdclk_state->logical.cdclk !=<br>
>+ old_cdclk_state->logical.cdclk ||<br>
>+ new_cdclk_state->logical.voltage_level !=<br>
>+ old_cdclk_state->logical.voltage_level))<br>
>+ return true;<br>
>+<br>
>+ states_checked = true;<br>
>+ }<br>
>+<br>
>+ return false;<br>
<br>
I'm afraid we are missing one thing in this function: we need to know<br>
whether the number of active non-TC PHYs could change and return true if<br>
so, otherwise we could end up skipping a required PM Demand transaction.<br>
<br>
I would implement that using the same 2-loop strategy from<br>
pmdemand_update_active_non_tc_phys(), but with two bitmasks: one for<br>
PHYs of CRTCs from the old state being disabled (first loop) and another<br>
for PHYs of CRTCs active in the new state (second loop). If those masks<br>
match each other, the number of active PHYs will not change; otherwise<br>
there is a possibility that it will change.<br>
<br>
Perhaps we should rename and change<br>
pmdemand_update_active_non_tc_phys()'s implementation to fill those two<br>
masks and then we could use them appropriately depending on where that<br>
function is called. For example, if we rename and call it like<br>
<br>
intel_pmdemand_calc_non_tc_phy_masks(i915, state, inactive_phys_mask,<br>
                                     active_phys_mask),<br>
we would:<br>
<br>
- Here in this function, return true if inactive_phys_mask != active_phys_mask.<br>
<br>
- In intel_pmdemand_atomic_check(), update the mask in the following<br>
order:<br>
<br>
- new_pmdemand_state->active_phys_mask &= ~inactive_phys_mask;<br>
- new_pmdemand_state->active_phys_mask |= active_phys_mask;</div>
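<div class="PlainText" dir="auto">To make that concrete, here is a rough (untested) sketch of the idea;<br>
the helper name and the details below are only placeholders:<br>
<br>
static void<br>
intel_pmdemand_calc_non_tc_phy_masks(struct drm_i915_private *i915,<br>
                                     const struct intel_atomic_state *state,<br>
                                     u16 *inactive_phys_mask,<br>
                                     u16 *active_phys_mask)<br>
{<br>
        *inactive_phys_mask = 0;<br>
        *active_phys_mask = 0;<br>
<br>
        /*<br>
         * First loop (same as in pmdemand_update_active_non_tc_phys()<br>
         * today): set bits in *inactive_phys_mask for PHYs of old-state<br>
         * CRTCs being disabled.<br>
         *<br>
         * Second loop: set bits in *active_phys_mask for non-TC PHYs of<br>
         * CRTCs active in the new state.<br>
         */<br>
}<br>
<br>
/* in pmdemand_needs_update() */<br>
u16 inactive_phys_mask, active_phys_mask;<br>
<br>
intel_pmdemand_calc_non_tc_phy_masks(i915, state, &inactive_phys_mask,<br>
                                     &active_phys_mask);<br>
if (inactive_phys_mask != active_phys_mask)<br>
        return true;<br>
<br>
/* in intel_pmdemand_atomic_check() */<br>
new_pmdemand_state->active_phys_mask &= ~inactive_phys_mask;<br>
new_pmdemand_state->active_phys_mask |= active_phys_mask;<br>
new_pmdemand_state->params.active_phys =<br>
        hweight16(new_pmdemand_state->active_phys_mask);<br>
</div>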
<div class="PlainText" dir="auto"><br>
</div>
<div class="PlainText" dir="auto">Can only the phys change without impacting other parameters here?<br>
<br>
>+}<br>
>+<br>
>+int intel_pmdemand_atomic_check(struct intel_atomic_state *state)<br>
>+{<br>
>+ struct drm_i915_private *i915 = to_i915(state->base.dev);<br>
>+ const struct intel_bw_state *new_bw_state;<br>
>+ const struct intel_cdclk_state *new_cdclk_state;<br>
>+ const struct intel_dbuf_state *new_dbuf_state;<br>
>+ struct intel_pmdemand_state *new_pmdemand_state;<br>
>+ int ret;<br>
>+<br>
>+ if (DISPLAY_VER(i915) < 14)<br>
>+ return 0;<br>
>+<br>
>+ if (!pmdemand_needs_update(state))<br>
>+ return 0;<br>
>+<br>
>+ new_pmdemand_state = intel_atomic_get_pmdemand_state(state);<br>
>+ if (IS_ERR(new_pmdemand_state))<br>
>+ return PTR_ERR(new_pmdemand_state);<br>
>+<br>
>+ ret = intel_atomic_lock_global_state(&new_pmdemand_state->base);<br>
>+ if (ret)<br>
>+ return ret;<br>
>+<br>
>+ new_bw_state = intel_atomic_get_bw_state(state);<br>
>+ if (IS_ERR(new_bw_state))<br>
>+ return PTR_ERR(new_bw_state);<br>
>+<br>
>+ /* firmware will calculate the qclck_gc_index, requirement is set to 0 */<br>
>+ new_pmdemand_state->params.qclk_gv_index = 0;<br>
>+ new_pmdemand_state->params.qclk_gv_bw =<br>
>+ min_t(u16, new_bw_state->qgv_point_peakbw, 0xffff);<br>
>+<br>
>+ new_dbuf_state = intel_atomic_get_dbuf_state(state);<br>
>+ if (IS_ERR(new_dbuf_state))<br>
>+ return PTR_ERR(new_dbuf_state);<br>
>+<br>
>+ new_pmdemand_state->params.active_pipes =<br>
>+ min_t(u8, hweight8(new_dbuf_state->active_pipes), 3);<br>
>+<br>
>+ new_cdclk_state = intel_atomic_get_cdclk_state(state);<br>
>+ if (IS_ERR(new_cdclk_state))<br>
>+ return PTR_ERR(new_cdclk_state);<br>
>+<br>
>+ new_pmdemand_state->params.voltage_index =<br>
>+ new_cdclk_state->logical.voltage_level;<br>
>+ new_pmdemand_state->params.cdclk_freq_mhz =<br>
>+ DIV_ROUND_UP(new_cdclk_state->logical.cdclk, 1000);<br>
>+<br>
>+ pmdemand_update_max_ddiclk(state, new_pmdemand_state);<br>
>+<br>
>+ pmdemand_update_active_non_tc_phys(i915, state, new_pmdemand_state);<br>
>+<br>
>+ /*<br>
>+ * Setting scalers to max as it can not be calculated during flips and<br>
>+ * fastsets without taking global states locks.<br>
>+ */<br>
>+ new_pmdemand_state->params.scalers = 7;<br>
>+<br>
>+ ret = intel_atomic_serialize_global_state(&new_pmdemand_state->base);<br>
>+ if (ret)<br>
>+ return ret;<br>
>+<br>
>+ return 0;<br>
>+}<br>
>+<br>
>+static bool intel_pmdemand_check_prev_transaction(struct drm_i915_private *i915)<br>
>+{<br>
>+ return !(intel_de_wait_for_clear(i915,<br>
>+ XELPDP_INITIATE_PMDEMAND_REQUEST(1),<br>
>+ XELPDP_PMDEMAND_REQ_ENABLE, 10) ||<br>
>+ intel_de_wait_for_clear(i915,<br>
>+ GEN12_DCPR_STATUS_1,<br>
>+ XELPDP_PMDEMAND_INFLIGHT_STATUS, 10));<br>
>+}<br>
>+<br>
>+static bool intel_pmdemand_req_complete(struct drm_i915_private *i915)<br>
>+{<br>
>+ return !(intel_de_read(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1)) &<br>
>+ XELPDP_PMDEMAND_REQ_ENABLE);<br>
>+}<br>
>+<br>
>+static int intel_pmdemand_wait(struct drm_i915_private *i915)<br>
>+{<br>
>+ DEFINE_WAIT(wait);<br>
<br>
Hm... I think this is a leftover of a previous version of this function. We<br>
should remove this line.<br>
<br>
>+ int ret;<br>
>+ const unsigned int timeout_ms = 10;<br>
>+<br>
>+ ret = wait_event_timeout(i915->display.pmdemand.waitqueue,<br>
>+ intel_pmdemand_req_complete(i915),<br>
>+ msecs_to_jiffies_timeout(timeout_ms));<br>
>+ if (ret == 0)<br>
>+ drm_err(&i915->drm,<br>
>+ "timed out waiting for Punit PM Demand Response\n");<br>
>+<br>
>+ return ret;<br>
>+}<br>
>+<br>
>+/* Required to be programmed during Display Init Sequences. */<br>
>+void intel_program_dbuf_pmdemand(struct drm_i915_private *i915,<br>
>+ u8 dbuf_slices)<br>
>+{<br>
>+ u32 dbufs = min_t(u32, hweight8(dbuf_slices), 3);<br>
>+<br>
>+ mutex_lock(&i915->display.pmdemand.lock);<br>
>+ if (drm_WARN_ON(&i915->drm,<br>
>+ !intel_pmdemand_check_prev_transaction(i915)))<br>
>+ goto unlock;<br>
>+<br>
>+ intel_de_rmw(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(0),<br>
>+ XELPDP_PMDEMAND_DBUFS_MASK, XELPDP_PMDEMAND_DBUFS(dbufs));<br>
>+ intel_de_rmw(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1), 0,<br>
>+ XELPDP_PMDEMAND_REQ_ENABLE);<br>
>+<br>
>+ intel_pmdemand_wait(i915);<br>
>+<br>
>+unlock:<br>
>+ mutex_unlock(&i915->display.pmdemand.lock);<br>
>+}<br>
>+<br>
>+static void update_pmdemand_values(const struct intel_pmdemand_state *new,<br>
>+ const struct intel_pmdemand_state *old,<br>
>+ u32 *reg1, u32 *reg2)<br>
>+{<br>
>+ u32 plls, tmp;<br>
>+<br>
>+ /*<br>
>+ * The pmdemand parameter updates happens in two steps. Pre plane and<br>
>+ * post plane updates. During the pre plane, as DE might still be<br>
>+ * handling with some old operations, to avoid unwanted performance<br>
>+ * issues, program the pmdemand parameters with higher of old and new<br>
>+ * values. And then after once settled, use the new parameter values<br>
>+ * as part of the post plane update.<br>
>+ */<br>
>+<br>
>+ /* Set 1*/<br>
>+ *reg1 &= ~XELPDP_PMDEMAND_QCLK_GV_BW_MASK;<br>
>+ tmp = old ? max(old->params.qclk_gv_bw, new->params.qclk_gv_bw) :<br>
>+ new->params.qclk_gv_bw;<br>
>+ *reg1 |= XELPDP_PMDEMAND_QCLK_GV_BW(tmp);<br>
>+<br>
>+ *reg1 &= ~XELPDP_PMDEMAND_VOLTAGE_INDEX_MASK;<br>
>+ tmp = old ? max(old->params.voltage_index, new->params.voltage_index) :<br>
>+ new->params.voltage_index;<br>
>+ *reg1 |= XELPDP_PMDEMAND_VOLTAGE_INDEX(tmp);<br>
>+<br>
>+ *reg1 &= ~XELPDP_PMDEMAND_QCLK_GV_INDEX_MASK;<br>
>+ tmp = old ? max(old->params.qclk_gv_index, new->params.qclk_gv_index) :<br>
>+ new->params.qclk_gv_index;<br>
>+ *reg1 |= XELPDP_PMDEMAND_QCLK_GV_INDEX(tmp);<br>
>+<br>
>+ *reg1 &= ~XELPDP_PMDEMAND_PIPES_MASK;<br>
>+ tmp = old ? max(old->params.active_pipes, new->params.active_pipes) :<br>
>+ new->params.active_pipes;<br>
>+ *reg1 |= XELPDP_PMDEMAND_PIPES(tmp);<br>
>+<br>
>+ *reg1 &= ~XELPDP_PMDEMAND_PHYS_MASK;<br>
>+ plls = old ? max(old->params.active_phys, new->params.active_phys) :<br>
>+ new->params.active_phys;<br>
>+ plls = min_t(u32, plls, 7);<br>
>+ *reg1 |= XELPDP_PMDEMAND_PHYS(plls);<br>
>+<br>
>+ /* Set 2*/<br>
>+ *reg2 &= ~XELPDP_PMDEMAND_CDCLK_FREQ_MASK;<br>
>+ tmp = old ? max(old->params.cdclk_freq_mhz,<br>
>+ new->params.cdclk_freq_mhz) :<br>
>+ new->params.cdclk_freq_mhz;<br>
>+ *reg2 |= XELPDP_PMDEMAND_CDCLK_FREQ(tmp);<br>
>+<br>
>+ *reg2 &= ~XELPDP_PMDEMAND_DDICLK_FREQ_MASK;<br>
>+ tmp = old ? max(old->params.ddiclk_max, new->params.ddiclk_max) :<br>
>+ new->params.ddiclk_max;<br>
>+ *reg2 |= XELPDP_PMDEMAND_DDICLK_FREQ(tmp);<br>
>+<br>
>+ *reg2 &= ~XELPDP_PMDEMAND_SCALERS_MASK;<br>
>+ tmp = old ? max(old->params.scalers, new->params.scalers) :<br>
>+ new->params.scalers;<br>
>+ *reg2 |= XELPDP_PMDEMAND_SCALERS(tmp);<br>
>+<br>
>+ /*<br>
>+ * Active_PLLs starts with 1 because of CDCLK PLL.<br>
>+ * TODO: Missing to account genlock filter when it gets used.<br>
>+ */<br>
>+ plls = min_t(u32, plls + 1, 7);<br>
>+ *reg2 &= ~XELPDP_PMDEMAND_PLLS_MASK;<br>
>+ *reg2 |= XELPDP_PMDEMAND_PLLS(plls);<br>
>+}<br>
>+<br>
>+static void intel_program_pmdemand(struct drm_i915_private *i915,<br>
>+ const struct intel_pmdemand_state *new,<br>
>+ const struct intel_pmdemand_state *old)<br>
>+{<br>
>+ bool changed = false;<br>
>+ u32 reg1, mod_reg1;<br>
>+ u32 reg2, mod_reg2;<br>
>+<br>
>+ mutex_lock(&i915->display.pmdemand.lock);<br>
>+ if (drm_WARN_ON(&i915->drm,<br>
>+ !intel_pmdemand_check_prev_transaction(i915)))<br>
>+ goto unlock;<br>
>+<br>
>+ reg1 = intel_de_read(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(0));<br>
>+ mod_reg1 = reg1;<br>
>+<br>
>+ reg2 = intel_de_read(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1));<br>
>+ mod_reg2 = reg2;<br>
>+<br>
>+ update_pmdemand_values(new, old, &mod_reg1, &mod_reg2);<br>
>+<br>
>+ if (reg1 != mod_reg1) {<br>
>+ intel_de_write(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(0),<br>
>+ mod_reg1);<br>
>+ changed = true;<br>
>+ }<br>
>+<br>
>+ if (reg2 != mod_reg2) {<br>
>+ intel_de_write(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1),<br>
>+ mod_reg2);<br>
>+ changed = true;<br>
>+ }<br>
>+<br>
>+ /* Initiate pm demand request only if register values are changed */<br>
>+ if (!changed)<br>
>+ goto unlock;<br>
>+<br>
>+ drm_dbg_kms(&i915->drm,<br>
>+ "initate pmdemand request values: (0x%x 0x%x)\n",<br>
>+ mod_reg1, mod_reg2);<br>
>+<br>
>+ intel_de_rmw(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1), 0,<br>
>+ XELPDP_PMDEMAND_REQ_ENABLE);<br>
>+<br>
>+ intel_pmdemand_wait(i915);<br>
>+<br>
>+unlock:<br>
>+ mutex_unlock(&i915->display.pmdemand.lock);<br>
>+}<br>
>+<br>
>+static bool<br>
>+intel_pmdemand_state_changed(const struct intel_pmdemand_state *new,<br>
>+ const struct intel_pmdemand_state *old)<br>
>+{<br>
>+ return memcmp(&new->params, &old->params, sizeof(new->params)) != 0;<br>
>+}<br>
>+<br>
>+void intel_pmdemand_pre_plane_update(struct intel_atomic_state *state)<br>
>+{<br>
>+ struct drm_i915_private *i915 = to_i915(state->base.dev);<br>
>+ const struct intel_pmdemand_state *new_pmdmnd_state =<br>
>+ intel_atomic_get_new_pmdemand_state(state);<br>
>+ const struct intel_pmdemand_state *old_pmdmnd_state =<br>
>+ intel_atomic_get_old_pmdemand_state(state);<br>
>+<br>
>+ if (DISPLAY_VER(i915) < 14)<br>
>+ return;<br>
>+<br>
>+ if (!new_pmdmnd_state ||<br>
>+ !intel_pmdemand_state_changed(new_pmdmnd_state, old_pmdmnd_state))<br>
>+ return;<br>
>+<br>
>+ intel_program_pmdemand(i915, new_pmdmnd_state, old_pmdmnd_state);<br>
>+}<br>
>+<br>
>+void intel_pmdemand_post_plane_update(struct intel_atomic_state *state)<br>
>+{<br>
>+ struct drm_i915_private *i915 = to_i915(state->base.dev);<br>
>+ const struct intel_pmdemand_state *new_pmdmnd_state =<br>
>+ intel_atomic_get_new_pmdemand_state(state);<br>
>+ const struct intel_pmdemand_state *old_pmdmnd_state =<br>
>+ intel_atomic_get_old_pmdemand_state(state);<br>
>+<br>
>+ if (DISPLAY_VER(i915) < 14)<br>
>+ return;<br>
>+<br>
>+ if (!new_pmdmnd_state ||<br>
>+ !intel_pmdemand_state_changed(new_pmdmnd_state, old_pmdmnd_state))<br>
>+ return;<br>
>+<br>
>+ intel_program_pmdemand(i915, new_pmdmnd_state, NULL);<br>
>+}<br>
>diff --git a/drivers/gpu/drm/i915/display/intel_pmdemand.h b/drivers/gpu/drm/i915/display/intel_pmdemand.h<br>
>new file mode 100644<br>
>index 000000000000..2883b5d97a44<br>
>--- /dev/null<br>
>+++ b/drivers/gpu/drm/i915/display/intel_pmdemand.h<br>
>@@ -0,0 +1,24 @@<br>
>+/* SPDX-License-Identifier: MIT */<br>
>+/*<br>
>+ * Copyright © 2023 Intel Corporation<br>
>+ */<br>
>+<br>
>+#ifndef __INTEL_PMDEMAND_H__<br>
>+#define __INTEL_PMDEMAND_H__<br>
>+<br>
>+#include <linux/types.h><br>
>+<br>
>+struct drm_i915_private;<br>
>+struct intel_atomic_state;<br>
>+struct intel_crtc_state;<br>
>+struct intel_plane_state;<br>
>+<br>
>+void intel_pmdemand_init_early(struct drm_i915_private *i915);<br>
>+int intel_pmdemand_init(struct drm_i915_private *i915);<br>
>+void intel_program_dbuf_pmdemand(struct drm_i915_private *i915,<br>
>+ u8 dbuf_slices);<br>
<br>
Maybe rename this to intel_pmdemand_program_dbuf() to be consistent with<br>
the rest of the functions exported here?<br>
<br>
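I.e. the prototype in intel_pmdemand.h would become something like:<br>
<br>
void intel_pmdemand_program_dbuf(struct drm_i915_private *i915,<br>
                                 u8 dbuf_slices);<br>
<br>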
Also, since we probably are doing a re-spin of this patch, do you think<br>
we could also take the opportunity to make static functions in<br>
intel_pmdemand.c follow the same naming scheme?<br>
<br>
--<br>
Gustavo Sousa<br>
<br>
>+void intel_pmdemand_pre_plane_update(struct intel_atomic_state *state);<br>
>+void intel_pmdemand_post_plane_update(struct intel_atomic_state *state);<br>
>+int intel_pmdemand_atomic_check(struct intel_atomic_state *state);<br>
>+<br>
>+#endif /* __INTEL_PMDEMAND_H__ */<br>
>diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h<br>
>index 2a9ab8de8421..91fb12b65c92 100644<br>
>--- a/drivers/gpu/drm/i915/i915_reg.h<br>
>+++ b/drivers/gpu/drm/i915/i915_reg.h<br>
>@@ -4450,8 +4450,10 @@<br>
> #define GEN8_DE_MISC_IMR _MMIO(0x44464)<br>
> #define GEN8_DE_MISC_IIR _MMIO(0x44468)<br>
> #define GEN8_DE_MISC_IER _MMIO(0x4446c)<br>
>-#define GEN8_DE_MISC_GSE (1 << 27)<br>
>-#define GEN8_DE_EDP_PSR (1 << 19)<br>
>+#define XELPDP_PMDEMAND_RSPTOUT_ERR REG_BIT(27)<br>
>+#define GEN8_DE_MISC_GSE REG_BIT(27)<br>
>+#define GEN8_DE_EDP_PSR REG_BIT(19)<br>
>+#define XELPDP_PMDEMAND_RSP REG_BIT(3)<br>
> <br>
> #define GEN8_PCU_ISR _MMIO(0x444e0)<br>
> #define GEN8_PCU_IMR _MMIO(0x444e4)<br>
>@@ -4536,6 +4538,33 @@<br>
> #define XELPDP_DP_ALT_HPD_LONG_DETECT REG_BIT(1)<br>
> #define XELPDP_DP_ALT_HPD_SHORT_DETECT REG_BIT(0)<br>
> <br>
>+#define XELPDP_INITIATE_PMDEMAND_REQUEST(dword) _MMIO(0x45230 + 4 * (dword))<br>
>+#define XELPDP_PMDEMAND_QCLK_GV_BW_MASK REG_GENMASK(31, 16)<br>
>+#define XELPDP_PMDEMAND_QCLK_GV_BW(x) REG_FIELD_PREP(XELPDP_PMDEMAND_QCLK_GV_BW_MASK, x)<br>
>+#define XELPDP_PMDEMAND_VOLTAGE_INDEX_MASK REG_GENMASK(14, 12)<br>
>+#define XELPDP_PMDEMAND_VOLTAGE_INDEX(x) REG_FIELD_PREP(XELPDP_PMDEMAND_VOLTAGE_INDEX_MASK, x)<br>
>+#define XELPDP_PMDEMAND_QCLK_GV_INDEX_MASK REG_GENMASK(11, 8)<br>
>+#define XELPDP_PMDEMAND_QCLK_GV_INDEX(x) REG_FIELD_PREP(XELPDP_PMDEMAND_QCLK_GV_INDEX_MASK, x)<br>
>+#define XELPDP_PMDEMAND_PIPES_MASK REG_GENMASK(7, 6)<br>
>+#define XELPDP_PMDEMAND_PIPES(x) REG_FIELD_PREP(XELPDP_PMDEMAND_PIPES_MASK, x)<br>
>+#define XELPDP_PMDEMAND_DBUFS_MASK REG_GENMASK(5, 4)<br>
>+#define XELPDP_PMDEMAND_DBUFS(x) REG_FIELD_PREP(XELPDP_PMDEMAND_DBUFS_MASK, x)<br>
>+#define XELPDP_PMDEMAND_PHYS_MASK REG_GENMASK(2, 0)<br>
>+#define XELPDP_PMDEMAND_PHYS(x) REG_FIELD_PREP(XELPDP_PMDEMAND_PHYS_MASK, x)<br>
>+<br>
>+#define XELPDP_PMDEMAND_REQ_ENABLE REG_BIT(31)<br>
>+#define XELPDP_PMDEMAND_CDCLK_FREQ_MASK REG_GENMASK(30, 20)<br>
>+#define XELPDP_PMDEMAND_CDCLK_FREQ(x) REG_FIELD_PREP(XELPDP_PMDEMAND_CDCLK_FREQ_MASK, x)<br>
>+#define XELPDP_PMDEMAND_DDICLK_FREQ_MASK REG_GENMASK(18, 8)<br>
>+#define XELPDP_PMDEMAND_DDICLK_FREQ(x) REG_FIELD_PREP(XELPDP_PMDEMAND_DDICLK_FREQ_MASK, x)<br>
>+#define XELPDP_PMDEMAND_SCALERS_MASK REG_GENMASK(6, 4)<br>
>+#define XELPDP_PMDEMAND_SCALERS(x) REG_FIELD_PREP(XELPDP_PMDEMAND_SCALERS_MASK, x)<br>
>+#define XELPDP_PMDEMAND_PLLS_MASK REG_GENMASK(2, 0)<br>
>+#define XELPDP_PMDEMAND_PLLS(x) REG_FIELD_PREP(XELPDP_PMDEMAND_PLLS_MASK, x)<br>
>+<br>
>+#define GEN12_DCPR_STATUS_1 _MMIO(0x46440)<br>
>+#define XELPDP_PMDEMAND_INFLIGHT_STATUS REG_BIT(26)<br>
>+<br>
> #define ILK_DISPLAY_CHICKEN2 _MMIO(0x42004)<br>
> /* Required on all Ironlake and Sandybridge according to the B-Spec. */<br>
> #define ILK_ELPIN_409_SELECT REG_BIT(25)<br>
>@@ -4695,6 +4724,9 @@<br>
> #define DCPR_SEND_RESP_IMM REG_BIT(25)<br>
> #define DCPR_CLEAR_MEMSTAT_DIS REG_BIT(24)<br>
> <br>
>+#define XELPD_CHICKEN_DCPR_3 _MMIO(0x46438)<br>
>+#define DMD_RSP_TIMEOUT_DISABLE REG_BIT(19)<br>
>+<br>
> #define SKL_DFSM _MMIO(0x51000)<br>
> #define SKL_DFSM_DISPLAY_PM_DISABLE (1 << 27)<br>
> #define SKL_DFSM_DISPLAY_HDCP_DISABLE (1 << 25)<br>
>-- <br>
>2.34.1<br>
><br>
</div>
</span></font><br>
</div>
</body>
</html>