[PATCH] drm/amd/pm: fix and simplify workload handling
Alex Deucher
alexdeucher at gmail.com
Tue Nov 12 14:23:18 UTC 2024
On Tue, Nov 12, 2024 at 12:44 AM Feng, Kenneth <Kenneth.Feng at amd.com> wrote:
> [AMD Official Use Only - AMD Internal Distribution Only]
>
> Hi Alex,
> If I understand this patch correctly, the sysfs end user will only see
> his/her own power profile setting, since smu->power_profile_mode reflects
> the end user's selection.
> Then if other components set the workload mask, smu->power_profile_mode
> can't reflect the real prioritized workload. If the end user doesn't need
> to know this information, then it's ok. In addition, there might be one
> problem; please see the comments inline.
>
The problem is that when users play videos and games at the same time, or
run ROCm apps and games at the same time, sysfs reflects only the last
selected workload profile. This is confusing for users and does not align
with how the firmware works. We already have bugs filed because playing
back a video while gaming shows the profile as VIDEO, which users assume
is wrong and will hurt their gaming experience. The workload hint is a bit
mask, and all of the currently active workloads should be set while they
are active; otherwise mixing workloads could have a negative effect on
performance. E.g., if you do video playback and gaming at the same time,
you should get both the FS3D and VIDEO workload bits set, and the PMFW
will arbitrate between them.
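
To illustrate (this is just a userspace sketch with made-up bit positions,
not the real per-SMU workload mapping), the combined mask behaves like this:

/* Illustrative sketch only: bit positions and names are invented for the
 * example and do not match any particular SMU's workload mapping. */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_BIT_FULLSCREEN3D (1u << 1)
#define EXAMPLE_BIT_VIDEO        (1u << 3)

int main(void)
{
	uint32_t workload_mask = 0;

	/* gaming starts */
	workload_mask |= EXAMPLE_BIT_FULLSCREEN3D;
	/* video playback starts while the game keeps running */
	workload_mask |= EXAMPLE_BIT_VIDEO;
	/* the firmware receives the combined mask and arbitrates itself */
	printf("mask sent to PMFW: 0x%x\n", workload_mask);

	/* video playback stops: only that bit is cleared, 3D stays active */
	workload_mask &= ~EXAMPLE_BIT_VIDEO;
	printf("mask after video stops: 0x%x\n", workload_mask);
	return 0;
}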
> Thanks.
>
> -----Original Message-----
> From: Deucher, Alexander <Alexander.Deucher at amd.com>
> Sent: Saturday, November 9, 2024 1:32 PM
> To: amd-gfx at lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher at amd.com>; Feng, Kenneth <
> Kenneth.Feng at amd.com>; Lazar, Lijo <Lijo.Lazar at amd.com>
> Subject: [PATCH] drm/amd/pm: fix and simplify workload handling
>
> smu->workload_mask is IP specific and should not be messed with in
> the common code. The mask bits vary across SMU versions.
>
> Move all handling of smu->workload_mask into the backends and simplify
> the code. Store the user's preference in smu->power_profile_mode, which
> will be reflected in sysfs. For internal driver profile switches for KFD
> or VCN, just update the workload mask so that the user's preference is
> retained. Remove all of the extra, now unused, workload-related elements
> in the smu structure.
>
> Fixes: 8cc438be5d49 ("drm/amd/pm: correct the workload setting")
> Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
> Cc: Kenneth Feng <kenneth.feng at amd.com>
> Cc: Lijo Lazar <lijo.lazar at amd.com>
> ---
> drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c | 108 ++++++------------
> drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h | 11 +-
> .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c | 20 ++--
> .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c | 20 ++--
> .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c | 21 ++--
> .../gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c | 17 +--
> .../gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c | 17 +--
> .../drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c | 33 +++---
> .../drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c | 21 ++--
> .../drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c | 24 ++--
> drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c | 8 --
> drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h | 2 -
> 12 files changed, 132 insertions(+), 170 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> index c3a6b6f20455..162a3289855c 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> @@ -1268,9 +1268,6 @@ static int smu_sw_init(struct amdgpu_ip_block *ip_block)
> INIT_WORK(&smu->interrupt_work, smu_interrupt_work_fn);
> atomic64_set(&smu->throttle_int_counter, 0);
> smu->watermarks_bitmap = 0;
> - smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
> - smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
> - smu->user_dpm_profile.user_workload_mask = 0;
>
> for (i = 0; i < adev->vcn.num_vcn_inst; i++)
> atomic_set(&smu->smu_power.power_gate.vcn_gated[i], 1);
> @@ -1278,33 +1275,12 @@ static int smu_sw_init(struct amdgpu_ip_block *ip_block)
> atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
> atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
>
> - smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
> - smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
> - smu->workload_priority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
> - smu->workload_priority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
> - smu->workload_priority[PP_SMC_POWER_PROFILE_VR] = 4;
> - smu->workload_priority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
> - smu->workload_priority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
> -
> if (smu->is_apu ||
> - !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D)) {
> - smu->driver_workload_mask =
> - 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
> - } else {
> - smu->driver_workload_mask =
> - 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
> - smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
> - }
> -
> - smu->workload_mask = smu->driver_workload_mask |
> - smu->user_dpm_profile.user_workload_mask;
> - smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
> - smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
> - smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
> - smu->workload_setting[3] = PP_SMC_POWER_PROFILE_VIDEO;
> - smu->workload_setting[4] = PP_SMC_POWER_PROFILE_VR;
> - smu->workload_setting[5] = PP_SMC_POWER_PROFILE_COMPUTE;
> - smu->workload_setting[6] = PP_SMC_POWER_PROFILE_CUSTOM;
> + !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D))
> + smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
> + else
> + smu->power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
> +
> smu->display_config = &adev->pm.pm_display_cfg;
>
> smu->smu_dpm.dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
> @@ -2252,24 +2228,23 @@ static int smu_enable_umd_pstate(void *handle,
> }
>
> static int smu_bump_power_profile_mode(struct smu_context *smu,
> - long *param,
> - uint32_t param_size)
> + long *param,
> + uint32_t param_size,
> + bool enable)
> {
> int ret = 0;
>
> if (smu->ppt_funcs->set_power_profile_mode)
> - ret = smu->ppt_funcs->set_power_profile_mode(smu, param, param_size);
> + ret = smu->ppt_funcs->set_power_profile_mode(smu, param, param_size,
> + enable);
>
> return ret;
> }
>
> static int smu_adjust_power_state_dynamic(struct smu_context *smu,
> enum amd_dpm_forced_level level,
> - bool skip_display_settings,
> - bool init)
> + bool skip_display_settings)
> {
> int ret = 0;
> - int index = 0;
> long workload[1];
> struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
>
> @@ -2307,13 +2282,10 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
> }
>
> if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
> - smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
> - index = fls(smu->workload_mask);
> - index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
> - workload[0] = smu->workload_setting[index];
> + smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
> + workload[0] = smu->power_profile_mode;
>
> - if (init || smu->power_profile_mode != workload[0])
> - smu_bump_power_profile_mode(smu, workload, 0);
> + smu_bump_power_profile_mode(smu, workload, 0, true);
> }
#[Kenneth Feng]
#After some OD settings, the workload will go back to the user's setting
#because workload[0] = smu->power_profile_mode.
#Is there a scenario where the compute workload is set by KFD before the OD
#setting, and the compute workload setting is then lost after the OD setting?
>
I see what you mean. I think we need to refcount the selected workload
types and keep them set until the ref count goes to 0.
Alex
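
Roughly something like the following (a hypothetical sketch of the
refcounting idea, not actual driver code; the struct, helper names and the
workload count are made up for the example):

/* Hypothetical sketch of per-workload refcounting: a workload type's bit
 * stays set in the mask until every user of that type drops its reference. */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_WORKLOAD_COUNT 7

struct example_workload_state {
	unsigned int refcount[EXAMPLE_WORKLOAD_COUNT];
	uint32_t mask;
};

static void example_workload_get(struct example_workload_state *ws, int type)
{
	if (ws->refcount[type]++ == 0)
		ws->mask |= 1u << type;	/* first user sets the bit */
}

static void example_workload_put(struct example_workload_state *ws, int type)
{
	if (ws->refcount[type] && --ws->refcount[type] == 0)
		ws->mask &= ~(1u << type);	/* last user clears the bit */
}

int main(void)
{
	struct example_workload_state ws = { 0 };

	example_workload_get(&ws, 5);	/* e.g. two compute users take a ref */
	example_workload_get(&ws, 5);
	example_workload_put(&ws, 5);	/* one finishes: bit stays set */
	printf("mask: 0x%x\n", ws.mask);
	example_workload_put(&ws, 5);	/* last user gone: bit clears */
	printf("mask: 0x%x\n", ws.mask);
	return 0;
}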
>
> return ret;
> @@ -2333,13 +2305,13 @@ static int smu_handle_task(struct smu_context *smu,
> ret = smu_pre_display_config_changed(smu);
> if (ret)
> return ret;
> - ret = smu_adjust_power_state_dynamic(smu, level, false, false);
> + ret = smu_adjust_power_state_dynamic(smu, level, false);
> break;
> case AMD_PP_TASK_COMPLETE_INIT:
> - ret = smu_adjust_power_state_dynamic(smu, level, true, true);
> + ret = smu_adjust_power_state_dynamic(smu, level, true);
> break;
> case AMD_PP_TASK_READJUST_POWER_STATE:
> - ret = smu_adjust_power_state_dynamic(smu, level, true, false);
> + ret = smu_adjust_power_state_dynamic(smu, level, true);
> + ret = smu_adjust_power_state_dynamic(smu, level, true);
> break;
> default:
> break;
> @@ -2361,12 +2333,11 @@ static int smu_handle_dpm_task(void *handle,
>
> static int smu_switch_power_profile(void *handle,
> enum PP_SMC_POWER_PROFILE type,
> - bool en)
> + bool enable)
> {
> struct smu_context *smu = handle;
> struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
> long workload[1];
> - uint32_t index;
>
> if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled)
> return -EOPNOTSUPP;
> @@ -2374,24 +2345,15 @@ static int smu_switch_power_profile(void *handle,
> if (!(type < PP_SMC_POWER_PROFILE_CUSTOM))
> return -EINVAL;
>
> - if (!en) {
> - smu->driver_workload_mask &= ~(1 << smu->workload_priority[type]);
> - index = fls(smu->workload_mask);
> - index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
> - workload[0] = smu->workload_setting[index];
> - } else {
> - smu->driver_workload_mask |= (1 << smu->workload_priority[type]);
> - index = fls(smu->workload_mask);
> - index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
> - workload[0] = smu->workload_setting[index];
> - }
> + /* don't disable the user's preference */
> + if (!enable && type == smu->power_profile_mode)
> + return 0;
>
> - smu->workload_mask = smu->driver_workload_mask |
> - smu->user_dpm_profile.user_workload_mask;
> + workload[0] = type;
>
> if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
> - smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
> - smu_bump_power_profile_mode(smu, workload, 0);
> + smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
> + smu_bump_power_profile_mode(smu, workload, 0, enable);
>
> return 0;
> }
> @@ -3090,21 +3052,25 @@ static int smu_set_power_profile_mode(void *handle,
> uint32_t param_size)
> {
> struct smu_context *smu = handle;
> - int ret;
> + long workload[1];
> + int ret = 0;
>
> if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled ||
> !smu->ppt_funcs->set_power_profile_mode)
> return -EOPNOTSUPP;
>
> - if (smu->user_dpm_profile.user_workload_mask &
> - (1 << smu->workload_priority[param[param_size]]))
> - return 0;
> -
> - smu->user_dpm_profile.user_workload_mask =
> - (1 << smu->workload_priority[param[param_size]]);
> - smu->workload_mask = smu->user_dpm_profile.user_workload_mask |
> - smu->driver_workload_mask;
> - ret = smu_bump_power_profile_mode(smu, param, param_size);
> + if (param[param_size] != smu->power_profile_mode) {
> + /* clear the old user preference */
> + workload[0] = smu->power_profile_mode;
> + ret = smu_bump_power_profile_mode(smu, workload, 0, false);
> + if (ret)
> + return ret;
> + /* set the new user preference */
> + ret = smu_bump_power_profile_mode(smu, param, param_size, true);
> + if (!ret)
> + /* store the user's preference */
> + smu->power_profile_mode = param[param_size];
> + }
>
> return ret;
> }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> index fa93a8879113..cd2db06d752b 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> +++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> @@ -240,7 +240,6 @@ struct smu_user_dpm_profile {
> /* user clock state information */
> uint32_t clk_mask[SMU_CLK_COUNT];
> uint32_t clk_dependency;
> - uint32_t user_workload_mask;
> };
>
> #define SMU_TABLE_INIT(tables, table_id, s, a, d) \
> @@ -557,12 +556,10 @@ struct smu_context {
> uint32_t hard_min_uclk_req_from_dal;
> bool disable_uclk_switch;
>
> + /* backend specific workload mask */
> uint32_t workload_mask;
> - uint32_t driver_workload_mask;
> - uint32_t workload_priority[WORKLOAD_POLICY_MAX];
> - uint32_t workload_setting[WORKLOAD_POLICY_MAX];
> + /* default/user workload preference */
> uint32_t power_profile_mode;
> - uint32_t default_power_profile_mode;
> bool pm_enabled;
> bool is_apu;
>
> @@ -734,8 +731,10 @@ struct pptable_funcs {
> * create/set custom power profile modes.
> * &input: Power profile mode parameters.
> * &size: Size of &input.
> + * &enable: enable/disable the profile
> */
> - int (*set_power_profile_mode)(struct smu_context *smu, long *input, uint32_t size);
> + int (*set_power_profile_mode)(struct smu_context *smu, long *input,
> + uint32_t size, bool enable);
>
> /**
> * @dpm_set_vcn_enable: Enable/disable VCN engine dynamic power
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> index 4b36c230e43a..1e44cf6fec4b 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> @@ -1443,7 +1443,8 @@ static int arcturus_get_power_profile_mode(struct smu_context *smu,
>
> static int arcturus_set_power_profile_mode(struct smu_context *smu,
> long *input,
> - uint32_t size)
> + uint32_t size,
> + bool enable)
> {
> DpmActivityMonitorCoeffInt_t activity_monitor;
> int workload_type = 0;
> @@ -1455,8 +1456,9 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
> return -EINVAL;
> }
>
> - if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
> - (smu->smc_fw_version >= 0x360d00)) {
> + if (enable &&
> + (profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
> + (smu->smc_fw_version >= 0x360d00)) {
> if (size != 10)
> return -EINVAL;
>
> @@ -1520,18 +1522,18 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
> return -EINVAL;
> }
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu,
> SMU_MSG_SetWorkloadMask,
> smu->workload_mask,
> NULL);
> - if (ret) {
> + if (ret)
> dev_err(smu->adev->dev, "Fail to set workload type %d\n",
> workload_type);
> - return ret;
> - }
> -
> - smu_cmn_assign_power_profile(smu);
>
> - return 0;
> + return ret;
> }
>
> static int arcturus_set_performance_level(struct smu_context *smu,
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> index 211635dabed8..d944a9f954d0 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> @@ -2006,19 +2006,19 @@ static int navi10_get_power_profile_mode(struct smu_context *smu, char *buf)
> return size;
> }
>
> -static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
> +static int navi10_set_power_profile_mode(struct smu_context *smu, long *input,
> + uint32_t size, bool enable)
> {
> DpmActivityMonitorCoeffInt_t activity_monitor;
> int workload_type, ret = 0;
> + uint32_t profile_mode = input[size];
>
> - smu->power_profile_mode = input[size];
> -
> - if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
> - dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> smu->power_profile_mode);
> + if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
> + dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> +profile_mode);
> return -EINVAL;
> }
>
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> + if (enable && profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> if (size != 10)
> return -EINVAL;
>
> @@ -2080,16 +2080,18 @@ static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, u
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> workload_type = smu_cmn_to_asic_specific_index(smu,
> CMN2ASIC_MAPPING_WORKLOAD,
> - smu->power_profile_mode);
> + profile_mode);
> if (workload_type < 0)
> return -EINVAL;
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
> smu->workload_mask, NULL);
> if (ret)
> dev_err(smu->adev->dev, "[%s] Failed to set work load
> mask!", __func__);
> - else
> - smu_cmn_assign_power_profile(smu);
>
> return ret;
> }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> index 844532a9b641..4967e087088b 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> @@ -1704,22 +1704,23 @@ static int sienna_cichlid_get_power_profile_mode(struct smu_context *smu, char *
> return size;
> }
>
> -static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
> +static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu,
> + long *input, uint32_t size,
> + bool enable)
> {
>
> DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
> DpmActivityMonitorCoeffInt_t *activity_monitor =
> &(activity_monitor_external.DpmActivityMonitorCoeffInt);
> + uint32_t profile_mode = input[size];
> int workload_type, ret = 0;
>
> - smu->power_profile_mode = input[size];
> -
> - if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
> - dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> smu->power_profile_mode);
> + if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
> + dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> +profile_mode);
> return -EINVAL;
> }
>
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> + if (enable && profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> if (size != 10)
> return -EINVAL;
>
> @@ -1781,16 +1782,18 @@ static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> workload_type = smu_cmn_to_asic_specific_index(smu,
> CMN2ASIC_MAPPING_WORKLOAD,
> - smu->power_profile_mode);
> + profile_mode);
> if (workload_type < 0)
> return -EINVAL;
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
> smu->workload_mask, NULL);
> if (ret)
> dev_err(smu->adev->dev, "[%s] Failed to set work load
> mask!", __func__);
> - else
> - smu_cmn_assign_power_profile(smu);
>
> return ret;
> }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
> index f89c487dce72..b5dba4826f81 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
> @@ -1056,7 +1056,8 @@ static int vangogh_get_power_profile_mode(struct smu_context *smu,
> return size;
> }
>
> -static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
> +static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
> + uint32_t size, bool enable)
> {
> int workload_type, ret;
> uint32_t profile_mode = input[size];
> @@ -1067,7 +1068,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
> }
>
> if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
> - profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
> + profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
> return 0;
>
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> @@ -1080,18 +1081,18 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
> return -EINVAL;
> }
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu,
> SMU_MSG_ActiveProcessNotify,
> smu->workload_mask,
> NULL);
> - if (ret) {
> + if (ret)
> dev_err_once(smu->adev->dev, "Fail to set workload type
> %d\n",
> workload_type);
> - return ret;
> - }
> -
> - smu_cmn_assign_power_profile(smu);
>
> - return 0;
> + return ret;
> }
>
> static int vangogh_set_soft_freq_limited_range(struct smu_context *smu,
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
> index 75a9ea87f419..2d1eae79ab9d 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
> @@ -864,7 +864,8 @@ static int renoir_force_clk_levels(struct smu_context *smu,
> return ret;
> }
>
> -static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
> +static int renoir_set_power_profile_mode(struct smu_context *smu, long *input,
> + uint32_t size, bool enable)
> {
> int workload_type, ret;
> uint32_t profile_mode = input[size];
> @@ -875,7 +876,7 @@ static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, u
> }
>
> if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
> - profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
> + profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
> return 0;
>
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> @@ -891,17 +892,17 @@ static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, u
> return -EINVAL;
> }
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu,
> SMU_MSG_ActiveProcessNotify,
> smu->workload_mask,
> NULL);
> - if (ret) {
> + if (ret)
> dev_err_once(smu->adev->dev, "Fail to set workload type
> %d\n", workload_type);
> - return ret;
> - }
>
> - smu_cmn_assign_power_profile(smu);
> -
> - return 0;
> + return ret;
> }
>
> static int renoir_set_peak_clock_by_device(struct smu_context *smu)
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> index 80c6b1e523aa..3cc734331891 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> @@ -2573,22 +2573,22 @@ static int smu_v13_0_0_get_power_profile_mode(struct smu_context *smu,
>
> static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
> long *input,
> - uint32_t size)
> + uint32_t size,
> + bool enable)
> {
> DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
> DpmActivityMonitorCoeffInt_t *activity_monitor =
> &(activity_monitor_external.DpmActivityMonitorCoeffInt);
> + uint32_t profile_mode = input[size];
> int workload_type, ret = 0;
> u32 workload_mask;
>
> - smu->power_profile_mode = input[size];
> -
> - if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
> - dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> smu->power_profile_mode);
> + if (profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
> + dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> +profile_mode);
> return -EINVAL;
> }
>
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> + if (enable && profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> if (size != 9)
> return -EINVAL;
>
> @@ -2641,13 +2641,18 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> workload_type = smu_cmn_to_asic_specific_index(smu,
> CMN2ASIC_MAPPING_WORKLOAD,
> - smu->power_profile_mode);
> + profile_mode);
>
> if (workload_type < 0)
> return -EINVAL;
>
> workload_mask = 1 << workload_type;
>
> + if (enable)
> + smu->workload_mask |= workload_mask;
> + else
> + smu->workload_mask &= ~workload_mask;
> +
> /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
> if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
> ((smu->adev->pm.fw_version == 0x004e6601) ||
> @@ -2658,25 +2663,13 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
> CMN2ASIC_MAPPING_WORKLOAD,
> PP_SMC_POWER_PROFILE_POWERSAVING);
> if (workload_type >= 0)
> - workload_mask |= 1 << workload_type;
> + smu->workload_mask |= 1 << workload_type;
> }
>
> - smu->workload_mask |= workload_mask;
> ret = smu_cmn_send_smc_msg_with_param(smu,
> SMU_MSG_SetWorkloadMask,
> smu->workload_mask,
> NULL);
> - if (!ret) {
> - smu_cmn_assign_power_profile(smu);
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING) {
> - workload_type = smu_cmn_to_asic_specific_index(smu,
> - CMN2ASIC_MAPPING_WORKLOAD,
> - PP_SMC_POWER_PROFILE_FULLSCREEN3D);
> - smu->power_profile_mode = smu->workload_mask & (1 << workload_type)
> - ? PP_SMC_POWER_PROFILE_FULLSCREEN3D
> - : PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
> - }
> - }
>
> return ret;
> }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
> index c5d3e25cc967..1aafd23857f0 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
> @@ -2528,22 +2528,23 @@ do {
> \
> return result;
> }
>
> -static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
> +static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu,
> + long *input, uint32_t size,
> + bool enable)
> {
>
> DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
> DpmActivityMonitorCoeffInt_t *activity_monitor =
> &(activity_monitor_external.DpmActivityMonitorCoeffInt);
> + uint32_t profile_mode = input[size];
> int workload_type, ret = 0;
>
> - smu->power_profile_mode = input[size];
> -
> - if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_WINDOW3D) {
> - dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> smu->power_profile_mode);
> + if (profile_mode > PP_SMC_POWER_PROFILE_WINDOW3D) {
> + dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> +profile_mode);
> return -EINVAL;
> }
>
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> + if (enable && profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> if (size != 8)
> return -EINVAL;
>
> @@ -2590,17 +2591,19 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> workload_type = smu_cmn_to_asic_specific_index(smu,
> CMN2ASIC_MAPPING_WORKLOAD,
> - smu->power_profile_mode);
> + profile_mode);
> if (workload_type < 0)
> return -EINVAL;
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
> smu->workload_mask, NULL);
>
> if (ret)
> dev_err(smu->adev->dev, "[%s] Failed to set work load
> mask!", __func__);
> - else
> - smu_cmn_assign_power_profile(smu);
>
> return ret;
> }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
> index 59b369eff30f..695480833603 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
> @@ -1719,21 +1719,22 @@ static int smu_v14_0_2_get_power_profile_mode(struct smu_context *smu,
>
> static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
> long *input,
> - uint32_t size)
> + uint32_t size,
> + bool enable)
> {
> DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
> DpmActivityMonitorCoeffInt_t *activity_monitor =
> &(activity_monitor_external.DpmActivityMonitorCoeffInt);
> + uint32_t profile_mode = input[size];
> int workload_type, ret = 0;
> uint32_t current_profile_mode = smu->power_profile_mode;
> - smu->power_profile_mode = input[size];
>
> - if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
> - dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> smu->power_profile_mode);
> + if (profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
> + dev_err(smu->adev->dev, "Invalid power profile mode %d\n",
> +profile_mode);
> return -EINVAL;
> }
>
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> + if (enable && profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
> if (size != 9)
> return -EINVAL;
>
> @@ -1783,7 +1784,7 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
> }
> }
>
> - if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
> + if (profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
> smu_v14_0_deep_sleep_control(smu, false);
> else if (current_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
> smu_v14_0_deep_sleep_control(smu, true);
> @@ -1791,15 +1792,16 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
> /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
> workload_type = smu_cmn_to_asic_specific_index(smu,
> CMN2ASIC_MAPPING_WORKLOAD,
> - smu->power_profile_mode);
> + profile_mode);
> if (workload_type < 0)
> return -EINVAL;
>
> + if (enable)
> + smu->workload_mask |= (1 << workload_type);
> + else
> + smu->workload_mask &= ~(1 << workload_type);
> ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
> - smu->workload_mask, NULL);
> -
> - if (!ret)
> - smu_cmn_assign_power_profile(smu);
> + smu->workload_mask, NULL);
>
> return ret;
> }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
> index fd2aa949538e..63c4f75fa118 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
> @@ -1141,14 +1141,6 @@ int smu_cmn_set_mp1_state(struct smu_context *smu,
> return ret;
> }
>
> -void smu_cmn_assign_power_profile(struct smu_context *smu)
> -{
> - uint32_t index;
> - index = fls(smu->workload_mask);
> - index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
> - smu->power_profile_mode = smu->workload_setting[index];
> -}
> -
> bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev)
> {
> struct pci_dev *p = NULL;
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
> index 8a801e389659..1de685defe85 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
> @@ -130,8 +130,6 @@ void smu_cmn_init_soft_gpu_metrics(void *table, uint8_t frev, uint8_t crev);
> int smu_cmn_set_mp1_state(struct smu_context *smu,
> enum pp_mp1_state mp1_state);
>
> -void smu_cmn_assign_power_profile(struct smu_context *smu);
> -
> /*
> * Helper function to make sysfs_emit_at() happy. Align buf to
> * the current page boundary and record the offset.
> --
> 2.47.0
>
>