[PATCH 2/2] drm/amd/pp: Add stable Pstate clk display when print_clock_levels
Russell, Kent
Kent.Russell at amd.com
Mon Jan 8 20:22:34 UTC 2018
Sorry, I just re-read and saw that this will be in sclk and mclk, not just mclk. So will this have a direct impact on any other functionality, or will it just disable clock/power gating and prevent the clocks from changing?
Kent
-----Original Message-----
From: Russell, Kent
Sent: Monday, January 08, 2018 2:52 PM
To: 'Alex Deucher'; Kuehling, Felix
Cc: amd-gfx list
Subject: RE: [PATCH 2/2] drm/amd/pp: Add stable Pstate clk display when print_clock_levels
And yes, it will confuse the SMI in a couple functions, but thankfully it's nice enough that I can fix it with a couple lines.
So just to make sure that I understand this correctly, if a user were to echo P to the pp_dpm_mclk file, it would disable clockgating and powergating and keep the memory clocks at a stable level (likely level 0)? And this would have no impact on the SCLK or Power Profile or anything else, just the mclk? Just asking so I know how to implement it in the SMI. Thanks!
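For what it's worth, the parsing change on my side is small. A rough sketch (not the actual SMI code; the sysfs path below is only the usual location and is an assumption here) of skipping the new "P:" line while counting DPM levels:

/* Sketch only, not the real rocm_smi code.  Parses the pp_dpm_mclk
 * format shown in the patch ("0: 300Mhz *", "1: 1650Mhz", "P: 300Mhz")
 * and treats the stable-Pstate line separately from the DPM levels.
 * The sysfs path is an assumption for illustration.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/sys/class/drm/card0/device/pp_dpm_mclk", "r");
    char line[128];
    unsigned int level, mhz, pstate_mhz = 0;
    int current = -1, levels = 0;

    if (!f) {
        perror("pp_dpm_mclk");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "P: %uMhz", &pstate_mhz) == 1)
            continue;               /* new stable Pstate line, not a DPM level */
        if (sscanf(line, "%u: %uMhz", &level, &mhz) == 2) {
            if (strchr(line, '*'))
                current = level;    /* '*' marks the currently active level */
            levels++;
        }
    }
    fclose(f);
    printf("%d levels, current %d, stable Pstate %uMhz\n",
           levels, current, pstate_mhz);
    return 0;
}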
Kent
-----Original Message-----
From: Alex Deucher [mailto:alexdeucher at gmail.com]
Sent: Monday, January 08, 2018 2:35 PM
To: Kuehling, Felix
Cc: amd-gfx list; Russell, Kent
Subject: Re: [PATCH 2/2] drm/amd/pp: Add stable Pstate clk display when print_clock_levels
On Mon, Jan 8, 2018 at 2:20 PM, Felix Kuehling <felix.kuehling at amd.com> wrote:
> [+Kent]
>
> What does stable pstate mean? What is it used for?
This is used for the profiling stuff in force_performance_level. It disables clock and power gating and sets stable clock levels for doing performance profiling.
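Concretely, that means writing one of the profiling modes to the power_dpm_force_performance_level node. A minimal sketch (path and mode strings assumed from the usual amdgpu sysfs layout, not taken from this patch):

/* Minimal sketch: select a profiling mode via
 * power_dpm_force_performance_level, which disables clock/power gating
 * and pins sclk/mclk to the stable Pstate values reported by the new
 * "P:" line.  Path and mode string are assumptions for illustration.
 */
#include <stdio.h>

int main(void)
{
    const char *node =
        "/sys/class/drm/card0/device/power_dpm_force_performance_level";
    FILE *f = fopen(node, "w");

    if (!f) {
        perror(node);
        return 1;
    }
    /* other accepted values include "auto", "low", "high",
     * "profile_min_sclk", "profile_min_mclk" and "profile_peak" */
    fputs("profile_standard", f);
    fclose(f);
    return 0;
}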
Alex
>
> Hi Kent,
>
> Is this going to confuse rocm_smi?
>
> Regards,
> Felix
>
>
> On 2018-01-08 04:57 AM, Rex Zhu wrote:
>> The additional output is appended to the end of the sclk/mclk info,
>> e.g. cat pp_dpm_mclk:
>> 0: 300Mhz *
>> 1: 1650Mhz
>> P: 300Mhz
>>
>> Signed-off-by: Rex Zhu <Rex.Zhu at amd.com>
>>
>> Change-Id: Idf8eeedb5d399d9ffb7de7a2fb2976c7fa7c01a8
>> ---
>> drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c | 2 ++
>> drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c | 2 ++
>> drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 2 ++
>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 2 ++
>> 4 files changed, 8 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>> index f68dd08..03dfba0 100644
>> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
>> @@ -1601,6 +1601,7 @@ static int cz_print_clock_levels(struct pp_hwmgr *hwmgr,
>> size += sprintf(buf + size, "%d: %uMhz %s\n",
>> i, sclk_table->entries[i].clk / 100,
>> (i == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_sclk/100);
>> break;
>> case PP_MCLK:
>> now = PHM_GET_FIELD(cgs_read_ind_register(hwmgr->device,
>> @@ -1613,6 +1614,7 @@ static int cz_print_clock_levels(struct pp_hwmgr *hwmgr,
>> size += sprintf(buf + size, "%d: %uMhz %s\n",
>> CZ_NUM_NBPMEMORYCLOCK-i, data->sys_info.nbp_memory_clock[i-1] / 100,
>> (CZ_NUM_NBPMEMORYCLOCK-i == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_mclk/100);
>> break;
>> default:
>> break;
>> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>> index 409a56b..88c6ad8 100644
>> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/rv_hwmgr.c
>> @@ -756,6 +756,7 @@ static int rv_print_clock_levels(struct pp_hwmgr *hwmgr,
>> data->gfx_max_freq_limit / 100,
>> ((data->gfx_max_freq_limit / 100) == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_sclk/100);
>> break;
>> case PP_MCLK:
>> PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr,
>> @@ -773,6 +774,7 @@ static int rv_print_clock_levels(struct pp_hwmgr *hwmgr,
>> mclk_table->entries[i].clk / 100,
>> ((mclk_table->entries[i].clk / 100) == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_mclk/100);
>> break;
>> default:
>> break;
>> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>> index 72031bd..1bdcd86 100644
>> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
>> @@ -4372,6 +4372,7 @@ static int smu7_print_clock_levels(struct pp_hwmgr *hwmgr,
>> size += sprintf(buf + size, "%d: %uMhz %s\n",
>> i, sclk_table->dpm_levels[i].value / 100,
>> (i == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_sclk/100);
>> break;
>> case PP_MCLK:
>> smum_send_msg_to_smc(hwmgr, PPSMC_MSG_API_GetMclkFrequency);
>> @@ -4388,6 +4389,7 @@ static int smu7_print_clock_levels(struct pp_hwmgr *hwmgr,
>> size += sprintf(buf + size, "%d: %uMhz %s\n",
>> i, mclk_table->dpm_levels[i].value / 100,
>> (i == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_mclk/100);
>> break;
>> case PP_PCIE:
>> pcie_speed = smu7_get_current_pcie_speed(hwmgr);
>> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>> index cb35f4f..cab50fc 100644
>> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>> @@ -4565,6 +4565,7 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
>> size += sprintf(buf + size, "%d: %uMhz %s\n",
>> i, sclk_table->dpm_levels[i].value / 100,
>> (i == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_sclk/100);
>> break;
>> case PP_MCLK:
>> if (data->registry_data.mclk_dpm_key_disabled)
>> @@ -4583,6 +4584,7 @@ static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
>> size += sprintf(buf + size, "%d: %uMhz %s\n",
>> i, mclk_table->dpm_levels[i].value / 100,
>> (i == now) ? "*" : "");
>> + size += sprintf(buf + size, "P: %uMhz\n", hwmgr->pstate_mclk/100);
>> break;
>> case PP_PCIE:
>> PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr,
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx