Re: [PATCH 1/2] drm/amd/powerplay: drop unnecessary API wrapper and return value
Quan, Evan
Evan.Quan at amd.com
Mon Dec 9 01:26:20 UTC 2019
I actually do not see any problem with this change.
1. If smu_read_smc_arg() always returns 0, there is no point in keeping the "return 0". Making it a "void" API is more reasonable.
2. It makes no sense for "WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, msg);" to get its own wrapper API while "WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90, 0);" and "WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82, param);" do not. It is these three writes combined that make up a real "message sending", as the sketch below shows.
Anyway, it's fine with me if you guys can live with the original poor code.
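
To illustrate point 2, here is roughly what the message sending path looks
like with the wrapper dropped. This is only a simplified sketch based on
the patch below, not the exact result of the patch: the error printing is
trimmed, and the smu_msg_get_index() lookup is assumed from the existing
driver code.

    static int
    smu_v11_0_send_msg_with_param(struct smu_context *smu,
                                  enum smu_message_type msg,
                                  uint32_t param)
    {
            struct amdgpu_device *adev = smu->adev;
            int ret, index;

            index = smu_msg_get_index(smu, msg);
            if (index < 0)
                    return index;

            /* make sure the previous message has been consumed */
            ret = smu_v11_0_wait_for_response(smu);
            if (ret)
                    return ret;

            /* these three writes together form one "message sending" */
            WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90, 0);     /* clear response */
            WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82, param); /* message argument */
            WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, (uint16_t)index); /* message id */

            return smu_v11_0_wait_for_response(smu);
    }
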
> -----Original Message-----
> From: Huang Rui <ray.huang at amd.com>
> Sent: Thursday, December 5, 2019 11:01 AM
> To: Wang, Kevin(Yang) <Kevin1.Wang at amd.com>
> Cc: Quan, Evan <Evan.Quan at amd.com>; amd-gfx at lists.freedesktop.org;
> Deucher, Alexander <Alexander.Deucher at amd.com>
> Subject: Re: [PATCH 1/2] drm/amd/powerplay: drop unnecessary API wrapper and return value
> In-Reply-To: <MN2PR12MB32961EFFD79528A4EFF4BF5AA25D0 at MN2PR12MB3296.namprd12.prod.outlook.com>
>
> On Wed, Dec 04, 2019 at 08:41:00PM +0800, Wang, Kevin(Yang) wrote:
> > [AMD Official Use Only - Internal Distribution Only]
> >
> > This change doesn't make sense as it is. If you really think the
> > return value is useless, it would be more reasonable to hand the
> > value back as the function's return value rather than through a
> > pointer parameter, for example:
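> >
> > Just a sketch of what I mean; this void-to-value signature change is
> > hypothetical and not part of these patches:
> >
> >     uint32_t smu_v11_0_read_arg(struct smu_context *smu)
> >     {
> >             struct amdgpu_device *adev = smu->adev;
> >
> >             /* the register read cannot fail, so return the value */
> >             return RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82);
> >     }
> >
> > and at the call sites:
> >
> >     *if_version = smu_read_smc_arg(smu);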
> > I think these two patches make the code look worse, unless there is a
> > bug they are fixing.
> > Adding [1]@Huang, Ray to double check.
> > Best Regards,
> > Kevin
> >
> >
> > ________________________________________________________________
> >
> > From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> on behalf of Evan
> > Quan <evan.quan at amd.com>
> > Sent: Wednesday, December 4, 2019 5:53 PM
> > To: amd-gfx at lists.freedesktop.org <amd-gfx at lists.freedesktop.org>
> > Cc: Quan, Evan <Evan.Quan at amd.com>
> > Subject: [PATCH 1/2] drm/amd/powerplay: drop unnecessary API wrapper
> > and return value
> >
> > Some minor cosmetic fixes.
> > Change-Id: I3ec217289f4cb491720430f2d0b0b4efe5e2b9aa
> > Signed-off-by: Evan Quan <evan.quan at amd.com>
> > ---
> > drivers/gpu/drm/amd/powerplay/amdgpu_smu.c | 12 ++----
> > .../gpu/drm/amd/powerplay/inc/amdgpu_smu.h | 2 +-
> > drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h | 2 +-
> > drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h | 2 +-
> > drivers/gpu/drm/amd/powerplay/smu_v11_0.c | 39 +++++--------------
> > drivers/gpu/drm/amd/powerplay/smu_v12_0.c | 22 ++---------
> > 6 files changed, 19 insertions(+), 60 deletions(-)
> > diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> > b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> > index 2dd960e85a24..00a0df9b41c9 100644
> > --- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> > +++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
> > @@ -198,9 +198,7 @@ int smu_get_smc_version(struct smu_context *smu, uint32_t *if_version, uint32_t
> > if (ret)
> > return ret;
> >
> > - ret = smu_read_smc_arg(smu, if_version);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, if_version);
> > }
> >
> > if (smu_version) {
> > @@ -208,9 +206,7 @@ int smu_get_smc_version(struct smu_context *smu, uint32_t *if_version, uint32_t
> > if (ret)
> > return ret;
> >
> > - ret = smu_read_smc_arg(smu, smu_version);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, smu_version);
> > }
> >
> > return ret;
> > @@ -339,9 +335,7 @@ int smu_get_dpm_freq_by_index(struct smu_context *smu, enum smu_clk_type clk_typ
> > if (ret)
> > return ret;
> >
> > - ret = smu_read_smc_arg(smu, &param);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, &param);
> >
> > /* BIT31: 0 - Fine grained DPM, 1 - Dicrete DPM
> > * now, we un-support it */
> > diff --git a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> > b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> > index ca3fdc6777cf..e7b18b209bc7 100644
> > --- a/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> > +++ b/drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
> > @@ -502,7 +502,7 @@ struct pptable_funcs {
> > int (*system_features_control)(struct smu_context *smu, bool en);
> > int (*send_smc_msg_with_param)(struct smu_context *smu,
> > enum smu_message_type msg,
> > uint32_t param);
> > - int (*read_smc_arg)(struct smu_context *smu, uint32_t *arg);
> > + void (*read_smc_arg)(struct smu_context *smu, uint32_t *arg);
> > int (*init_display_count)(struct smu_context *smu, uint32_t count);
> > int (*set_allowed_mask)(struct smu_context *smu);
> > int (*get_enabled_mask)(struct smu_context *smu, uint32_t *feature_mask, uint32_t num);
> > diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
> > b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
> > index 610e301a5fce..4160147a03f3 100644
> > --- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
> > +++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
> > @@ -183,7 +183,7 @@ smu_v11_0_send_msg_with_param(struct smu_context *smu,
> > enum smu_message_type msg,
> > uint32_t param);
> >
> > -int smu_v11_0_read_arg(struct smu_context *smu, uint32_t *arg);
> > +void smu_v11_0_read_arg(struct smu_context *smu, uint32_t *arg);
> >
> > int smu_v11_0_init_display_count(struct smu_context *smu, uint32_t count);
> >
> > diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h
> > b/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h
> > index 922973b7e29f..710af2860a8f 100644
> > --- a/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h
> > +++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h
> > @@ -40,7 +40,7 @@ struct smu_12_0_cmn2aisc_mapping {
> > int smu_v12_0_send_msg_without_waiting(struct smu_context *smu,
> > uint16_t msg);
> >
> > -int smu_v12_0_read_arg(struct smu_context *smu, uint32_t *arg);
> > +void smu_v12_0_read_arg(struct smu_context *smu, uint32_t *arg);
> >
> > int smu_v12_0_wait_for_response(struct smu_context *smu);
> >
> > diff --git a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> > b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> > index 8683e0678b56..325ec4864f90 100644
> > --- a/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> > +++ b/drivers/gpu/drm/amd/powerplay/smu_v11_0.c
> > @@ -53,20 +53,11 @@ MODULE_FIRMWARE("amdgpu/navi12_smc.bin");
> >
> > #define SMU11_VOLTAGE_SCALE 4
> >
> > -static int smu_v11_0_send_msg_without_waiting(struct smu_context *smu,
> > - uint16_t msg)
> > -{
> > - struct amdgpu_device *adev = smu->adev;
> > - WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, msg);
> > - return 0;
> > -}
> > -
> > -int smu_v11_0_read_arg(struct smu_context *smu, uint32_t *arg)
> > +void smu_v11_0_read_arg(struct smu_context *smu, uint32_t *arg)
> > {
> > struct amdgpu_device *adev = smu->adev;
> >
> > *arg = RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82);
> > - return 0;
> > }
> >
> > static int smu_v11_0_wait_for_response(struct smu_context *smu)
> > @@ -109,7 +100,7 @@ smu_v11_0_send_msg_with_param(struct smu_context *smu,
> >
> > WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82, param);
> >
> > - smu_v11_0_send_msg_without_waiting(smu, (uint16_t)index);
> > + WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, (uint16_t)index);
> >
> > ret = smu_v11_0_wait_for_response(smu);
> > if (ret)
> > @@ -843,16 +834,12 @@ int smu_v11_0_get_enabled_mask(struct smu_context *smu,
> > ret = smu_send_smc_msg(smu, SMU_MSG_GetEnabledSmuFeaturesHigh);
> > if (ret)
> > return ret;
> > - ret = smu_read_smc_arg(smu, &feature_mask_high);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, &feature_mask_high);
> >
> > ret = smu_send_smc_msg(smu, SMU_MSG_GetEnabledSmuFeaturesLow);
> > if (ret)
> > return ret;
> > - ret = smu_read_smc_arg(smu, &feature_mask_low);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, &feature_mask_low);
> >
> > feature_mask[0] = feature_mask_low;
> > feature_mask[1] = feature_mask_high;
> > @@ -924,9 +911,7 @@ smu_v11_0_get_max_sustainable_clock(struct smu_context *smu, uint32_t *clock,
> > return ret;
> > }
> >
> > - ret = smu_read_smc_arg(smu, clock);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, clock);
> >
> > if (*clock != 0)
> > return 0;
> > @@ -939,7 +924,7 @@ smu_v11_0_get_max_sustainable_clock(struct smu_context *smu, uint32_t *clock,
> > return ret;
> > }
> >
> > - ret = smu_read_smc_arg(smu, clock);
> > + smu_read_smc_arg(smu, clock);
> >
> > return ret;
> > }
> > @@ -1107,9 +1092,7 @@ int smu_v11_0_get_current_clk_freq(struct smu_context *smu,
> > if (ret)
> > return ret;
> >
> > - ret = smu_read_smc_arg(smu, &freq);
> > - if (ret)
> > - return ret;
> > + smu_read_smc_arg(smu, &freq);
> > }
> >
> > freq *= 100;
> > @@ -1749,18 +1732,14 @@ int smu_v11_0_get_dpm_ultimate_freq(struct smu_context *smu, enum smu_clk_type c
> > ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetMaxDpmFreq, param);
> > if (ret)
> > goto failed;
> > - ret = smu_read_smc_arg(smu, max);
> > - if (ret)
> > - goto failed;
> > + smu_read_smc_arg(smu, max);
> > }
> >
> > if (min) {
> > ret = smu_send_smc_msg_with_param(smu, SMU_MSG_GetMinDpmFreq, param);
> > if (ret)
> > goto failed;
> > - ret = smu_read_smc_arg(smu, min);
> > - if (ret)
> > - goto failed;
> > + smu_read_smc_arg(smu, min);
> > }
> >
> > failed:
> > diff --git a/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
> > b/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
> > index 269a7d73b58d..7f5f7e12a41e 100644
> > --- a/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
> > +++ b/drivers/gpu/drm/amd/powerplay/smu_v12_0.c
> > @@ -41,21 +41,11 @@
> > #define SMUIO_GFX_MISC_CNTL__PWR_GFXOFF_STATUS_MASK 0x00000006L
> > #define SMUIO_GFX_MISC_CNTL__PWR_GFXOFF_STATUS__SHIFT 0x1
> >
> > -int smu_v12_0_send_msg_without_waiting(struct smu_context *smu,
> > - uint16_t msg)
> > -{
> > - struct amdgpu_device *adev = smu->adev;
> > -
> > - WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, msg);
> > - return 0;
> > -}
> > -
> > -int smu_v12_0_read_arg(struct smu_context *smu, uint32_t *arg)
> > +void smu_v12_0_read_arg(struct smu_context *smu, uint32_t *arg)
> > {
> > struct amdgpu_device *adev = smu->adev;
> >
> > *arg = RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82);
> > - return 0;
> > }
> >
> > int smu_v12_0_wait_for_response(struct smu_context *smu)
> > @@ -98,7 +88,7 @@ smu_v12_0_send_msg_with_param(struct smu_context *smu,
> >
> > WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_82, param);
> >
> > - smu_v12_0_send_msg_without_waiting(smu, (uint16_t)index);
> > + WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, (uint16_t)index);
>
> The smu_v12_0_send_msg_without_waiting() function is more readable than
> raw register programming.
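>
> If the useless return value is the concern, we could keep the wrapper
> and simply make it void. Just a sketch of that middle ground, not code
> from either patch:
>
>     static void smu_v12_0_send_msg_without_waiting(struct smu_context *smu,
>                                                    uint16_t msg)
>     {
>             struct amdgpu_device *adev = smu->adev;
>
>             /* the register write cannot fail, nothing to return */
>             WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_66, msg);
>     }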
>
> Thanks,
> Ray
>
> >
> > ret = smu_v12_0_wait_for_response(smu);
> > if (ret)
> > @@ -352,9 +342,7 @@ int smu_v12_0_get_dpm_ultimate_freq(struct smu_context *smu, enum smu_clk_type c
> > pr_err("Attempt to get max GX
> > frequency from SMC Failed !\n");
> > goto failed;
> > }
> > - ret = smu_read_smc_arg(smu, max);
> > - if (ret)
> > - goto failed;
> > + smu_read_smc_arg(smu, max);
> > break;
> > case SMU_UCLK:
> > case SMU_FCLK:
> > @@ -383,9 +371,7 @@ int smu_v12_0_get_dpm_ultimate_freq(struct smu_context *smu, enum smu_clk_type c
> > pr_err("Attempt to get min GX
> > frequency from SMC Failed !\n");
> > goto failed;
> > }
> > - ret = smu_read_smc_arg(smu, min);
> > - if (ret)
> > - goto failed;
> > + smu_read_smc_arg(smu, min);
> > break;
> > case SMU_UCLK:
> > case SMU_FCLK:
> > --
> > 2.24.0
> > _______________________________________________
> > amd-gfx mailing list
> > amd-gfx at lists.freedesktop.org
> > [2]https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >
> > References
> >
> > 1. mailto:Ray.Huang at amd.com
> > 2. https://lists.freedesktop.org/mailman/listinfo/amd-gfx