[PATCH 2/3] drm/amd/pp: Refine get_gpu_power for VI

Eric Huang jinhuieric.huang at amd.com
Wed Apr 4 16:00:02 UTC 2018


So you didn't compare the result with AGT, but I did. The reality doesn't 
match your speculation.


Regards,

Eric


On 2018-04-04 11:50 AM, Zhu, Rex wrote:
>
> If the user sends the message PPSMC_MSG_GetCurrPkgPwr, the firmware will 
> return the pkgpwr immediately as the current power value.
>
> Since there is no PPSMC_MSG_GetCurrPkgPwr support on these ASICs, we instead 
> send a message that lets the firmware write the pkgpwr to ixSMU_PM_STATUS_94,
>
> and the driver delays 10ms before reading ixSMU_PM_STATUS_94.
>
>
> I don't think there is any problem; otherwise the same issue would exist on 
> polaris/vega.
>
>
> I clear ixSMU_PM_STATUS_94 before letting the smu write it.
>
> The delay time is enough unless we read 0 from ixSMU_PM_STATUS_94.
>
>
>
> Best Regards
>
> Rex
>
>
>
> ------------------------------------------------------------------------
> *From:* amd-gfx <amd-gfx-bounces at lists.freedesktop.org> on behalf of 
> Eric Huang <jinhuieric.huang at amd.com>
> *Sent:* Wednesday, April 4, 2018 11:36 PM
> *To:* amd-gfx at lists.freedesktop.org
> *Subject:* Re: [PATCH 2/3] drm/amd/pp: Refine get_gpu_power for VI
> Sampling period is too short. The power reading will not be aligned
> with AGT's, which will confuse users as to why AMD provides two
> different power results.
>
> Regards,
> Eric
>
> On 2018-04-04 04:25 AM, Rex Zhu wrote:
> > 1. On polaris10/11/12, send the smu message PPSMC_MSG_GetCurrPkgPwr to
> >     read currentpkgpwr directly.
> > 2. On Fiji/tonga/bonaire/hawaii, there is no PPSMC_MSG_GetCurrPkgPwr
> >     support. Send PPSMC_MSG_PmStatusLogStart to let the smu write
> >     currentpkgpwr to ixSMU_PM_STATUS_94. This is asynchronous and needs
> >     a delay of no less than 1ms.
> > 3. Clear ixSMU_PM_STATUS_94 immediately when sending
> >     PPSMC_MSG_PmStatusLogStart to avoid reading a stale value.
> > 4. Delay 10 ms instead of 20 ms, so the result is more similar to
> >     the output of PPSMC_MSG_GetCurrPkgPwr.
> > 5. Remove the max power/vddc/vddci output to keep consistent with vega.
> > 6. For vddc/vddci power, we can calculate the average value per
> >     [10ms, 4s] in another interface if needed.
> >
> > Signed-off-by: Rex Zhu <Rex.Zhu at amd.com>
> > ---
> >  drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 45 +++++++++++-------------
> >  1 file changed, 21 insertions(+), 24 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> > index 40f2f87..c0ce672 100644
> > --- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> > +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
> > @@ -3363,30 +3363,27 @@ static int smu7_get_pp_table_entry(struct pp_hwmgr *hwmgr,
> >  static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr,
> >  		struct pp_gpu_power *query)
> >  {
> > -	PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr,
> > -			PPSMC_MSG_PmStatusLogStart),
> > -			"Failed to start pm status log!",
> > -			return -1);
> > -
> > -	msleep_interruptible(20);
> > -
> > -	PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr,
> > -			PPSMC_MSG_PmStatusLogSample),
> > -			"Failed to sample pm status log!",
> > -			return -1);
> > -
> > -	query->vddc_power = cgs_read_ind_register(hwmgr->device,
> > -			CGS_IND_REG__SMC,
> > -			ixSMU_PM_STATUS_40);
> > -	query->vddci_power = cgs_read_ind_register(hwmgr->device,
> > -			CGS_IND_REG__SMC,
> > -			ixSMU_PM_STATUS_49);
> > -	query->max_gpu_power = cgs_read_ind_register(hwmgr->device,
> > -			CGS_IND_REG__SMC,
> > -			ixSMU_PM_STATUS_94);
> > -	query->average_gpu_power = cgs_read_ind_register(hwmgr->device,
> > -			CGS_IND_REG__SMC,
> > -			ixSMU_PM_STATUS_95);
> > +	if (!query)
> > +		return -EINVAL;
> > +
> > +	memset(query, 0, sizeof *query);
> > +
> > +	if (hwmgr->chip_id >= CHIP_POLARIS10) {
> > +		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrPkgPwr);
> > +		query->average_gpu_power = cgs_read_register(hwmgr->device, mmSMC_MSG_ARG_0);
> > +	} else {
> > +		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart);
> > +		cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC,
> > +				ixSMU_PM_STATUS_94, 0);
> > +
> > +		msleep_interruptible(10);
> > +
> > +		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogSample);
> > +
> > +		query->average_gpu_power = cgs_read_ind_register(hwmgr->device,
> > +				CGS_IND_REG__SMC,
> > +				ixSMU_PM_STATUS_94);
> > +	}
> >
> >  	return 0;
> >  }
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
