[PATCH 4/6] drm/amd/powerplay: add sclk OD support on Fiji

Emil Velikov emil.l.velikov at gmail.com
Mon May 16 22:04:10 UTC 2016


On 16 May 2016 at 15:42, Alex Deucher <alexdeucher at gmail.com> wrote:
> On Sat, May 14, 2016 at 10:03 AM, Emil Velikov <emil.l.velikov at gmail.com> wrote:
>> Hi all,
>>
>> On 13 May 2016 at 19:48, Alex Deucher <alexdeucher at gmail.com> wrote:
>>> From: Eric Huang <JinHuiEric.Huang at amd.com>
>>>
>>> This implements sclk overdrive(OD) overclocking support for Fiji,
>>> and the maximum overdrive percentage is 20.
>>>
>>> Reviewed-by: Alex Deucher <alexander.deucher at amd.com>
>>> Signed-off-by: Eric Huang <JinHuiEric.Huang at amd.com>
>>> Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
>>> ---
>>>  drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c | 43 ++++++++++++++++++++++++
>>>  1 file changed, 43 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c
>>> index 6f1bad4..bf7bf5f 100644
>>> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c
>>> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c
>>> @@ -5274,6 +5274,47 @@ bool fiji_check_smc_update_required_for_display_configuration(struct pp_hwmgr *h
>>>         return is_update_required;
>>>  }
>>>
>>> +static int fiji_get_sclk_od(struct pp_hwmgr *hwmgr)
>>> +{
>>> +       struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
>>> +       struct fiji_single_dpm_table *sclk_table = &(data->dpm_table.sclk_table);
>>> +       struct fiji_single_dpm_table *golden_sclk_table =
>>> +                       &(data->golden_dpm_table.sclk_table);
>>> +       int value;
>>> +
>>> +       value = (sclk_table->dpm_levels[sclk_table->count - 1].value -
>>> +                       golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value) *
>>> +                       100 /
>>> +                       golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value;
>>> +
>>> +       return value;
>>> +}
>>> +
>>> +static int fiji_set_sclk_od(struct pp_hwmgr *hwmgr, uint32_t value)
>>> +{
>>> +       struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
>>> +       struct fiji_single_dpm_table *golden_sclk_table =
>>> +                       &(data->golden_dpm_table.sclk_table);
>>> +       struct pp_power_state  *ps;
>>> +       struct fiji_power_state  *fiji_ps;
>>> +
>>> +       if (value > 20)
>>> +               value = 20;
>>> +
>>> +       ps = hwmgr->request_ps;
>>> +
>>> +       if (ps == NULL)
>>> +               return -EINVAL;
>>> +
>>> +       fiji_ps = cast_phw_fiji_power_state(&ps->hardware);
>>> +
>>> +       fiji_ps->performance_levels[fiji_ps->performance_level_count - 1].engine_clock =
>>> +                       golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value *
>>> +                       value / 100 +
>>> +                       golden_sclk_table->dpm_levels[golden_sclk_table->count - 1].value;
>>> +
>>> +       return 0;
>>> +}
>>>
>> Is it just me, or are this patch, 5/6, and 6/6 identical? Apart from
>> the structs, of course, which... seem to be identical as well, despite
>> being copied across different headers and given different names.
>> Imho one should try to share them alongside this (and other related)
>> code, as opposed to duplicating even more code?
>>
>
> If someone was motivated.
I guess the people working on/maintaining it should be motivated. Less
code is easier to maintain and debug, gives a smaller binary, plus a
bunch of other arguments that you know far better than I do.

> The hw is similar, but not quite the same.
> Generally new asics start with a copy of the old structs and then we
> add/remove stuff to handle the difference in feature sets.
> Unfortunately, power management is a bit less well defined compared to
> other IP blocks since it has fingers into all of the other blocks.
> With amdgpu, we've generally tried to keep IP block versions distinct
> even if they are pretty similar, which results in slightly more code,
> to avoid regressions on different asics due to differences in hw
> behavior, changes in register offsets, etc.
>
So that's the root of it here - one copies the code and then makes changes, if any.

If people are using the "let's copy it all, just in case, for the
future" approach, this sounds like over-engineering (according to some
people). Or perhaps it serves as a nice way to boost the stats - the
kernel is now XX lines of code :-P

Fwiw I would strongly encourage you and/or members of the team to
take a look at nouveau's structure [1]. It has handled duplication
across generations, abstraction layers, using the driver as a user-space
application, etc., in a very elegant manner, imho. I believe
others have mentioned/given it as an example, as well ;-)


Thanks
Emil

[1] https://cgit.freedesktop.org/~darktama/nouveau/


More information about the dri-devel mailing list