[PATCH 6/8] drm/msm/dpu: use dpu_perf_cfg in DPU core_perf code
Konrad Dybcio
konrad.dybcio at linaro.org
Tue Jun 20 11:31:23 UTC 2023
On 20.06.2023 13:18, Dmitry Baryshkov wrote:
> On 20/06/2023 13:55, Konrad Dybcio wrote:
>> On 20.06.2023 02:08, Dmitry Baryshkov wrote:
>>> Simplify dpu_core_perf code by using only dpu_perf_cfg instead of using
>>> full-featured catalog data.
>>>
>>> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov at linaro.org>
>>> ---
>> Acked-by: Konrad Dybcio <konrad.dybcio at linaro.org>
>>
>> Check below.
>>
>>> drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 52 ++++++++-----------
>>> drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h | 8 +--
>>> drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 2 +-
>>> 3 files changed, 27 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
>>> index 773e641eab28..78a7e3ea27a4 100644
>>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
>>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
>>> @@ -19,11 +19,11 @@
>>>
>>> /**
>>> * _dpu_core_perf_calc_bw() - to calculate BW per crtc
>>> - * @kms: pointer to the dpu_kms
>>> + * @perf_cfg: performance configuration
>>> * @crtc: pointer to a crtc
>>> * Return: returns aggregated BW for all planes in crtc.
>>> */
>>> -static u64 _dpu_core_perf_calc_bw(struct dpu_kms *kms,
>>> +static u64 _dpu_core_perf_calc_bw(const struct dpu_perf_cfg *perf_cfg,
>>> struct drm_crtc *crtc)
>>> {
>>> struct drm_plane *plane;
>>> @@ -39,7 +39,7 @@ static u64 _dpu_core_perf_calc_bw(struct dpu_kms *kms,
>>> crtc_plane_bw += pstate->plane_fetch_bw;
>>> }
>>>
>>> - bw_factor = kms->catalog->perf->bw_inefficiency_factor;
>>> + bw_factor = perf_cfg->bw_inefficiency_factor;
>> It's set to 120 for all SoCs... and it sounds very much like some kind of
>> hack.
>>
>> The 105 for the other inefficiency factor is easy to spot:
>>
>> (1024/1000)^2 = 1.048576 =~= 1.05 = 105%
>>
>> It comes from a MiB-MB-MHz conversion that Qcom splattered all over
>> downstream due to ancient, tragic design decisions in msmbus
>> (which leak into the downstream interconnect a bit):
>
> This doesn't explain why msm8226 and msm8974 had a qcom,mdss-clk-factor
> of 5/4, while 8084 got 1.05 as usual. I can only suppose that MDSS 1.0
> (8974 v1) and 1.1 (8226) had some internal inefficiency / issues.
>
> Also, this 1.05 is a clock inefficiency, so it should not be related
> to msm bus client code.
Right. Maybe Abhinav could shed some light on this.
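For reference, here is that arithmetic as a throwaway userspace sketch; the
300 MHz input is made up, and the multiply-then-divide-by-100 step only
mirrors the bw_factor hunk quoted further down, it is not the driver code
itself:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* MiB/s -> MB/s applied twice: (1024/1000)^2 = 1.048576 ~= 1.05 */
	double mib_to_mb_sq = (1024.0 / 1000.0) * (1024.0 / 1000.0);

	/* Rounded to an integer percentage and applied the same way the
	 * driver applies bw_factor: multiply, then divide by 100. */
	uint64_t crtc_clk = 300000000ULL;	/* 300 MHz, purely illustrative */
	unsigned int clk_factor = 105;

	crtc_clk *= clk_factor;
	crtc_clk /= 100;			/* userspace stand-in for do_div() */

	printf("(1024/1000)^2 = %f\n", mib_to_mb_sq);
	printf("adjusted clk  = %" PRIu64 " Hz\n", crtc_clk);
	return 0;
}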
Konrad
>
>>
>> The logic needs to get some input that corresponds to the rate of a bus
>> clock (19.2, 200, 300 MHz etc.), but the APIs expect a kBps value. So at
>> some point they invented a MHZ_TO_MBPS macro which did this conversion
>> the other way around and probably had to account for it.
>>
>> I think they tried to make it more sensible, but it ended up being
>> even more spaghetti :/
>>
>> I'm not yet sure how it's done with RPMh icc, but with SMD RPM, passing e.g.
>>
>> opp-peak-kBps = <(200 * 8 * 1000)>; # 200 MHz * 8 bytes wide * 1000 (MHz to kHz)
>>
>> results in a "correct" end rate.
>>
>> Konrad
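Spelling that example out (my reading of it; the 200 MHz clock and the
8-byte-wide bus are just the numbers from the snippet above, nothing here
comes from the driver or the bindings):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t bus_mhz = 200;		/* example bus clock in MHz */
	uint64_t bus_bytes = 8;		/* assumed 8-byte-wide bus */

	/* MHz * bytes/cycle gives MB/s; the extra * 1000 turns that into
	 * the kBps unit that opp-peak-kBps expects. */
	uint64_t peak_kBps = bus_mhz * bus_bytes * 1000;

	printf("opp-peak-kBps = <%" PRIu64 ">;\n", peak_kBps);	/* 1600000 */
	return 0;
}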
>>> if (bw_factor) {
>>> crtc_plane_bw *= bw_factor;
>>> do_div(crtc_plane_bw, 100);
>
>
> --
> With best wishes
> Dmitry