[PATCH 05/12] drm/amd/display: Fix bunch of warnings in DC
Kai Wasserbäch
kai at dev.carbon-project.org
Thu Dec 8 19:52:02 UTC 2016
Harry Wentland wrote on 08.12.2016 19:58:
> On 2016-12-08 11:58 AM, Kai Wasserbäch wrote:
>> Harry Wentland wrote on 08.12.2016 17:36:
>>> On 2016-12-08 11:22 AM, Kai Wasserbäch wrote:
>>>> [Please CC me on all replies, I'm not subscribed to the list.]
>>>>
>>>> Harry Wentland wrote on 08.12.2016 02:26:
>>>>> Some of those are potential bugs
>>>> Are they related? If not: maybe split the changes up?
>>>>
>>>>> Change-Id: I53b2c663e18b57013e1b891fc2ecf1fb2d7d3a08
>>>>> Signed-off-by: Harry Wentland <harry.wentland at amd.com>
>>>>> Reviewed-by: Tony Cheng <Tony.Cheng at amd.com>
>>>> It'd be nice to see these reviews on list... (applies to all patches here)
>>> These patches currently start out in our internal tree with a whole lot of stuff
>>> that we can't open up, like new IP, so unfortunately can't be done in a public
>>> forum. I pull the r-b from our internal review system for reference when I
>>> prepare the patches for public review.
>> Hm, not nice, but not my place to tell you to change this. That's Dave's and
>> Linus' job. ;-)
>>
>>>>> Acked-by: Harry Wentland <Harry.Wentland at amd.com>
>>>>> ---
>>>>> .../gpu/drm/amd/display/dc/calcs/bandwidth_calcs.c | 2 +-
>>>>> drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 5 +--
>>>>> .../gpu/drm/amd/display/dc/dce/dce_clock_source.c | 5 ---
>>>>> drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c | 43 +---------------------
>>>>> drivers/gpu/drm/amd/display/dc/dce/dce_transform.c | 2 -
>>>>> .../drm/amd/display/dc/dce110/dce110_transform_v.c | 3 +-
>>>>> 6 files changed, 6 insertions(+), 54 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/display/dc/calcs/bandwidth_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/bandwidth_calcs.c
>>>>> index 0b2bb3992f1a..3b0710ef4716 100644
>>>>> --- a/drivers/gpu/drm/amd/display/dc/calcs/bandwidth_calcs.c
>>>>> +++ b/drivers/gpu/drm/amd/display/dc/calcs/bandwidth_calcs.c
>>>>> @@ -1709,7 +1709,7 @@ static void calculate_bandwidth(
>>>>> else {
>>>>> data->blackout_recovery_time = bw_max2(data->blackout_recovery_time, bw_add(bw_mul(bw_int_to_fixed(2), vbios->mcifwrmc_urgent_latency), data->mcifwr_burst_time[data->y_clk_level][data->sclk_level]));
>>>>> if (bw_ltn(data->adjusted_data_buffer_size[k], bw_mul(bw_div(bw_mul(data->display_bandwidth[k], data->useful_bytes_per_request[k]), data->bytes_per_request[k]), (bw_add(vbios->blackout_duration, bw_add(bw_mul(bw_int_to_fixed(2), vbios->mcifwrmc_urgent_latency), data->mcifwr_burst_time[data->y_clk_level][data->sclk_level])))))) {
>>>>> - data->blackout_recovery_time = bw_max2(data->blackout_recovery_time, bw_div((bw_add(bw_mul(bw_div(bw_mul(data->display_bandwidth[k], data->useful_bytes_per_request[k]), data->bytes_per_request[k]), vbios->blackout_duration), bw_sub(bw_div(bw_mul(bw_mul(bw_mul((bw_add(bw_add(bw_mul(bw_int_to_fixed(2), vbios->mcifwrmc_urgent_latency), data->dmif_burst_time[i][j]), data->mcifwr_burst_time[data->y_clk_level][data->sclk_level])), data->dispclk), bw_int_to_fixed(data->bytes_per_pixel[k])), data->lines_interleaved_in_mem_access[k]), data->latency_hiding_lines[k]), data->adjusted_data_buffer_size[k]))), (bw_sub(bw_div(bw_mul(bw_mul(data->dispclk, bw_int_to_fixed(data->bytes_per_pixel[k])), data->lines_interleaved_in_mem_access[k]), data->latency_hiding_lines[k]), bw_div(bw_mul(data->display_bandwidth[k], data->useful_bytes_per_request[k]), data->bytes_per_request[k])))));
>>>>> + data->blackout_recovery_time = bw_max2(data->blackout_recovery_time, bw_div((bw_add(bw_mul(bw_div(bw_mul(data->display_bandwidth[k], data->useful_bytes_per_request[k]), data->bytes_per_request[k]), vbios->blackout_duration), bw_sub(bw_div(bw_mul(bw_mul(bw_mul((bw_add(bw_add(bw_mul(bw_int_to_fixed(2), vbios->mcifwrmc_urgent_latency), data->dmif_burst_time[data->y_clk_level][data->sclk_level]), data->mcifwr_burst_time[data->y_clk_level][data->sclk_level])), data->dispclk), bw_int_to_fixed(data->bytes_per_pixel[k])), data->lines_interleaved_in_mem_access[k]), data->latency_hiding_lines[k]), data->adjusted_data_buffer_size[k]))), (bw_sub(bw_div(bw_mul(bw_mul(data->dispclk, bw_int_to_fixed(data->bytes_per_pixel[k])), data->lines_interleaved_in_mem_access[k]), data->latency_hiding_lines[k]), bw_div(bw_mul(data->display_bandwidth[k], data->useful_bytes_per_request[k]), data->bytes_per_request[k])))));
>>>> Ugh, can this be wrapped/split up somehow? Seeing the changes is nigh impossible,
>>>> and I suspect it might even be impossible for you to check internally.
>>> I believe this comes straight from some complicated code HW gives us and we use
>>> directly. I'll see if we can somehow make this more readable or at least add a
>>> comment, as you mention below.
>> That would be really appreciated. I mean, I'm in no way able to review
>> DAL/Display thoroughly (a huge code base I'm not familiar with, where I might
>> easily miss interactions), but this is just not reviewable/verifiable at all for
>> any person outside your inner circle.
> I followed up with Tony, who's been working with HW guys to get them to share
> this code with us. Unfortunately there isn't much we can do in the way of
> changing this code.
That's rather unfortunate that not even reformatting into an easier-to-read
format is possible (though I don't really understand why, beyond "it takes
work"). Maybe a comment showing the formula in an easier-to-read form could be
added then?
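For what it's worth, after untangling the wrapped lines the only change I can
make out in that hunk is the index pair on dmif_burst_time ([i][j] becomes
[data->y_clk_level][data->sclk_level]). Purely as an illustration from my side
(variable names lifted from the diff, per-surface [k] indices dropped,
completely untested), a comment along these lines above the statement would
already make such a change much easier to spot:

	/*
	 * eff_wr_bw  = display_bandwidth * useful_bytes_per_request
	 *              / bytes_per_request
	 * burst_time = 2 * mcifwrmc_urgent_latency
	 *              + dmif_burst_time[y_clk_level][sclk_level]
	 *              + mcifwr_burst_time[y_clk_level][sclk_level]
	 * fill_rate  = dispclk * bytes_per_pixel
	 *              * lines_interleaved_in_mem_access
	 *              / latency_hiding_lines
	 *
	 * blackout_recovery_time =
	 *	max(blackout_recovery_time,
	 *	    (eff_wr_bw * blackout_duration
	 *	     + burst_time * fill_rate
	 *	     - adjusted_data_buffer_size)
	 *	    / (fill_rate - eff_wr_bw))
	 */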
Cheers,
Kai
P.S.: Did you see my comments on patch 2? That contained the null pointer deref
(if I didn't misread something).