[PATCH 3/3] drm/amd/display: update bw_calcs to take pipe sync into account (v2)

Kazlauskas, Nicholas Nicholas.Kazlauskas at amd.com
Thu Aug 22 16:35:37 UTC 2019


On 8/22/19 12:31 PM, Alex Deucher wrote:
> On Thu, Aug 22, 2019 at 12:25 PM Kazlauskas, Nicholas
> <Nicholas.Kazlauskas at amd.com> wrote:
>>
>> On 8/22/19 11:36 AM, Alex Deucher wrote:
>>> Properly set all_displays_in_sync so that when the data is
>>> propagated to powerplay, it's set properly and we can enable
>>> mclk switching when all monitors are in sync.
>>>
>>> v2: fix logic, clean up
>>>
>>> Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
>>> ---
>>>    .../gpu/drm/amd/display/dc/calcs/dce_calcs.c  | 49 ++++++++++++++++++-
>>>    1 file changed, 47 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
>>> index 9f12e21f8b9b..8d904647fb0f 100644
>>> --- a/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
>>> +++ b/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
>>> @@ -25,6 +25,7 @@
>>>
>>>    #include <linux/slab.h>
>>>
>>> +#include "resource.h"
>>>    #include "dm_services.h"
>>>    #include "dce_calcs.h"
>>>    #include "dc.h"
>>> @@ -2977,6 +2978,50 @@ static void populate_initial_data(
>>>        data->number_of_displays = num_displays;
>>>    }
>>>
>>> +static bool all_displays_in_sync(const struct pipe_ctx pipe[],
>>> +                              int pipe_count,
>>> +                              uint32_t number_of_displays)
>>> +{
>>> +     const struct pipe_ctx *unsynced_pipes[MAX_PIPES] = { NULL };
>>> +     int group_size = 1;
>>> +     int i, j;
>>> +
>>> +     for (i = 0; i < pipe_count; i++) {
>>> +             if (!pipe[i].stream)
>>
>> This bit differs from program_timing_sync, but since this is for DCE and
>> we don't do pipe split or MPO there, I think it's probably fine that
>> you're not checking top_pipe here.
>>
>> Wouldn't hurt to have that logic here though.
>>
> 
> I had checked for top_pipe here originally, but it was always NULL so
> unsynced_pipes never got populated.  Maybe that is not populated
> properly at this point?

The presence of a top_pipe on a pipe indicates that the pipe is part of 
a blending chain. A NULL top_pipe value indicates that the current pipe 
is the top of the chain.

It should be NULL for all pipes on DCE ASICs.
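
If you did want the guard here anyway, it would just be the same check
program_timing_sync does when it builds its pipe list. A rough, untested
sketch against the loop in this patch (only the top_pipe test is new):

        for (i = 0; i < pipe_count; i++) {
                if (!pipe[i].stream || pipe[i].top_pipe)
                        continue;

                unsynced_pipes[i] = &pipe[i];
        }

On DCE the top_pipe test should never fire since it's always NULL, so the
result wouldn't change there; it just keeps this helper in line with
program_timing_sync.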

Nicholas Kazlauskas

> 
>>> +                     continue;
>>> +
>>> +             unsynced_pipes[i] = &pipe[i];
>>> +     }
>>> +
>>> +     for (i = 0; i < pipe_count; i++) {
>>> +             const struct pipe_ctx *pipe_set[MAX_PIPES];
>>> +
>>> +             if (!unsynced_pipes[i])
>>> +                     continue;
>>> +
>>> +             pipe_set[0] = unsynced_pipes[i];
>>> +             unsynced_pipes[i] = NULL;
>>> +
>>> +             /* Add tg to the set, search rest of the tg's for ones with
>>> +              * same timing, add all tgs with same timing to the group
>>> +              */
>>> +             for (j = i + 1; j < pipe_count; j++) {
>>> +                     if (!unsynced_pipes[j])
>>> +                             continue;
>>> +
>>> +                     if (resource_are_streams_timing_synchronizable(
>>> +                                     unsynced_pipes[j]->stream,
>>> +                                     pipe_set[0]->stream)) {
>>> +                             pipe_set[group_size] = unsynced_pipes[j];
>>> +                             unsynced_pipes[j] = NULL;
>>> +                             group_size++;
>>> +                     }
>>> +             }
>>> +     }
>>> +
>>> +     return (group_size == number_of_displays) ? true : false;
>>
>> I think this logic is functional but it looks incorrect at first glance
>> because group_size doesn't get reset between groups. The initial value of 1
>> covers the very first pipe, but the first pipe of each later group never
>> gets counted, so the total only equals number_of_displays when every
>> display lands in a single group.
>>
>> I feel that this would be more clear as:
>>
>> static bool all_displays_in_sync(const struct pipe_ctx pipe[],
>>                                  int pipe_count)
>> {
>>          const struct pipe_ctx *active_pipes[MAX_PIPES];
>>          int i, num_active_pipes = 0;
>>
>>          for (i = 0; i < pipe_count; i++) {
>>                  if (!pipe[i].stream || pipe[i].top_pipe)
>>                          continue;
>>
>>                  active_pipes[num_active_pipes++] = &pipe[i];
>>          }
>>
>>          if (!num_active_pipes)
>>                  return false;
>>
>>          for (i = 1; i < num_active_pipes; ++i)
>>                  if (!resource_are_streams_timing_synchronizable(
>>                              active_pipes[0]->stream, active_pipes[i]->stream))
>>                          return false;
>>
>>          return true;
>> }
> 
> Yes, that's much cleaner.  Thanks!
> 
> Alex
> 
>>
>> But I haven't tested this.
>>
>> Nicholas Kazlauskas
>>
>>
>>> +}
>>> +
>>>    /**
>>>     * Return:
>>>     *  true -  Display(s) configuration supported.
>>> @@ -2998,8 +3043,8 @@ bool bw_calcs(struct dc_context *ctx,
>>>
>>>        populate_initial_data(pipe, pipe_count, data);
>>>
>>> -     /*TODO: this should be taken out calcs output and assigned during timing sync for pplib use*/
>>> -     calcs_output->all_displays_in_sync = false;
>>> +     calcs_output->all_displays_in_sync = all_displays_in_sync(pipe, pipe_count,
>>> +                                                               data->number_of_displays);
>>>
>>>        if (data->number_of_displays != 0) {
>>>                uint8_t yclk_lvl, sclk_lvl;
>>>
>>


