[PATCH v9 16/18] drm/amd/display: Recalculate VCPI slots for new DSC connectors
Mikita Lipski
mlipski at amd.com
Mon Dec 23 19:05:21 UTC 2019
On 12/20/19 4:41 PM, Lyude Paul wrote:
> So I reviewed this already but realized I made a very silly mistake, comments
> down below:
>
> On Fri, 2019-12-13 at 15:08 -0500, mikita.lipski at amd.com wrote:
>> From: Mikita Lipski <mikita.lipski at amd.com>
>>
>> [why]
>> Since a DSC MST connector's PBN is calculated differently
>> due to compression, we have to recalculate both the PBN and
>> the VCPI slots for that connector.
>>
>> [how]
>> The function iterates through all the active streams to
>> find which ones have DSC enabled, then recalculates the PBN
>> for each and calls drm_dp_helper_update_vcpi_slots_for_dsc
>> to update the connector's VCPI slots.
>>
>> v2: - use drm_dp_mst_atomic_enable_dsc per port to
>> enable/disable DSC
>>
>> v3: - Iterate through connector states from the state passed
>> - On each connector state get the stream from dc_state,
>> instead of the CRTC state
>>
>> Reviewed-by: Lyude Paul <lyude at redhat.com>
>> Signed-off-by: Mikita Lipski <mikita.lipski at amd.com>
>> ---
>> .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 76 +++++++++++++++++--
>> 1 file changed, 71 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>> index 93a230d956ee..2ac3a2f0b452 100644
>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
>> @@ -4986,6 +4986,69 @@ const struct drm_encoder_helper_funcs amdgpu_dm_encoder_helper_funcs = {
>> .atomic_check = dm_encoder_helper_atomic_check
>> };
>>
>> +static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
>> + struct dc_state *dc_state)
>> +{
>> + struct dc_stream_state *stream = NULL;
>> + struct drm_connector *connector;
>> + struct drm_connector_state *new_con_state, *old_con_state;
>> + struct amdgpu_dm_connector *aconnector;
>> + struct dm_connector_state *dm_conn_state;
>> + int i, j, clock, bpp;
>> + int vcpi, pbn_div, pbn = 0;
>> +
>> + for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) {
>> +
>> + aconnector = to_amdgpu_dm_connector(connector);
>> +
>> + if (!aconnector->port)
>> + continue;
>> +
>> + if (!new_con_state || !new_con_state->crtc)
>> + continue;
>> +
>> + dm_conn_state = to_dm_connector_state(new_con_state);
>> +
>> + for (j = 0; j < dc_state->stream_count; j++) {
>> + stream = dc_state->streams[j];
>> + if (!stream)
>> + continue;
>> +
>> + if ((struct amdgpu_dm_connector *)stream->dm_stream_context == aconnector)
>> + break;
>> +
>> + stream = NULL;
>> + }
>> +
>> + if (!stream)
>> + continue;
>> +
>> + if (stream->timing.flags.DSC != 1) {
>> + drm_dp_mst_atomic_enable_dsc(state,
>> + aconnector->port,
>> + dm_conn_state->pbn,
>> + 0,
>> + false);
>> + continue;
>> + }
>> +
>> + pbn_div = dm_mst_get_pbn_divider(stream->link);
>> + bpp = stream->timing.dsc_cfg.bits_per_pixel;
>> + clock = stream->timing.pix_clk_100hz / 10;
>> + pbn = drm_dp_calc_pbn_mode(clock, bpp, true);
>> + vcpi = drm_dp_mst_atomic_enable_dsc(state,
>> + aconnector->port,
>> + pbn, pbn_div,
>> + true);
>> + if (vcpi < 0)
>> + return vcpi;
>> +
>> + dm_conn_state->pbn = pbn;
>> + dm_conn_state->vcpi_slots = vcpi;
>> + }
>> + return 0;
>> +}
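(For reference, the arithmetic behind drm_dp_calc_pbn_mode() for the DSC case can be sketched as below. This is a standalone approximation for illustration, not the kernel helper itself: clock is in kHz, bpp is in units of 1/16 bit per pixel, 64/54 is the MTP slot-size factor, and 1.006 is the DP spec margin. calc_pbn_dsc and div_round_up are names I made up for the sketch.)

```c
#include <stdint.h>

/* Ceiling division, as DIV_ROUND_UP_ULL does in the kernel. */
static uint64_t div_round_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

/*
 * Approximate PBN for a DSC stream: clock_khz is the pixel clock in kHz
 * (pix_clk_100hz / 10 in the patch above), bpp_x16 is the compressed
 * bits-per-pixel target in 1/16 units (timing.dsc_cfg.bits_per_pixel).
 */
static int calc_pbn_dsc(int clock_khz, int bpp_x16)
{
	/* kbits/s * margin(1006/1000) * slot factor(64/54) / 8 bits-per-byte */
	uint64_t numerator = (uint64_t)clock_khz * (bpp_x16 / 16) * 64 * 1006;

	return (int)div_round_up(numerator, 8ULL * 54 * 1000 * 1000);
}
```

e.g. a 4K60 stream (594 MHz pixel clock) compressed to 12 bpp would pass clock_khz = 594000 and bpp_x16 = 192.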
>> +
>> static void dm_drm_plane_reset(struct drm_plane *plane)
>> {
>> struct dm_plane_state *amdgpu_state = NULL;
>> @@ -8022,11 +8085,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
>> if (ret)
>> goto fail;
>>
>> - /* Perform validation of MST topology in the state*/
>> - ret = drm_dp_mst_atomic_check(state);
>> - if (ret)
>> - goto fail;
>> -
>> if (state->legacy_cursor_update) {
>> /*
>> * This is a fast cursor update coming from the plane update
>> @@ -8098,6 +8156,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
>> if (!compute_mst_dsc_configs_for_state(state, dm_state->context))
>> goto fail;
>>
>> + ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context);
>> + if (ret)
>> + goto fail;
>> +
>> if (dc_validate_global_state(dc, dm_state->context, false) !=
>> DC_OK) {
>> ret = -EINVAL;
>> goto fail;
>> @@ -8126,6 +8188,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
>> dc_retain_state(old_dm_state->context);
>> }
>> }
>> + /* Perform validation of MST topology in the state*/
>> + ret = drm_dp_mst_atomic_check(state);
>> + if (ret)
>> + goto fail;
>
> I realized that we should actually expose a version of
> drm_dp_mst_atomic_check() which allows you to manually specify a
> drm_dp_mst_topology_state, because otherwise we're checking the bandwidth
> caps of _ALL_ enabled topologies. That could cause us to fail just because
> another topology's new state doesn't meet the bandwidth requirements yet,
> since we haven't readjusted it for the fair-share compute algorithm.
>
But wouldn't we want to fail the whole atomic check even if only one of
the topology states fails?
We call compute_mst_dsc_configs_for_state() before
drm_dp_mst_atomic_check(), and during it the driver will attempt different
compression configurations until drm_dp_mst_atomic_check() passes,
because we call drm_dp_mst_atomic_check() inside
compute_mst_dsc_configs_for_state() every time the configuration is
readjusted.
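In other words, the loop looks roughly like this (a simplified sketch with hypothetical names; the real logic lives in compute_mst_dsc_configs_for_state() and steps through actual per-stream DSC configs):

```c
#include <stdbool.h>

/*
 * Hypothetical stand-ins for illustration only: bpp_x16 is the DSC
 * bits-per-pixel target in 1/16 units, bw_limit_x16 models whatever
 * rate the topology check would accept.
 */
struct dsc_cfg {
	int bpp_x16;
	int bw_limit_x16;
};

/* Stand-in for drm_dp_mst_atomic_check(): pass once the rate fits. */
static bool mst_check_passes(const struct dsc_cfg *c)
{
	return c->bpp_x16 <= c->bw_limit_x16;
}

/* Lower the compression target until the topology check passes. */
static int adjust_dsc_until_check_passes(struct dsc_cfg *c, int min_bpp_x16)
{
	while (!mst_check_passes(c)) {
		if (c->bpp_x16 <= min_bpp_x16)
			return -1;	/* cannot compress any further */
		c->bpp_x16 -= 16;	/* step down one bit per pixel */
	}
	return 0;
}
```

So by the time the final drm_dp_mst_atomic_check() runs, every topology the driver touched has already converged to a passing configuration, or the check fails outright.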
> Also, I think we should probably differentiate in the atomic check functions
> between failing an atomic check for a topology state because it doesn't meet
> the bandwidth requirements we set, vs. a topology state failing atomic check
> for other reasons (temporary deadlock, too many payloads, etc.). So basically-
> we should return -ENOSPC when we fail because of a bandwidth (including VCPI
> slot allocation) issue.
>
Thanks, I will update drm_dp_mst_atomic_check_bw_limit() to return
appropriate error values on bandwidth failure.
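Something along these lines (an illustrative sketch only, with made-up pbn_used/pbn_limit parameters standing in for the per-branch accounting the real helper does):

```c
#include <errno.h>

/*
 * Differentiate a bandwidth failure (-ENOSPC) from other atomic-check
 * failures (-EINVAL), as suggested above. The arguments are hypothetical
 * stand-ins for the PBN accounting along one MST branch.
 */
static int check_branch_bw_limit(int pbn_used, int pbn_limit)
{
	if (pbn_limit <= 0)
		return -EINVAL;	/* malformed state: not a bandwidth problem */
	if (pbn_used > pbn_limit)
		return -ENOSPC;	/* over the link's bandwidth budget */
	return 0;
}
```

That way a caller retrying compression configs can treat -ENOSPC as "try a lower bpp" while still bailing out on genuine state errors.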
>>
>> /* Store the overall update type for use later in atomic check. */
>> for_each_new_crtc_in_state (state, crtc, new_crtc_state, i) {
--
Thanks,
Mikita Lipski
Software Engineer, AMD
mikita.lipski at amd.com