[Intel-gfx] [Heads up to maintainers] Re: [PATCH v8 1/1] drm/drm_mst: Use Extended Base Receiver Capability DPCD space
Jani Nikula
jani.nikula at intel.com
Mon May 3 08:41:21 UTC 2021
On Mon, 03 May 2021, Jani Nikula <jani.nikula at intel.com> wrote:
> On Fri, 30 Apr 2021, Jani Nikula <jani.nikula at intel.com> wrote:
>> On Thu, 29 Apr 2021, Lyude Paul <lyude at redhat.com> wrote:
>>> JFYI Jani and Ben: I will be pushing this patch to drm-misc-next sometime
>>> today if there are no objections.
>>
>> Thanks for the heads-up, I think this breaks i915. See my review
>> comments elsewhere in the thread.
>
> Looks like this was merged anyway.
>
> 98025a62cb00 ("drm/dp_mst: Use Extended Base Receiver Capability DPCD space")
>
> I'm not happy with how this played out.
>
> 1) You need to Cc relevant people
>
> 2) You need to get the ack before merging changes
>
> 3) You need to give people more than a day to react, with time zones and
> all; I replied as soon as I saw the heads-up, but it was already too
> late.
>
> It's broken on i915, and perhaps that could be fixed.
>
> However, I also think using DP spec rate codes and calling them "rate" is
> a bad interface, especially when the unit breaks down with DP 2.0 rate
> codes. It's confusing and it's not future-proof. Fixing that afterwards
> falls to whoever comes next to pick up the pieces.
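
To make the unit problem concrete, here is a rough sketch (not from the
patch; the DP 2.0 code values are from my reading of the 2.0 spec): the
DP 1.x LINK_BW codes convert to an actual rate linearly, the UHBR codes
do not, so a "max link rate" stored as a raw code can no longer be
compared or scaled like a rate:

/* Sketch only.  DP 1.x bw codes are linear in the rate (code * 27000
 * kHz); the DP 2.0 UHBR codes are not, so arithmetic on raw codes
 * (min(), division by 27000, ...) stops meaning anything.
 */
static int bw_code_to_khz(u8 code)
{
	switch (code) {
	case 0x01: return 10000000;	/* UHBR10, DP 2.0 */
	case 0x02: return 20000000;	/* UHBR20, DP 2.0 */
	case 0x04: return 13500000;	/* UHBR13.5, DP 2.0 */
	default:   return code * 27000;	/* DP 1.x: 0x06, 0x0a, 0x14, 0x1e */
	}
}

For example, 0x01 (UHBR10) is numerically smaller than 0x14 (HBR2) but is
the higher rate, and dividing a future UHBR rate by 27000 (the
bios_max_link_rate / 27000 pattern in the i915 hunk below) would not
produce a valid code at all.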
>
> I'd rather just see this reverted and redone.
Okay, just saw that you'd fixed i915 already. Thanks. Let's roll with
that then.
BR,
Jani.
>
>
> BR,
> Jani.
>
>
>>
>> BR,
>> Jani.
>>
>>
>>>
>>> On Wed, 2021-04-28 at 19:43 -0400, Nikola Cornij wrote:
>>>> [why]
>>>> DP 1.4a spec mandates that if DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT is
>>>> set, the Extended Base Receiver Capability DPCD space must be used. Without
>>>> doing that, the three DPCD values that differ will be wrong, leading to
>>>> incorrect or limited functionality. The MST link rate, for example, could
>>>> have a lower value. Also, the Synaptics quirk wouldn't work out well when
>>>> the Extended DPCD was not read, resulting in no DSC for such hubs.
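
(For reference, the drm_dp_read_dpcd_caps() helper the patch switches to
below boils down to roughly the following; simplified sketch with error
handling trimmed, register names as in drm_dp_helper.h:)

static int read_dpcd_caps_sketch(struct drm_dp_aux *aux,
				 u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
	ssize_t ret;

	/* Base receiver caps live at DPCD 0x0000 */
	ret = drm_dp_dpcd_read(aux, DP_DPCD_REV, dpcd, DP_RECEIVER_CAP_SIZE);
	if (ret != DP_RECEIVER_CAP_SIZE)
		return ret < 0 ? ret : -EIO;

	/* If the sink sets the flag, the extended caps at 0x2200 apply */
	if (dpcd[DP_TRAINING_AUX_RD_INTERVAL] &
	    DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT) {
		ret = drm_dp_dpcd_read(aux, DP_DP13_DPCD_REV, dpcd,
				       DP_RECEIVER_CAP_SIZE);
		if (ret != DP_RECEIVER_CAP_SIZE)
			return ret < 0 ? ret : -EIO;
	}

	return 0;
}

(The real helper also sanity-checks that the extended set doesn't report
an older DPCD_REV than the base set before overriding it.)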
>>>>
>>>> [how]
>>>> Modify MST topology manager to use the values from Extended DPCD where
>>>> applicable.
>>>>
>>>> To prevent regression on the sources that have a lower maximum link rate
>>>> capability than MAX_LINK_RATE from Extended DPCD, have the drivers
>>>> supply maximum lane count and rate at initialization time.
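
(In caller terms, an init call after this change looks roughly like the
radeon hunk further down; mgr, dev, aux and conn_base_id stand in for
whatever the driver already has, and note that max_link_rate is a raw
DPCD bw code, not a kHz value:)

	/* Illustrative only; 16/6/4 and the bw code mirror the radeon hunk.
	 * DP_LINK_BW_5_4 == 0x14 == 5.4 Gbps per lane.
	 */
	ret = drm_dp_mst_topology_mgr_init(&mgr, dev, aux,
					   16,             /* max_dpcd_transaction_bytes */
					   6,              /* max_payloads */
					   4,              /* max_lane_count */
					   DP_LINK_BW_5_4, /* max_link_rate */
					   conn_base_id);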
>>>>
>>>> This also reverts 'commit 2dcab875e763 ("Revert drm/dp_mst: Retrieve
>>>> extended DPCD caps for topology manager")', bringing the change back to
>>>> the original 'commit ad44c03208e4 ("drm/dp_mst: Retrieve extended DPCD
>>>> caps for topology manager")'.
>>>>
>>>> Signed-off-by: Nikola Cornij <nikola.cornij at amd.com>
>>>> ---
>>>> .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 5 +++
>>>> .../gpu/drm/amd/display/dc/core/dc_link_dp.c | 18 ++++++++++
>>>> drivers/gpu/drm/amd/display/dc/dc_link.h | 2 ++
>>>> drivers/gpu/drm/drm_dp_mst_topology.c | 33 ++++++++++++-------
>>>> drivers/gpu/drm/i915/display/intel_dp_mst.c | 6 +++-
>>>> drivers/gpu/drm/nouveau/dispnv50/disp.c | 3 +-
>>>> drivers/gpu/drm/radeon/radeon_dp_mst.c | 7 ++++
>>>> include/drm/drm_dp_mst_helper.h | 12 ++++++-
>>>> 8 files changed, 71 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
>>>> index 997567f6f0ba..b7e01b6fb328 100644
>>>> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
>>>> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
>>>> @@ -429,6 +429,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
>>>> struct amdgpu_dm_connector *aconnector,
>>>> int link_index)
>>>> {
>>>> + struct dc_link_settings max_link_enc_cap = {0};
>>>> +
>>>> aconnector->dm_dp_aux.aux.name =
>>>> kasprintf(GFP_KERNEL, "AMDGPU DM aux hw bus %d",
>>>> link_index);
>>>> @@ -443,6 +445,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
>>>> if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
>>>> return;
>>>>
>>>> + dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap);
>>>> aconnector->mst_mgr.cbs = &dm_mst_cbs;
>>>> drm_dp_mst_topology_mgr_init(
>>>> &aconnector->mst_mgr,
>>>> @@ -450,6 +453,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
>>>> &aconnector->dm_dp_aux.aux,
>>>> 16,
>>>> 4,
>>>> + max_link_enc_cap.lane_count,
>>>> + max_link_enc_cap.link_rate,
>>>> aconnector->connector_id);
>>>>
>>>> drm_connector_attach_dp_subconnector_property(&aconnector->base);
>>>> diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
>>>> index 7d2e433c2275..6fe66b7ee53e 100644
>>>> --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
>>>> +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
>>>> @@ -1894,6 +1894,24 @@ bool dc_link_dp_sync_lt_end(struct dc_link *link, bool link_down)
>>>> return true;
>>>> }
>>>>
>>>> +bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link, struct dc_link_settings *max_link_enc_cap)
>>>> +{
>>>> + if (!max_link_enc_cap) {
>>>> + DC_LOG_ERROR("%s: Could not return max link encoder caps", __func__);
>>>> + return false;
>>>> + }
>>>> +
>>>> + if (link->link_enc->funcs->get_max_link_cap) {
>>>> + link->link_enc->funcs->get_max_link_cap(link->link_enc, max_link_enc_cap);
>>>> + return true;
>>>> + }
>>>> +
>>>> + DC_LOG_ERROR("%s: Max link encoder caps unknown", __func__);
>>>> + max_link_enc_cap->lane_count = 1;
>>>> + max_link_enc_cap->link_rate = 6;
>>>> + return false;
>>>> +}
>>>> +
>>>> static struct dc_link_settings get_max_link_cap(struct dc_link *link)
>>>> {
>>>> struct dc_link_settings max_link_cap = {0};
>>>> diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h b/drivers/gpu/drm/amd/display/dc/dc_link.h
>>>> index b0013e674864..cb6d0543d839 100644
>>>> --- a/drivers/gpu/drm/amd/display/dc/dc_link.h
>>>> +++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
>>>> @@ -346,6 +346,8 @@ bool dc_link_dp_set_test_pattern(
>>>> const unsigned char *p_custom_pattern,
>>>> unsigned int cust_pattern_size);
>>>>
>>>> +bool dc_link_dp_get_max_link_enc_cap(const struct dc_link *link, struct dc_link_settings *max_link_enc_cap);
>>>> +
>>>> void dc_link_enable_hpd_filter(struct dc_link *link, bool enable);
>>>>
>>>> bool dc_link_is_dp_sink_present(struct dc_link *link);
>>>> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
>>>> index 5539a91b4031..31d9f3ff0d39 100644
>>>> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
>>>> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
>>>> @@ -3708,19 +3708,24 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
>>>> /* set the device into MST mode */
>>>> if (mst_state) {
>>>> struct drm_dp_payload reset_pay;
>>>> + int lane_count;
>>>> + int link_rate;
>>>>
>>>> WARN_ON(mgr->mst_primary);
>>>>
>>>> /* get dpcd info */
>>>> - ret = drm_dp_dpcd_read(mgr->aux, DP_DPCD_REV, mgr->dpcd, DP_RECEIVER_CAP_SIZE);
>>>> - if (ret != DP_RECEIVER_CAP_SIZE) {
>>>> - drm_dbg_kms(mgr->dev, "failed to read DPCD\n");
>>>> + ret = drm_dp_read_dpcd_caps(mgr->aux, mgr->dpcd);
>>>> + if (ret < 0) {
>>>> + drm_dbg_kms(mgr->dev, "%s: failed to read DPCD, ret %d\n",
>>>> + mgr->aux->name, ret);
>>>> goto out_unlock;
>>>> }
>>>>
>>>> + lane_count = min_t(int, mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK, mgr->max_lane_count);
>>>> + link_rate = min_t(int, mgr->dpcd[1], mgr->max_link_rate);
>>>> mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr,
>>>> - drm_dp_bw_code_to_link_rate(mgr->dpcd[1]),
>>>> - mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK);
>>>> + drm_dp_bw_code_to_link_rate(link_rate),
>>>> + lane_count);
>>>> if (mgr->pbn_div == 0) {
>>>> ret = -EINVAL;
>>>> goto out_unlock;
>>>> @@ -5448,14 +5453,17 @@ EXPORT_SYMBOL(drm_atomic_get_mst_topology_state);
>>>> * @aux: DP helper aux channel to talk to this device
>>>> * @max_dpcd_transaction_bytes: hw specific DPCD transaction limit
>>>> * @max_payloads: maximum number of payloads this GPU can source
>>>> + * @max_lane_count: maximum number of lanes this GPU supports
>>>> + * @max_link_rate: maximum link rate this GPU supports, units as in DPCD
>>>> * @conn_base_id: the connector object ID the MST device is connected to.
>>>> *
>>>> * Return 0 for success, or negative error code on failure
>>>> */
>>>> int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
>>>> struct drm_device *dev, struct drm_dp_aux *aux,
>>>> - int max_dpcd_transaction_bytes,
>>>> - int max_payloads, int conn_base_id)
>>>> + int max_dpcd_transaction_bytes, int max_payloads,
>>>> + int max_lane_count, int max_link_rate,
>>>> + int conn_base_id)
>>>> {
>>>> struct drm_dp_mst_topology_state *mst_state;
>>>>
>>>> @@ -5490,6 +5498,8 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
>>>> mgr->aux = aux;
>>>> mgr->max_dpcd_transaction_bytes = max_dpcd_transaction_bytes;
>>>> mgr->max_payloads = max_payloads;
>>>> + mgr->max_lane_count = max_lane_count;
>>>> + mgr->max_link_rate = max_link_rate;
>>>> mgr->conn_base_id = conn_base_id;
>>>> if (max_payloads + 1 > sizeof(mgr->payload_mask) * 8 ||
>>>> max_payloads + 1 > sizeof(mgr->vcpi_mask) * 8)
>>>> @@ -5896,14 +5906,13 @@ struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port)
>>>> if (drm_dp_has_quirk(&desc, DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) &&
>>>> port->mgr->dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14 &&
>>>> port->parent == port->mgr->mst_primary) {
>>>> - u8 downstreamport;
>>>> + u8 dpcd_ext[DP_RECEIVER_CAP_SIZE];
>>>>
>>>> - if (drm_dp_dpcd_read(&port->aux, DP_DOWNSTREAMPORT_PRESENT,
>>>> - &downstreamport, 1) < 0)
>>>> + if (drm_dp_read_dpcd_caps(port->mgr->aux, dpcd_ext) < 0)
>>>> return NULL;
>>>>
>>>> - if ((downstreamport & DP_DWN_STRM_PORT_PRESENT) &&
>>>> - ((downstreamport & DP_DWN_STRM_PORT_TYPE_MASK)
>>>> + if ((dpcd_ext[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT) &&
>>>> + ((dpcd_ext[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_TYPE_MASK)
>>>> != DP_DWN_STRM_PORT_TYPE_ANALOG))
>>>> return port->mgr->aux;
>>>> }
>>>> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
>>>> index 860381d68d9d..a4245eb48ef4 100644
>>>> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
>>>> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
>>>> @@ -942,6 +942,7 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
>>>> struct intel_dp *intel_dp = &dig_port->dp;
>>>> enum port port = dig_port->base.port;
>>>> int ret;
>>>> + int bios_max_link_rate;
>>>>
>>>> if (!HAS_DP_MST(i915) || intel_dp_is_edp(intel_dp))
>>>> return 0;
>>>> @@ -956,8 +957,11 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
>>>>
>>>> /* create encoders */
>>>> intel_dp_create_fake_mst_encoders(dig_port);
>>>> + bios_max_link_rate = intel_bios_dp_max_link_rate(&dig_port->base);
>>>> ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, &i915->drm,
>>>> - &intel_dp->aux, 16, 3, conn_base_id);
>>>> + &intel_dp->aux, 16, 3,
>>>> + dig_port->max_lanes,
>>>> + bios_max_link_rate / 27000, conn_base_id);
>>>> if (ret)
>>>> return ret;
>>>>
>>>> diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
>>>> index 1c9c0cdf85db..e02f9d2d74eb 100644
>>>> --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
>>>> +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
>>>> @@ -1617,7 +1617,8 @@ nv50_mstm_new(struct nouveau_encoder *outp, struct drm_dp_aux *aux, int aux_max,
>>>> mstm->mgr.cbs = &nv50_mstm;
>>>>
>>>> ret = drm_dp_mst_topology_mgr_init(&mstm->mgr, dev, aux, aux_max,
>>>> - max_payloads, conn_base_id);
>>>> + max_payloads, outp->dcb->dpconf.link_nr,
>>>> + outp->dcb->dpconf.link_bw, conn_base_id);
>>>> if (ret)
>>>> return ret;
>>>>
>>>> diff --git a/drivers/gpu/drm/radeon/radeon_dp_mst.c b/drivers/gpu/drm/radeon/radeon_dp_mst.c
>>>> index 59cf1d288465..8f0b2dccd199 100644
>>>> --- a/drivers/gpu/drm/radeon/radeon_dp_mst.c
>>>> +++ b/drivers/gpu/drm/radeon/radeon_dp_mst.c
>>>> @@ -629,13 +629,20 @@ int
>>>> radeon_dp_mst_init(struct radeon_connector *radeon_connector)
>>>> {
>>>> struct drm_device *dev = radeon_connector->base.dev;
>>>> + int max_link_rate;
>>>>
>>>> if (!radeon_connector->ddc_bus->has_aux)
>>>> return 0;
>>>>
>>>> + if (radeon_connector_is_dp12_capable(&radeon_connector->base))
>>>> + max_link_rate = 0x14;
>>>> + else
>>>> + max_link_rate = 0x0a;
>>>> +
>>>> radeon_connector->mst_mgr.cbs = &mst_cbs;
>>>> return drm_dp_mst_topology_mgr_init(&radeon_connector->mst_mgr, dev,
>>>> &radeon_connector->ddc_bus->aux, 16, 6,
>>>> + 4, max_link_rate,
>>>> radeon_connector->base.base.id);
>>>> }
>>>>
>>>> diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
>>>> index 20dc705642bd..b5b0bf37813b 100644
>>>> --- a/include/drm/drm_dp_mst_helper.h
>>>> +++ b/include/drm/drm_dp_mst_helper.h
>>>> @@ -593,6 +593,14 @@ struct drm_dp_mst_topology_mgr {
>>>> * @max_payloads: maximum number of payloads the GPU can generate.
>>>> */
>>>> int max_payloads;
>>>> + /**
>>>> + * @max_lane_count: maximum number of lanes the GPU can drive.
>>>> + */
>>>> + int max_lane_count;
>>>> + /**
>>>> + * @max_link_rate: maximum link rate per lane GPU can output.
>>>> + */
>>>> + int max_link_rate;
>>>> /**
>>>> * @conn_base_id: DRM connector ID this mgr is connected to. Only used
>>>> * to build the MST connector path value.
>>>> @@ -765,7 +773,9 @@ struct drm_dp_mst_topology_mgr {
>>>> int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
>>>> struct drm_device *dev, struct drm_dp_aux *aux,
>>>> int max_dpcd_transaction_bytes,
>>>> - int max_payloads, int conn_base_id);
>>>> + int max_payloads,
>>>> + int max_lane_count, int max_link_rate,
>>>> + int conn_base_id);
>>>>
>>>> void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);
>>>>
--
Jani Nikula, Intel Open Source Graphics Center