[PATCH v3] drm/dp_mst: Rewrite and fix bandwidth limit checks
Alex Deucher
alexdeucher at gmail.com
Mon Mar 9 21:15:19 UTC 2020
On Mon, Mar 9, 2020 at 5:01 PM Lyude Paul <lyude at redhat.com> wrote:
>
> Sigh, this is mostly my fault for not giving commit cd82d82cbc04
> ("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
> enough scrutiny during review. The way we're checking bandwidth
> limitations here is mostly wrong:
>
> For starters, drm_dp_mst_atomic_check_bw_limit() determines the
> pbn_limit of a branch by simply scanning each port on the current branch
> device, then uses the last non-zero full_pbn value that it finds. It
> then sums the PBN used by every VCPI allocation downstream of that
> branch and compares the total against the single full_pbn value it
> found before.
>
> This is wrong because ports can and will have different PBN limitations
> on many hubs, especially since a number of DisplayPort hubs out there
> will be clever and only use the smallest link rate required for each
> downstream sink - potentially giving every port a different full_pbn
> value depending on what link rate it's trained at. This means that with
> our current code, which full_pbn value we end up checking against is
> essentially arbitrary.
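>
> In other words, the old check boiled down to something like this
> (paraphrasing the code that the diff below removes):
>
>     /* Flawed: pbn_limit is just whichever port we happened to see
>      * last, since each non-zero full_pbn overwrites the previous one
>      */
>     list_for_each_entry(port, &branch->ports, next) {
>             if (port->full_pbn > 0)
>                     pbn_limit = port->full_pbn;
>     }
>     /* ...then the sum of all VCPI PBN under this branch is compared
>      * against that one arbitrary pbn_limit */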
>
> Additionally, we also need to remember when checking bandwidth
> limitations that the top-most device in any MST topology is a branch
> device, not a port. This means that the first level of a topology
> doesn't technically have a full_pbn value that needs to be checked.
> Instead, we should assume that so long as our VCPI allocations fit we're
> within the bandwidth limitations of the primary MSTB.
>
> We do, however, want to check full_pbn on every port, including those
> of the primary MSTB. But it's important to keep in mind that this
> value represents the bandwidth limit of the link /between a port's
> sink or mstb, and the mstb itself/. A quick diagram to explain:
>
>                           MSTB #1
>                          /       \
>                         /         \
>                     Port #1     Port #2
> full_pbn for Port #1 → |           | ← full_pbn for Port #2
>                     Sink #1     MSTB #2
>                                    |
>                                  etc...
>
> Note that in the above diagram, the combined PBN from all VCPI
> allocations behind MSTB #2 should not exceed the full_pbn value of
> port #2,
> and the display configuration on sink #1 should not exceed the full_pbn
> value of port #1. However, port #1 and port #2 can otherwise consume as
> much bandwidth as they want so long as their VCPI allocations still fit.
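>
> To make that concrete with made-up numbers: say port #2 trained at a
> link rate that gives it a full_pbn of 1000, and two sinks below
> MSTB #2 have VCPI allocations of 400 and 500 PBN. The check at port #2
> passes, since 400 + 500 = 900 <= 1000. Adding a third 200 PBN
> allocation below MSTB #2 would push the total to 1100 and must fail at
> port #2, even if every individual port below MSTB #2 still fits within
> its own full_pbn.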
>
> And finally - our current bandwidth checking code also makes the mistake
> of not checking whether something is an end device or not before trying
> to traverse down it.
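>
> The new port-level helper guards against that explicitly, roughly like
> so (names taken from the diff below):
>
>     if (port->pdt == DP_PEER_DEVICE_NONE)
>             return 0; /* nothing connected, nothing to traverse */
>
>     if (drm_dp_mst_is_end_device(port->pdt, port->mcs))
>             /* leaf: just check this port's own VCPI allocation */
>     else
>             /* branch device: only now is it safe to recurse down
>              * into port->mstb */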
>
> So, let's fix it by rewriting our bandwidth checking helpers. We split
> the function in two: one part that handles branch devices by simply
> adding up the total PBN used below each branch and returning it, and
> one part that checks each port to make sure we're not going over its
> PBN limit. Phew.
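>
> In rough outline (the real functions are in the diff below; this is
> just their shape), the two helpers recurse into one another and return
> either the PBN consumed or a negative error code:
>
>     check_mstb(mstb):     /* drm_dp_mst_atomic_check_mstb_bw_limit() */
>             pbn_used = 0
>             for each port on mstb:
>                     pbn_used += check_port(port)  /* bail out on error */
>             return pbn_used
>
>     check_port(port):     /* drm_dp_mst_atomic_check_port_bw_limit() */
>             if port is an end device:
>                     pbn_used = this port's own VCPI allocation
>             else:
>                     pbn_used = check_mstb(port->mstb)
>             return (pbn_used > port->full_pbn) ? -ENOSPC : pbn_used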
>
> This should fix the regressions that have been seen so far, where we
> erroneously reject display configurations because we think they're
> going over our bandwidth limits when they're not.
>
> Changes since v1:
> * Took an even closer look at how PBN limitations are supposed to be
> handled, and did some experimenting with Sean Paul. Ended up rewriting
> these helpers again, but this time they should actually be correct!
> Changes since v2:
> * Small indenting fix
> * Fix pbn_used check in drm_dp_mst_atomic_check_port_bw_limit()
>
> Signed-off-by: Lyude Paul <lyude at redhat.com>
> Fixes: cd82d82cbc04 ("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
> Cc: Mikita Lipski <mikita.lipski at amd.com>
> Cc: Sean Paul <seanpaul at google.com>
> Cc: Hans de Goede <hdegoede at redhat.com>
Thanks for the detailed descriptions. The changes make sense to me,
but I don't know the DP MST code that well, so patches 2-4 are:
Acked-by: Alex Deucher <alexander.deucher at amd.com>
> ---
> drivers/gpu/drm/drm_dp_mst_topology.c | 119 ++++++++++++++++++++------
> 1 file changed, 93 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> index b81ad444c24f..d2f464bdcfff 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -4841,41 +4841,102 @@ static bool drm_dp_mst_port_downstream_of_branch(struct drm_dp_mst_port *port,
>  	return false;
>  }
>
> -static inline
> -int drm_dp_mst_atomic_check_bw_limit(struct drm_dp_mst_branch *branch,
> -				      struct drm_dp_mst_topology_state *mst_state)
> +static int
> +drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port,
> +				      struct drm_dp_mst_topology_state *state);
> +
> +static int
> +drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb,
> +				      struct drm_dp_mst_topology_state *state)
>  {
> -	struct drm_dp_mst_port *port;
>  	struct drm_dp_vcpi_allocation *vcpi;
> -	int pbn_limit = 0, pbn_used = 0;
> +	struct drm_dp_mst_port *port;
> +	int pbn_used = 0, ret;
> +	bool found = false;
>
> -	list_for_each_entry(port, &branch->ports, next) {
> -		if (port->mstb)
> -			if (drm_dp_mst_atomic_check_bw_limit(port->mstb, mst_state))
> -				return -ENOSPC;
> +	/* Check that we have at least one port in our state that's downstream
> +	 * of this branch, otherwise we can skip this branch
> +	 */
> +	list_for_each_entry(vcpi, &state->vcpis, next) {
> +		if (!vcpi->pbn ||
> +		    !drm_dp_mst_port_downstream_of_branch(vcpi->port, mstb))
> +			continue;
>
> -		if (port->full_pbn > 0)
> -			pbn_limit = port->full_pbn;
> +		found = true;
> +		break;
>  	}
> -	DRM_DEBUG_ATOMIC("[MST BRANCH:%p] branch has %d PBN available\n",
> -			 branch, pbn_limit);
> +	if (!found)
> +		return 0;
>
> -	list_for_each_entry(vcpi, &mst_state->vcpis, next) {
> -		if (!vcpi->pbn)
> -			continue;
> +	if (mstb->port_parent)
> +		DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] Checking bandwidth limits on [MSTB:%p]\n",
> +				 mstb->port_parent->parent, mstb->port_parent,
> +				 mstb);
> +	else
> +		DRM_DEBUG_ATOMIC("[MSTB:%p] Checking bandwidth limits\n",
> +				 mstb);
>
> -		if (drm_dp_mst_port_downstream_of_branch(vcpi->port, branch))
> -			pbn_used += vcpi->pbn;
> +	list_for_each_entry(port, &mstb->ports, next) {
> +		ret = drm_dp_mst_atomic_check_port_bw_limit(port, state);
> +		if (ret < 0)
> +			return ret;
> +
> +		pbn_used += ret;
>  	}
> -	DRM_DEBUG_ATOMIC("[MST BRANCH:%p] branch used %d PBN\n",
> -			 branch, pbn_used);
>
> -	if (pbn_used > pbn_limit) {
> -		DRM_DEBUG_ATOMIC("[MST BRANCH:%p] No available bandwidth\n",
> -				 branch);
> +	return pbn_used;
> +}
> +
> +static int
> +drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port,
> +				      struct drm_dp_mst_topology_state *state)
> +{
> +	struct drm_dp_vcpi_allocation *vcpi;
> +	int pbn_used = 0;
> +
> +	if (port->pdt == DP_PEER_DEVICE_NONE)
> +		return 0;
> +
> +	if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) {
> +		bool found = false;
> +
> +		list_for_each_entry(vcpi, &state->vcpis, next) {
> +			if (vcpi->port != port)
> +				continue;
> +			if (!vcpi->pbn)
> +				return 0;
> +
> +			found = true;
> +			break;
> +		}
> +		if (!found)
> +			return 0;
> +
> +		/* This should never happen, as it means we tried to
> +		 * set a mode before querying the full_pbn
> +		 */
> +		if (WARN_ON(!port->full_pbn))
> +			return -EINVAL;
> +
> +		pbn_used = vcpi->pbn;
> +	} else {
> +		pbn_used = drm_dp_mst_atomic_check_mstb_bw_limit(port->mstb,
> +								 state);
> +		if (pbn_used <= 0)
> +			return pbn_used;
> +	}
> +
> +	if (pbn_used > port->full_pbn) {
> +		DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] required PBN of %d exceeds port limit of %d\n",
> +				 port->parent, port, pbn_used,
> +				 port->full_pbn);
>  		return -ENOSPC;
>  	}
> -	return 0;
> +
> +	DRM_DEBUG_ATOMIC("[MSTB:%p] [MST PORT:%p] uses %d out of %d PBN\n",
> +			 port->parent, port, pbn_used, port->full_pbn);
> +
> +	return pbn_used;
>  }
>
>  static inline int
> @@ -5073,9 +5134,15 @@ int drm_dp_mst_atomic_check(struct drm_atomic_state *state)
>  		ret = drm_dp_mst_atomic_check_vcpi_alloc_limit(mgr, mst_state);
>  		if (ret)
>  			break;
> -		ret = drm_dp_mst_atomic_check_bw_limit(mgr->mst_primary, mst_state);
> -		if (ret)
> +
> +		mutex_lock(&mgr->lock);
> +		ret = drm_dp_mst_atomic_check_mstb_bw_limit(mgr->mst_primary,
> +							    mst_state);
> +		mutex_unlock(&mgr->lock);
> +		if (ret < 0)
>  			break;
> +		else
> +			ret = 0;
>  	}
>
>  	return ret;
> --
> 2.24.1