[PATCH 01/15] drm/amdgpu: Add encoder atomic check
Lyude Paul
lyude at redhat.com
Thu Sep 19 23:37:31 UTC 2019
Ok, so reviewing this is kind of difficult because this series doesn't apply
to drm-tip, and also doesn't make any mention of what branch it's supposed to
apply to. So there's no way for me to apply any of these changes in my tree to
get an idea of how things look overall with these patches applied. Could you
please base the next series on top of drm-tip?
In the meantime, I've left as much feedback as I can below:
For starters, this patch should be combined with the next patch in the series
since you really can't use drm_dp_atomic_find_vcpi_slots() without using
drm_dp_atomic_release_vcpi_slots(). More on that down below.
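Roughly, the pairing those two helpers expect looks something like the sketch
below (illustration only; the state/mgr/port/pbn names are placeholders rather
than amdgpu's actual variables):

    /* In the atomic check path where the stream is enabled or kept:
     * reserve slots in the topology's atomic state.
     */
    slots = drm_dp_atomic_find_vcpi_slots(state, mgr, port, pbn);
    if (slots < 0)
        return slots;

    /* In the path where the stream is torn down (e.g. the CRTC is being
     * removed from the connector): give the reserved slots back so the
     * topology state stays balanced across the atomic transaction.
     */
    ret = drm_dp_atomic_release_vcpi_slots(state, mgr, port);
    if (ret)
        return ret;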
On Wed, 2019-09-18 at 16:26 -0400, mikita.lipski at amd.com wrote:
> From: Mikita Lipski <mikita.lipski at amd.com>
>
> [why]
> In order to comply with new MST atomic check
> we have to find and add VCPI slots to the state
> during atomic check whenever there is a change on
> mode or connector.
> [how]
> - Verify that it is a MST connection
> - Convert new stream's clock and bpp
> - Calculate PBN based on stream parameters
> - Find and add VCPI slots to the state
>
> Cc: Lyude Paul <lyude at redhat.com>
> Signed-off-by: Mikita Lipski <mikita.lipski at amd.com>
> ---
> .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 59 +++++++++++++++++++
> 1 file changed, 59 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 7b0ca2e1ed8b..d700b962d338 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -4432,6 +4432,65 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
>                                            struct drm_crtc_state *crtc_state,
>                                            struct drm_connector_state *conn_state)
> {
> + struct drm_atomic_state *state = crtc_state->state;
> + struct drm_connector *connector = conn_state->connector;
> + struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
> + struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(crtc_state);
> + const struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
> + struct drm_dp_mst_topology_mgr *mst_mgr;
> + struct drm_dp_mst_port *mst_port;
> + int pbn, slots, clock, bpp = 0;
> +
> + if (!dm_new_crtc_state)
> + return 0;
> +
> + if (!aconnector || !aconnector->port)
> + return 0;
...in what situation would aconnector ever be NULL? Same for aconnector->port;
the drm_dp_mst_port stays around until the drm_connector is destroyed.
> +
> + mst_port = aconnector->port;
> + mst_mgr = &aconnector->mst_port->mst_mgr;
> +
> + if (!mst_mgr->mst_state)
> + return 0;
I'm confused here: what is the purpose of this check? mst_mgr->mst_state
shouldn't ever be touched directly, as it refers to the currently committed
mgr state. The way to retrieve the mgr's atomic state is:
https://01.org/linuxgraphics/gfx-docs/drm/gpu/drm-kms-helpers.html#c.drm_atomic_get_mst_topology_state
Even then, though, there's no actual need to grab the mst_state here manually;
drm_dp_atomic_find_vcpi_slots() does this for you.
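For reference, if a driver genuinely needed the mgr's atomic state in its own
check code, the pattern would be roughly this sketch (state/mst_mgr being the
variables from the patch above):

    struct drm_dp_mst_topology_state *mst_state;

    /* Pull the topology mgr's private state object into this atomic
     * transaction (this also takes the proper locks); never peek at
     * mgr->mst_state directly.
     */
    mst_state = drm_atomic_get_mst_topology_state(state, mst_mgr);
    if (IS_ERR(mst_state))
        return PTR_ERR(mst_state);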
> +
> + if (!crtc_state->connectors_changed && !crtc_state->mode_changed)
> + return 0;
> +
> + switch (convert_color_depth_from_display_info(connector, conn_state))
> {
> + case COLOR_DEPTH_666:
> + bpp = 6;
> + break;
> + case COLOR_DEPTH_888:
> + bpp = 8;
> + break;
> + case COLOR_DEPTH_101010:
> + bpp = 10;
> + break;
> + case COLOR_DEPTH_121212:
> + bpp = 12;
> + break;
> + case COLOR_DEPTH_141414:
> + bpp = 14;
> + break;
> + case COLOR_DEPTH_161616:
> + bpp = 16;
> + break;
> + default:
> + ASSERT(bpp != 0);
> + break;
> + }
> +
> + bpp *= 3;
> + clock = adjusted_mode->clock;
> + pbn = drm_dp_calc_pbn_mode(clock, bpp);
I might have forgotten to mention this in the documentation (I'll double check
after sending this out and fix it if I did...), but it's a good idea to only
recalculate the PBN when !drm_atomic_state->duplicated. The reason for this is
that when we're resuming the system from S3, there's a chance that MST
topologies which were previously connected to the system might no longer be
present or may have changed.
When resuming from S3, or in any other situation where the GPU needs to save and
restore its atomic state, the general expectation is that the state that was
saved is basically identical to the state that gets restored, since a different
state won't have any guarantee of passing the atomic check. An example
scenario (note that we don't reprobe topologies after resume yet, but we will
be doing this very soon):
* MST connector DP-4 starts off with a bpp of 8
* The GPU has a VCPI table that is currently full, and includes an active
allocation for DP-4.
* The system goes into S3, and the GPU is suspended and the atomic state
saved (this sets state->duplicated to true)
* While the system is still in S3, the user plugs in a monitor that is
identical, except for the fact it has 16bpp
* The system resumes, and the GPU begins resuming. amdgpu goes to restore the
atomic state from before entering S3
* Because DP-4 now has a bpp of 16, we use that instead of the 8bpp from
  before and end up with a VCPI allocation that is twice as large as
  before
* The new twice-as-large VCPI allocation does not fit along with the rest of
the allocations, and causes the atomic_check during resume to fail (note
that during resume, it's expected that the driver does whatever it takes to
make the atomic_check pass)
Copying the PBN value as-is from the duplicated atomic state when
state->duplicated == true, instead of recalculating it, allows us to avoid this
by making sure that regardless of changes to the connector, we still end up
restoring the same VCPI allocations that we had before.
Note: this does potentially leave us with the possibility that we could, in
fact, resume and end up restoring an atomic state with an incorrect VCPI
table. I don't think we've really figured out any way around this, and with
suspend/resume it's expected that the driver is able to cope with breakages
like this until another atomic modeset happens to correct things. So for now,
that's OK.
I'd really recommend looking at nouveau's implementation of this as an
example, specifically nv50_msto_atomic_check().
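To make that concrete, the pattern would look roughly like the sketch below
(modeled loosely on nouveau; the ->pbn field on the stream is hypothetical and
amdgpu would need to pick its own place to store the value, while bpp and
adjusted_mode are the values already computed in the patch above):

    /* Only recompute the PBN on a fresh modeset; when restoring a
     * duplicated state (e.g. across S3) reuse the stored value so the
     * restored VCPI allocation matches exactly what was saved.
     */
    if (!state->duplicated)
        dm_new_crtc_state->stream->pbn =
            drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp);

    if (crtc_state->mode_changed || crtc_state->connectors_changed) {
        slots = drm_dp_atomic_find_vcpi_slots(state, mst_mgr, mst_port,
                                              dm_new_crtc_state->stream->pbn);
        if (slots < 0)
            return slots;
    }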
> + slots = drm_dp_atomic_find_vcpi_slots(state,
> + mst_mgr,
> + mst_port,
> + pbn);
> + if (slots < 0) {
> + DRM_DEBUG_KMS("failed finding vcpi slots:%d\n", slots);
I think you meant DRM_DEBUG_ATOMIC() here
Also, this is still a big step in the right direction, but there's a bit more
missing here. This code only checks that slots isn't negative and then drops
the value. Later, dm_helpers_dp_mst_write_payload_allocation_table() calculates
the VCPI allocation that actually gets used during the atomic commit, using the
deprecated drm_dp_find_vcpi_slots() helper.
What we want here instead is to just drop the usage of
drm_dp_find_vcpi_slots() entirely, store the VCPI slot allocation returned by
drm_dp_atomic_find_vcpi_slots() in the driver's atomic state, then use that
value when performing the actual VCPI allocation in
dm_helpers_dp_mst_write_payload_allocation_table() using
drm_dp_mst_allocate_vcpi(). Maybe struct dc_stream_state is a good place to
add this? (feel free to store the pbn as well there, to save a redundant
drm_dp_calc_pbn_mode() call).
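On the commit side, that would end up looking something like this sketch
(again, the pbn/vcpi_slots fields on the stream are hypothetical and would need
to be filled in from the encoder's atomic check):

    /* In dm_helpers_dp_mst_write_payload_allocation_table(): instead of
     * recomputing the allocation with the deprecated
     * drm_dp_find_vcpi_slots(), hand the values that were already
     * reserved during atomic check straight to the MST core.
     */
    if (!drm_dp_mst_allocate_vcpi(mst_mgr, mst_port,
                                  stream->pbn, stream->vcpi_slots))
        return false;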
> + return slots;
> + }
> return 0;
> }
>
--
Cheers,
Lyude Paul