Do we really need to increase/decrease MST VC payloads?

Lin, Wayne Wayne.Lin at amd.com
Wed May 11 04:14:02 UTC 2022





> -----Original Message-----
> From: Lyude Paul <lyude at redhat.com>
> Sent: Wednesday, May 11, 2022 6:14 AM
> To: Lin, Wayne <Wayne.Lin at amd.com>; Zuo, Jerry <Jerry.Zuo at amd.com>;
> Wentland, Harry <Harry.Wentland at amd.com>; amd-
> gfx at lists.freedesktop.org
> Subject: Re: Do we really need to increase/decrease MST VC payloads?
> 
> On Thu, 2022-05-05 at 10:02 +0000, Lin, Wayne wrote:
> >
> > I think this shouldn't happen. Please refer to the figure 2-98 of DP
> > spec 1.4.
> > The payload id table should be changed to below I think.
> >
> >         |1 1 1 2 3 3 X X|
> >
> > I skimmed over a bit of the implementation in drm. I think it should
> > work fine except that we have to revise function drm_dp_init_vcpi() a
> > bit.
> > Since increasing/decreasing time slots should still use the same
> > payload ID, we should add logic to skip assigning a new payload
> > ID in this case.
> 
> I'm not sure about that tbh, since the example you gave also still mentions
> VC
> #1 which is supposed to be disabled here.
> 
> FWIW, I was alerted to this peculiarity when I noticed Figure 2-93 from
> chapter
> 2.6.5 of the DP 2.0 spec. Notice that on the diagram it mentions time slots 1-
> 20 allocated to VC1, time slots 21-32 being allocated to VC2, and then
> (confusingly in a different color) mentions that time slots 33-47 are allocated
> to VC1 again.
> 
Thank you Lyude.

I think that's just a typo, since the remaining two payload portions in the
"De-allocation" case at the bottom of that figure are also drawn in different
colors. Besides, I think we should just take the idea the figure illustrates
for increasing/decreasing time slots: only grow a payload by adding extra
slots contiguously.
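To make the contiguous behavior concrete, here is a toy model of the payload ID table (illustrative Python only; the slot table and helper are made up for this sketch, not drm code):

```python
# Toy model of an MST payload-ID table: one entry per time slot,
# 0 means the slot is unallocated ("X" in the diagrams above).

def increase_contiguously(table, payload_id, extra):
    """Grow `payload_id` by appending `extra` slots right after its
    current last slot, shifting later payloads toward the end.
    This models the 'extend in place' reading of DP 1.4 Figure 2-98."""
    last = max(i for i, p in enumerate(table) if p == payload_id)
    free = table.count(0)
    if extra > free:
        raise ValueError("not enough free slots")
    # Drop the free slots, splice in the new ones, then re-pad the tail.
    trimmed = [p for p in table if p != 0]
    grown = trimmed[: last + 1] + [payload_id] * extra + trimmed[last + 1 :]
    return grown + [0] * (len(table) - len(grown))

# Start from |1 1 2 3 3 X X X|; growing payload 1 by one slot keeps
# every payload contiguous:
print(increase_contiguously([1, 1, 2, 3, 3, 0, 0, 0], 1, 1))
# [1, 1, 1, 2, 3, 3, 0, 0]
```

Note that payloads 2 and 3 keep their relative order but their start slots shift, which is exactly the bookkeeping the helpers have to get right.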

I also consulted a branch device vendor; their DP Rx HW design only supports
one start/end per stream. We reached the same conclusion. Still, the most
reliable way to settle this is to have VESA clarify it.

> At first I honestly thought this was a typo in the spec, but it's done multiple
> times and it _does_ actually make sense when you think about it for a little
> while. Consider the advantage of having VC allocations work this way: you can
> increase the time slots allocated to a VC without changing its start slot -
> meaning the other payloads don't need to be disrupted in any way. As far as I
> can tell, this is only possible if you have non-contiguous payloads.
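In the same toy-table terms, the non-contiguous reading described above would grow a payload by filling free slots at the end of the table (illustrative Python; the helper is hypothetical, not drm code):

```python
# Toy payload-ID table: one entry per time slot, 0 = unallocated ("X").

def increase_noncontiguous(table, payload_id, extra):
    """Grow a payload by filling the first free slots, leaving every
    existing allocation (and every start slot) untouched - the behavior
    Figure 2-93 of the DP 2.0 spec appears to show."""
    out = list(table)
    for _ in range(extra):
        out[out.index(0)] = payload_id
    return out

# |1 1 2 3 3 X X X| -> grow payload 1 by one slot:
print(increase_noncontiguous([1, 1, 2, 3, 3, 0, 0, 0], 1, 1))
# [1, 1, 2, 3, 3, 1, 0, 0] - payload 1 now spans slots 0, 1 and 5
```

No other payload moves, but payload 1 is no longer contiguous, which is what makes the start-slot-plus-count bookkeeping insufficient.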

Ideally, other payloads shouldn't be disrupted. The source sends ACT symbols
to synchronize the DP Rx with the newly updated payload ID table, and writes
to DPCD 0x1C1 & 0x1C2 don't take effect on the link immediately.

I think supporting non-contiguous time-slot increases would require the DP Rx
to implement more HW registers and a more complicated design, which would
increase its cost.
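For reference, the update sequence being discussed can be sketched like this (illustrative Python; the 0x1C0-0x1C2 offsets are the DPCD payload-allocate registers named in the DP spec, but the dict and `program_payload()` helper are made up stand-ins for real AUX writes):

```python
# DPCD payload-allocate register offsets from the DP spec.
DP_PAYLOAD_ALLOCATE_SET = 0x1C0              # VC payload ID
DP_PAYLOAD_ALLOCATE_START_TIME_SLOT = 0x1C1  # first time slot
DP_PAYLOAD_ALLOCATE_TIME_SLOT_COUNT = 0x1C2  # number of slots

def program_payload(dpcd, payload_id, start_slot, slot_count):
    """Write one payload table entry. As noted above, these writes don't
    take physical effect on the link until the source sends ACT and the
    branch acknowledges it."""
    dpcd[DP_PAYLOAD_ALLOCATE_SET] = payload_id
    dpcd[DP_PAYLOAD_ALLOCATE_START_TIME_SLOT] = start_slot
    dpcd[DP_PAYLOAD_ALLOCATE_TIME_SLOT_COUNT] = slot_count

dpcd = {}
program_payload(dpcd, payload_id=1, start_slot=0, slot_count=3)
```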
> 
> >
> > > >
> > > >         |1 1 2 3 3 1 X X|
> > > >
> > > > Now, let's say we increase payload #2 to 2 slots:
> > > >
> > > >         |1 1 2 3 3 1 2 X|
> > > >
> > > > Surprise - the monitor for payload #1 was unplugged, so we need to
> > > > remove its payload. Note the MST spec doesn't allow holes between
> > > > allocations, and makes branches responsible for automatically
> > > > moving payloads to fill in the gaps, which we have to keep track of
> > > > locally.
> > > > Normally the handling of holes would be fine, as our current
> > > > payload management code loops through each payload to fill in any
> > > > holes. But these payloads aren't contiguous and we only kept track
> > > > of their start slots and total slot counts. So we would end up
> > > > thinking we now have a VC table like
> > > > this:
> > > >
> > > >         |2 2 3 3 X X X X|
> > > >
> > > > But that's wrong, in reality the table will look like this on the
> > > > MST
> > > > hub:
> > > >
> > > >         |2 3 3 2 X X X X|
> > > >
> > > > Now things are broken, because we now have the wrong start slot
> > > > for payload #3.
> > > >
> > > > Just figured I'd clarify :). If anyone is curious, I ended up
> > > > fixing this by adding a bitmask of assigned VC slots to our atomic
> > > > payload struct - and using that to keep track of the current start
> > > > slots for each payload. Since we have a max of 64 VC slots which
> > > > fits into a u64, this works rather elegantly!
> > > >
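The failure mode and the bitmask fix described above can be modeled in a few lines (illustrative Python; a sketch of the bookkeeping idea only, not the actual drm patch):

```python
# Toy payload-ID table: one entry per time slot, 0 = unallocated ("X").

def hub_compact(table, removed_id):
    """What the branch device actually does on removal: drop the removed
    payload's slots and shift every later slot down to fill the holes."""
    kept = [p for p in table if p != removed_id]
    return kept + [0] * (len(table) - len(kept))

def slots_mask(table, payload_id):
    """Track a payload as a bitmask of its time slots. With at most 64
    slots, the whole allocation fits in a single u64-style integer."""
    mask = 0
    for i, p in enumerate(table):
        if p == payload_id:
            mask |= 1 << i
    return mask

# The example above: payload 1 grew non-contiguously, then gets removed.
table = [1, 1, 2, 3, 3, 1, 2, 0]
after = hub_compact(table, removed_id=1)
print(after)
# [2, 3, 3, 2, 0, 0, 0, 0] - payload 2 ends up split, so a single
# (start, count) pair can't describe it; the bitmask still can:
print(bin(slots_mask(after, 2)))  # 0b1001
```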
> > > > >
> > > > > I'd /very/ much like to just disable this behavior for the time
> > > > > being (not the whole commit for DP 2.0 support since that'd be
> > > > > unreasonable, just the increase/decrease payload changes), and
> > > > > eventually have someone just implement this the proper way by
> > > > > adding support for this into the MST helpers and teaching them
> > > > > how to handle non-contiguous payloads. I'm happy to leave as
> > > > > much of the code intact as possible (maybe with #if 0 or
> > > > > whatever), and ideally just add some code somewhere to avoid
> > > > > increasing/decreasing payloads without a full modeset.
> > > > >
> > > > > FWIW, the WIP of my atomic MST work is here:
> > > > >
> > > > >
> > > > > https://gitlab.freedesktop.org/lyudess/linux/-/commits/wip/mst-atomic-only-v1
> > > > >
> > > > > I already have i915 and nouveau working with these changes JFYI.
> > > >
> > > > --
> > > > Cheers,
> > > >  Lyude Paul (she/her)
> > > >  Software Engineer at Red Hat
> >
> > --
> > Regards,
> > Wayne Lin
> >
> 
> --
> Cheers,
>  Lyude Paul (she/her)
>  Software Engineer at Red Hat
--
Regards,
Wayne Lin

