Why is Thunderbolt 3 limited to 2.5 GT/s on Linux?

Timur Kristóf timur.kristof at gmail.com
Thu Jul 18 14:38:35 UTC 2019


> > 
> > I took a look at amdgpu_device_get_pcie_info() and found that it
> > uses pcie_bandwidth_available to determine the capabilities of the
> > PCIe port. However, pcie_bandwidth_available gives you only the
> > current bandwidth as set by the PCIe link status register, not the
> > maximum capability.
> > 
> > I think something along these lines would fix it:
> > https://pastebin.com/LscEMKMc
> > 
> > It seems to me that the PCIe capabilities are only used in a few
> > places in the code, so this patch fixes pp_dpm_pcie. However, it
> > doesn't affect the actual performance.
> > 
> > What do you think?
> 
> I think we want the current bandwidth.  The GPU can only control the
> speed of its local link.  If there are upstream links that are slower
> than its local link, it doesn't make sense to run the local link at
> faster speeds, because it will burn extra power and just run into a
> bottleneck at the next link.  In general, most systems negotiate the
> fastest link speed supported by both ends at power up.
> 
> Alex

Currently, if the GPU is connected to a TB3 port, the driver thinks that
2.5 GT/s is the best speed it can use, even though the hardware itself
runs the link at 8 GT/s. So what the driver reports is inconsistent with
what the hardware actually does, and that messes up pp_dpm_pcie.
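To make it concrete, here is an untested sketch (dump_link_speeds() is a
made-up name) that walks from the GPU up to the root port and prints each
link's current speed from the link status register next to the maximum
advertised in the link capabilities register. On my setup, the TB3 links
report the 2.5 GT/s encoding in the status field while their capabilities
advertise 8 GT/s:

#include <linux/pci.h>

/* Sketch only: dump current vs. maximum speed for every upstream link. */
static void dump_link_speeds(struct pci_dev *dev)
{
	while (dev) {
		u16 lnksta;
		u32 lnkcap;

		pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
		pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);

		/* Both fields share one encoding: 1 = 2.5 GT/s, 3 = 8 GT/s. */
		pci_info(dev, "current speed %#x, max speed %#x\n",
			 lnksta & PCI_EXP_LNKSTA_CLS,
			 lnkcap & PCI_EXP_LNKCAP_SLS);

		dev = pci_upstream_bridge(dev);
	}
}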

As far as I understand, PCIe bridge devices can change their link speed
at runtime based on how they are used or what power state they are in,
so it makes sense here to request the best speed they are capable of. I
might be wrong about that.
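In case the pastebin is unreachable, the idea is roughly this (a
simplified sketch, not the exact diff; max_link_cap() is a made-up name):
take the minimum capability-based speed and width over every link in the
chain, using the existing pcie_get_speed_cap() / pcie_get_width_cap()
helpers instead of the link status:

#include <linux/pci.h>

/*
 * Sketch: minimum capability-based speed/width over all upstream links.
 */
static void max_link_cap(struct pci_dev *dev, enum pci_bus_speed *speed,
			 enum pcie_link_width *width)
{
	*speed = PCI_SPEED_UNKNOWN;
	*width = PCIE_LNK_WIDTH_UNKNOWN;

	while (dev) {
		enum pci_bus_speed s = pcie_get_speed_cap(dev);
		enum pcie_link_width w = pcie_get_width_cap(dev);

		/* Keep the slowest/narrowest link seen so far. */
		if (*speed == PCI_SPEED_UNKNOWN || s < *speed)
			*speed = s;
		if (*width == PCIE_LNK_WIDTH_UNKNOWN || w < *width)
			*width = w;

		dev = pci_upstream_bridge(dev);
	}
}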

If you think this change is undesirable, then maybe it would be worth
following Mika's suggestion and adding something along the lines of
dev->is_thunderbolt, so that the correct available bandwidth could still
be determined.
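For what it's worth, there is already a pci_is_thunderbolt_attached()
helper that walks the upstream bridges checking is_thunderbolt, so the
fallback could look roughly like this (sketch only; get_pcie_info()
stands in for amdgpu_device_get_pcie_info(), and max_link_cap() is the
made-up helper from the previous sketch):

#include <linux/pci.h>

/*
 * Sketch: keep pcie_bandwidth_available() on ordinary systems, but fall
 * back to the capability-based maximum behind Thunderbolt, where the
 * links may idle at a lower speed and retrain under load.
 */
static void get_pcie_info(struct pci_dev *pdev, enum pci_bus_speed *speed,
			  enum pcie_link_width *width)
{
	if (pci_is_thunderbolt_attached(pdev))
		max_link_cap(pdev, speed, width);
	else
		pcie_bandwidth_available(pdev, NULL, speed, width);
}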

Tim


