[PATCH v5 05/11] drm/amdgpu: Use RMW accessors for changing LNKCTL
Ilpo Järvinen
ilpo.jarvinen at linux.intel.com
Fri Jul 21 08:07:26 UTC 2023
On Thu, 20 Jul 2023, Bjorn Helgaas wrote:
> On Mon, Jul 17, 2023 at 03:04:57PM +0300, Ilpo Järvinen wrote:
> > Don't assume that only the driver would be accessing LNKCTL. ASPM
> > policy changes can trigger a write to LNKCTL outside of the driver's
> > control. And in the case of the upstream bridge, the driver does not
> > even own the device whose registers it's changing.
> >
> > Use RMW capability accessors which do proper locking to avoid losing
> > concurrent updates to the register value.
> >
> > Fixes: a2e73f56fa62 ("drm/amdgpu: Add support for CIK parts")
> > Fixes: 62a37553414a ("drm/amdgpu: add si implementation v10")
> > Suggested-by: Lukas Wunner <lukas at wunner.de>
> > Signed-off-by: Ilpo Järvinen <ilpo.jarvinen at linux.intel.com>
> > Cc: stable at vger.kernel.org
>
> Do we have any reports of problems that are fixed by this patch (or by
> others in the series)? If not, I'm not sure it really fits the usual
> stable kernel criteria:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/stable-kernel-rules.rst?id=v6.4
I was on the edge with this. The answer to your direct question is no,
there are no such reports, so I think it would be okay to leave stable
out. This applies to all patches in this series.
Basically, this series came to be after Lukas noted the potential
concurrency issues with how LNKCTL is left unprotected while reviewing
(internally) my bandwidth controller series. I then audited all LNKCTL
usage and realized existing code might already have similar issues.
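To illustrate the concurrency issue: the lost update happens when the
driver's open-coded read-modify-write interleaves with a concurrent
LNKCTL write from the ASPM core. Below is a minimal userspace sketch
that replays one such interleaving deterministically on a simulated
register; the register values are the real LNKCTL bit definitions, but
the helper name and single-variable "register" are purely illustrative.

```c
#include <assert.h>
#include <stdint.h>

#define PCI_EXP_LNKCTL_HAWD   0x0200  /* HW Autonomous Width Disable */
#define PCI_EXP_LNKCTL_ASPMC  0x0003  /* ASPM Control field */

/* One possible interleaving of the driver's open-coded
 * read-modify-write with a concurrent ASPM policy change,
 * replayed step by step on a simulated LNKCTL register. */
uint16_t simulate_lost_update(void)
{
    uint16_t lnkctl = PCI_EXP_LNKCTL_ASPMC;  /* ASPM L0s+L1 enabled   */

    uint16_t tmp16 = lnkctl;                 /* driver: read LNKCTL   */
    lnkctl &= ~PCI_EXP_LNKCTL_ASPMC;         /* ASPM core: disable    */
    lnkctl = tmp16 | PCI_EXP_LNKCTL_HAWD;    /* driver: stale write   */

    /* The ASPM disable has been silently undone. */
    return lnkctl;
}
```

With the locked RMW accessors, the read and write above happen under
the same lock, so the ASPM core's update cannot be overwritten.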
Do you want me to send another version without the stable Cc, or will
you take care of that?
> > ---
> > drivers/gpu/drm/amd/amdgpu/cik.c | 36 +++++++++-----------------------
> > drivers/gpu/drm/amd/amdgpu/si.c | 36 +++++++++-----------------------
> > 2 files changed, 20 insertions(+), 52 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
> > index 5641cf05d856..e63abdf52b6c 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/cik.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/cik.c
> > @@ -1574,17 +1574,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
> > u16 bridge_cfg2, gpu_cfg2;
> > u32 max_lw, current_lw, tmp;
> >
> > - pcie_capability_read_word(root, PCI_EXP_LNKCTL,
> > - &bridge_cfg);
> > - pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
> > - &gpu_cfg);
> > -
> > - tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
> > - pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
> > -
> > - tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
> > - pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
> > - tmp16);
> > + pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
> > + pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
> >
> > tmp = RREG32_PCIE(ixPCIE_LC_STATUS1);
> > max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >>
> > @@ -1637,21 +1628,14 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
> > msleep(100);
> >
> > /* linkctl */
> > - pcie_capability_read_word(root, PCI_EXP_LNKCTL,
> > - &tmp16);
> > - tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
> > - tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
> > - pcie_capability_write_word(root, PCI_EXP_LNKCTL,
> > - tmp16);
> > -
> > - pcie_capability_read_word(adev->pdev,
> > - PCI_EXP_LNKCTL,
> > - &tmp16);
> > - tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
> > - tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
> > - pcie_capability_write_word(adev->pdev,
> > - PCI_EXP_LNKCTL,
> > - tmp16);
> > + pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
> > + PCI_EXP_LNKCTL_HAWD,
> > + bridge_cfg &
> > + PCI_EXP_LNKCTL_HAWD);
> > + pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
> > + PCI_EXP_LNKCTL_HAWD,
> > + gpu_cfg &
> > + PCI_EXP_LNKCTL_HAWD);
>
> Wow, there's a lot of pointless-looking work going on here:
>
> set root PCI_EXP_LNKCTL_HAWD
> set GPU PCI_EXP_LNKCTL_HAWD
>
> for (i = 0; i < 10; i++) {
> read root PCI_EXP_LNKCTL
> read GPU PCI_EXP_LNKCTL
>
> clear root PCI_EXP_LNKCTL_HAWD
> if (root PCI_EXP_LNKCTL_HAWD was set)
> set root PCI_EXP_LNKCTL_HAWD
>
> clear GPU PCI_EXP_LNKCTL_HAWD
> if (GPU PCI_EXP_LNKCTL_HAWD was set)
> set GPU PCI_EXP_LNKCTL_HAWD
> }
>
> If it really *is* pointless, it would be nice to clean it up, but that
> wouldn't be material for this patch, so what you have looks good.
I really don't know whether it's needed or not. Besides the things you
point out, there's hardware-specific stuff going on here that I haven't
really understood.
One annoying thing is that this code has been copy-pasted in almost
identical form into four files. I agree it certainly looks like there
might be room for cleaning things up here, but such cleanups seem a bit
too scary to me without hardware to test them.
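For reference, the HAWD restore step in the loop reduces to a pure
clear-and-set on the saved value. Here is a minimal sketch of that
semantics (the real pcie_capability_clear_and_set_word() also takes the
device and capability offset and, with this series, performs the update
under a lock; the standalone functions below are illustrative only):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_EXP_LNKCTL_HAWD 0x0200  /* HW Autonomous Width Disable */

/* Simplified model of a clear-and-set RMW update on a 16-bit
 * register value: clear the `clear` bits, then set the `set` bits. */
uint16_t clear_and_set_word(uint16_t val, uint16_t clear, uint16_t set)
{
    return (val & ~clear) | set;
}

/* Restoring HAWD from the saved bridge_cfg, as the patch does:
 * clear HAWD, then set it again only if it was set in bridge_cfg. */
uint16_t restore_hawd(uint16_t lnkctl, uint16_t bridge_cfg)
{
    return clear_and_set_word(lnkctl, PCI_EXP_LNKCTL_HAWD,
                              bridge_cfg & PCI_EXP_LNKCTL_HAWD);
}
```

So the net effect is exactly the old read/mask/or/write sequence, just
expressed as a single locked operation.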
> > /* linkctl2 */
> > pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
>
> The PCI_EXP_LNKCTL2 stuff also includes RMW updates. I don't see any
> uses of PCI_EXP_LNKCTL2 outside this driver that look relevant, so I
> guess we don't care about making the PCI_EXP_LNKCTL2 updates atomic?
Currently no, which is why I left it out of this patchset.
That is going to change soon, though: I intend to submit the bandwidth
controller series after this one, and it will add RMW ops for LNKCTL2.
The LNKCTL2 RMW parts now live in that series rather than in this one.
After the bandwidth controller is added, this driver might be able to
use it instead of tweaking LNKCTL2 directly to alter the PCIe link
speed (but I don't expect to be able to test these drivers myself, and
it feels too risky to make such a change without testing it,
unfortunately).
--
i.