[PATCH] drm/amd/powerplay: add override pcie parameters for Vega20 (v2)

Alex Deucher alexdeucher at gmail.com
Mon Feb 4 18:53:13 UTC 2019


On Mon, Feb 4, 2019 at 12:55 PM Kasiviswanathan, Harish
<Harish.Kasiviswanathan at amd.com> wrote:
>
> Hi Alex,
>
> I already have that patch in my code. The function amdgpu_device_get_pcie_info() updates the platform caps (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN*) based on pcie_get_speed_cap() of the bus the AMD GPU is connected to. I think going up just one level in the PCI hierarchy is not sufficient to get the system caps; it should check all the way up to the root, like pcie_bandwidth_available() does. See https://elixir.bootlin.com/linux/latest/source/drivers/pci/pci.c#L5503

Never mind that patch.  It only affected gen1 and gen2 platforms, so
it's irrelevant for this discussion.  I'm not a pcie expert, but I
think looking at the next bridge up should be sufficient.  We've been
doing this for years already for smu7 (polaris, fiji, etc.) and smu9
(vega10, 12) devices and it's worked correctly.
pcie_bandwidth_available() walks the entire tree because it wants to
know the slowest link in the hierarchy, which would impact performance
even if downstream links have higher bandwidth.
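
The same kind of walk, applied to the supported caps rather than the
negotiated status, would look something like this rough, untested
sketch (pcie_get_speed_cap() and pci_upstream_bridge() are both
already in the tree):

    /* Untested: lowest supported link speed between dev and the root.
     * The PCIE_SPEED_* enum values increase with link speed, so a
     * plain compare picks out the slowest cap on the path.
     */
    static enum pci_bus_speed min_speed_cap(struct pci_dev *dev)
    {
            enum pci_bus_speed min = PCI_SPEED_UNKNOWN;

            while (dev) {
                    enum pci_bus_speed cap = pcie_get_speed_cap(dev);

                    if (cap != PCI_SPEED_UNKNOWN &&
                        (min == PCI_SPEED_UNKNOWN || cap < min))
                            min = cap;
                    dev = pci_upstream_bridge(dev);
            }
            return min;
    }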

>
> For example, on my machine the hierarchy looks as shown below. The current code only looks at the top two levels and incorrectly decides Gen 4.
>
> amdgpu 0000:07:00.0: lnksta:1103 lnkcap2:180001e lnkcap:400d04   -- PCIE_SPEED_16_0GT - Gen 4
> pcieport 0000:06:00.0: lnksta:7103 lnkcap2:180001e lnkcap:700d04   -- PCIE_SPEED_16_0GT - Gen 4
> pcieport 0000:05:00.0: lnksta:1103 lnkcap2:180001e lnkcap:10473904 -- PCIE_SPEED_8_0GT - Gen 3
> pcieport 0000:04:10.0: lnksta:103 lnkcap2:10e lnkcap:10416103 -- PCIE_SPEED_8_0GT - Gen 3
> pcieport 0000:03:00.0: lnksta:103 lnkcap2:10e lnkcap:416103 -- PCIE_SPEED_8_0GT - Gen 3
> pcieport 0000:00:03.0: lnksta:7103 lnkcap2:e lnkcap:77a3103 -- PCIE_SPEED_8_0GT - Gen 3

This seems wrong.  Is 0000:06:00.0 a gen 4 bridge that is part of the
board design or something?  Maybe we need to make sure we go up to the
first bridge level that is not part of the board configuration?  That
said, if the bridge supports gen4, it should work even if a link
further upstream has less bandwidth.

Alex

>
>
> Best Regards,
> Harish
>
> From: Alex Deucher <alexdeucher at gmail.com>
> Sent: Friday, February 1, 2019 5:53 PM
> To: Kasiviswanathan, Harish
> Cc: Li, Colin; Xi, Sheldon; amd-gfx at lists.freedesktop.org
> Subject: Re: [PATCH] drm/amd/powerplay: add override pcie parameters for Vega20 (v2)
>
> Right.  We have the max gen and width supported by the card and the platform already in the driver thanks to amdgpu_device_get_pcie_info(), so we just need to use the platform gen caps to limit the max pcie gen in dpm1 (a sketch follows the commit below).  You don't want to use the current status because that might have been changed by sw at some point.  The asic caps are defined by CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN* while the platform caps are defined by CAIL_PCIE_LINK_SPEED_SUPPORT_GEN*.  I think the original code was correct.  What you may have been running into was a bug in pcie_get_speed_cap() that recently got fixed:
>
>
> commit f1f90e254e46e0a14220e4090041f68256fbe297
> Author: Mikulas Patocka <mpatocka at redhat.com>
> Date:   Mon Nov 26 10:37:13 2018 -0600
>
>     PCI: Fix incorrect value returned from pcie_get_speed_cap()
>
>     The macros PCI_EXP_LNKCAP_SLS_*GB are values, not bit masks.  We must mask
>     the register and compare it against them.
>
>     This fixes errors like this:
>
>       amdgpu: [powerplay] failed to send message 261 ret is 0
>
>     when a PCIe-v3 card is plugged into a PCIe-v1 slot, because the slot is
>     being incorrectly reported as PCIe-v3 capable.
>
>     6cf57be0f78e, which appeared in v4.17, added pcie_get_speed_cap() with the
>     incorrect test of PCI_EXP_LNKCAP_SLS as a bitmask.  5d9a63304032, which
>     appeared in v4.19, changed amdgpu to use pcie_get_speed_cap(), so the
>     amdgpu bug reports below are regressions in v4.19.
>
>     Fixes: 6cf57be0f78e ("PCI: Add pcie_get_speed_cap() to find max supported link speed")
>     Fixes: 5d9a63304032 ("drm/amdgpu: use pcie functions for link width and speed")
>     Link: https://bugs.freedesktop.org/show_bug.cgi?id=108704
>     Link: https://bugs.freedesktop.org/show_bug.cgi?id=108778
>     Signed-off-by: Mikulas Patocka <mpatocka at redhat.com>
>     [bhelgaas: update comment, remove use of PCI_EXP_LNKCAP_SLS_8_0GB and
>     PCI_EXP_LNKCAP_SLS_16_0GB since those should be covered by PCI_EXP_LNKCAP2,
>     remove test of PCI_EXP_LNKCAP for zero, since that register is required]
>     Signed-off-by: Bjorn Helgaas <bhelgaas at google.com>
>     Acked-by: Alex Deucher <alexander.deucher at amd.com>
>     Cc: stable at vger.kernel.org      # v4.17+
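>
> With that fixed, the original style of check against the platform mask
> should do the right thing for dpm1.  A rough, untested sketch, mapped
> onto the SMU's gen encoding (0 = Gen1 .. 3 = Gen4):
>
>     if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
>             pcie_gen = 3;
>     else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
>             pcie_gen = 2;
>     else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
>             pcie_gen = 1;
>     else
>             pcie_gen = 0;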
>
> On Fri, Feb 1, 2019 at 5:43 PM Kasiviswanathan, Harish <Harish.Kasiviswanathan at amd.com> wrote:
>
>
> Let me explain the problem. I'm also adding Colin & Sheldon from the SMU team.
>
> Currently, the PPTable entries (inside the VBIOS) reflect the ASIC capabilities and not the system capabilities. So if we put a Gen4-capable Vega20 in a system that supports only Gen3, the PPTable has Gen4 for DPM1 (the higher DPM state). Later on, when the SMU wants to switch to DPM1, it fails since Gen4 is not supported. The SMU, however, doesn't fall back to Gen3 but instead to Gen0. This is causing a performance issue.
>
> As I understand it, this override is required only for the higher DPM state.
>
> Best Regards,
> Harish
>
> From: Alex Deucher <alexdeucher at gmail.com>
> Sent: Friday, February 1, 2019 5:34 PM
> To: Kasiviswanathan, Harish
> Cc: amd-gfx at lists.freedesktop.org
> Subject: Re: [PATCH] drm/amd/powerplay: add override pcie parameters for Vega20 (v2)
>
> Do we actually need the current status?  We have the caps of the platform and the device.  For the lowest dpm state we generally want the lowest common gen, and the highest common gen for the highest dpm state.  If we use the current status, we will most likely end up with the highest gen in the lowest dpm state, because the highest gen is generally negotiated at power up.
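>
> As an untested sketch (amdgpu_device_get_pcie_info() packs the ASIC
> caps into the low 16 bits of pcie_gen_mask and the platform caps into
> the high 16 bits):
>
>     u32 common = (adev->pm.pcie_gen_mask &
>                   CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_MASK) &
>                  ((adev->pm.pcie_gen_mask &
>                    CAIL_PCIE_LINK_SPEED_SUPPORT_MASK) >>
>                   CAIL_PCIE_LINK_SPEED_SUPPORT_SHIFT);
>     int lowest_gen  = ffs(common);  /* 1..4 maps to Gen1..Gen4 */
>     int highest_gen = fls(common);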
>
>
> Alex
>
> On Fri, Feb 1, 2019 at 5:28 PM Kasiviswanathan, Harish <Harish.Kasiviswanathan at amd.com>  wrote:
>
>
> Thanks Alex. I saw that function, but it gets the caps. What's needed here is the link status (PCI_EXP_LNKSTA), i.e. the negotiated value.
>
> > I'd rather not access pci config space from the powerplay code because it doesn't work under virtualization.
>
> [HK] I will then modify amdgpu_device_get_pcie_info() to read PCI_EXP_LNKSTA and store that information for future use.
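>
> Something along these lines (untested; pm.pcie_link_status is a
> hypothetical new field, not something that exists today):
>
>     u16 lnkstat;
>
>     pcie_capability_read_word(adev->pdev, PCI_EXP_LNKSTA, &lnkstat);
>     /* stash the negotiated link status for later use by powerplay */
>     adev->pm.pcie_link_status = lnkstat;  /* hypothetical field */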
>
>
> Best Regards,
> Harish
>
>
> From: Alex Deucher <alexdeucher at gmail.com>
> Sent: Friday, February 1, 2019 5:09 PM
> To: Kasiviswanathan, Harish
> Cc:  amd-gfx at lists.freedesktop.org
> Subject: Re: [PATCH] drm/amd/powerplay: add override pcie parameters for Vega20 (v2)
>
>
> On Fri, Feb 1, 2019 at 4:17 PM Kasiviswanathan, Harish
> <Harish.Kasiviswanathan at amd.com> wrote:
> >
> > v2: Use PCIe link status instead of link capability
> >     Send override message after SMU enable features
> >
> > Change-Id: Iea2a1ac595cf63a9528ff1e9a6955d14d8c3a6d5
> > Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan at amd.com>
> > ---
> >  drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c | 93 +++++++++++++++-------
> >  1 file changed, 64 insertions(+), 29 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
> > index da022ca..d166f8c 100644
> > --- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
> > +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
> > @@ -771,40 +771,75 @@ static int vega20_init_smc_table(struct pp_hwmgr *hwmgr)
> >         return 0;
> >  }
> >
> > +/*
> > + * Override PCIe link speed and link width for DPM Level 1. PPTable entries
> > + * reflect the ASIC capabilities and not the system capabilities, e.g. a
> > + * Vega20 board in a PCIe Gen3 system. In this case, when the SMU tries to
> > + * switch to DPM1, it fails as the system doesn't support Gen4.
> > + */
> >  static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
> >  {
> >         struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
> > -       uint32_t pcie_speed = 0, pcie_width = 0, pcie_arg;
> > +       uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg;
> >         int ret;
> > +       uint16_t lnkstat;
> > +
> > +       pcie_capability_read_word(adev->pdev, PCI_EXP_LNKSTA, &lnkstat);
> >
> > -       if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
> > -               pcie_speed = 16;
> > -       else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
> > -               pcie_speed = 8;
> > -       else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
> > -               pcie_speed = 5;
> > -       else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
> > -               pcie_speed = 2;
> > -
> > -       if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X32)
> > -               pcie_width = 32;
> > -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
> > -               pcie_width = 16;
> > -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
> > -               pcie_width = 12;
> > -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
> > -               pcie_width = 8;
> > -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
> > +       switch (lnkstat & PCI_EXP_LNKSTA_CLS) {
> > +       case PCI_EXP_LNKSTA_CLS_2_5GB:
> > +               pcie_gen = 0;
> > +               break;
> > +       case PCI_EXP_LNKSTA_CLS_5_0GB:
> > +               pcie_gen = 1;
> > +               break;
> > +       case PCI_EXP_LNKSTA_CLS_8_0GB:
> > +               pcie_gen = 2;
> > +               break;
> > +       case PCI_EXP_LNKSTA_CLS_16_0GB:
> > +               pcie_gen = 3;
> > +               break;
> > +       default:
> > +               pr_warn("Unknown PCI link speed %x\n",
> > +                       lnkstat & PCI_EXP_LNKSTA_CLS);
> > +       }
> > +
> > +
> > +       switch ((lnkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT) {
> > +       case 32:
> > +               pcie_width = 7;
> > +               break;
> > +       case 16:
> > +               pcie_width = 6;
> > +               break;
> > +       case 12:
> > +               pcie_width = 5;
> > +               break;
> > +       case 8:
> >                 pcie_width = 4;
> > -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
> > +               break;
> > +       case 4:
> > +               pcie_width = 3;
> > +               break;
> > +       case 2:
> >                 pcie_width = 2;
> > -       else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
> > +               break;
> > +       case 1:
> >                 pcie_width = 1;
> > +               break;
> > +       default:
> > +               pr_warn("Unknown PCI link width %x\n",
> > +                       (lnkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT);
> > +       }
> >
>
> We already get the link caps for both the device and the platform in
> amdgpu_device_get_pcie_info().  Is that info not adequate?  I'd rather
> not access pci config space from the powerplay code because it doesn't
> work under virtualization.
>
> Alex
>
> > -       pcie_arg = pcie_width | (pcie_speed << 8);
> > -
> > +       /* Bit 31:16: LCLK DPM level. 0 is DPM0, and 1 is DPM1
> > +        * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
> > +        * Bit 7:0:   PCIE lane width, 1 to 7 corresponds to x1 to x32
> > +        */
> > +       smu_pcie_arg = (1 << 16) | (pcie_gen << 8) | pcie_width;
> >         ret = smum_send_msg_to_smc_with_parameter(hwmgr,
> > -                       PPSMC_MSG_OverridePcieParameters, pcie_arg);
> > +                       PPSMC_MSG_OverridePcieParameters, smu_pcie_arg);
> > +
> >         PP_ASSERT_WITH_CODE(!ret,
> >                 "[OverridePcieParameters] Attempt to override pcie params failed!",
> >                 return ret);
> > @@ -1611,11 +1646,6 @@ static int vega20_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
> >                         "[EnableDPMTasks] Failed to initialize SMC table!",
> >                         return result);
> >
> > -       result = vega20_override_pcie_parameters(hwmgr);
> > -       PP_ASSERT_WITH_CODE(!result,
> > -                       "[EnableDPMTasks] Failed to override pcie parameters!",
> > -                       return result);
> > -
> >         result = vega20_run_btc(hwmgr);
> >         PP_ASSERT_WITH_CODE(!result,
> >                         "[EnableDPMTasks] Failed to run btc!",
> > @@ -1631,6 +1661,11 @@ static int vega20_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
> >                         "[EnableDPMTasks] Failed to enable all smu features!",
> >                         return result);
> >
> > +       result = vega20_override_pcie_parameters(hwmgr);
> > +       PP_ASSERT_WITH_CODE(!result,
> > +                       "[EnableDPMTasks] Failed to override pcie parameters!",
> > +                       return result);
> > +
> >         result = vega20_notify_smc_display_change(hwmgr);
> >         PP_ASSERT_WITH_CODE(!result,
> >                         "[EnableDPMTasks] Failed to notify smc display change!",
> > --
> > 2.7.4
> >
> > _______________________________________________
> > amd-gfx mailing list
> >  amd-gfx at lists.freedesktop.org
> >  https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>

