[PATCH v4] PCI: prevent putting NVIDIA GPUs into lower device states on certain Intel bridges

Karol Herbst kherbst at redhat.com
Thu Nov 21 12:56:57 UTC 2019


On Thu, Nov 21, 2019 at 1:52 PM Mika Westerberg
<mika.westerberg at intel.com> wrote:
>
> On Thu, Nov 21, 2019 at 01:46:14PM +0200, Mika Westerberg wrote:
> > On Thu, Nov 21, 2019 at 12:34:22PM +0100, Rafael J. Wysocki wrote:
> > > On Thu, Nov 21, 2019 at 12:28 PM Mika Westerberg
> > > <mika.westerberg at intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > > > > > Last week or so I found systems where the GPU was under the "PCI
> > > > > > Express Root Port" (name from lspci), and on those systems all of
> > > > > > that seems to work. So I am wondering if it's indeed just the
> > > > > > 0x1901 one, which would also explain Mika's case: the Thunderbolt
> > > > > > stuff works because devices never get populated under this
> > > > > > particular bridge controller, but under those "Root Port"s.
> > > > >
> > > > > It always is a PCIe port, but its location within the SoC may matter.
> > > >
> > > > Exactly. Intel hardware has PCIe ports on the CPU side (these are
> > > > called PEG, PCI Express Graphics, ports) and on the PCH side. I think
> > > > the IP is still the same.
> > > >
> > > > > Also some custom AML-based power management is involved and that may
> > > > > be making specific assumptions on the configuration of the SoC and the
> > > > > GPU at the time of its invocation which unfortunately are not known to
> > > > > us.
> > > > >
> > > > > However, it looks like the AML invoked to power down the GPU from
> > > > > acpi_pci_set_power_state() gets confused if the GPU is not in PCI D0
> > > > > at that point; it seems that the AML tries to access device memory
> > > > > on the GPU (beyond the PCI config space) or similar, which is not
> > > > > accessible in PCI power states below D0.
> > > >
> > > > Or the PCI config space of the GPU when the parent root port is in
> > > > D3hot (as is the case here). In that case the GPU config space is not
> > > > accessible either.
> > >
> > > Why would the parent port be in D3hot at that point?  Wouldn't that be
> > > a suspend ordering violation?
> >
> > No. We put the GPU into D3hot first, then the root port, and then turn
> > off the power resource (which is attached to the root port), resulting
> > in the topology entering D3cold.
>
> I don't see that happening in the AML though.
>
> Basically the difference is that for Windows 7 or Linux (the _REV==5
> check) we directly disable the link, whereas for Windows 8+ we invoke
> the LKDS() method that puts the link into L2/L3. None of the fields they
> access seem to touch the GPU itself.
>
> LKDS() for the first PEG port looks like this:
>
>    P0L2 = One             // Request L2/L3 entry for the PEG port
>    Sleep (0x10)           // 16 ms
>    Local0 = Zero
>    While (P0L2)           // Poll until hardware clears the request
>    {
>        If ((Local0 > 0x04))   // Give up after 5 polls (~96 ms total)
>        {
>            Break
>        }
>
>        Sleep (0x10)       // 16 ms
>        Local0++
>    }
>
> One thing that comes to mind is that the loop can end even if P0L2 is
> not cleared, as it does only five iterations with a 16 ms sleep between
> them. Maybe Sleep() is implemented differently in Windows? I mean,
> Linux may be "faster" here and return prematurely, and if we leave the
> port in D0 this does not happen, or something. I'm just throwing out
> ideas :)
>
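
For reference, the LKDS() polling above is equivalent to roughly the
following C, which makes the worst-case bound explicit. This is just a
sketch for reasoning about the timing; read_p0l2()/write_p0l2() are
hypothetical stand-ins for the AML field accesses, not real kernel APIs:

    #include <linux/delay.h>

    /* Rough C equivalent of the quoted LKDS() loop. */
    static void lkds_equivalent(void)
    {
        int i = 0;

        write_p0l2(1);          /* request L2/L3 entry (hypothetical) */
        msleep(16);             /* Sleep (0x10) */

        while (read_p0l2()) {   /* hypothetical field read */
            if (i > 4)          /* give up after 5 more polls */
                break;
            msleep(16);
            i++;
        }
        /*
         * Worst case: 6 * 16 ms = 96 ms, and the loop can exit with
         * P0L2 still set, i.e. the link may never reach L2/L3.
         */
    }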

Keep in mind that I am able to hit this bug with my Python script:
https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py
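
For context, the idea of the patch is to keep the GPU out of low-power
PCI states whenever it sits below one of the affected bridges. A minimal
sketch of the shape of such a check, assuming the 0x1901 PEG port
mentioned above (helper names and placement are illustrative, not the
actual patch):

    #include <linux/pci.h>

    /* Sketch only: is the upstream bridge one of the affected ports? */
    static bool parent_bridge_is_affected(struct pci_dev *dev)
    {
        struct pci_dev *bridge = pci_upstream_bridge(dev);

        return bridge &&
               bridge->vendor == PCI_VENDOR_ID_INTEL &&
               bridge->device == 0x1901;   /* PEG port ID from the thread */
    }

    /* In the D-state selection path, something along these lines: */
    static pci_power_t clamp_target_state(struct pci_dev *dev,
                                          pci_power_t target_state)
    {
        if (target_state != PCI_D0 && parent_bridge_is_affected(dev))
            return PCI_D0;  /* keep the GPU in D0 on affected bridges */
        return target_state;
    }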


