[Bug 215892] New: 6500XT [drm:amdgpu_dm_init.isra.0.cold [amdgpu]] *ERROR* Failed to register vline0 irq 30!

bugzilla-daemon at kernel.org
Wed Apr 27 02:23:42 UTC 2022


https://bugzilla.kernel.org/show_bug.cgi?id=215892

            Bug ID: 215892
           Summary: 6500XT [drm:amdgpu_dm_init.isra.0.cold [amdgpu]]
                    *ERROR* Failed to register vline0 irq 30!
           Product: Drivers
           Version: 2.5
    Kernel Version: 5.18-rc4
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: Video(DRI - non Intel)
          Assignee: drivers_video-dri at kernel-bugs.osdl.org
          Reporter: ulatec at gmail.com
        Regression: No

Created attachment 300811
  --> https://bugzilla.kernel.org/attachment.cgi?id=300811&action=edit
New PowerColor board with chip that produces kernel errors

Hello!

This is my first time submitting a bug here. I apologize if I make any mistakes,
but I will do my best to describe the efforts I have gone through to resolve
this issue on my own. I also hope not to overload this report with information;
I just want to help skip over the basic questions.

I have numerous PowerColor RX 6500XT graphics cards, and all of them with a
specific chip package (picture attached) have the same issue. Any PowerColor RX
6500XT with 2152 printed at the top of the package and "TFTB43.00" at the
bottom suffers the same kernel errors. Previously (up until a few weeks ago)
PowerColor was shipping 6500XT boards with chips stamped 2146 and "TFAW62.T5"
at the top and bottom of the package respectively. Boards with those chips have
zero kernel errors and work flawlessly. In addition, I have tested various
6500XT and 6400 boards from other AIB partners of AMD and have not had any
issues; only this specific variant from PowerColor is affected.


To be honest, I am not sure if the root of the problem is in pcieport or in
amdgpu, but the amdgpu error is thrown first.

I have attached the full dmesg output, but to save some time, here are the
highlighted lines of interest:

[    5.506718] [drm:amdgpu_dm_init.isra.0.cold [amdgpu]] *ERROR* Failed to register vline0 irq 30!
[   14.368915] pcieport 0000:01:00.0: can't change power state from D0 to D3hot (config space inaccessible)
[   15.270778] pcieport 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   15.270799] pcieport 0000:02:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   25.478689] pcieport 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   25.478696] pcieport 0000:02:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   25.722619] amdgpu 0000:03:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[   35.833714] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 0!
[   35.941450] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for sem acquire in VM flush!
[   36.048999] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 1!
[   36.156835] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for sem acquire in VM flush!
[   36.264770] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 1!
[   36.372616] [drm:gmc_v10_0_flush_vm_hub.constprop.0 [amdgpu]] *ERROR* Timeout waiting for VM flush hub: 0!
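In case it helps with triage: my (possibly wrong) reading of the power-state
lines is that the devices stop responding to config space reads once they are
runtime-suspended, so the D3cold -> D0 transition fails. If it is useful I can
retest with runtime PM kept off for the card; assuming amdgpu.runpm is the
right knob, that would be something like:

    amdgpu.runpm=0      (kernel command line / amdgpu module parameter)
    echo on > /sys/bus/pci/devices/0000:03:00.0/power/control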


What I have attempted so far:

- Results were the same for the following kernels: 5.4.190, 5.10.111, 5.15.34,
  5.17.4, and now 5.18-rc4.
- Many different motherboards with varying chipsets (B250, H510, X370, B550).
  Same result.
- Enabling/disabling clock gating, ASPM, and extended synch control for PCIe
  (an illustrative set of the toggles is shown below). Same result.
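For reference, these are roughly the knobs I toggled (illustrative only; the
exact parameters may have differed between runs, and extended synch was changed
in the board firmware where the option existed):

    pcie_aspm=off       (kernel command line: disable ASPM)
    amdgpu.aspm=0       (amdgpu module parameter: disable ASPM in the driver)
    amdgpu.cg_mask=0    (amdgpu module parameter: disable clock gating)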

The problematic PowerColor cards do work in Windows without issue. This leads
me to believe that something may have changed with TUL's implementation of the
6500XT from one production run to another. Hopefully someone from the amdgpu
team can help here.


To summarize: PowerColor's prior 6500XT production run worked flawlessly with
the drivers in the mainline kernel, while the new production run is, for some
reason, no longer usable. The new cards work in Windows but throw the errors
above under Linux. This is not an isolated issue with one card; I have tested
12 identical cards with the same chip and all show the same result regardless
of motherboard, CPU, power supply, kernel, or OS. Cards (6500XTs and 6400s)
from other partners have not had any issues.
