[Bug 65761] HD 7970M Hybrid - hangs and errors and rmmod causes crash
bugzilla-daemon at bugzilla.kernel.org
Fri Feb 7 06:52:14 PST 2014
https://bugzilla.kernel.org/show_bug.cgi?id=65761
--- Comment #33 from Christoph Haag <haagch.christoph at googlemail.com> ---
Created attachment 125111
--> https://bugzilla.kernel.org/attachment.cgi?id=125111&action=edit
possible call chain that calls radeon stuff
@ Hohahiu
Do you mean those?
[ 8077.648324] [drm:si_dpm_set_power_state] *ERROR* si_set_sw_state failed
No, I don't see those at all.
By the way, I'm using linux 3.14-rc1 now with this patch:
https://bugzilla.kernel.org/attachment.cgi?id=124621&action=diff
But I get an awful lot of copies of this block in dmesg, I guess one for every
time X starts:
[Feb 6 13:39] [drm] Disabling audio 0 support
[ +0,000004] [drm] Disabling audio 1 support
[ +0,000002] [drm] Disabling audio 2 support
[ +0,000001] [drm] Disabling audio 3 support
[ +0,000001] [drm] Disabling audio 4 support
[ +0,000002] [drm] Disabling audio 5 support
[ +1,257992] [drm] probing gen 2 caps for device 8086:151 = 261ad01/e
[ +0,000005] [drm] PCIE gen 3 link speeds already enabled
[ +0,004127] [drm] PCIE GART of 1024M enabled (table at 0x0000000000478000).
[ +0,000102] radeon 0000:01:00.0: WB enabled
[ +0,000004] radeon 0000:01:00.0: fence driver on ring 0 use gpu addr
0x0000000080000c00 and cpu addr 0xffff8807fd9f8c00
[ +0,000003] radeon 0000:01:00.0: fence driver on ring 1 use gpu addr
0x0000000080000c04 and cpu addr 0xffff8807fd9f8c04
[ +0,000002] radeon 0000:01:00.0: fence driver on ring 2 use gpu addr
0x0000000080000c08 and cpu addr 0xffff8807fd9f8c08
[ +0,000002] radeon 0000:01:00.0: fence driver on ring 3 use gpu addr
0x0000000080000c0c and cpu addr 0xffff8807fd9f8c0c
[ +0,000002] radeon 0000:01:00.0: fence driver on ring 4 use gpu addr
0x0000000080000c10 and cpu addr 0xffff8807fd9f8c10
[ +0,000390] radeon 0000:01:00.0: fence driver on ring 5 use gpu addr
0x0000000000075a18 and cpu addr 0xffffc90008b35a18
[ +0,144839] [drm] ring test on 0 succeeded in 2 usecs
[ +0,000004] [drm] ring test on 1 succeeded in 1 usecs
[ +0,000003] [drm] ring test on 2 succeeded in 1 usecs
[ +0,000059] [drm] ring test on 3 succeeded in 2 usecs
[ +0,000007] [drm] ring test on 4 succeeded in 1 usecs
[ +0,187260] [drm] ring test on 5 succeeded in 2 usecs
[ +0,000004] [drm] UVD initialized successfully.
[ +0,001745] [drm] Enabling audio 0 support
[ +0,000001] [drm] Enabling audio 1 support
[ +0,000000] [drm] Enabling audio 2 support
[ +0,000001] [drm] Enabling audio 3 support
[ +0,000001] [drm] Enabling audio 4 support
[ +0,000001] [drm] Enabling audio 5 support
[ +0,000035] [drm] ib test on ring 0 succeeded in 0 usecs
[ +0,000029] [drm] ib test on ring 1 succeeded in 0 usecs
[ +0,000029] [drm] ib test on ring 2 succeeded in 0 usecs
[ +0,000019] [drm] ib test on ring 3 succeeded in 0 usecs
[ +0,000018] [drm] ib test on ring 4 succeeded in 0 usecs
[ +0,158619] [drm] ib test on ring 5 succeeded
But I wouldn't really think that it's a problem, just a bit of bloat in the log.
Anyway, I have stared a bit longer at the call graph, and what I have attached
seems suspicious to me. There are a whole lot of calls throughout the radeon
driver, but the KCachegrind GUI is quite limited and the graph is very
convoluted at this point, so it's not clear to me whether this happens during
normal operation. I'm still searching for a tool that could create a call
graph and annotate it over time...
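In the meantime, one thing that might give at least a rough picture is
sampling the running X server with perf and looking at the recorded call
graph (assuming perf is built for this kernel; "Xorg" as the process name is
a guess for this setup):

perf record -g -p $(pidof Xorg) sleep 30
perf report

That only gives an aggregate view rather than annotation over time, though.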
At least one place this is coming from is dix/dixutils.c line 719
("(*(cbr->proc)) (pcbl, cbr->data, call_data);"), but to my eyes that callback
dispatch could call anything, and valgrind caught a whole lot of callers (see
the sketch after this list):
WriteToClient <cycle 19> (Xorg: io.c, ...)
FlushAllOutput (Xorg: io.c, ...)
XaceHook <cycle 19> (Xorg: xace.c, ...)
XaceHookPropertyAccess (Xorg: xace.c, ...)
XaceHookDispatch (Xorg: xace.c, ...)
CloseDownConnection (Xorg: connection.c, ...)
DeleteClientFromAnySelections (Xorg: selection.c, ...)
CloseDownClient (Xorg: dispatch.c, ...)
ProcSetSelectionOwner (Xorg: selection.c, ...)
SendConnSetup (Xorg: dispatch.c, ...)
NextAvailableClient (Xorg: dispatch.c, ...)
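To illustrate what that dixutils.c line does, here is a minimal standalone
sketch of the callback-list pattern as I read it; the types and names below
only loosely mirror the real dix ones and are simplified for illustration:

#include <stdio.h>

typedef struct _CallbackList *CallbackListPtr;
typedef void (*CallbackProcPtr)(CallbackListPtr *pcbl, void *data,
                                void *call_data);

typedef struct _CallbackRec {
    struct _CallbackRec *next;
    CallbackProcPtr proc;   /* function registered by some subsystem */
    void *data;             /* closure data supplied at registration time */
} CallbackRec, *CallbackPtr;

struct _CallbackList {
    CallbackPtr list;       /* singly linked list of registered callbacks */
};

/* Walk the list and invoke every registered callback -- the loop body is
 * the "(*(cbr->proc)) (pcbl, cbr->data, call_data);" line quoted above. */
static void CallCallbacks(CallbackListPtr *pcbl, void *call_data)
{
    CallbackPtr cbr;

    if (!pcbl || !*pcbl)
        return;
    for (cbr = (*pcbl)->list; cbr != NULL; cbr = cbr->next)
        (*(cbr->proc)) (pcbl, cbr->data, call_data);
}

/* Example callback: just report that it fired. */
static void MyCallback(CallbackListPtr *pcbl, void *data, void *call_data)
{
    (void) pcbl;
    (void) call_data;
    printf("callback fired, data=%s\n", (char *) data);
}

int main(void)
{
    static char msg[] = "hello";
    CallbackRec rec = { NULL, MyCallback, msg };
    struct _CallbackList cbl = { &rec };
    CallbackListPtr pcbl = &cbl;

    CallCallbacks(&pcbl, NULL);   /* prints: callback fired, data=hello */
    return 0;
}

Since the loop dispatches blindly through function pointers, the caller list
above doesn't really narrow down which registered callback actually ends up
in the radeon code.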
But maybe I'm just going in a totally wrong direction...
--
You are receiving this mail because:
You are watching the assignee of the bug.