Trace on HD6320 after resume on v3.7-rc6
Julian Wollrath
jwollrath at web.de
Wed Nov 21 05:24:43 PST 2012
Hello,
while debugging resume problems with the snd-hda-intel driver, I found
the following in my dmesg output:
[ 245.003310] irq 16: nobody cared (try booting with the "irqpoll" option)
[ 245.003318] Pid: 691, comm: kworker/u:3 Tainted: G C 3.7.0-rc6-wl+ #3
[ 245.003321] Call Trace:
[ 245.003342] <IRQ> [<ffffffff810a3c86>] ? __report_bad_irq+0x36/0xd0
[ 245.003350] [<ffffffff810a3f8d>] ? note_interrupt+0x1cd/0x220
[ 245.003359] [<ffffffff810a19ae>] ? handle_irq_event_percpu+0x8e/0x1c0
[ 245.003367] [<ffffffff81040415>] ? __do_softirq+0x105/0x1e0
[ 245.003374] [<ffffffff810a1b16>] ? handle_irq_event+0x36/0x60
[ 245.003383] [<ffffffff810a49ac>] ? handle_fasteoi_irq+0x4c/0xe0
[ 245.003390] [<ffffffff810045c5>] ? handle_irq+0x15/0x20
[ 245.003396] [<ffffffff810042b2>] ? do_IRQ+0x52/0xd0
[ 245.003404] [<ffffffff81405b2a>] ? common_interrupt+0x6a/0x6a
[ 245.003430] <EOI> [<ffffffffa0349380>] ? ttm_bo_move_memcpy+0x3a0/0x4e0 [ttm]
[ 245.003443] [<ffffffffa03492e4>] ? ttm_bo_move_memcpy+0x304/0x4e0 [ttm]
[ 245.003457] [<ffffffffa0346c24>] ? ttm_bo_handle_move_mem+0x234/0x3e0 [ttm]
[ 245.003469] [<ffffffffa0347910>] ? ttm_bo_mem_space+0x170/0x350 [ttm]
[ 245.003503] [<ffffffffa047f200>] ? atom_execute_table_locked+0x70/0x2d0 [radeon]
[ 245.003516] [<ffffffffa0347c3f>] ? ttm_bo_move_buffer+0x14f/0x160 [ttm]
[ 245.003528] [<ffffffffa0347ce5>] ? ttm_bo_validate+0x95/0x120 [ttm]
[ 245.003559] [<ffffffffa0484bbc>] ? radeon_bo_pin_restricted+0x10c/0x1b0 [radeon]
[ 245.003590] [<ffffffffa0485a43>] ? radeon_gart_table_vram_pin+0x33/0xb0 [radeon]
[ 245.003619] [<ffffffffa0481835>] ? atom_execute_table+0x55/0x70 [radeon]
[ 245.003649] [<ffffffffa04c8c86>] ? evergreen_startup+0x386/0x1610 [radeon]
[ 245.003677] [<ffffffffa0481c05>] ? atom_asic_init+0x145/0x1a0 [radeon]
[ 245.003706] [<ffffffffa04ca1d2>] ? evergreen_resume+0x32/0x80 [radeon]
[ 245.003730] [<ffffffffa046e118>] ? radeon_resume_kms+0x68/0x190 [radeon]
[ 245.003737] [<ffffffff81254190>] ? pci_pm_restore+0xe0/0xe0
[ 245.003745] [<ffffffff812e4314>] ? dpm_run_callback.isra.5+0x14/0x40
[ 245.003752] [<ffffffff812e4844>] ? device_resume+0xb4/0x140
[ 245.003759] [<ffffffff812e48e4>] ? async_resume+0x14/0x40
[ 245.003767] [<ffffffff81060666>] ? async_run_entry_fn+0x86/0x1b0
[ 245.003775] [<ffffffff81052f06>] ? process_one_work+0x126/0x490
[ 245.003782] [<ffffffff810605e0>] ? async_unregister_domain+0x70/0x70
[ 245.003788] [<ffffffff81054c3d>] ? worker_thread+0x15d/0x450
[ 245.003795] [<ffffffff81054ae0>] ? flush_delayed_work+0x40/0x40
[ 245.003801] [<ffffffff810599a3>] ? kthread+0xb3/0xc0
[ 245.003807] [<ffffffff810598f0>] ? kthread_create_on_node+0x110/0x110
[ 245.003814] [<ffffffff814061ac>] ? ret_from_fork+0x7c/0xb0
[ 245.003820] [<ffffffff810598f0>] ? kthread_create_on_node+0x110/0x110
[ 245.003823] handlers:
[ 245.003838] [<ffffffffa0267050>] azx_interrupt [snd_hda_intel]
[ 245.003841] Disabling IRQ #16
I was not able to reproduce it, but maybe the trace is helpful
nevertheless. It occurred during resume after a suspend to RAM.
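
For context, the "nobody cared" message comes from the kernel's
spurious-interrupt detection: if the vast majority of the last
100,000 interrupts on a line go unhandled (every registered handler
returns IRQ_NONE), note_interrupt() reports the bad IRQ and disables
the line, which is why IRQ 16 gets shut off above even though
azx_interrupt is registered on it. Below is a minimal sketch of the
contract a handler on a shared line has to honor; struct my_dev, the
register offset, and the status bit are hypothetical:

    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define MY_IRQ_STATUS  0x20  /* hypothetical status register */
    #define MY_IRQ_PENDING 0x01  /* hypothetical "pending" bit */

    struct my_dev {
            void __iomem *mmio;
    };

    static irqreturn_t my_isr(int irq, void *dev_id)
    {
            struct my_dev *dev = dev_id;
            u32 status = readl(dev->mmio + MY_IRQ_STATUS);

            /* On a shared line we must return IRQ_NONE when our
             * device did not raise the interrupt; if no handler
             * ever claims the interrupt, the core prints "nobody
             * cared" and disables the line, as seen with IRQ 16. */
            if (!(status & MY_IRQ_PENDING))
                    return IRQ_NONE;

            /* Ack the interrupt in our device, then claim it. */
            writel(status, dev->mmio + MY_IRQ_STATUS);
            return IRQ_HANDLED;
    }

    /* Registered with IRQF_SHARED so the line can be shared with
     * other devices (here, whatever else sits on IRQ 16):
     *
     *     request_irq(irq, my_isr, IRQF_SHARED, "my-dev", dev);
     */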
If you have further questions, please feel free to ask.
With best regards,
Julian Wollrath