[2.6.39-rc7] i915: kworker busily spinning...

Daniel J Blueman daniel.blueman at gmail.com
Tue May 17 04:27:29 PDT 2011


With 2.6.39-rc7 on my Sandy Bridge laptop GPU (8086:0126 rev 9), I
sometimes find one of the kworker threads busily running at 15-20%
system time for several minutes, causing terrible interactive latency.
I've seen it occur when plugging in e.g. an HDMI display, and also
when no display has been plugged in (i.e. only the internal LVDS
connection is active).

Across multiple kernel task captures, I see the kernel thread
consistently reading EDID data from one of the connectors [1]; my
guess is that it's either having a hard time reading from a
disconnected connector and retrying, or it's incorrectly detecting a
change when there is none.

I'll enable DRM debugging to see which connectors it believes it
needs to read from. Is there anything else that would be handy to
capture, or any other thoughts?
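For reference, here is how I plan to enable the debugging output. This
assumes the usual drm core "debug" module parameter, where (in this
era of the kernel, if I'm reading drmP.h correctly) bit 0x04 selects
the KMS messages that cover connector probing:

```shell
# Enable DRM KMS debug messages at runtime (bit 0x04 = DRM_UT_KMS);
# other bits select core (0x01) and driver (0x02) messages.
echo 0x04 | sudo tee /sys/module/drm/parameters/debug

# Or set it from boot via the kernel command line:
#   drm.debug=0x04
# then watch the connector probe/EDID activity:
dmesg | tail
```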

Also, the 100ms connector-change polling seems like overkill,
particularly when power consumption is important; wouldn't 1000-2000ms
be sufficient?

Thanks,
  Daniel

--- [1]

kworker/2:2     R  running task     5048    86      2 0x00000000
 0000000000000002 ffff88021e804040 ffff88021e85f9b0 ffff88021e804040
 ffff88021e85e000 0000000000004000 ffff8802210a4040 ffff88021e804040
 0000000000000046 ffffffff81c18b20 ffff88022106c000 ffffffff8270b740
Call Trace:
 [<ffffffff8109a460>] ? mark_held_locks+0x70/0xa0
 [<ffffffff81059261>] ? get_parent_ip+0x11/0x50
 [<ffffffff8105933d>] ? sub_preempt_count+0x9d/0xd0
 [<ffffffff81705a35>] schedule_timeout+0x175/0x250
 [<ffffffff8106ec10>] ? run_timer_softirq+0x2a0/0x2a0
 [<ffffffff81705b29>] schedule_timeout_uninterruptible+0x19/0x20
 [<ffffffff8106f878>] msleep+0x18/0x20
 [<ffffffffa017c620>] gmbus_xfer+0x400/0x620 [i915]
 [<ffffffff8150c892>] i2c_transfer+0xa2/0xf0
 [<ffffffffa002bc96>] drm_do_probe_ddc_edid+0x66/0xa0 [drm]
 [<ffffffffa002c0f9>] drm_get_edid+0x29/0x60 [drm]
 [<ffffffffa0176f86>] intel_hdmi_detect+0x56/0xe0 [i915]
 [<ffffffffa00d1177>] output_poll_execute+0xd7/0x1a0 [drm_kms_helper]
 [<ffffffff81078e14>] process_one_work+0x1a4/0x450
 [<ffffffff81078db6>] ? process_one_work+0x146/0x450
 [<ffffffffa00d10a0>] ? drm_helper_disable_unused_functions+0x150/0x150 [drm_kms_helper]
 [<ffffffff810790ec>] process_scheduled_works+0x2c/0x40
 [<ffffffff8107c384>] worker_thread+0x284/0x350
 [<ffffffff8107c100>] ? manage_workers.clone.23+0x120/0x120
 [<ffffffff81080ea6>] kthread+0xb6/0xc0
 [<ffffffff8109a5cd>] ? trace_hardirqs_on_caller+0x13d/0x180
 [<ffffffff8170a494>] kernel_thread_helper+0x4/0x10
 [<ffffffff8104c64f>] ? finish_task_switch+0x6f/0x100
 [<ffffffff81708bc4>] ? retint_restore_args+0xe/0xe
 [<ffffffff81080df0>] ? __init_kthread_worker+0x70/0x70
 [<ffffffff8170a490>] ? gs_change+0xb/0xb
-- 
Daniel J Blueman