PREEMPT_RT vs i915
Mike Galbraith
efault at gmx.de
Fri Jul 11 03:33:55 UTC 2025
On Fri, 2025-07-11 at 04:36 +0200, Mike Galbraith wrote:
> ..forwarding any performance goop emitted, but lockdep runs out of lock
> counting fingers and turns itself off in short order with RT builds.
Hohum, seems block-land called dibs on lockdep anyway.
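(Aside: the "runs out of lock counting fingers" symptom above — lockdep exhausting its static tables and disabling itself — can usually be worked around by enlarging the tables at build time. A hedged sketch of the relevant lib/Kconfig.debug knobs; the exact option names and defaults vary by kernel version, and the values here are illustrative only:)

```
# Illustrative .config fragment, NOT taken from this report.
# Each option grows one of lockdep's fixed-size tables (costs memory).
CONFIG_LOCKDEP_BITS=16             # MAX_LOCKDEP_ENTRIES = 2^16
CONFIG_LOCKDEP_CHAINS_BITS=17      # MAX_LOCKDEP_CHAINS  = 2^17
CONFIG_LOCKDEP_STACK_TRACE_BITS=20 # stack-trace buffer size
```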
[ 6.761473] ======================================================
[ 6.761474] WARNING: possible circular locking dependency detected
[ 6.761475] 6.16.0.bc9ff192a-master-rt #46 Tainted: G I
[ 6.761476] ------------------------------------------------------
[ 6.761476] kworker/u16:0/12 is trying to acquire lock:
[ 6.761478] ffffffff825aabd8 (pcpu_alloc_mutex){+.+.}-{4:4}, at: pcpu_alloc_noprof+0x508/0x790
[ 6.761486]
but task is already holding lock:
[ 6.761486] ffff8881238f05c0 (&q->elevator_lock){+.+.}-{4:4}, at: elevator_change+0x5f/0x140
[ 6.761491]
which lock already depends on the new lock.
[ 6.761492]
the existing dependency chain (in reverse order) is:
[ 6.761492]
-> #3 (&q->elevator_lock){+.+.}-{4:4}:
[ 6.761494] __lock_acquire+0x540/0xbb0
[ 6.761496] lock_acquire.part.0+0x94/0x1e0
[ 6.761497] mutex_lock_nested+0x4c/0xa0
[ 6.761500] elevator_change+0x5f/0x140
[ 6.761502] elv_iosched_store+0xe6/0x110
[ 6.761505] kernfs_fop_write_iter+0x14d/0x220
[ 6.761508] vfs_write+0x223/0x580
[ 6.761510] ksys_write+0x5f/0xe0
[ 6.761512] do_syscall_64+0x76/0x3d0
[ 6.761514] entry_SYSCALL_64_after_hwframe+0x4b/0x53
[ 6.761516]
-> #2 (&q->q_usage_counter(io)){++++}-{0:0}:
[ 6.761517] __lock_acquire+0x540/0xbb0
[ 6.761518] lock_acquire.part.0+0x94/0x1e0
[ 6.761519] blk_alloc_queue+0x34c/0x390
[ 6.761521] blk_mq_alloc_queue+0x52/0xb0
[ 6.761524] __blk_mq_alloc_disk+0x18/0x60
[ 6.761526] loop_add+0x1d5/0x3d0 [loop]
[ 6.761530] stop_this_handle+0xc3/0x140 [jbd2]
[ 6.761539] do_one_initcall+0x4a/0x270
[ 6.761542] do_init_module+0x60/0x220
[ 6.761544] init_module_from_file+0x75/0xa0
[ 6.761546] idempotent_init_module+0xf3/0x2d0
[ 6.761547] __x64_sys_finit_module+0x6d/0xd0
[ 6.761549] do_syscall_64+0x76/0x3d0
[ 6.761550] entry_SYSCALL_64_after_hwframe+0x4b/0x53
[ 6.761551]
-> #1 (fs_reclaim){+.+.}-{0:0}:
[ 6.761553] __lock_acquire+0x540/0xbb0
[ 6.761554] lock_acquire.part.0+0x94/0x1e0
[ 6.761555] fs_reclaim_acquire+0x95/0xd0
[ 6.761558] __kmalloc_noprof+0x87/0x300
[ 6.761561] pcpu_create_chunk+0x1a/0x1b0
[ 6.761563] pcpu_alloc_noprof+0x742/0x790
[ 6.761564] bts_init+0x61/0x100
[ 6.761566] do_one_initcall+0x4a/0x270
[ 6.761568] kernel_init_freeable+0x235/0x280
[ 6.761571] kernel_init+0x1a/0x120
[ 6.761572] ret_from_fork+0x213/0x270
[ 6.761574] ret_from_fork_asm+0x11/0x20
[ 6.761576]
-> #0 (pcpu_alloc_mutex){+.+.}-{4:4}:
[ 6.761578] check_prev_add+0xe8/0xca0
[ 6.761581] validate_chain+0x468/0x500
[ 6.761582] __lock_acquire+0x540/0xbb0
[ 6.761583] lock_acquire.part.0+0x94/0x1e0
[ 6.761584] _mutex_lock_killable+0x55/0xc0
[ 6.761586] pcpu_alloc_noprof+0x508/0x790
[ 6.761587] sbitmap_init_node+0xee/0x200
[ 6.761590] sbitmap_queue_init_node+0x28/0x140
[ 6.761592] blk_mq_init_tags+0xa6/0x120
[ 6.761594] blk_mq_alloc_map_and_rqs+0x40/0x110
[ 6.761596] blk_mq_init_sched+0xea/0x1d0
[ 6.761599] elevator_switch+0xb5/0x300
[ 6.761601] elevator_change+0xe2/0x140
[ 6.761603] elevator_set_default+0xb0/0xd0
[ 6.761606] blk_register_queue+0xda/0x1c0
[ 6.761608] __add_disk+0x222/0x380
[ 6.761609] add_disk_fwnode+0x79/0x160
[ 6.761610] sd_probe+0x2f8/0x490 [sd_mod]
[ 6.761613] really_probe+0xd5/0x330
[ 6.761615] __driver_probe_device+0x78/0x110
[ 6.761616] driver_probe_device+0x1f/0xa0
[ 6.761618] __device_attach_driver+0x7a/0x100
[ 6.761619] bus_for_each_drv+0x75/0xb0
[ 6.761622] __device_attach_async_helper+0x83/0xc0
[ 6.761624] async_run_entry_fn+0x2c/0x110
[ 6.761626] process_one_work+0x1e6/0x550
[ 6.761629] worker_thread+0x1ce/0x3c0
[ 6.761632] kthread+0x10c/0x200
[ 6.761634] ret_from_fork+0x213/0x270
[ 6.761636] ret_from_fork_asm+0x11/0x20
[ 6.761638]
other info that might help us debug this:
[ 6.761638] Chain exists of:
pcpu_alloc_mutex --> &q->q_usage_counter(io) --> &q->elevator_lock
[ 6.761640] Possible unsafe locking scenario:
[ 6.761641]        CPU0                    CPU1
[ 6.761641]        ----                    ----
[ 6.761641]   lock(&q->elevator_lock);
[ 6.761642]                                lock(&q->q_usage_counter(io));
[ 6.761643]                                lock(&q->elevator_lock);
[ 6.761644]   lock(pcpu_alloc_mutex);
[ 6.761645]
*** DEADLOCK ***
[ 6.761645] 7 locks held by kworker/u16:0/12:
[ 6.761646] #0: ffff888100c64938 ((wq_completion)async){+.+.}-{0:0}, at: process_one_work+0x41a/0x550
[ 6.761651] #1: ffff888100afbe48 ((work_completion)(&entry->work)){+.+.}-{0:0}, at: process_one_work+0x1aa/0x550
[ 6.761655] #2: ffff8881019e53a0 (&dev->mutex){....}-{4:4}, at: __device_attach_async_helper+0x30/0xc0
[ 6.761658] #3: ffff888122acd3d0 (&set->update_nr_hwq_lock){.+.+}-{4:4}, at: add_disk_fwnode+0x68/0x160
[ 6.761661] #4: ffff8881238f0640 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_register_queue+0x92/0x1c0
[ 6.761665] #5: ffff8881238f0120 (&q->q_usage_counter(queue)#65){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave+0x12/0x20
[ 6.761669] #6: ffff8881238f05c0 (&q->elevator_lock){+.+.}-{4:4}, at: elevator_change+0x5f/0x140
[ 6.761673]
stack backtrace:
[ 6.761675] CPU: 1 UID: 0 PID: 12 Comm: kworker/u16:0 Tainted: G I 6.16.0.bc9ff192a-master-rt #46 PREEMPT_{RT,(lazy)}
[ 6.761678] Tainted: [I]=FIRMWARE_WORKAROUND
[ 6.761678] Hardware name: HP HP Spectre x360 Convertible/804F, BIOS F.47 11/22/2017
[ 6.761680] Workqueue: async async_run_entry_fn
[ 6.761683] Call Trace:
[ 6.761684] <TASK>
[ 6.761686] dump_stack_lvl+0x5b/0x80
[ 6.761690] print_circular_bug.cold+0x38/0x45
[ 6.761694] check_noncircular+0x109/0x120
[ 6.761699] check_prev_add+0xe8/0xca0
[ 6.761703] validate_chain+0x468/0x500
[ 6.761706] __lock_acquire+0x540/0xbb0
[ 6.761707] ? find_held_lock+0x2b/0x80
[ 6.761711] lock_acquire.part.0+0x94/0x1e0
[ 6.761712] ? pcpu_alloc_noprof+0x508/0x790
[ 6.761715] ? rcu_is_watching+0x11/0x40
[ 6.761717] ? lock_acquire+0xee/0x130
[ 6.761718] ? pcpu_alloc_noprof+0x508/0x790
[ 6.761720] ? pcpu_alloc_noprof+0x508/0x790
[ 6.761721] _mutex_lock_killable+0x55/0xc0
[ 6.761724] ? pcpu_alloc_noprof+0x508/0x790
[ 6.761726] pcpu_alloc_noprof+0x508/0x790
[ 6.761728] ? lock_is_held_type+0xca/0x120
[ 6.761732] sbitmap_init_node+0xee/0x200
[ 6.761736] sbitmap_queue_init_node+0x28/0x140
[ 6.761739] blk_mq_init_tags+0xa6/0x120
[ 6.761743] blk_mq_alloc_map_and_rqs+0x40/0x110
[ 6.761746] blk_mq_init_sched+0xea/0x1d0
[ 6.761749] elevator_switch+0xb5/0x300
[ 6.761753] elevator_change+0xe2/0x140
[ 6.761756] elevator_set_default+0xb0/0xd0
[ 6.761759] blk_register_queue+0xda/0x1c0
[ 6.761763] __add_disk+0x222/0x380
[ 6.761765] add_disk_fwnode+0x79/0x160
[ 6.761768] sd_probe+0x2f8/0x490 [sd_mod]
[ 6.761771] ? driver_probe_device+0xa0/0xa0
[ 6.761773] really_probe+0xd5/0x330
[ 6.761774] ? pm_runtime_barrier+0x54/0x90
[ 6.761777] __driver_probe_device+0x78/0x110
[ 6.761779] driver_probe_device+0x1f/0xa0
[ 6.761781] __device_attach_driver+0x7a/0x100
[ 6.761783] bus_for_each_drv+0x75/0xb0
[ 6.761787] __device_attach_async_helper+0x83/0xc0
[ 6.761789] async_run_entry_fn+0x2c/0x110
[ 6.761792] process_one_work+0x1e6/0x550
[ 6.761797] worker_thread+0x1ce/0x3c0
[ 6.761800] ? bh_worker+0x250/0x250
[ 6.761803] kthread+0x10c/0x200
[ 6.761811] ? kthreads_online_cpu+0xe0/0xe0
[ 6.761814] ret_from_fork+0x213/0x270
[ 6.761817] ? kthreads_online_cpu+0xe0/0xe0
[ 6.761819] ret_from_fork_asm+0x11/0x20
[ 6.761825] </TASK>
[ 6.799951] sda: sda1 sda2 sda3 sda4 sda5 sda6
[ 6.801399] sd 1:0:0:0: [sda] Attached SCSI disk
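(For readers unfamiliar with the report format: lockdep records every "acquired B while holding A" as a directed edge A -> B and complains when a new acquisition would close a cycle. The splat above closes the chain pcpu_alloc_mutex --> &q->q_usage_counter(io) --> &q->elevator_lock by taking pcpu_alloc_mutex while holding elevator_lock. A toy sketch of that cycle check, using the lock names from the report — this is an illustration only, not how the kernel implements it:)

```python
# Toy model of lockdep's circular-dependency check: edges record observed
# lock-acquisition order ("A held while acquiring B" => edge A -> B);
# a cycle through the new edge means a possible deadlock.

def find_cycle(edges, start):
    """Return True if `start` is reachable from itself via `edges`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, ()):
            if nxt == start:
                return True
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

# Existing dependency chain from the splat ("Chain exists of:" line):
edges = {
    "pcpu_alloc_mutex": ["q_usage_counter(io)"],
    "q_usage_counter(io)": ["elevator_lock"],
}
# The new acquisition that closes the loop (-> #0 in the report):
edges["elevator_lock"] = ["pcpu_alloc_mutex"]

print(find_cycle(edges, "pcpu_alloc_mutex"))  # -> True: circular dependency
```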
More information about the Intel-gfx mailing list