<html>
<head>
<base href="https://bugs.freedesktop.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - [CI][SHARDS] igt@kms_flip@2x-flip-vs-panning - dmesg-warn - WARNING: possible circular locking dependency detected"
href="https://bugs.freedesktop.org/show_bug.cgi?id=111892">111892</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>[CI][SHARDS] igt@kms_flip@2x-flip-vs-panning - dmesg-warn - WARNING: possible circular locking dependency detected
</td>
</tr>
<tr>
<th>Product</th>
<td>DRI
</td>
</tr>
<tr>
<th>Version</th>
<td>DRI git
</td>
</tr>
<tr>
<th>Hardware</th>
<td>Other
</td>
</tr>
<tr>
<th>OS</th>
<td>All
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>not set
</td>
</tr>
<tr>
<th>Priority</th>
<td>not set
</td>
</tr>
<tr>
<th>Component</th>
<td>DRM/Intel
</td>
</tr>
<tr>
<th>Assignee</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>lakshminarayana.vudum@intel.com
</td>
</tr>
<tr>
<th>QA Contact</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr>
<tr>
<th>CC</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr></table>
<div>
<pre><a href="https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6989/shard-hsw5/igt@kms_flip@2x-flip-vs-panning.html">https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6989/shard-hsw5/igt@kms_flip@2x-flip-vs-panning.html</a>
<4> [25.732321] ======================================================
<4> [25.732321] WARNING: possible circular locking dependency detected
<4> [25.732323] 5.4.0-rc1-CI-CI_DRM_6989+ #1 Tainted: G U
<4> [25.732323] ------------------------------------------------------
<4> [25.732324] kms_flip/1036 is trying to acquire lock:
<4> [25.732325] ffff8883eb92cce8 (&mapping->i_mmap_rwsem){++++}, at: unmap_mapping_pages+0x48/0x130
<4> [25.732327] hardirqs last disabled at (1064412): [<ffffffff8123f4b9>] __slab_alloc.isra.84.constprop.89+0x19/0x70
<4> [25.732332]
but task is already holding lock:
<4> [25.732333] ffff8883f99093a0 (&vm->mutex){+.+.}, at: i915_vma_unbind+0xe6/0x4a0 [i915]
<4> [25.732366] softirqs last enabled at (1063898): [<ffffffff81c00385>] __do_softirq+0x385/0x47f
<4> [25.732367] softirqs last disabled at (1063889): [<ffffffff810b7f4a>] irq_exit+0xba/0xc0
<4> [25.732369]
which lock already depends on the new lock.
<4> [25.732370]
the existing dependency chain (in reverse order) is:
<4> [25.732370]
-> #2 (&vm->mutex){+.+.}:
<4> [25.732373] __mutex_lock+0x9a/0x9d0
<4> [25.732402] i915_vma_remove+0x53/0x250 [i915]
<4> [25.732431] i915_vma_unbind+0x19c/0x4a0 [i915]
<4> [25.732485] i915_gem_object_unbind+0x153/0x1c0 [i915]
<4> [25.732513] userptr_mn_invalidate_range_start+0x9f/0x200 [i915]
<4> [25.732515] __mmu_notifier_invalidate_range_start+0xa3/0x180
<4> [25.732516] unmap_vmas+0x143/0x150
<4> [25.732517] unmap_region+0xa3/0x100
<4> [25.732518] __do_munmap+0x25d/0x490
<4> [25.732519] __vm_munmap+0x6e/0xc0
<4> [25.732520] __x64_sys_munmap+0x12/0x20
<4> [25.732521] do_syscall_64+0x4f/0x210
<4> [25.732523] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [25.732524]
-> #1 (mmu_notifier_invalidate_range_start){+.+.}:
<4> [25.732526] page_mkclean_one+0xda/0x210
<4> [25.732527] rmap_walk_file+0xff/0x260
<4> [25.732528] page_mkclean+0x9f/0xb0
<4> [25.732530] clear_page_dirty_for_io+0xa2/0x300
<4> [25.732532] mpage_submit_page+0x1a/0x70
<4> [25.732533] mpage_process_page_bufs+0xe7/0x110
<4> [25.732534] mpage_prepare_extent_to_map+0x1d2/0x2b0
<4> [25.732536] ext4_writepages+0x592/0x1230
<4> [25.732536] do_writepages+0x46/0xe0
<4> [25.732538] __filemap_fdatawrite_range+0xc6/0x100
<4> [25.732548] file_write_and_wait_range+0x3c/0x90
<4> [25.732549] ext4_sync_file+0x154/0x500
<4> [25.732551] do_fsync+0x33/0x60
<4> [25.732553] __x64_sys_fsync+0xb/0x10
<4> [25.732553] do_syscall_64+0x4f/0x210
<4> [25.732555] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [25.732555]
-> #0 (&mapping->i_mmap_rwsem){++++}:
<4> [25.732558] __lock_acquire+0x1328/0x15d0
<4> [25.732559] lock_acquire+0xa7/0x1c0
<4> [25.732560] down_write+0x33/0x70
<4> [25.732561] unmap_mapping_pages+0x48/0x130
<4> [25.732595] i915_vma_revoke_mmap+0x81/0x1b0 [i915]
<4> [25.732637] i915_vma_unbind+0xee/0x4a0 [i915]
<4> [25.732674] i915_gem_object_ggtt_pin+0xee/0x430 [i915]
<4> [25.732699] i915_gem_object_pin_to_display_plane+0xd1/0x130 [i915]
<4> [25.732728] intel_pin_and_fence_fb_obj+0xb3/0x230 [i915]
<4> [25.732757] intel_plane_pin_fb+0x3c/0xd0 [i915]
<4> [25.732786] intel_prepare_plane_fb+0x144/0x5d0 [i915]
<4> [25.732788] drm_atomic_helper_prepare_planes+0x85/0x110
<4> [25.732816] intel_atomic_commit+0xc6/0x2f0 [i915]
<4> [25.732818] drm_atomic_helper_set_config+0x61/0x90
<4> [25.732818] drm_mode_setcrtc+0x18e/0x720
<4> [25.732820] drm_ioctl_kernel+0xa7/0xf0
<4> [25.732821] drm_ioctl+0x2e1/0x390
<4> [25.732823] do_vfs_ioctl+0xa0/0x6f0
<4> [25.732824] ksys_ioctl+0x35/0x60
<4> [25.732825] __x64_sys_ioctl+0x11/0x20
<4> [25.732826] do_syscall_64+0x4f/0x210
<4> [25.732827] entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [25.732828]
other info that might help us debug this:
<4> [25.732828] Chain exists of:
&mapping->i_mmap_rwsem --> mmu_notifier_invalidate_range_start --> &vm->mutex
<4> [25.732830] Possible unsafe locking scenario:
<4> [25.732830]        CPU0                    CPU1
<4> [25.732831]        ----                    ----
<4> [25.732831]   lock(&vm->mutex);
<4> [25.732832]                                lock(mmu_notifier_invalidate_range_start);
<4> [25.732833]                                lock(&vm->mutex);
<4> [25.732833]   lock(&mapping->i_mmap_rwsem);
<4> [25.732834]
*** DEADLOCK ***</pre>
</div>
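<div>
<p>For reference, the cycle lockdep reports is the existing chain &mapping->i_mmap_rwsem --> mmu_notifier_invalidate_range_start --> &vm->mutex, closed by the current task holding &vm->mutex while acquiring &mapping->i_mmap_rwsem. The sketch below is only a minimal userspace illustration of that circular ordering, using three pthread mutexes as stand-ins for the kernel locks; the names (vm_mutex, mmu_notifier, i_mmap_rwsem, flip_path, writeback_path) are invented for the example and none of it is i915 or kernel code.</p>
<pre>
/*
 * Illustration only: two threads acquire three mutexes in a cycle that
 * mirrors the ordering lockdep complains about above. A given run may or
 * may not actually deadlock, depending on scheduling; the ordering is
 * unsafe either way.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t vm_mutex     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mmu_notifier = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_mmap_rwsem = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the #0 path: vm->mutex held, then i_mmap_rwsem wanted
 * (i915_vma_unbind -> i915_vma_revoke_mmap -> unmap_mapping_pages). */
static void *flip_path(void *arg)
{
	pthread_mutex_lock(&vm_mutex);
	usleep(1000);                       /* widen the race window */
	pthread_mutex_lock(&i_mmap_rwsem);
	pthread_mutex_unlock(&i_mmap_rwsem);
	pthread_mutex_unlock(&vm_mutex);
	return NULL;
}

/* Stand-in for the #1/#2 paths: i_mmap_rwsem -> notifier -> vm->mutex
 * (writeback via rmap_walk_file into userptr_mn_invalidate_range_start). */
static void *writeback_path(void *arg)
{
	pthread_mutex_lock(&i_mmap_rwsem);
	pthread_mutex_lock(&mmu_notifier);
	usleep(1000);
	pthread_mutex_lock(&vm_mutex);      /* closes the cycle */
	pthread_mutex_unlock(&vm_mutex);
	pthread_mutex_unlock(&mmu_notifier);
	pthread_mutex_unlock(&i_mmap_rwsem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, flip_path, NULL);
	pthread_create(&b, NULL, writeback_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock this run (the ordering is still unsafe)");
	return 0;
}
</pre>
<p>Built with e.g. gcc -pthread, flip_path reproduces the new vm->mutex --> i_mmap_rwsem edge while writeback_path reproduces the pre-existing i_mmap_rwsem --> mmu_notifier --> vm->mutex chain, which is exactly the cycle lockdep stitched together above.</p>
</div>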
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are the QA Contact for the bug.</li>
<li>You are the assignee for the bug.</li>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>