[Bug 111911] New: [CI][SHARDS] igt@gem_mmap_gtt@hang - dmesg-warn - WARNING: possible circular locking dependency detected

bugzilla-daemon@freedesktop.org
Mon Oct 7 07:48:41 UTC 2019


https://bugs.freedesktop.org/show_bug.cgi?id=111911

            Bug ID: 111911
           Summary: [CI][SHARDS] igt@gem_mmap_gtt@hang - dmesg-warn -
                    WARNING: possible circular locking dependency detected
           Product: DRI
           Version: DRI git
          Hardware: Other
                OS: All
            Status: NEW
          Severity: not set
          Priority: not set
         Component: DRM/Intel
          Assignee: intel-gfx-bugs@lists.freedesktop.org
          Reporter: lakshminarayana.vudum@intel.com
        QA Contact: intel-gfx-bugs@lists.freedesktop.org
                CC: intel-gfx-bugs@lists.freedesktop.org

https://intel-gfx-ci.01.org/tree/drm-tip/IGT_5211/shard-glk6/igt@gem_mmap_gtt@hang.html

<4> [990.044024] ======================================================
<4> [990.044028] WARNING: possible circular locking dependency detected
<4> [990.044033] 5.4.0-rc1-CI-CI_DRM_6999+ #1 Tainted: G     U           
<4> [990.044036] ------------------------------------------------------
<4> [990.044040] gem_mmap_gtt/2956 is trying to acquire lock:
<4> [990.044043] ffff88826344b438 (&mapping->i_mmap_rwsem){++++}, at: unmap_mapping_pages+0x48/0x130
<4> [990.044057] 
but task is already holding lock:
<4> [990.044060] ffff88825af5c5b0 (&gt->reset.mutex){+.+.}, at: intel_gt_reset+0x5f/0x3c0 [i915]
<4> [990.044152] 
which lock already depends on the new lock.

<4> [990.044156] 
the existing dependency chain (in reverse order) is:
<4> [990.044160] 
-> #3 (&gt->reset.mutex){+.+.}:
<4> [990.044231]        i915_request_wait+0xf8/0x870 [i915]
<4> [990.044297]        i915_active_wait+0x13c/0x280 [i915]
<4> [990.044365]        i915_vma_unbind+0x1c5/0x4a0 [i915]
<4> [990.044429]        eb_lookup_vmas+0x4ea/0x11a0 [i915]
<4> [990.044493]        i915_gem_do_execbuffer+0x601/0x2360 [i915]
<4> [990.044556]        i915_gem_execbuffer2_ioctl+0x11b/0x460 [i915]
<4> [990.044563]        drm_ioctl_kernel+0xa7/0xf0
<4> [990.044567]        drm_ioctl+0x2e1/0x390
<4> [990.044571]        do_vfs_ioctl+0xa0/0x6f0
<4> [990.044575]        ksys_ioctl+0x35/0x60
<4> [990.044579]        __x64_sys_ioctl+0x11/0x20
<4> [990.044584]        do_syscall_64+0x4f/0x210
<4> [990.044590]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [990.044593] 
-> #2 (i915_active){+.+.}:
<4> [990.044660]        i915_active_wait+0x4a/0x280 [i915]
<4> [990.044726]        i915_vma_unbind+0x1c5/0x4a0 [i915]
<4> [990.044792]        i915_gem_object_unbind+0x153/0x1c0 [i915]
<4> [990.044857]        userptr_mn_invalidate_range_start+0x9f/0x200 [i915]
<4> [990.044863]        __mmu_notifier_invalidate_range_start+0xa3/0x180
<4> [990.044868]        unmap_vmas+0x143/0x150
<4> [990.044872]        unmap_region+0xa3/0x100
<4> [990.044875]        __do_munmap+0x25d/0x490
<4> [990.044879]        __vm_munmap+0x6e/0xc0
<4> [990.044883]        __x64_sys_munmap+0x12/0x20
<4> [990.044886]        do_syscall_64+0x4f/0x210
<4> [990.044891]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [990.044894] 
-> #1 (mmu_notifier_invalidate_range_start){+.+.}:
<4> [990.044901]        page_mkclean_one+0xda/0x210
<4> [990.044905]        rmap_walk_file+0xff/0x260
<4> [990.044909]        page_mkclean+0x9f/0xb0
<4> [990.044913]        clear_page_dirty_for_io+0xa2/0x300
<4> [990.044919]        mpage_submit_page+0x1a/0x70
<4> [990.044923]        mpage_process_page_bufs+0xe7/0x110
<4> [990.044927]        mpage_prepare_extent_to_map+0x1d2/0x2b0
<4> [990.044931]        ext4_writepages+0x592/0x1230
<4> [990.044935]        do_writepages+0x46/0xe0
<4> [990.044939]        __filemap_fdatawrite_range+0xc6/0x100
<4> [990.044944]        file_write_and_wait_range+0x3c/0x90
<4> [990.044948]        ext4_sync_file+0x154/0x500
<4> [990.044952]        do_fsync+0x33/0x60
<4> [990.044956]        __x64_sys_fsync+0xb/0x10
<4> [990.044960]        do_syscall_64+0x4f/0x210
<4> [990.044964]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [990.044967] 
-> #0 (&mapping->i_mmap_rwsem){++++}:
<4> [990.044975]        __lock_acquire+0x1328/0x15d0
<4> [990.044979]        lock_acquire+0xa7/0x1c0
<4> [990.044983]        down_write+0x33/0x70
<4> [990.044987]        unmap_mapping_pages+0x48/0x130
<4> [990.045048]        intel_gt_reset+0x142/0x3c0 [i915]
<4> [990.045110]        intel_gt_reset_global+0xe0/0x150 [i915]
<4> [990.045171]        intel_gt_handle_error+0x184/0x3b0 [i915]
<4> [990.045230]        i915_wedged_set+0x5b/0xc0 [i915]
<4> [990.045236]        simple_attr_write+0xb0/0xd0
<4> [990.045241]        full_proxy_write+0x51/0x80
<4> [990.045245]        vfs_write+0xb9/0x1d0
<4> [990.045249]        ksys_write+0x9f/0xe0
<4> [990.045252]        do_syscall_64+0x4f/0x210
<4> [990.045256]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [990.045260] 
other info that might help us debug this:

<4> [990.045265] Chain exists of:
  &mapping->i_mmap_rwsem --> i915_active --> &gt->reset.mutex

<4> [990.045272]  Possible unsafe locking scenario:

<4> [990.045276]        CPU0                    CPU1
<4> [990.045279]        ----                    ----
<4> [990.045282]   lock(&gt->reset.mutex);
<4> [990.045285]                                lock(i915_active);
<4> [990.045289]                                lock(&gt->reset.mutex);
<4> [990.045293]   lock(&mapping->i_mmap_rwsem);
<4> [990.045296] 
 *** DEADLOCK ***
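For context, an illustrative sketch (not part of the report, and a simplification of what lockdep actually does): the warning fires because recording the new "i_mmap_rwsem taken while holding reset.mutex" dependency would close a cycle in lockdep's lock-order graph. The short lock-class names below are abbreviations of the classes quoted in the splat above:

```python
# Simplified model of lockdep's circular-dependency check: each edge
# (held, acquired) records that `acquired` was taken while `held` was held.

def reaches(edges, start, target):
    """Depth-first search over the lock-order graph: is `target`
    reachable from `start` via recorded dependencies?"""
    seen, stack = set(), [b for a, b in edges if a == start]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for a, b in edges if a == node)
    return False

# Existing chain from the report (abbreviated class names):
#   &mapping->i_mmap_rwsem --> i915_active --> &gt->reset.mutex
edges = [
    ("i_mmap_rwsem", "i915_active"),
    ("i915_active", "reset.mutex"),
]

# gem_mmap_gtt/2956 now takes i_mmap_rwsem while holding reset.mutex.
# Before adding the edge (reset.mutex -> i_mmap_rwsem), the checker asks:
# does i_mmap_rwsem already reach reset.mutex?  If yes, the new edge
# would close a cycle, i.e. a possible deadlock.
if reaches(edges, "i_mmap_rwsem", "reset.mutex"):
    print("WARNING: possible circular locking dependency detected")
```

Run as-is, the final check succeeds and the warning line is printed, mirroring the splat: the existing chain already leads from i_mmap_rwsem to reset.mutex, so acquiring i_mmap_rwsem under reset.mutex completes the cycle shown in the two-CPU scenario above.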
