<html>
<head>
<base href="https://bugs.freedesktop.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - [CI] igt@gem_exec_await@wide-contexts - fail/dmesg-fail - Failed assertion: !"GPU hung""
href="https://bugs.freedesktop.org/show_bug.cgi?id=106680">106680</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>[CI] igt@gem_exec_await@wide-contexts - fail/dmesg-fail - Failed assertion: !"GPU hung"
</td>
</tr>
<tr>
<th>Product</th>
<td>DRI
</td>
</tr>
<tr>
<th>Version</th>
<td>XOrg git
</td>
</tr>
<tr>
<th>Hardware</th>
<td>Other
</td>
</tr>
<tr>
<th>OS</th>
<td>All
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>medium
</td>
</tr>
<tr>
<th>Component</th>
<td>DRM/Intel
</td>
</tr>
<tr>
<th>Assignee</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>martin.peres@free.fr
</td>
</tr>
<tr>
<th>QA Contact</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr>
<tr>
<th>CC</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr></table>
<div>
<pre><a href="https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_47/fi-bxt-dsi/igt@gem_exec_await@wide-contexts.html">https://intel-gfx-ci.01.org/tree/drm-tip/drmtip_47/fi-bxt-dsi/igt@gem_exec_await@wide-contexts.html</a>
(gem_exec_await:1739) igt_aux-CRITICAL: Test assertion failure function sig_abort, file ../lib/igt_aux.c:500:
(gem_exec_await:1739) igt_aux-CRITICAL: Failed assertion: !"GPU hung"
Subtest wide-contexts failed.
<a href="https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_4171/shard-apl6/igt@gem_exec_await@wide-contexts.html">https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_4171/shard-apl6/igt@gem_exec_await@wide-contexts.html</a>
(gem_exec_await:1448) igt_aux-CRITICAL: Test assertion failure function sig_abort, file ../lib/igt_aux.c:500:
(gem_exec_await:1448) igt_aux-CRITICAL: Failed assertion: !"GPU hung"
Subtest wide-contexts failed.
[ 132.716731] ============================================
[ 132.716737] WARNING: possible recursive locking detected
[ 132.716744] 4.17.0-rc4-CI-CI_DRM_4171+ #1 Tainted: G U
[ 132.716751] --------------------------------------------
[ 132.716758] gem_exec_await/1448 is trying to acquire lock:
[ 132.716766] 0000000022398cdc (&(&timeline->lock)->rlock){-.-.}, at: i915_gem_reset_engine+0x26c/0x370 [i915]
[ 132.716850]
but task is already holding lock:
[ 132.716857] 000000006f6d9267 (&(&timeline->lock)->rlock){-.-.}, at: i915_gem_reset_engine+0x260/0x370 [i915]
[ 132.716943]
other info that might help us debug this:
[ 132.716950] Possible unsafe locking scenario:
[ 132.716957] CPU0
[ 132.716961] ----
[ 132.716965] lock(&(&timeline->lock)->rlock);
[ 132.716974] lock(&(&timeline->lock)->rlock);
[ 132.716981]
*** DEADLOCK ***
[ 132.716989] May be due to missing lock nesting notation
[ 132.716998] 4 locks held by gem_exec_await/1448:
[ 132.717004] #0: 00000000aab018e1 (sb_writers#11){.+.+}, at: vfs_write+0x188/0x1a0
[ 132.717022] #1: 00000000a3763ca8 (&attr->mutex){+.+.}, at: simple_attr_write+0x36/0xd0
[ 132.717038] #2: 000000009f5491cd (&dev->struct_mutex){+.+.}, at: i915_drop_caches_set+0x3f/0x1a0 [i915]
[ 132.717125] #3: 000000006f6d9267 (&(&timeline->lock)->rlock){-.-.}, at: i915_gem_reset_engine+0x260/0x370 [i915]
[ 132.717211]
stack backtrace:
[ 132.717221] CPU: 2 PID: 1448 Comm: gem_exec_await Tainted: G U 4.17.0-rc4-CI-CI_DRM_4171+ #1
[ 132.717231] Hardware name: /NUC6CAYB, BIOS AYAPLCEL.86A.0047.2018.0108.1419 01/08/2018
[ 132.717240] Call Trace:
[ 132.717251] dump_stack+0x67/0x9b
[ 132.717260] __lock_acquire+0xc67/0x1b50
[ 132.717338] ? i915_gem_object_pin_map+0x2d/0x2a0 [i915]
[ 132.717418] ? i915_request_wait+0x130/0x8a0 [i915]
[ 132.717431] ? printk+0x4d/0x69
[ 132.717438] ? lock_acquire+0xa6/0x210
[ 132.717444] lock_acquire+0xa6/0x210
[ 132.717519] ? i915_gem_reset_engine+0x26c/0x370 [i915]
[ 132.717532] _raw_spin_lock+0x2a/0x40
[ 132.717606] ? i915_gem_reset_engine+0x26c/0x370 [i915]
[ 132.717682] i915_gem_reset_engine+0x26c/0x370 [i915]
[ 132.717762] ? i915_request_wait+0x130/0x8a0 [i915]
[ 132.717842] ? i915_request_wait+0x130/0x8a0 [i915]
[ 132.717918] i915_gem_reset+0x5b/0x100 [i915]
[ 132.717986] i915_reset+0x22d/0x290 [i915]
[ 132.718070] __i915_wait_request_check_and_reset.isra.8+0x44/0x50 [i915]
[ 132.718151] i915_request_wait+0x4ea/0x8a0 [i915]
[ 132.718166] ? wake_up_q+0x70/0x70
[ 132.718174] ? wake_up_q+0x70/0x70
[ 132.718247] wait_for_timeline+0x11c/0x2a0 [i915]
[ 132.718324] i915_gem_wait_for_idle+0x8d/0x180 [i915]
[ 132.718398] i915_drop_caches_set+0x18b/0x1a0 [i915]
[ 132.718413] simple_attr_write+0xb0/0xd0
[ 132.718423] full_proxy_write+0x51/0x80
[ 132.718431] __vfs_write+0x31/0x160
[ 132.718442] ? rcu_read_lock_sched_held+0x6f/0x80
[ 132.718450] ? rcu_sync_lockdep_assert+0x29/0x50
[ 132.718457] ? __sb_start_write+0x152/0x1f0
[ 132.718466] ? __sb_start_write+0x168/0x1f0
[ 132.718473] vfs_write+0xbd/0x1a0
[ 132.718480] ksys_write+0x50/0xc0
[ 132.718488] do_syscall_64+0x55/0x190
[ 132.718496] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 132.718504] RIP: 0033:0x7fe11cec0154
[ 132.718511] RSP: 002b:00007ffdf018b488 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 132.718521] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fe11cec0154
[ 132.718531] RDX: 0000000000000004 RSI: 00005601671326a0 RDI: 0000000000000005
[ 132.718539] RBP: 00005601671326a0 R08: 00007fe11da22980 R09: 0000000000000000
[ 132.718546] R10: 0000000000000000 R11: 0000000000000246 R12: 00005601671173f0
[ 132.718555] R13: 0000000000000004 R14: 00007fe11d1982a0 R15: 00007fe11d197760</pre>
</div>
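<p>
The lockdep splat above reports two acquisitions of the same timeline->lock class
inside i915_gem_reset_engine and hints that this "May be due to missing lock nesting
notation". As a purely illustrative sketch (not the actual i915 fix, and using a
hypothetical example_timeline type and move_requests helper), this is the kind of
annotation lockdep expects when two locks of one class are legitimately held at once:
</p>
<pre>
/* Hypothetical sketch: avoid a false-positive "possible recursive locking
 * detected" report when two locks of the same lock class must nest, by
 * marking the inner acquisition with a lockdep subclass.
 */
#include &lt;linux/list.h&gt;
#include &lt;linux/spinlock.h&gt;

struct example_timeline {              /* stand-in for the driver's timeline */
        spinlock_t lock;
        struct list_head requests;
};

static void move_requests(struct example_timeline *dst,
                          struct example_timeline *src)
{
        spin_lock(&src->lock);
        /* Inner lock of the same class: tell lockdep this is a distinct
         * nesting level, not a recursive acquisition of the outer lock.
         */
        spin_lock_nested(&dst->lock, SINGLE_DEPTH_NESTING);

        list_splice_tail_init(&src->requests, &dst->requests);

        spin_unlock(&dst->lock);
        spin_unlock(&src->lock);
}
</pre>
<p>
Whether a nesting annotation like this or a different locking scheme is the right
fix for the reset path is up to the driver developers; the sketch only shows the
annotation the warning text refers to.
</p>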
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are the QA Contact for the bug.</li>
<li>You are the assignee for the bug.</li>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>