drm prime locking recursion
Dave Airlie
airlied at gmail.com
Tue Oct 13 20:08:33 PDT 2015
Got this while playing with virgl; it happens when the gem_object_open driver
callback fails.
The callback probably shouldn't be failing that often, but when it does,
deadlocking seems like the wrong outcome.
Dave.
[ 677.932957] =============================================
[ 677.932957] [ INFO: possible recursive locking detected ]
[ 677.932957] 4.3.0-rc5-virtio-gpu+ #11 Not tainted
[ 677.932957] ---------------------------------------------
[ 677.932957] Xorg/2661 is trying to acquire lock:
[ 677.932957]  (&prime_fpriv->lock){+.+.+.}, at: [<ffffffffa00151b0>] drm_gem_remove_prime_handles.isra.7+0x20/0x50 [drm]
[ 677.932957]
but task is already holding lock:
[ 677.932957]  (&prime_fpriv->lock){+.+.+.}, at: [<ffffffffa002d68b>] drm_gem_prime_fd_to_handle+0x4b/0x240 [drm]
[ 677.932957]
other info that might help us debug this:
[ 677.932957]  Possible unsafe locking scenario:
[ 677.932957]        CPU0
[ 677.932957]        ----
[ 677.932957]   lock(&prime_fpriv->lock);
[ 677.932957]   lock(&prime_fpriv->lock);
[ 677.932957]
 *** DEADLOCK ***
[ 677.932957]  May be due to missing lock nesting notation
[ 677.932957] 1 lock held by Xorg/2661:
[ 677.932957]  #0:  (&prime_fpriv->lock){+.+.+.}, at: [<ffffffffa002d68b>] drm_gem_prime_fd_to_handle+0x4b/0x240 [drm]
[ 677.932957]
stack backtrace:
[ 677.932957] CPU: 1 PID: 2661 Comm: Xorg Not tainted 4.3.0-rc5-virtio-gpu+ #11
[ 677.932957] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 677.932957]  ffffffff82619740 ffff88007b22fb20 ffffffff813159f9 ffffffff82619740
[ 677.932957]  ffff88007b22fbd8 ffffffff810d373d 0000000000000000 ffff88007b22fb58
[ 677.932957]  ffffffff00000000 ffff88004cfd8700 00000000008e8474 0000000000048af0
[ 677.932957] Call Trace:
[ 677.932957]  [<ffffffff813159f9>] dump_stack+0x4b/0x72
[ 677.932957]  [<ffffffff810d373d>] __lock_acquire+0x193d/0x1a60
[ 677.932957]  [<ffffffff810d411d>] lock_acquire+0x6d/0x90
[ 677.932957]  [<ffffffffa00151b0>] ? drm_gem_remove_prime_handles.isra.7+0x20/0x50 [drm]
[ 677.932957]  [<ffffffff816d30e4>] mutex_lock_nested+0x64/0x3a0
[ 677.932957]  [<ffffffffa00151b0>] ? drm_gem_remove_prime_handles.isra.7+0x20/0x50 [drm]
[ 677.932957]  [<ffffffffa00151b0>] drm_gem_remove_prime_handles.isra.7+0x20/0x50 [drm]
[ 677.932957]  [<ffffffffa0015947>] drm_gem_handle_delete+0xd7/0x110 [drm]
[ 677.932957]  [<ffffffffa0015baf>] drm_gem_handle_create_tail+0xff/0x160 [drm]
[ 677.932957]  [<ffffffffa002d731>] drm_gem_prime_fd_to_handle+0xf1/0x240 [drm]
[ 677.932957]  [<ffffffffa002dd28>] drm_prime_fd_to_handle_ioctl+0x28/0x40 [drm]
[ 677.932957]  [<ffffffffa0016594>] drm_ioctl+0x124/0x4f0 [drm]
[ 677.932957]  [<ffffffffa002dd00>] ? drm_prime_handle_to_fd_ioctl+0x60/0x60 [drm]
[ 677.932957]  [<ffffffff812b40e7>] ? ioctl_has_perm+0xa7/0xc0
[ 677.932957]  [<ffffffff811c30aa>] do_vfs_ioctl+0x2da/0x530
[ 677.932957]  [<ffffffff812b4159>] ? selinux_file_ioctl+0x59/0xf0
[ 677.932957]  [<ffffffff812a68ae>] ? security_file_ioctl+0x3e/0x60
[ 677.932957]  [<ffffffff811c3374>] SyS_ioctl+0x74/0x80
[ 677.932957]  [<ffffffff816d6632>] entry_SYSCALL_64_fastpath+0x12/0x76
[ 840.028068] INFO: task Xorg:2661 blocked for more than 120 seconds.
More information about the dri-devel mailing list