[Intel-xe] [PATCH v12 13/13] drm/xe: add lockdep annotation for xe_device_mem_access_get()

Matthew Auld matthew.auld at intel.com
Mon Jun 26 10:50:51 UTC 2023


The atomics here might hide potential locking issues, so add a dummy
lock, the idea being that xe_pm_runtime_resume() is eventually going to
be called while we are holding it. This only needs to happen once, and
from then on lockdep can validate all callers and their locks.
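
The idea reduces to the rough sketch below. Everything named my_* is a
simplified placeholder rather than the real xe code (the real code also
handles the ref count more carefully); only the static lockdep_map and
the lock_map_acquire()/lock_map_release() calls mirror the patch:

#include <linux/atomic.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Hypothetical device, purely for illustration. */
struct my_device {
	atomic_t ref;
	struct mutex some_lock;
};

#ifdef CONFIG_LOCKDEP
static struct lockdep_map dummy_lockdep_map = {
	.name = "dummy_lockdep_map"
};
#endif

/* Hypothetical resume callback which needs a driver lock. It always
 * runs with dummy_lockdep_map "held", so lockdep records the
 * dummy_lockdep_map -> some_lock ordering on the first resume. */
static void my_runtime_resume(struct my_device *dev)
{
	mutex_lock(&dev->some_lock);
	/* ... restore hardware state ... */
	mutex_unlock(&dev->some_lock);
}

static void my_mem_access_get(struct my_device *dev)
{
	/* Acquire the dummy lock on every call, not just on the rare
	 * 0 -> 1 transition, so any lock held by the caller is ordered
	 * against it deterministically. lock_map_acquire() compiles out
	 * on !CONFIG_LOCKDEP builds, matching the #ifdef above. */
	lock_map_acquire(&dummy_lockdep_map);

	if (!atomic_inc_not_zero(&dev->ref)) {
		my_runtime_resume(dev);
		atomic_inc(&dev->ref);	/* simplified; not race-free */
	}

	lock_map_release(&dummy_lockdep_map);
}

The key point is that the dummy map is taken on every call, so lockdep
learns (caller's lock) -> dummy_lockdep_map deterministically rather
than only on a hard-to-hit resume transition.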

v2: (Thomas Hellström)
 - Prefer static lockdep_map instead of full blown mutex.

Signed-off-by: Matthew Auld <matthew.auld at intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi at intel.com>
Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
Acked-by: Matthew Brost <matthew.brost at intel.com>
---
 drivers/gpu/drm/xe/xe_device.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 1dc552da434f..923a23528da9 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -35,6 +35,12 @@
 #include "xe_vm_madvise.h"
 #include "xe_wait_user_fence.h"
 
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map xe_device_mem_access_lockdep_map = {
+	.name = "xe_device_mem_access_lockdep_map"
+};
+#endif
+
 static int xe_file_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct xe_file *xef;
@@ -443,6 +449,22 @@ void xe_device_mem_access_get(struct xe_device *xe)
 	if (xe_pm_read_callback_task(xe) == current)
 		return;
 
+	/*
+	 * Since the resume here is synchronous it can be quite easy to deadlock
+	 * if we are not careful. Also in practice it might be quite timing
+	 * sensitive to ever see the 0 -> 1 transition with the caller's locks
+	 * held, so deadlocks might exist but are hard for lockdep to ever see.
+	 * With this in mind, help lockdep learn about the potentially scary
+	 * stuff that can happen inside the runtime_resume callback by acquiring
+	 * a dummy lock (it doesn't protect anything and gets compiled out on
+	 * non-debug builds). Lockdep then only needs to see the
+	 * xe_device_mem_access_lockdep_map -> runtime_resume ordering once, and
+	 * can then validate all (callers_locks) -> xe_device_mem_access_lockdep_map
+	 * orderings. For example, if the (callers_locks) are ever grabbed in the
+	 * runtime_resume callback, lockdep should give us a nice splat.
+	 */
+	lock_map_acquire(&xe_device_mem_access_lockdep_map);
+
 	if (!atomic_inc_not_zero(&xe->mem_access.ref)) {
 		bool hold_rpm = xe_pm_runtime_resume_and_get(xe);
 		int ref;
@@ -455,6 +477,8 @@ void xe_device_mem_access_get(struct xe_device *xe)
 	} else {
 		XE_WARN_ON(atomic_read(&xe->mem_access.ref) == S32_MAX);
 	}
+
+	lock_map_release(&xe_device_mem_access_lockdep_map);
 }
 
 void xe_device_mem_access_put(struct xe_device *xe)
-- 
2.41.0
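
As a usage note, continuing the simplified sketch from the commit
message: the kind of inversion this annotation is meant to catch looks
something like the following, where caller_lock is again hypothetical:

static DEFINE_MUTEX(caller_lock);

static void caller_path(struct my_device *dev)
{
	mutex_lock(&caller_lock);
	my_mem_access_get(dev);	/* records caller_lock -> dummy_lockdep_map */
	/* ... access device memory ... */
	mutex_unlock(&caller_lock);
}

If my_runtime_resume() were ever changed to also take caller_lock,
lockdep would know dummy_lockdep_map -> caller_lock from the resume
path and caller_lock -> dummy_lockdep_map from caller_path(), and would
report the cycle even if the real 0 -> 1 race never fired during
testing.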


