✗ CI.checkpatch: warning for locking/ww_mutex: Adjust to lockdep nest_lock requirements (rev6)
Patchwork
patchwork at emeril.freedesktop.org
Thu Oct 17 15:52:50 UTC 2024
== Series Details ==
Series: locking/ww_mutex: Adjust to lockdep nest_lock requirements (rev6)
URL : https://patchwork.freedesktop.org/series/123522/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
30ab6715fc09baee6cc14cb3c89ad8858688d474
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit c9dc63be50201d035d201ed30f8aebc3ecec0be9
Author: Thomas Hellström <thomas.hellstrom at linux.intel.com>
Date: Thu Oct 17 17:10:07 2024 +0200
locking/ww_mutex: Adjust to lockdep nest_lock requirements
When using mutex_acquire_nest() with a nest_lock, lockdep refcounts the
number of acquired lockdep_maps of mutexes of the same class, and also
keeps a pointer to the first acquired lockdep_map of a class. That pointer
is then used for various comparison, printing and checking purposes,
but there is no mechanism to actively ensure that the lockdep_map stays in
memory. Instead, a warning is printed if the lockdep_map is freed while
there are still held locks of the same lock class, even if the lock
corresponding to that lockdep_map has itself already been released.
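For reference, this refcounting comes from the ww_mutex lock path handing
lockdep the acquire context's lockdep map as nest_lock. A simplified sketch,
loosely following kernel/locking/mutex.c (details approximate, not a quote of
the actual code):

	/*
	 * Simplified sketch of __mutex_lock_common(): every ww_mutex locked
	 * under the same ww_acquire_ctx is acquired with the context's
	 * dep_map as nest_lock, so lockdep folds them into a single
	 * held_lock entry that keeps a pointer to the *first* mutex's
	 * lockdep_map and only bumps a reference count for the others.
	 */
	struct lockdep_map *nest_lock = ww_ctx ? &ww_ctx->dep_map : NULL;

	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);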
In the context of WW/WD transactions that means that if a user unlocks
and frees a ww_mutex from within an ongoing ww transaction, and that
mutex happens to be the first ww_mutex locked in the transaction, such
a warning is printed and there is a risk of a use-after-free (UAF).
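A minimal sketch of that pattern (hypothetical driver-style code; my_class
and the two mutexes are purely illustrative, and error handling is omitted):

	#include <linux/slab.h>
	#include <linux/ww_mutex.h>

	static DEFINE_WW_CLASS(my_class);	/* illustrative ww_class */

	static void ww_uaf_pattern(void)
	{
		struct ww_acquire_ctx ctx;
		struct ww_mutex *a, *b;

		a = kzalloc(sizeof(*a), GFP_KERNEL);
		b = kzalloc(sizeof(*b), GFP_KERNEL);
		ww_mutex_init(a, &my_class);
		ww_mutex_init(b, &my_class);

		ww_acquire_init(&ctx, &my_class);
		ww_mutex_lock(a, &ctx);	/* first lock of my_class in this transaction */
		ww_mutex_lock(b, &ctx);

		ww_mutex_unlock(a);
		ww_mutex_destroy(a);
		kfree(a);		/* frees the lockdep_map lockdep still points at */

		/*
		 * The transaction continues with b held: lockdep warns about
		 * the freed lock, and comparisons against the stale pointer
		 * to a's lockdep_map risk a use-after-free.
		 */
		ww_mutex_unlock(b);
		ww_acquire_fini(&ctx);
		ww_mutex_destroy(b);
		kfree(b);
	}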
Note that this is only a problem when lockdep is enabled, and it affects
only dereferences of struct lockdep_map.
Adjust to this by adding a fake lockdep_map to the acquire context and
making sure it is the first acquired lockdep map of the associated
ww_mutex class. Then hold it for the duration of the WW/WD transaction.
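The shape of that fix might look roughly like the sketch below; the
first_lock_dep_map field name and the exact init/acquire calls are
assumptions for illustration, not a quote of the patch (which this report
does not include):

	/*
	 * Sketch: in ww_acquire_init(), register a dummy lockdep_map
	 * belonging to the ww_mutex lock class and acquire it before any
	 * real ww_mutex of that class, so lockdep's "first acquired map of
	 * the class" pointer refers to memory owned by the ww_acquire_ctx,
	 * which outlives the whole WW/WD transaction.
	 */
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		lockdep_init_map_wait(&ctx->first_lock_dep_map, ww_class->mutex_name,
				      &ww_class->mutex_key, 0, LD_WAIT_SLEEP);
		mutex_acquire_nest(&ctx->first_lock_dep_map, 0, 0, &ctx->dep_map,
				   _RET_IP_);
	#endif

	/* ...and in ww_acquire_fini(), release it when the transaction ends. */
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		mutex_release(&ctx->first_lock_dep_map, _RET_IP_);
	#endif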
This has the side effect that trying to lock a ww_mutex *without* a
ww_acquire_context, while such a context has been acquired, now produces
a lockdep splat. The test-ww_mutex.c selftest attempts to do exactly
that, so modify that particular test to not acquire a ww_acquire_context
if it is not going to be used.
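An illustrative sketch of that kind of selftest adjustment (the helper name
and flag below are made up; the real test-ww_mutex.c code may be structured
differently):

	static DEFINE_WW_CLASS(ww_class);	/* as in the selftest */

	/*
	 * Only set up a ww_acquire_ctx when the test will actually pass it
	 * to ww_mutex_lock(); locking with a NULL context while a context of
	 * the same class is held would now trigger the (intended) splat.
	 */
	static int test_mutex_sketch(bool use_ctx)
	{
		struct ww_acquire_ctx ctx;
		struct ww_mutex mtx;
		int ret;

		ww_mutex_init(&mtx, &ww_class);
		if (use_ctx)
			ww_acquire_init(&ctx, &ww_class);

		ret = ww_mutex_lock(&mtx, use_ctx ? &ctx : NULL);
		if (use_ctx)
			ww_acquire_done(&ctx);
		if (!ret)
			ww_mutex_unlock(&mtx);
		if (use_ctx)
			ww_acquire_fini(&ctx);

		ww_mutex_destroy(&mtx);
		return ret;
	}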
v2:
- Lower the number of locks in the test-ww_mutex
stress(STRESS_ALL) test to accommodate the dummy lock
introduced in this patch without overflowing lockdep held lock
references.
v3:
- Adjust the ww_test_normal locking-api selftest to avoid
recursive locking (Boqun Feng)
- Initialize the dummy lock map with LD_WAIT_SLEEP to agree with
how the corresponding ww_mutex lockmaps are initialized
(Boqun Feng)
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: Will Deacon <will at kernel.org>
Cc: Waiman Long <longman at redhat.com>
Cc: Boqun Feng <boqun.feng at gmail.com>
Cc: Maarten Lankhorst <maarten at lankhorst.se>
Cc: Christian König <christian.koenig at amd.com>
Cc: dri-devel at lists.freedesktop.org
Cc: linux-kernel at vger.kernel.org
Signed-off-by: Thomas Hellström <thomas.hellstrom at linux.intel.com>
Acked-by: maarten.lankhorst at linux.intel.com #v1
+ /mt/dim checkpatch 6d5c65cb1afb42b0bbe813847d9271271d2875a2 drm-intel
c9dc63be5020 locking/ww_mutex: Adjust to lockdep nest_lock requirements
-:130: CHECK:SPACING: spaces preferred around that '*' (ctx:VxV)
#130: FILE: kernel/locking/test-ww_mutex.c:684:
+ ret = stress(2046, hweight32(STRESS_ALL)*ncpus, STRESS_ALL);
^
total: 0 errors, 0 warnings, 1 checks, 75 lines checked
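The flagged line would satisfy the checkpatch spacing check with spaces
around the multiplication operator, i.e.:

	ret = stress(2046, hweight32(STRESS_ALL) * ncpus, STRESS_ALL);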