[next] [dragonboard 410c] Unable to handle kernel paging request at virtual address 00000000007c4240
Vlastimil Babka
vbabka at suse.cz
Thu Oct 21 17:51:20 UTC 2021
On 10/21/21 10:40, Jani Nikula wrote:
> On Thu, 21 Oct 2021, Vlastimil Babka <vbabka at suse.cz> wrote:
>> This one seems a bit more tricky and I could really use some advice.
>> cd06ab2fd48f adds stackdepot usage to drm_modeset_lock, which itself has a
>> number of different users, and requiring all of those to call
>> stack_depot_init() would likely be error prone. Would it be OK to add a call
>> to stack_depot_init() (guarded by #ifdef CONFIG_DRM_DEBUG_MODESET_LOCK) to
>> drm_modeset_lock_init()? It will do a mutex_lock()/unlock() and a kvmalloc()
>> on the first call.
>> I don't know how much of a hot path this is, but hopefully it should be
>> acceptable in a debug config. Or do you have a better suggestion? Thanks.
>
> I think that should be fine.
>
> Maybe add __drm_stack_depot_init() in the existing #if
> IS_ENABLED(CONFIG_DRM_DEBUG_MODESET_LOCK), similar to the other
> __drm_stack_depot_*() functions, with an empty stub for
> CONFIG_DRM_DEBUG_MODESET_LOCK=n, and call it unconditionally in
> drm_modeset_lock_init().
Good idea.
>> Then we have to figure out how to order a fix between DRM and mmotm...
>
> That is the question! The problem exists only in the merge of the
> two. On the current DRM side stack_depot_init() exists, but it's __init and
> does not look safe to call multiple times. And obviously my changes
> don't exist at all in mmotm.
>
> I guess one (admittedly hackish) option is to first add a patch in
> drm-next (or drm-misc-next) that makes it safe to call
> stack_depot_init() multiple times in non-init context. It would be
> dropped in favour of your changes once the trees get merged together.
>
> Or is there some way for __drm_stack_depot_init() to detect whether it
> should call stack_depot_init() or not, i.e. whether your changes are
> there or not?
Let's try the easiest approach first. AFAIK the mmotm series is now split into
a pre-next and a post-next part, and moving my patch
lib-stackdepot-allow-optional-init-and-stack_table-allocation-by-kvmalloc.patch
with the following fixup to the post-next part should solve this. Would that
work, Andrew? Thanks.
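For reference, the idea of the mmotm patch is roughly the following (a
simplified sketch, not the exact code that is in mmotm): stack_depot_init()
becomes safe to call any number of times from non-init context, serializing
on a mutex and allocating stack_table with kvmalloc() only on the first call.

/*
 * Simplified sketch of the mmotm idea, not the actual patch: make
 * stack_depot_init() callable repeatedly from non-init context by
 * allocating the hash table lazily under a mutex. stack_table and
 * STACK_HASH_SIZE refer to the existing lib/stackdepot.c internals.
 */
static DEFINE_MUTEX(stack_depot_init_mutex);

int stack_depot_init(void)
{
	mutex_lock(&stack_depot_init_mutex);
	if (!stack_table) {
		size_t size = STACK_HASH_SIZE * sizeof(struct stack_record *);

		/* kvmalloc() so a large table doesn't need to be contiguous */
		stack_table = kvmalloc(size, GFP_KERNEL | __GFP_ZERO);
		if (!stack_table) {
			mutex_unlock(&stack_depot_init_mutex);
			return -ENOMEM;
		}
	}
	mutex_unlock(&stack_depot_init_mutex);
	return 0;
}

With that in place, drm_modeset_lock_init() can simply call it unconditionally
via the __drm_stack_depot_init() helper added by the fixup below.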
----8<----
From 719e91df5571034b62fa992f6738b00f8d29020e Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka at suse.cz>
Date: Thu, 21 Oct 2021 19:43:33 +0200
Subject: [PATCH] lib/stackdepot: allow optional init and stack_table
allocation by kvmalloc() - fixup3
Commit cd06ab2fd48f ("drm/locking: add backtrace for locking contended locks
without backoff") recently landed in -next and adds a new stack depot user in
drivers/gpu/drm/drm_modeset_lock.c, so we need to add an appropriate call to
stack_depot_init() there as well.
Signed-off-by: Vlastimil Babka <vbabka at suse.cz>
---
drivers/gpu/drm/drm_modeset_lock.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/drm_modeset_lock.c b/drivers/gpu/drm/drm_modeset_lock.c
index c97323365675..918065982db4 100644
--- a/drivers/gpu/drm/drm_modeset_lock.c
+++ b/drivers/gpu/drm/drm_modeset_lock.c
@@ -107,6 +107,11 @@ static void __drm_stack_depot_print(depot_stack_handle_t stack_depot)
 
 	kfree(buf);
 }
+
+static void __drm_stack_depot_init(void)
+{
+	stack_depot_init();
+}
 #else /* CONFIG_DRM_DEBUG_MODESET_LOCK */
 static depot_stack_handle_t __drm_stack_depot_save(void)
 {
@@ -115,6 +120,9 @@ static depot_stack_handle_t __drm_stack_depot_save(void)
 static void __drm_stack_depot_print(depot_stack_handle_t stack_depot)
 {
 }
+static void __drm_stack_depot_init(void)
+{
+}
 #endif /* CONFIG_DRM_DEBUG_MODESET_LOCK */
 
 /**
@@ -359,6 +367,7 @@ void drm_modeset_lock_init(struct drm_modeset_lock *lock)
 {
 	ww_mutex_init(&lock->mutex, &crtc_ww_class);
 	INIT_LIST_HEAD(&lock->head);
+	__drm_stack_depot_init();
 }
 EXPORT_SYMBOL(drm_modeset_lock_init);
--
2.33.0