[Intel-gfx] [PATCH 3/4] mm, notifier: Catch sleeping/blocking for !blockable

Daniel Vetter daniel.vetter at ffwll.ch
Mon Dec 10 10:36:40 UTC 2018


We need to make sure implementations don't cheat and don't have a
possible schedule/blocking point deeply buried where review can't
catch it.

I'm not sure whether this is the best way to make sure all the
might_sleep() callsites trigger, and it's a bit ugly in the code flow.
But it gets the job done.

Inspired by an i915 patch series which did exactly that, because the
rules haven't been entirely clear to us.

v2: Use the shiny new non_block_start/end annotations instead of
abusing preempt_disable/enable.
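
The non_block_start()/non_block_end() annotations come from an earlier
patch in this series. As a rough, purely illustrative userspace sketch
of the idea (the names are reused for illustration only; this is not
the kernel implementation), the annotation is essentially a per-task
nesting counter that a might_sleep()-style check can test:

  #include <assert.h>
  #include <stdio.h>

  /* Sketch only: stand-in for a counter in task_struct. */
  static __thread int non_block_count;

  static void non_block_start(void) { non_block_count++; }

  static void non_block_end(void)
  {
  	assert(non_block_count > 0);
  	non_block_count--;
  }

  /* Mock of might_sleep(): complain if we are in a non-blocking section. */
  static void might_sleep(void)
  {
  	if (non_block_count)
  		fprintf(stderr, "BUG: sleeping call in non-blocking section\n");
  }

  int main(void)
  {
  	non_block_start();
  	might_sleep();	/* triggers the warning */
  	non_block_end();

  	might_sleep();	/* fine again, blocking is allowed here */
  	return 0;
  }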

Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Michal Hocko <mhocko at suse.com>
Cc: David Rientjes <rientjes at google.com>
Cc: "Christian König" <christian.koenig at amd.com>
Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
Cc: "Jérôme Glisse" <jglisse at redhat.com>
Cc: linux-mm at kvack.org
Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
---
 mm/mmu_notifier.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index ccc22f21b735..a50ed7d1ecef 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -185,7 +185,13 @@ int __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
 	id = srcu_read_lock(&srcu);
 	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
 		if (mn->ops->invalidate_range_start) {
-			int _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable);
+			int _ret;
+
+			if (!blockable)
+				non_block_start();
+			_ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable);
+			if (!blockable)
+				non_block_end();
 			if (_ret) {
 				pr_info("%pS callback failed with %d in %sblockable context.\n",
 						mn->ops->invalidate_range_start, _ret,
-- 
2.20.0.rc1
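
For reference, the pr_info in the hunk above fires when a callback fails
in a non-blockable context; a callback that honours the blockable flag is
expected to bail out with -EAGAIN rather than block. A hedged sketch of
such a callback follows (struct example_notifier, its mutex and the
function name are made up for illustration; only the callback signature
matches the code above):

  #include <linux/kernel.h>
  #include <linux/mmu_notifier.h>
  #include <linux/mutex.h>

  /* Hypothetical driver-private wrapper around the notifier. */
  struct example_notifier {
  	struct mmu_notifier mn;
  	struct mutex lock;
  };

  static int example_invalidate_range_start(struct mmu_notifier *mn,
  					  struct mm_struct *mm,
  					  unsigned long start,
  					  unsigned long end,
  					  bool blockable)
  {
  	struct example_notifier *en =
  		container_of(mn, struct example_notifier, mn);

  	if (blockable) {
  		mutex_lock(&en->lock);		/* sleeping is allowed here */
  	} else if (!mutex_trylock(&en->lock)) {
  		/* would have to block: tell the caller to retry instead */
  		return -EAGAIN;
  	}

  	/* ... invalidate mappings covering [start, end) ... */

  	mutex_unlock(&en->lock);
  	return 0;
  }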


