... instead of open coding it. Completely equivalent code, just a notch more meaningful when reading.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
---
 mm/page_alloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2db95780e003..277774d170cb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 		*alloc_flags |= ALLOC_CPUSET;
 	}

-	fs_reclaim_acquire(gfp_mask);
-	fs_reclaim_release(gfp_mask);
-
-	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+	might_alloc(gfp_mask);

 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;
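[Editor's note: for reference, might_alloc() lives in include/linux/sched/mm.h; in kernels of this vintage its body is exactly the sequence deleted above (gfpflags_allow_blocking() just tests __GFP_DIRECT_RECLAIM), which is why the replacement is completely equivalent:

static inline void might_alloc(gfp_t gfp_mask)
{
	/* lockdep-annotate a (potential) allocation site */
	fs_reclaim_acquire(gfp_mask);
	fs_reclaim_release(gfp_mask);

	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
}
]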
It only does a might_sleep_if(GFP_RECLAIM) check, which is already covered by the might_alloc() in slab_pre_alloc_hook(). And all callers of cache_alloc_debugcheck_before() call that beforehand already.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: linux-mm@kvack.org
---
 mm/slab.c | 10 ----------
 1 file changed, 10 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index b04e40078bdf..75779ac5f5ba 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
 	return ac->entry[--ac->avail];
 }

-static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
-						 gfp_t flags)
-{
-	might_sleep_if(gfpflags_allow_blocking(flags));
-}
-
 #if DEBUG
 static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
 					  gfp_t flags, void *objp, unsigned long caller)
@@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 	if (unlikely(ptr))
 		goto out_hooks;

-	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);

 	if (nodeid == NUMA_NO_NODE)
@@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	if (unlikely(objp))
 		goto out;

-	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
 	objp = __do_cache_alloc(cachep, flags);
 	local_irq_restore(save_flags);
@@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	if (!s)
 		return 0;

-	cache_alloc_debugcheck_before(s, flags);
-
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
 		void *objp = kfence_alloc(s, s->object_size, flags) ?:
 			     __do_cache_alloc(s, flags);
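[Editor's note: for context, an abridged sketch of the hook that already covers this check, from mm/slab.h of roughly this vintage (trailing memcg logic elided); every slab_alloc()/slab_alloc_node()/kmem_cache_alloc_bulk() path runs it before reaching the code above:

static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
						     struct list_lru *lru,
						     struct obj_cgroup **objcgp,
						     size_t size, gfp_t flags)
{
	flags &= gfp_allowed_mask;

	/* subsumes the might_sleep_if() that cache_alloc_debugcheck_before() did */
	might_alloc(flags);

	if (should_failslab(s, flags))
		return NULL;

	/* ... memcg object charging follows ... */
}
]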
On 05.06.22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: linux-mm@kvack.org
LGTM
Reviewed-by: David Hildenbrand <david@redhat.com>
On Sun, 5 Jun 2022, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: linux-mm@kvack.org
Acked-by: David Rientjes <rientjes@google.com>
On Sun, Jun 05, 2022 at 05:25:38PM +0200, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Nice cleanup.
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Thanks.
On 6/5/22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: linux-mm@kvack.org
Thanks, added to slab/for-5.20/cleanup as it's slab-specific and independent from 1/3 and 3/3.
>  mm/slab.c | 10 ----------
>  1 file changed, 10 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index b04e40078bdf..75779ac5f5ba 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
>  	return ac->entry[--ac->avail];
>  }
>
> -static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
> -						 gfp_t flags)
> -{
> -	might_sleep_if(gfpflags_allow_blocking(flags));
> -}
> -
>  #if DEBUG
>  static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
>  					  gfp_t flags, void *objp, unsigned long caller)
> @@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
>  	if (unlikely(ptr))
>  		goto out_hooks;
>
> -	cache_alloc_debugcheck_before(cachep, flags);
>  	local_irq_save(save_flags);
>
>  	if (nodeid == NUMA_NO_NODE)
> @@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
>  	if (unlikely(objp))
>  		goto out;
>
> -	cache_alloc_debugcheck_before(cachep, flags);
>  	local_irq_save(save_flags);
>  	objp = __do_cache_alloc(cachep, flags);
>  	local_irq_restore(save_flags);
> @@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  	if (!s)
>  		return 0;
>
> -	cache_alloc_debugcheck_before(s, flags);
> -
>  	local_irq_disable();
>  	for (i = 0; i < size; i++) {
>  		void *objp = kfence_alloc(s, s->object_size, flags) ?:
>  			     __do_cache_alloc(s, flags);
Mempools are generally used for GFP_NOIO, so this won't benefit all that much because might_alloc() currently only checks GFP_NOFS. But it does validate against mmu notifier pte zapping, so it might catch some drivers doing really silly things, plus it's a bit more meaningful in what we're checking for here.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
---
 mm/mempool.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mempool.c b/mm/mempool.c
index b933d0fc21b8..96488b13a1ef 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -379,7 +379,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 	gfp_t gfp_temp;

 	VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
-	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+	might_alloc(gfp_mask);

 	gfp_mask |= __GFP_NOMEMALLOC;	/* don't allocate emergency reserves */
 	gfp_mask |= __GFP_NORETRY;	/* don't loop in __alloc_pages */
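[Editor's note: a minimal illustration of the GFP_NOIO point (hypothetical caller; io_ctx and ctx_pool are invented names). GFP_NOIO still includes __GFP_DIRECT_RECLAIM, so might_alloc() keeps the might_sleep() and mmu-notifier annotations; only the fs_reclaim lockdep map is skipped because __GFP_FS is clear. Note a blocking mempool_alloc() never returns NULL, it waits for an element to be freed back to the pool:

	/* typical I/O-path allocation; ctx_pool is a hypothetical mempool */
	struct io_ctx *ctx = mempool_alloc(ctx_pool, GFP_NOIO);
]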
On 6/5/22 17:25, Daniel Vetter wrote:
> Mempools are generally used for GFP_NOIO, so this won't benefit all
> that much because might_alloc() currently only checks GFP_NOFS. But it
> does validate against mmu notifier pte zapping, so it might catch some
> drivers doing really silly things, plus it's a bit more meaningful in
> what we're checking for here.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>  mm/mempool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mempool.c b/mm/mempool.c
> index b933d0fc21b8..96488b13a1ef 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -379,7 +379,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
>  	gfp_t gfp_temp;
>
>  	VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
> -	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> +	might_alloc(gfp_mask);
>
>  	gfp_mask |= __GFP_NOMEMALLOC;	/* don't allocate emergency reserves */
>  	gfp_mask |= __GFP_NORETRY;	/* don't loop in __alloc_pages */
On 05.06.22 17:25, Daniel Vetter wrote:
> ... instead of open coding it. Completely equivalent code, just a
> notch more meaningful when reading.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
>  mm/page_alloc.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2db95780e003..277774d170cb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  		*alloc_flags |= ALLOC_CPUSET;
>  	}
>
> -	fs_reclaim_acquire(gfp_mask);
> -	fs_reclaim_release(gfp_mask);
> -
> -	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> +	might_alloc(gfp_mask);
>
>  	if (should_fail_alloc_page(gfp_mask, order))
>  		return false;
Reviewed-by: David Hildenbrand <david@redhat.com>
On 6/5/22 17:25, Daniel Vetter wrote:
> ... instead of open coding it. Completely equivalent code, just a
> notch more meaningful when reading.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>  mm/page_alloc.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2db95780e003..277774d170cb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  		*alloc_flags |= ALLOC_CPUSET;
>  	}
>
> -	fs_reclaim_acquire(gfp_mask);
> -	fs_reclaim_release(gfp_mask);
> -
> -	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> +	might_alloc(gfp_mask);
>
>  	if (should_fail_alloc_page(gfp_mask, order))
>  		return false;