[PATCH 2/2] drm/ttm: Increase pool shrinker batch target

Tvrtko Ursulin tvrtko.ursulin at igalia.com
Mon Jun 2 15:29:29 UTC 2025


The shrinker core's default batch target of 128 pages (SHRINK_BATCH) is
quite low relative to how cheap TTM pool shrinking is and how the free
pages are spread across the per-order pools.

We can make the target a bit more aggressive by setting it to roughly the
average number of pages held across all pools, freeing more of the cached
pages every time the shrinker core invokes our callback.
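
For illustration (assuming the common MAX_PAGE_ORDER of 10, and therefore
NR_PAGE_ORDERS of 11), the new batch works out to:

  TTM_SHRINKER_BATCH = (1 << (10 / 2)) * 11 = 32 * 11 = 352 pages

or roughly 2.75x the default SHRINK_BATCH of 128.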

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at igalia.com>
Cc: Christian König <christian.koenig at amd.com>
Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_pool.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index a3247a82cadd..ca6bbf9d0996 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -1270,13 +1270,17 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m)
 }
 EXPORT_SYMBOL(ttm_pool_debugfs);
 
+/* Free roughly the average number of pages across all pools. */
+#define TTM_SHRINKER_BATCH ((1 << (MAX_PAGE_ORDER / 2)) * NR_PAGE_ORDERS)
+
 /* Test the shrinker functions and dump the result */
 static int ttm_pool_debugfs_shrink_show(struct seq_file *m, void *data)
 {
 	struct shrink_control sc = {
 		.gfp_mask = GFP_NOFS,
-		.nr_to_scan = 1,
+		.nr_to_scan = TTM_SHRINKER_BATCH,
 	};
+
 	fs_reclaim_acquire(GFP_KERNEL);
 	seq_printf(m, "%lu/%lu\n", ttm_pool_shrinker_count(mm_shrinker, &sc),
 		   ttm_pool_shrinker_scan(mm_shrinker, &sc));
@@ -1333,6 +1337,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 
 	mm_shrinker->count_objects = ttm_pool_shrinker_count;
 	mm_shrinker->scan_objects = ttm_pool_shrinker_scan;
+	mm_shrinker->batch = TTM_SHRINKER_BATCH;
 	mm_shrinker->seeks = 1;
 
 	shrinker_register(mm_shrinker);
-- 
2.48.0