[PATCH 18/18] ttm: add support for a module option to disable memcg pool

Christian König christian.koenig at amd.com
Mon Jul 14 11:49:10 UTC 2025


On 14.07.25 07:18, Dave Airlie wrote:
> From: Dave Airlie <airlied at redhat.com>
> 
> There is an existing workload that cgroup support might regress:
> the systems are set up to allocate 1GB of uncached pages at system
> startup to prime the pool, and any further users then take pages
> from the pool. The current cgroup code might handle that, but it
> may also regress, so add an option to ttm to avoid using memcg
> for the pool pages.
> 
> Signed-off-by: Dave Airlie <airlied at redhat.com>
> ---
>  drivers/gpu/drm/ttm/ttm_pool.c | 19 +++++++++++++++++--
>  1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 1e6da2cc1f06..9d84d9991176 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -118,6 +118,21 @@ static unsigned long page_pool_size;
>  MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool per NUMA node");
>  module_param(page_pool_size, ulong, 0644);

I think we need that for the whole memcg integration, and it should be off by default for now.
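
A minimal sketch of what that flip might look like (keeping the rest of the parameter definition as-is):

    static bool pool_cgroup = false;
    MODULE_PARM_DESC(pool_cgroup, "Manage pooled memory using cgroups (default: false)");
    module_param(pool_cgroup, bool, 0444);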

Regards,
Christian.

>  
> +/*
> + * Don't use the memcg-aware LRU for pooled pages.
> + *
> + * There are use cases where, for example, one application in a cgroup will preallocate 1GB
> + * of uncached pages and immediately release them into the pool for other consumers
> + * to use. This use case could be handled with a proper cgroup hierarchy, but to allow
> + * it to continue to operate as-is, add a module option.
> + *
> + * The pages are still stored in the list_lru; the memcg is just not used when
> + * adding/removing them.
> + */
> +static bool pool_cgroup = true;
> +MODULE_PARM_DESC(pool_cgroup, "Manage pooled memory using cgroups (default: true)");
> +module_param(pool_cgroup, bool, 0444);
> +
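
With perm 0444 the knob is read-only at runtime, so it has to be set at
load time. Assuming ttm is built as a module, usage would look like:

    modprobe ttm pool_cgroup=0                    # disable memcg use for the pool
    ttm.pool_cgroup=0                             # kernel command line, if ttm is built in
    cat /sys/module/ttm/parameters/pool_cgroup    # read back the current value
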
>  static unsigned long pool_node_limit[MAX_NUMNODES];
>  static atomic_long_t allocated_pages[MAX_NUMNODES];
>  
> @@ -305,7 +320,7 @@ static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
>  
>  	INIT_LIST_HEAD(&p->lru);
>  	rcu_read_lock();
> -	list_lru_add(&pt->pages, &p->lru, nid, page_memcg_check(p));
> +	list_lru_add(&pt->pages, &p->lru, nid, pool_cgroup ? page_memcg_check(p) : NULL);
>  	rcu_read_unlock();
>  
>  	atomic_long_add(num_pages, &allocated_pages[nid]);
> @@ -354,7 +369,7 @@ static struct page *ttm_pool_type_take(struct ttm_pool_type *pt, int nid,
>  	struct page *page_out = NULL;
>  	int ret;
>  	struct mem_cgroup *orig_memcg = orig_objcg ? get_mem_cgroup_from_objcg(orig_objcg) : NULL;
> -	struct mem_cgroup *memcg = orig_memcg;
> +	struct mem_cgroup *memcg = pool_cgroup ? orig_memcg : NULL;
>  
>  	/*
> 	 * Attempt to get a page from the current memcg, but if it hasn't got any in its level,
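
For context, the reason the NULL fallback works is that list_lru keys
entries by (node, memcg): with a NULL memcg the entry goes on the
per-node root list rather than a per-memcg child list, so pooled pages
stay visible to every consumer on that node. A simplified sketch of the
pattern (not the actual TTM code; the demo_* names are made up):

    #include <linux/list_lru.h>
    #include <linux/memcontrol.h>
    #include <linux/mm.h>

    /* Hypothetical helper mirroring the gated add above: with use_cgroup
     * false the memcg argument is NULL, so list_lru_add() files the page
     * on the per-node root list, reachable by any later take on this node. */
    static void demo_pool_give(struct list_lru *lru, struct page *p,
                               int nid, bool use_cgroup)
    {
            rcu_read_lock();
            list_lru_add(lru, &p->lru, nid,
                         use_cgroup ? page_memcg_check(p) : NULL);
            rcu_read_unlock();
    }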


