[PATCH 12/17] ttm: add objcg pointer to bo and tt
David Airlie
airlied at redhat.com
Tue Jul 1 08:06:42 UTC 2025
On Tue, Jul 1, 2025 at 5:22 PM Christian König <christian.koenig at amd.com> wrote:
>
> On 30.06.25 23:33, David Airlie wrote:
> > On Mon, Jun 30, 2025 at 8:24 PM Christian König
> > <christian.koenig at amd.com> wrote:
> >>
> >> On 30.06.25 06:49, Dave Airlie wrote:
> >>> From: Dave Airlie <airlied at redhat.com>
> >>>
> >>> This just adds the obj cgroup pointer to the bo and tt structs,
> >>> and sets it between them.
> >>>
> >>> Signed-off-by: Dave Airlie <airlied at redhat.com>
> >>> ---
> >>> drivers/gpu/drm/ttm/ttm_tt.c | 1 +
> >>> include/drm/ttm/ttm_bo.h | 6 ++++++
> >>> include/drm/ttm/ttm_tt.h | 2 ++
> >>> 3 files changed, 9 insertions(+)
> >>>
> >>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
> >>> index 8f38de3b2f1c..0c54d5e2bfdd 100644
> >>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
> >>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
> >>> @@ -162,6 +162,7 @@ static void ttm_tt_init_fields(struct ttm_tt *ttm,
> >>> ttm->caching = caching;
> >>> ttm->restore = NULL;
> >>> ttm->backup = NULL;
> >>> + ttm->objcg = bo->objcg;
> >>> }
> >>>
> >>> int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
> >>> diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
> >>> index 099dc2604baa..f26ec0a0273f 100644
> >>> --- a/include/drm/ttm/ttm_bo.h
> >>> +++ b/include/drm/ttm/ttm_bo.h
> >>> @@ -135,6 +135,12 @@ struct ttm_buffer_object {
> >>> * reservation lock.
> >>> */
> >>> struct sg_table *sg;
> >>> +
> >>> + /**
> >>> + * @objcg: object cgroup to charge this to if it ends up using system memory.
> >>> + * NULL means don't charge.
> >>> + */
> >>> + struct obj_cgroup *objcg;
> >>> };
> >>>
> >>> #define TTM_BO_MAP_IOMEM_MASK 0x80
> >>> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
> >>> index 15d4019685f6..c13fea4c2915 100644
> >>> --- a/include/drm/ttm/ttm_tt.h
> >>> +++ b/include/drm/ttm/ttm_tt.h
> >>> @@ -126,6 +126,8 @@ struct ttm_tt {
> >>> enum ttm_caching caching;
> >>> /** @restore: Partial restoration from backup state. TTM private */
> >>> struct ttm_pool_tt_restore *restore;
> >>> + /** @objcg: Object cgroup for this TT allocation */
> >>> + struct obj_cgroup *objcg;
> >>> };
> >>
> >> We should probably keep that out of the pool and account the memory to the BO instead.
> >>
> >
> > I tried that like 2-3 patch posting iterations ago, you suggested it
> > then, it didn't work. It has to be done at the pool level, I think it
> > was due to swap handling.
>
> When you do it at the pool level the swap/shrink handling is broken as well, just not for amdgpu.
>
> See xe_bo_shrink() and drivers/gpu/drm/xe/xe_shrinker.c on how XE does it.
I've read all of that, but I don't think it needs changing yet. I
probably do need to do a bit more work on the ttm backup/restore paths
to account things, but again we run into the question of what happens
if your cgroup runs out of space on a restore path, similar to
eviction.
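To make that concrete, here is a rough sketch of the kind of charge
point I mean on the restore path, assuming the generic obj_cgroup
helpers from <linux/memcontrol.h>; the ttm_tt_restore_charge() name
and its placement are made up for illustration, not part of this
series:

/* Hypothetical helper, not part of this series: charge pages that are
 * being brought back from backup against the TT's objcg.
 * tt->objcg == NULL means "don't charge".
 */
static int ttm_tt_restore_charge(struct ttm_tt *tt, unsigned long nr_pages)
{
	if (!tt->objcg)
		return 0;

	/*
	 * This can fail when the cgroup is already at its limit, which
	 * is exactly the awkward restore/eviction case above - there is
	 * no obvious way to make forward progress at that point.
	 */
	return obj_cgroup_charge(tt->objcg, GFP_KERNEL,
				 nr_pages << PAGE_SHIFT);
}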
Blocking the problems we can solve now on the problems we've no idea
how to solve means nobody gets experience with solving anything.
> So the best we can do is to do it at the resource level because that is common for everybody.
>
> This doesn't take swapping on amdgpu into account, but that should not be that relevant since we wanted to remove that and switch to the XE approach anyway.
I don't understand. We cannot do it at the resource level; I sent
patches that tried, and they fundamentally don't work properly, so
that isn't going to fly. We can solve it at the pool level, so we
should. If we somehow rearchitect things later we could move it to the
resource level, but I feel we'd have to make swap handling operate at
the resource level instead of the tt level to have any chance.
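For reference, the reason the pool level lines up nicely is that the
pool alloc/free paths already see the ttm_tt, and with this patch the
tt carries the bo's objcg. Something along these lines (the helper
name is invented; only obj_cgroup_charge()/obj_cgroup_uncharge() and
alloc_page() are real APIs):

/* Invented helper, just to illustrate pairing the charge with the page
 * allocation at the pool level; not real TTM code.
 */
static struct page *ttm_pool_alloc_page_charged(struct ttm_tt *tt, gfp_t gfp)
{
	struct page *p;

	/* Charge first so we never hand out an unaccounted page. */
	if (tt->objcg && obj_cgroup_charge(tt->objcg, gfp, PAGE_SIZE))
		return NULL;	/* cgroup limit hit */

	p = alloc_page(gfp);
	if (!p && tt->objcg)
		obj_cgroup_uncharge(tt->objcg, PAGE_SIZE);

	return p;
}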
Swapping via the backup/restore paths should be accounted properly,
since moving pages out to swap is one way cgroups can reduce memory
usage; if we can't account for that and swapped pages aren't removed
from the page count, then it isn't going to work properly.
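i.e. on the backup side something like the below, with the restore
side re-doing the charge as in the earlier sketch (again a made-up
helper, just to show the accounting I have in mind):

/* Made-up helper to illustrate the swap-out accounting; not real TTM code. */
static void ttm_tt_uncharge_backed_up(struct ttm_tt *tt, unsigned long nr_pages)
{
	/* Pages went out to swap, so drop them from the cgroup's page
	 * count - otherwise swapping never actually relieves the limit.
	 */
	if (tt->objcg)
		obj_cgroup_uncharge(tt->objcg, nr_pages << PAGE_SHIFT);
}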
Dave.