Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
Signed-off-by: Lang Yu <Lang.Yu@amd.com>
---
 include/drm/ttm/ttm_placement.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/drm/ttm/ttm_placement.h b/include/drm/ttm/ttm_placement.h
index aa6ba4d0cf78..9f5cfc7c2d5a 100644
--- a/include/drm/ttm/ttm_placement.h
+++ b/include/drm/ttm/ttm_placement.h
@@ -47,8 +47,9 @@
  * top of the memory area, instead of the bottom.
  */

-#define TTM_PL_FLAG_CONTIGUOUS  (1 << 19)
-#define TTM_PL_FLAG_TOPDOWN     (1 << 22)
+#define TTM_PL_FLAG_CONTIGUOUS  (1 << 0)
+#define TTM_PL_FLAG_TOPDOWN     (1 << 1)
+#define TTM_PL_FLAG_TEMPORARY   (1 << 2)

 /**
  * struct ttm_place
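(Not part of the patch; just an illustrative sketch of how a driver-side placement might use the new flag, assuming the struct ttm_place layout of this series.)

/* Illustration only: a GTT placement marked as temporary, e.g. for a
 * bounce buffer that should not be treated as a long-term allocation
 * by the resource manager.
 */
static const struct ttm_place gtt_bounce_place = {
	.fpfn		= 0,
	.lpfn		= 0,
	.mem_type	= TTM_PL_TT,
	.flags		= TTM_PL_FLAG_TEMPORARY,
};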
To improve buffer migration performance, stop bookkeeping of temporary GTT allocations, including allocations for BOs evicted from VRAM and bounce buffers.
Signed-off-by: Lang Yu <Lang.Yu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 16 ++++++++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  4 +++-
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 8860545344c7..32fedd495c7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -111,14 +111,15 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 	struct amdgpu_gtt_node *node;
 	int r;

-	spin_lock(&mgr->lock);
-	if ((&tbo->mem == mem || tbo->mem.mem_type != TTM_PL_TT) &&
-	    atomic64_read(&mgr->available) < mem->num_pages) {
+	if (!(mem->placement & TTM_PL_FLAG_TEMPORARY)) {
+		spin_lock(&mgr->lock);
+		if (atomic64_read(&mgr->available) < mem->num_pages) {
+			spin_unlock(&mgr->lock);
+			return -ENOSPC;
+		}
+		atomic64_sub(mem->num_pages, &mgr->available);
 		spin_unlock(&mgr->lock);
-		return -ENOSPC;
 	}
-	atomic64_sub(mem->num_pages, &mgr->available);
-	spin_unlock(&mgr->lock);

 	if (!place->lpfn) {
 		mem->mm_node = NULL;
@@ -178,6 +179,9 @@ static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man,
 		kfree(node);
 	}

+	if (mem->placement & TTM_PL_FLAG_TEMPORARY)
+		return;
+
 	atomic64_add(mem->num_pages, &mgr->available);
 }

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c0aef327292a..129d39392859 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -152,9 +152,11 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
 			abo->placements[0].lpfn = 0;
 			abo->placement.busy_placement = &abo->placements[1];
 			abo->placement.num_busy_placement = 1;
+			abo->placements[1].flags |= TTM_PL_FLAG_TEMPORARY;
 		} else {
 			/* Move to GTT memory */
 			amdgpu_bo_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
+			abo->placements[0].flags |= TTM_PL_FLAG_TEMPORARY;
 		}
 		break;
 	case TTM_PL_TT:
@@ -538,7 +540,7 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
 		hop->fpfn = 0;
 		hop->lpfn = 0;
 		hop->mem_type = TTM_PL_TT;
-		hop->flags = 0;
+		hop->flags |= TTM_PL_FLAG_TEMPORARY;
 		return -EMULTIHOP;
 	}
Putting Marek on CC.
On 27.05.21 at 03:30, Lang Yu wrote:
To improve buffer migration performace, stop bookkeeping of temporary GTT allocation, including allocation for BO evicted from VRAM and bounce buffer.
Signed-off-by: Lang Yu Lang.Yu@amd.com
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 16 ++++++++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  4 +++-
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 8860545344c7..32fedd495c7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -111,14 +111,15 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 	struct amdgpu_gtt_node *node;
 	int r;

-	spin_lock(&mgr->lock);
-	if ((&tbo->mem == mem || tbo->mem.mem_type != TTM_PL_TT) &&
-	    atomic64_read(&mgr->available) < mem->num_pages) {
+	if (!(mem->placement & TTM_PL_FLAG_TEMPORARY)) {
+		spin_lock(&mgr->lock);
+		if (atomic64_read(&mgr->available) < mem->num_pages) {
+			spin_unlock(&mgr->lock);
+			return -ENOSPC;
+		}
+		atomic64_sub(mem->num_pages, &mgr->available);
After sleeping a night over that I think we need to talk about this part here once more.
While temporary GTT allocations can temporarily exceed the GTT limitation, we still need to account for them in case the eviction is interrupted for some reason.
In other words what can happen is that we want to move VRAM->GTT->SYSTEM, but GTT->SYSTEM never happens because it is interrupted in the wait (that's unfortunately rather likely).
To solve this I think we should do the following:
1. Change mgr->available into mgr->used (e.g. invert the value).
2. Always account all GTT BOs to the used space.
3. Bail out only when it is not a temporary allocation.
This way temporary allocations are accounted for, but we still allow memory evictions to happen under pressure.
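As a rough sketch (illustration only; a mgr->used counter is hypothetical here and would replace mgr->available), the check in amdgpu_gtt_mgr_new() could then look something like this:

	/* Sketch only: 'used' replaces 'available' and always accounts every GTT BO. */
	atomic64_add(mem->num_pages, &mgr->used);

	/* Only non-temporary allocations bail out when over the limit. */
	if (!(mem->placement & TTM_PL_FLAG_TEMPORARY) &&
	    atomic64_read(&mgr->used) > man->size) {
		atomic64_sub(mem->num_pages, &mgr->used);
		return -ENOSPC;
	}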
While at it you could also drop taking the spinlock to check the atomic, that is pretty much unnecessary.
Regards, Christian.
 		spin_unlock(&mgr->lock);
-		return -ENOSPC;
 	}
-	atomic64_sub(mem->num_pages, &mgr->available);
-	spin_unlock(&mgr->lock);
Inline.
-----Original Message-----
From: Koenig, Christian <Christian.Koenig@amd.com>
Sent: Thursday, May 27, 2021 7:51 PM
To: Yu, Lang <Lang.Yu@amd.com>; amd-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Cc: Huang, Ray <Ray.Huang@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>; Olsak, Marek <Marek.Olsak@amd.com>
Subject: Re: [PATCH 2/2] drm/amdgpu: stop bookkeeping of temporary GTT allocation
Puttin Marek on CC.
On 27.05.21 at 03:30, Lang Yu wrote:
To improve buffer migration performace, stop bookkeeping of temporary GTT allocation, including allocation for BO evicted from VRAM and bounce buffer.
Signed-off-by: Lang Yu Lang.Yu@amd.com
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 16 ++++++++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  4 +++-
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 8860545344c7..32fedd495c7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -111,14 +111,15 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 	struct amdgpu_gtt_node *node;
 	int r;

-	spin_lock(&mgr->lock);
-	if ((&tbo->mem == mem || tbo->mem.mem_type != TTM_PL_TT) &&
-	    atomic64_read(&mgr->available) < mem->num_pages) {
+	if (!(mem->placement & TTM_PL_FLAG_TEMPORARY)) {
+		spin_lock(&mgr->lock);
+		if (atomic64_read(&mgr->available) < mem->num_pages) {
+			spin_unlock(&mgr->lock);
+			return -ENOSPC;
+		}
+		atomic64_sub(mem->num_pages, &mgr->available);
After sleeping a night over that I think we need to talk about this part here once more.
While temporary GTT allocations can temporary exceed the GTT limitation we still need to account them in the case the eviction is interrupted for some reason.
In other words what can happen is that we want to move VRAM->GTT->SYSTEM, but GTT->SYSTEM never happens because it is interrupted in the wait (that's unfortunately rather likely).
To solve this I think we should do the following:
- Change mgr->available into mgr->used (e.g. invert the value).
- Always account all GTT BOs to the used space.
- Only when it is not a temporary allocation bail out.
This way temporary allocations are accounted for, but we still allow memory evictions to happen under pressure.
While at it you could also drop taking the spinlock to check the atomic, that is pretty much unnecessary.
Regards, Christian.
[Yu, Lang] Hi Christian,
Yes, it can actually happen that the BO was evicted from VRAM to the GTT domain but was not moved on to the SYSTEM domain. It then resides in the GTT domain waiting for the next validation, eviction or destruction.
It is reasonable that we count all GTT allocations.
1. I find that if a temporary GTT BO is not counted but is used for command submission, then we can use more GTT memory than the GTT limit for command submission. Is that your concern?
2. Or, if we don't count temporary GTT allocations, will that mess up the GTT manager?
In other words, if we don't count it while it resides in the GTT domain, what is the consequence? I would like to know your concern. Actually it is counted by ttm_pages_allocated.
If we use "used" instead of "available" in the GTT manager, the used size may exceed the manager size. We should also deal with the GTT manager debug interface.
Rework the logic like this with your idea:

	if ((atomic64_add_return(mem->num_pages, &mgr->used) > man->size) &&
	    !(mem->placement & TTM_PL_FLAG_TEMPORARY)) {
		atomic64_sub(mem->num_pages, &mgr->used);
		return -ENOSPC;
	}
Regards, Lang
On 28.05.21 at 11:47, Yu, Lang wrote:
Inline.
-----Original Message-----
From: Koenig, Christian <Christian.Koenig@amd.com>
Sent: Thursday, May 27, 2021 7:51 PM
To: Yu, Lang <Lang.Yu@amd.com>; amd-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org
Cc: Huang, Ray <Ray.Huang@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>; Olsak, Marek <Marek.Olsak@amd.com>
Subject: Re: [PATCH 2/2] drm/amdgpu: stop bookkeeping of temporary GTT allocation
Puttin Marek on CC.
On 27.05.21 at 03:30, Lang Yu wrote:
To improve buffer migration performace, stop bookkeeping of temporary GTT allocation, including allocation for BO evicted from VRAM and bounce buffer.
Signed-off-by: Lang Yu Lang.Yu@amd.com
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 16 ++++++++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  4 +++-
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 8860545344c7..32fedd495c7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -111,14 +111,15 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 	struct amdgpu_gtt_node *node;
 	int r;

-	spin_lock(&mgr->lock);
-	if ((&tbo->mem == mem || tbo->mem.mem_type != TTM_PL_TT) &&
-	    atomic64_read(&mgr->available) < mem->num_pages) {
+	if (!(mem->placement & TTM_PL_FLAG_TEMPORARY)) {
+		spin_lock(&mgr->lock);
+		if (atomic64_read(&mgr->available) < mem->num_pages) {
+			spin_unlock(&mgr->lock);
+			return -ENOSPC;
+		}
+		atomic64_sub(mem->num_pages, &mgr->available);
After sleeping a night over that I think we need to talk about this part here once more.
While temporary GTT allocations can temporary exceed the GTT limitation we still need to account them in the case the eviction is interrupted for some reason.
In other words what can happen is that we want to move VRAM->GTT->SYSTEM, but GTT->SYSTEM never happens because it is interrupted in the wait (that's unfortunately rather likely).
To solve this I think we should do the following:
- Change mgr->available into mgr->used (e.g. invert the value).
- Always account all GTT BOs to the used space.
- Only when it is not a temporary allocation bail out.
This way temporary allocations are accounted for, but we still allow memory evictions to happen under pressure.
While at it you could also drop taking the spinlock to check the atomic, that is pretty much unnecessary.
Regards, Christian.
[Yu, Lang] Hi Christian,
Yes, it can actually happen that the BO was evicted from VRAM to GTT domain, but was not moved forward to SYSTEM domain. It resides in GTT domain waiting for next time validation or eviction or destruction.
It is reasonable that we count all GTT allocation. 1, I find if the temporary GTT BO was not counted but used for command submission, then we can use more GTT memory than GTT limit for command submission. Is that your concern?
Yes, exactly that.
2, Or if we don't count temporary GTT allocation, that will mess up gtt manager.
In other words, if we don't count it when it resides in GTT domain, what is the consequence?
The GTT size is the limit how much system memory userspace can intentionally allocate.
This works around stupid applications which tend to allocate as much memory as possible (without actually needing that much) and then triggering the OOM killer.
Would like to know your concern. Actually it is counted by ttm_pages_allocated.
If we use "used" instead of "available" in gtt manager, the used size may exceed man size.
Yes, that is intentional.
We should also deal with gtt mgr debug interface.
Rework the logic like this with your idea:
	if ((atomic64_add_return(mem->num_pages, &mgr->used) > man->size) &&
	    !(mem->placement & TTM_PL_FLAG_TEMPORARY)) {
		atomic64_sub(mem->num_pages, &mgr->used);
		return -ENOSPC;
	}
Yeah, something like that.
Regards, Christian.
Hi,
On 5/27/21 3:30 AM, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Thomas
On 27.05.21 at 03:30, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
Signed-off-by: Lang Yu Lang.Yu@amd.com
Reviewed-by: Christian König christian.koenig@amd.com for this patch here.
 include/drm/ttm/ttm_placement.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/drm/ttm/ttm_placement.h b/include/drm/ttm/ttm_placement.h
index aa6ba4d0cf78..9f5cfc7c2d5a 100644
--- a/include/drm/ttm/ttm_placement.h
+++ b/include/drm/ttm/ttm_placement.h
@@ -47,8 +47,9 @@
  * top of the memory area, instead of the bottom.
  */

-#define TTM_PL_FLAG_CONTIGUOUS  (1 << 19)
-#define TTM_PL_FLAG_TOPDOWN     (1 << 22)
+#define TTM_PL_FLAG_CONTIGUOUS  (1 << 0)
+#define TTM_PL_FLAG_TOPDOWN     (1 << 1)
+#define TTM_PL_FLAG_TEMPORARY   (1 << 2)

 /**
  * struct ttm_place
Hi,
On 5/27/21 3:30 AM, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Hi Thomas,
Thanks for your comments and advice. GTT means Graphics Translation Table here, which seems to be a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drivers. I have made other patches for this. Please help review.
Regards, Lang
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote:
Hi, On 5/27/21 3:30 AM, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Hi Thomas,
Thanks for your comments and advice, GTT means Graphics Translation Table here, seems a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drives. I have made other patches for this. Please help review.
Regards, Lang
My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT (btw, should it be accounted in this situation?).
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
TTM_PL_FLAG_FORCE_MOVE
and
AMDGPU_PL_FLAG_TEMPORARY
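Purely as an illustration of that split (names and bit positions are hypothetical, not an existing API), the placement flags could then be partitioned roughly like this:

/* Hypothetical sketch: core TTM keeps its own flags in the low bits and
 * reserves the high bits for driver-private placement flags.
 */
#define TTM_PL_FLAG_CONTIGUOUS		(1 << 0)
#define TTM_PL_FLAG_TOPDOWN		(1 << 1)
#define TTM_PL_FLAG_FORCE_MOVE		(1 << 2)	/* core TTM: force a move on validate */

#define TTM_PL_FLAG_PRIVATE_SHIFT	16		/* bits 16+ reserved for drivers */
#define AMDGPU_PL_FLAG_TEMPORARY	(1 << TTM_PL_FLAG_PRIVATE_SHIFT)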
Thanks,
/Thomas
On 31.05.21 at 11:52, Thomas Hellström (Intel) wrote:
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote:
Hi, On 5/27/21 3:30 AM, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Hi Thomas,
Thanks for your comments and advice, GTT means Graphics Translation Table here, seems a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drives. I have made other patches for this. Please help review.
Regards, Lang
My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT, (btw should it be accounted in this situation?)
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
Well, one flag makes sense for the use case at hand: drivers want to signal to TTM that an allocation is only temporary and not considered valid.
That we then use this flag to implement temporary GTT allocations to avoid problems during eviction is just extending that use case.
Christian.
On 5/31/21 12:32 PM, Christian König wrote:
On 31.05.21 at 11:52, Thomas Hellström (Intel) wrote:
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote:
Hi, On 5/27/21 3:30 AM, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Hi Thomas,
Thanks for your comments and advice, GTT means Graphics Translation Table here, seems a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drives. I have made other patches for this. Please help review.
Regards, Lang
My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT, (btw should it be accounted in this situation?)
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
Well one flag makes sense for the use case at hand that drivers want to signal to TTM that an allocation is only temporary and not considered valid.
That we then use this flag to implement temporary GTT allocations to avoid problems during eviction is just extending that use case.
OK, but it looked like there were actually two use-cases. One for possibly long-term VRAM evictions to GTT, the other for the temporary use-case where the hop resource allocations sometimes failed. Or did I misunderstand the code?
/Thomas
On 31.05.21 at 12:46, Thomas Hellström (Intel) wrote:
On 5/31/21 12:32 PM, Christian König wrote:
On 31.05.21 at 11:52, Thomas Hellström (Intel) wrote:
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote:
Hi, On 5/27/21 3:30 AM, Lang Yu wrote:
Make TTM_PL_FLAG_* start from zero and add TTM_PL_FLAG_TEMPORARY flag for temporary GTT allocation use.
GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Hi Thomas,
Thanks for your comments and advice, GTT means Graphics Translation Table here, seems a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drives. I have made other patches for this. Please help review.
Regards, Lang
My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT, (btw should it be accounted in this situation?)
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
Well one flag makes sense for the use case at hand that drivers want to signal to TTM that an allocation is only temporary and not considered valid.
That we then use this flag to implement temporary GTT allocations to avoid problems during eviction is just extending that use case.
OK, but it looked like there were actually two use-cases. One for possibly long-term VRAM evictions to GTT, the other for the temporary use-case where the hop resource allocations sometimes failed. Or did I misunderstand the code?
Ok sounds like we need more documentation here. That's really one use case.
The key point is that we need temporary allocations during multihop, which should be handled differently from normal allocations.
When Lang is OK with that, I think I will pick up his patches and document them a bit.
Christian.
On 5/31/21 12:56 PM, Christian König wrote:
On 31.05.21 at 12:46, Thomas Hellström (Intel) wrote:
On 5/31/21 12:32 PM, Christian König wrote:
On 31.05.21 at 11:52, Thomas Hellström (Intel) wrote:
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote:
Hi, On 5/27/21 3:30 AM, Lang Yu wrote: > Make TTM_PL_FLAG_* start from zero and add > TTM_PL_FLAG_TEMPORARY flag for temporary > GTT allocation use. GTT is a driver private acronym, right? And it doesn't look like TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we instead set aside a mask in the PL flag for driver-private use?
Hi Thomas,
Thanks for your comments and advice, GTT means Graphics Translation Table here, seems a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drives. I have made other patches for this. Please help review.
Regards, Lang
My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT, (btw should it be accounted in this situation?)
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
Well one flag makes sense for the use case at hand that drivers want to signal to TTM that an allocation is only temporary and not considered valid.
That we then use this flag to implement temporary GTT allocations to avoid problems during eviction is just extending that use case.
OK, but it looked like there were actually two use-cases. One for possibly long-term VRAM evictions to GTT, the other for the temporary use-case where the hop resource allocations sometimes failed. Or did I misunderstand the code?
Ok sounds like we need more documentation here. That's really one use case.
Key point is we need temporary allocation during multihop which should be handled differently to normal allocations.
Yes, that part is clear from the patches. The part that I can't fit into that pattern is why the evict flags, when evicting from visible VRAM to GTT or ordinary VRAM, are marked with TTM_PL_FLAG_TEMPORARY. Wouldn't those BOs remain evicted in that placement until re-validated to visible VRAM at an unknown future time?
Patch 3/3:
	amdgpu_bo_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
	abo->placements[0].flags |= TTM_PL_FLAG_TEMPORARY;
Christian.
/Thomas
On 31.05.21 at 13:19, Thomas Hellström (Intel) wrote:
On 5/31/21 12:56 PM, Christian König wrote:
On 31.05.21 at 12:46, Thomas Hellström (Intel) wrote:
On 5/31/21 12:32 PM, Christian König wrote:
On 31.05.21 at 11:52, Thomas Hellström (Intel) wrote:
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote:
> Hi, > On 5/27/21 3:30 AM, Lang Yu wrote: >> Make TTM_PL_FLAG_* start from zero and add >> TTM_PL_FLAG_TEMPORARY flag for temporary >> GTT allocation use. > GTT is a driver private acronym, right? And it doesn't look like > TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we > instead set > aside a mask in the PL flag for driver-private use? Hi Thomas,
Thanks for your comments and advice, GTT means Graphics Translation Table here, seems a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by other drives. I have made other patches for this. Please help review.
Regards, Lang
My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT, (btw should it be accounted in this situation?)
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
Well one flag makes sense for the use case at hand that drivers want to signal to TTM that an allocation is only temporary and not considered valid.
That we then use this flag to implement temporary GTT allocations to avoid problems during eviction is just extending that use case.
OK, but it looked like there were actually two use-cases. One for possibly long-term VRAM evictions to GTT, the other for the temporary use-case where the hop resource allocations sometimes failed. Or did I misunderstand the code?
Ok sounds like we need more documentation here. That's really one use case.
Key point is we need temporary allocation during multihop which should be handled differently to normal allocations.
Yes, that part is clear from the patches. The part that I can't fit into that pattern is why the evict flags when evicting from visible VRAM to GTT or ordinary VRAM is marked with TTM_PL_FLAG_TEMPORARY. Wouldn't those remain evicted in that placement until re-validated to visible VRAM at an unknown future time?
Not necessarily.
The situation we ran into was the following:
1. OOM on VRAM, we try to evict.
2. GTT space is used up as well, ok evict directly to SYSTEM.
3. For VRAM->SYSTEM eviction we use a temporary bounce buffer.
4. Waiting for the bounce buffer to become idle is interrupted by a signal so BO is still backed by bounce buffer.
5. Next CS, BO is validated with VRAM|GTT. TTM sees that it is in GTT and doesn't move the buffer.
6. CS goes into nirvana because bounce buffers are not meant to be used for CS (we can ignore alignment and accounting for them).
Patch 3/3:
	amdgpu_bo_placement_from_domain(abo, AMDGPU_GEM_DOMAIN_GTT);
	abo->placements[0].flags |= TTM_PL_FLAG_TEMPORARY;
Thanks for pointing that out, this is indeed still the wrong place. Going to fix that.
Christian.
On 5/31/21 2:02 PM, Christian König wrote:
On 31.05.21 at 13:19, Thomas Hellström (Intel) wrote:
On 5/31/21 12:56 PM, Christian König wrote:
On 31.05.21 at 12:46, Thomas Hellström (Intel) wrote:
On 5/31/21 12:32 PM, Christian König wrote:
On 31.05.21 at 11:52, Thomas Hellström (Intel) wrote:
Hi, Lang,
On 5/31/21 10:19 AM, Yu, Lang wrote: > [Public] > >> Hi, >> On 5/27/21 3:30 AM, Lang Yu wrote: >>> Make TTM_PL_FLAG_* start from zero and add >>> TTM_PL_FLAG_TEMPORARY flag for temporary >>> GTT allocation use. >> GTT is a driver private acronym, right? And it doesn't look like >> TTM_PL_FLAG_TEMPORARY will be used in core TTM, so should we >> instead set >> aside a mask in the PL flag for driver-private use? > Hi Thomas, > > Thanks for your comments and advice, GTT means Graphics > Translation Table here, seems > a general acronym. TTM_PL_FLAG_TEMPORARY may also be used by > other drives. > I have made other patches for this. Please help review. > > Regards, > Lang > My point was really that the flag naming and documentation should reflect what core ttm is doing with that flag. If there is no specific core TTM usage, IMO we should move it to a driver specific flag to avoid future confusion. In particular a writer of a generic TTM resource manager should be able to know without looking at an old commit message what the placement flag is intended for.
So here we add a flag where core TTM forces a bo move on validate and that's it. And that appears to be how it's used when an amdgpu bo is evicted to GTT, (btw should it be accounted in this situation?)
The other use is to force the amdgpu driver to temporarily accept it into GTT when there is a lack of space, and IMHO that's a driver specific use and we should allocate a mask for driver specific flags for that.
So shouldn't this be two flags, really?
Well one flag makes sense for the use case at hand that drivers want to signal to TTM that an allocation is only temporary and not considered valid.
That we then use this flag to implement temporary GTT allocations to avoid problems during eviction is just extending that use case.
OK, but it looked like there were actually two use-cases. One for possibly long-term VRAM evictions to GTT, the other for the temporary use-case where the hop resource allocations sometimes failed. Or did I misunderstand the code?
Ok sounds like we need more documentation here. That's really one use case.
Key point is we need temporary allocation during multihop which should be handled differently to normal allocations.
Yes, that part is clear from the patches. The part that I can't fit into that pattern is why the evict flags when evicting from visible VRAM to GTT or ordinary VRAM is marked with TTM_PL_FLAG_TEMPORARY. Wouldn't those remain evicted in that placement until re-validated to visible VRAM at an unknown future time?
Not necessarily.
The situation we ran into was the following:
1. OOM on VRAM, we try to evict.
2. GTT space is used up as well, ok evict directly to SYSTEM.
3. For VRAM->SYSTEM eviction we use a temporary bounce buffer.
4. Waiting for the bounce buffer to become idle is interrupted by a signal so the BO is still backed by the bounce buffer.
5. Next CS, BO is validated with VRAM|GTT. TTM sees that it is in GTT and doesn't move the buffer.
6. CS goes into nirvana because bounce buffers are not meant to be used for CS (we can ignore alignment and accounting for them).
Yes, makes sense to me.
/Thomas