[PATCH] Revert "drm/amd/amdgpu: set gtt size according to system memory size only"
Andrey Grodzovsky
Andrey.Grodzovsky at amd.com
Sat Dec 30 20:02:04 UTC 2017
On 12/29/2017 03:19 PM, Koenig, Christian wrote:
> The difference is that the OOM killer doesn't know of the pages an
> application allocates through the driver.
>
> This results in a bad decision which process to kill.
>
> I had patches to fix this a long time ago on the list, but never found
> time to clean them up and push them upstream.
>
> Andrey is now working on this, but I don't know the status offhand.
>
> Christian.
I don't have any updates since the last status I sent on this (I'm currently
on vacation), so I will just reiterate that status here -
"Worked a bit more on this. I did find a function to properly query free
swap space, but this solution does not work when evicting to swap a BO from
the LRU list that is larger than available RAM (as you predicted), and it
causes an OOM in the swapper work thread. As you can see from the attachment,
shmem_read_mapping_page will use the default allocation policy for the swap
pages, while I think we should have used __GFP_RETRY_MAYFAIL here; the way to
do that is to use shmem_read_mapping_page_gfp, which allows setting the GFP
flags.
In general I think the approach you suggested (and the one I was advised to
take on the #mm channel) is the right one: to avoid the OOM killer we should
not try to make any estimates of free RAM or swap, since it is not reliable
to assume that nothing will have changed by the time we allocate; what we
should do is allocate the pages without triggering the OOM killer. I think I
should try again to set all the system page allocation code paths we use to
__GFP_RETRY_MAYFAIL and debug why it didn't work last time. One reason could
be that I missed the swap page allocation; another is a possible memory leak
when an allocation fails and we roll back all previously allocated pages for
the BO, which leads to OOM anyway. "
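As a reference for what I mean, here is a minimal, untested sketch of how the
swap-out path could ask for the destination shmem pages with its own GFP mask
instead of the mapping default. It is modelled on the ttm_tt_swapout() loop
in 4.15; the helper name is mine, not existing code:

#include <linux/gfp.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

/*
 * Hypothetical helper for the ttm_tt_swapout() loop: allocate the
 * destination shmem page with __GFP_RETRY_MAYFAIL so the allocator
 * retries hard but returns an error instead of invoking the OOM killer
 * when memory is really exhausted.
 */
static struct page *ttm_swapout_get_shmem_page(struct address_space *swap_space,
					       pgoff_t index)
{
	gfp_t gfp_mask = mapping_gfp_mask(swap_space) | __GFP_RETRY_MAYFAIL;

	/* shmem_read_mapping_page() uses the mapping's default policy;
	 * the _gfp variant lets the caller pass its own flags. */
	return shmem_read_mapping_page_gfp(swap_space, index, gfp_mask);
}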
Thanks,
Andrey
>
> On 29.12.2017 20:36, "Kuehling, Felix" <Felix.Kuehling at amd.com> wrote:
>
>     Is it possible that the test is broken? A test that allocates memory to
>     exhaustion may well trigger the OOM killer. A test can do that by using
>     malloc. Why not by using the graphics driver? The OOM killer does what
>     it's supposed to do, and kills the broken application.
>
>     As I understand it, this change adds artificial limitations to work
>     around a bug in a user mode test. However, it ends up limiting the
>     memory available to well-behaved applications more than necessary.
>
>     For compute applications that work with huge data sets, we want to be
>     able to allocate lots of system memory. Tying available system memory
>     to the VRAM size makes no sense for compute applications that want to
>     work with such huge data sets.
>
> Regards,
> Felix
>
>
> On 2017-12-15 02:09 PM, Andrey Grodzovsky wrote:
> > This reverts commit ba851eed895c76be0eb4260bdbeb7e26f9ccfaa2.
>     > With that change, the piglit max size tests (running with -t max.*size)
>     > cause an OOM and a hard hang on my CZ with 1GB RAM.
> >
> > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 8 +++++---
> > 1 file changed, 5 insertions(+), 3 deletions(-)
> >
>     > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>     > index c307a7d..814a9c1 100644
>     > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>     > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>     > @@ -1329,9 +1329,11 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>     >   		struct sysinfo si;
>     >
>     >   		si_meminfo(&si);
>     > -		gtt_size = max(AMDGPU_DEFAULT_GTT_SIZE_MB << 20,
>     > -			       (uint64_t)si.totalram * si.mem_unit * 3/4);
>     > -	} else
>     > +		gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),
>     > +				   adev->mc.mc_vram_size),
>     > +			       ((uint64_t)si.totalram * si.mem_unit * 3/4));
>     > +	}
>     > +	else
>     >   		gtt_size = (uint64_t)amdgpu_gtt_size << 20;
>     >   	r = ttm_bo_init_mm(&adev->mman.bdev, TTM_PL_TT, gtt_size >> PAGE_SHIFT);
>     >   	if (r) {
>
>
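For reference, the effect of the restored clamp on a small system can be seen
with a standalone arithmetic sketch (plain userspace C, not driver code). It
assumes AMDGPU_DEFAULT_GTT_SIZE_MB is 3072 (3 GB) as in the driver header,
and the 1 GB system RAM figure mirrors the CZ board from the commit message;
the VRAM carve-out size is only illustrative:

/*
 * Standalone comparison of the two GTT size formulas, assuming
 * AMDGPU_DEFAULT_GTT_SIZE_MB is 3072 and a 1 GB system like the CZ
 * board in the report.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define AMDGPU_DEFAULT_GTT_SIZE_MB 3072ULL

static uint64_t max64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t min64(uint64_t a, uint64_t b) { return a < b ? a : b; }

int main(void)
{
	uint64_t sysram = 1ULL << 30;	/* 1 GB of system RAM */
	uint64_t vram   = 1ULL << 30;	/* carve-out size, illustrative only */

	/* Formula being reverted: GTT tied to system memory size only. */
	uint64_t gtt_reverted = max64(AMDGPU_DEFAULT_GTT_SIZE_MB << 20,
				      sysram * 3 / 4);

	/* Restored formula: additionally clamped to 3/4 of system memory. */
	uint64_t gtt_restored = min64(max64(AMDGPU_DEFAULT_GTT_SIZE_MB << 20, vram),
				      sysram * 3 / 4);

	printf("reverted formula: %" PRIu64 " MB of GTT\n", gtt_reverted >> 20); /* 3072 MB, larger than RAM */
	printf("restored formula: %" PRIu64 " MB of GTT\n", gtt_restored >> 20); /* 768 MB */
	return 0;
}

So on that machine the revert caps GTT at 768 MB instead of the 3 GB default,
which is what the commit message relies on to keep the piglit max-size tests
from driving the system into OOM.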
-------------- next part --------------
[ 699.106953 < 93.127532>] [drm] free swap 2116284416
[ 727.219809 < 28.112856>] [drm] free swap 2116284416
[ 747.115235 < 19.895426>] [drm] free swap 1579937792
[ 748.741061 < 1.625826>] kworker/u8:3 invoked oom-killer: gfp_mask=0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[ 748.741065 < 0.000004>] kworker/u8:3 cpuset=/ mems_allowed=0
[ 748.741078 < 0.000013>] CPU: 3 PID: 157 Comm: kworker/u8:3 Tainted: G W OE 4.15.0-rc2+ #7
[ 748.741080 < 0.000002>] Hardware name: AMD Gardenia/Gardenia, BIOS RGA1101C 07/20/2015
[ 748.741106 < 0.000026>] Workqueue: ttm_swap ttm_shrink_work [ttm]
[ 748.741108 < 0.000002>] Call Trace:
[ 748.741120 < 0.000012>] dump_stack+0x5c/0x78
[ 748.741127 < 0.000007>] dump_header+0xc0/0x448
[ 748.741133 < 0.000006>] ? task_will_free_mem+0x98/0x200
[ 748.741138 < 0.000005>] ? _raw_spin_trylock+0xe/0x30
[ 748.741142 < 0.000004>] ? ___ratelimit+0x105/0x190
[ 748.741147 < 0.000005>] oom_kill_process+0x369/0x6b0
[ 748.741151 < 0.000004>] ? oom_badness+0x1a0/0x220
[ 748.741156 < 0.000005>] ? oom_evaluate_task+0x183/0x1f0
[ 748.741161 < 0.000005>] out_of_memory+0x1de/0x790
[ 748.741166 < 0.000005>] ? oom_killer_disable+0x170/0x170
[ 748.741171 < 0.000005>] ? cpumask_next+0x16/0x20
[ 748.741176 < 0.000005>] ? zone_reclaimable_pages+0x24f/0x290
[ 748.741180 < 0.000004>] ? __zone_watermark_ok+0xae/0x200
[ 748.741186 < 0.000006>] __alloc_pages_slowpath+0x11db/0x12a0
[ 748.741194 < 0.000008>] ? warn_alloc+0x210/0x210
[ 748.741199 < 0.000005>] ? radix_tree_node_alloc.constprop.18+0xc6/0x150
[ 748.741205 < 0.000006>] ? kasan_kmalloc+0xa6/0xd0
[ 748.741211 < 0.000006>] __alloc_pages_nodemask+0x37f/0x3c0
[ 748.741217 < 0.000006>] ? __alloc_pages_slowpath+0x12a0/0x12a0
[ 748.741222 < 0.000005>] ? __radix_tree_insert+0x2f7/0x340
[ 748.741228 < 0.000006>] alloc_pages_vma+0x83/0x280
[ 748.741234 < 0.000006>] shmem_alloc_page+0xbc/0x110
[ 748.741239 < 0.000005>] ? shmem_swapin+0x110/0x110
[ 748.741243 < 0.000004>] ? release_pages+0x431/0x570
[ 748.741247 < 0.000004>] ? __radix_tree_lookup+0x122/0x150
[ 748.741253 < 0.000006>] ? radix_tree_lookup_slot+0x53/0x90
[ 748.741259 < 0.000006>] shmem_alloc_and_acct_page+0xc0/0x300
[ 748.741265 < 0.000006>] shmem_getpage_gfp.isra.35+0x225/0x1110
[ 748.741272 < 0.000007>] ? __mutex_init+0x4c/0x60
[ 748.741277 < 0.000005>] ? shmem_add_to_page_cache+0x470/0x470
[ 748.741282 < 0.000005>] ? __d_instantiate+0x122/0x160
[ 748.741287 < 0.000005>] ? alloc_file+0x155/0x1c0
[ 748.741292 < 0.000005>] ? __shmem_file_setup+0x174/0x300
[ 748.741298 < 0.000006>] shmem_read_mapping_page_gfp+0x94/0xe0
[ 748.741304 < 0.000006>] ? shmem_getpage_gfp.isra.35+0x1110/0x1110
[ 748.741308 < 0.000004>] ? page_mapping+0x9b/0x110
[ 748.741312 < 0.000004>] ? mark_page_accessed+0xa8/0x1e0
[ 748.741331 < 0.000019>] ttm_tt_swapout+0x105/0x380 [ttm]
[ 748.741354 < 0.000023>] ttm_bo_swapout+0x34b/0x380 [ttm]
[ 748.741376 < 0.000022>] ? ttm_bo_unmap_virtual+0x50/0x50 [ttm]
[ 748.741382 < 0.000006>] ? sched_clock+0x5/0x10
[ 748.741387 < 0.000005>] ? vtime_account_idle+0x67/0x70
[ 748.741392 < 0.000005>] ? __schedule+0x68b/0xcf0
[ 748.741412 < 0.000020>] ttm_shrink+0xfd/0x130 [ttm]
[ 748.741418 < 0.000006>] process_one_work+0x2a2/0x6d0
[ 748.741423 < 0.000005>] worker_thread+0x87/0x770
[ 748.741430 < 0.000007>] kthread+0x174/0x1c0
[ 748.741434 < 0.000004>] ? process_one_work+0x6d0/0x6d0
[ 748.741439 < 0.000005>] ? kthread_associate_blkcg+0x130/0x130
[ 748.741444 < 0.000005>] ret_from_fork+0x1f/0x30
[ 748.741447 < 0.000003>] Mem-Info:
[ 748.741458 < 0.000011>] active_anon:0 inactive_anon:0 isolated_anon:0
active_file:71 inactive_file:40 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
slab_reclaimable:5419 slab_unreclaimable:15358
mapped:151 shmem:2 pagetables:1502 bounce:0
free:12384 free_pcp:2 free_cma:0
[ 748.741466 < 0.000008>] Node 0 active_anon:0kB inactive_anon:0kB active_file:284kB inactive_file:160kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:604kB dirty:0kB writeback:0kB shmem:8kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[ 748.741468 < 0.000002>] Node 0 DMA free:5096kB min:612kB low:764kB high:916kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 748.741478 < 0.000010>] lowmem_reserve[]: 0 1126 1126 1126
[ 748.741483 < 0.000005>] Node 0 DMA32 free:44440kB min:44440kB low:55548kB high:66656kB active_anon:0kB inactive_anon:76kB active_file:760kB inactive_file:224kB unevictable:0kB writepending:0kB present:2080764kB managed:1153804kB mlocked:0kB kernel_stack:5120kB pagetables:6008kB bounce:0kB free_pcp:8kB local_pcp:0kB free_cma:0kB
[ 748.741493 < 0.000010>] lowmem_reserve[]: 0 0 0 0
[ 748.741498 < 0.000005>] Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 1*32kB (U) 1*64kB (U) 1*128kB (U) 1*256kB (U) 1*512kB (U) 2*1024kB (U) 1*2048kB (M) 0*4096kB = 5096kB
[ 748.741521 < 0.000023>] Node 0 DMA32: 297*4kB (UME) 390*8kB (ME) 473*16kB (MEH) 346*32kB (UME) 172*64kB (UM) 53*128kB (UM) 12*256kB (UM) 3*512kB (UM) 0*1024kB 0*2048kB 0*4096kB = 45348kB
[ 748.741629 < 0.000108>] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 748.741633 < 0.000004>] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 748.741634 < 0.000001>] 268 total pagecache pages
[ 748.741637 < 0.000003>] 15 pages in swap cache
[ 748.741641 < 0.000004>] Swap cache stats: add 983137, delete 983101, find 1942/4576
[ 748.741642 < 0.000001>] Free swap = 1357820kB
[ 748.741644 < 0.000002>] Total swap = 2092028kB
[ 748.741645 < 0.000001>] 524189 pages RAM
[ 748.741647 < 0.000002>] 0 pages HighMem/MovableOnly
[ 748.741648 < 0.000001>] 231761 pages reserved
[ 748.741651 < 0.000003>] 0 pages cma reserved
[ 748.741652 < 0.000001>] 0 pages hwpoisoned
[ 748.741653 < 0.000001>] Unreclaimable slab info:
[ 748.741655 < 0.000002>] Name Used