[PATCH] mm/swap: add function get_total_swap_pages to expose total_swap_pages

He, Roger Hongbo.He at amd.com
Wed Jan 31 05:52:26 UTC 2018


	I do think you should completely ignore the size of the swap space. IMHO you should forbid further allocations when your current buffer storage cannot be reclaimed. So you need some form of feedback mechanism that would tell you: "Your buffers have grown too much". If you cannot do that then simply assume that you cannot swap at all rather than rely on having some portion of it for yourself.

If we always assume the swap cache size is zero, that is overly restrictive for the GTT size the user can actually get, and I don't think it makes sense either.

	There are many other users of memory outside of your subsystem. Any scaling based on the 50% of resource belonging to me is simply broken.

And that is only a threshold to avoid overuse, rather than something actually reserved for TTM from the start. In addition, in most cases TTM uses only a little swap space, or none at all; only special test cases use more, and those are probably intentional.
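
For reference, a minimal sketch of the check this implies, assuming the patch exposes a helper along the lines of get_total_swap_pages() returning total_swap_pages; ttm_swapped_pages below is a hypothetical counter of pages TTM has already swapped out, not an existing symbol:

/* Assumed signature of the helper this patch adds. */
extern long get_total_swap_pages(void);

/*
 * Sketch only: refuse further buffer-object swap-outs/allocations once
 * TTM has consumed half of the configured swap space (values in pages).
 * ttm_swapped_pages is a hypothetical counter, not an existing symbol.
 */
static bool ttm_swap_budget_exceeded(unsigned long ttm_swapped_pages)
{
	unsigned long limit = get_total_swap_pages() / 2;

	return ttm_swapped_pages >= limit;
}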


Thanks
Roger(Hongbo.He)

-----Original Message-----
From: Michal Hocko [mailto:mhocko at kernel.org] 
Sent: Tuesday, January 30, 2018 8:29 PM
To: Koenig, Christian <Christian.Koenig at amd.com>
Cc: He, Roger <Hongbo.He at amd.com>; linux-mm at kvack.org; linux-kernel at vger.kernel.org; dri-devel at lists.freedesktop.org
Subject: Re: [PATCH] mm/swap: add function get_total_swap_pages to expose total_swap_pages

On Tue 30-01-18 11:32:49, Christian König wrote:
> Am 30.01.2018 um 11:18 schrieb Michal Hocko:
> > On Tue 30-01-18 10:00:07, Christian König wrote:
> > > Am 30.01.2018 um 08:55 schrieb Michal Hocko:
> > > > On Tue 30-01-18 02:56:51, He, Roger wrote:
> > > > > Hi Michal:
> > > > > 
> > > > > We need an API to tell the TTM module how much swap space the
> > > > > system has in total.  TTM can then use it to restrict how much
> > > > > swap it consumes and so avoid triggering the OOM killer.  For
> > > > > now we set the threshold of swap size TTM may use to 1/2 of the
> > > > > total size and leave the rest for other users.
> > > > Why do you need so much memory? Are you going to use TBs of memory
> > > > on large systems? What about memory hotplug when memory is added/released?
> > > For graphics and compute applications on GPUs it isn't unusual to 
> > > use large amounts of system memory.
> > > 
> > > Our standard policy in TTM is to allow 50% of system memory to be 
> > > pinned for use with GPUs (the hardware can't do page faults).
> > > 
> > > When that limit is exceeded (or the shrinker callbacks tell us to 
> > > make room) we wait for any GPU work to finish and copy buffer 
> > > content into a shmem file.
> > > 
> > > This copy into a shmem file can easily trigger the OOM killer if 
> > > there isn't any swap space left and that is something we want to avoid.
> > > 
> > > So what we want to do is to apply this 50% rule to swap space as 
> > > well and deny allocation of buffer objects when it is exceeded.
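
For illustration, the shmem copy mentioned above works roughly like this; a sketch modelled on TTM's swap-out path rather than a verbatim copy of it, with the function name chosen here for clarity:

#include <linux/shmem_fs.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/file.h>
#include <linux/err.h>

/*
 * Copy each buffer page into a freshly created shmem file so the core
 * mm can later write those pages out to swap under memory pressure.
 */
static struct file *swap_buffer_to_shmem(struct page **pages, unsigned long npages)
{
	struct file *filp;
	struct address_space *mapping;
	unsigned long i;

	filp = shmem_file_setup("ttm swap", (loff_t)npages << PAGE_SHIFT, 0);
	if (IS_ERR(filp))
		return filp;
	mapping = filp->f_mapping;

	for (i = 0; i < npages; i++) {
		struct page *to = shmem_read_mapping_page(mapping, i);

		if (IS_ERR(to)) {
			fput(filp);
			return ERR_CAST(to);
		}
		copy_highpage(to, pages[i]);
		set_page_dirty(to);
		put_page(to);
	}

	/* Caller keeps the file; its pages are now swappable by the mm. */
	return filp;
}

Note that this is exactly the path whose OOM risk is being discussed: the shmem pages themselves still need free memory or swap space to actually leave RAM.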
> > How does that help when the rest of the system might eat swap?
> 
> Well it doesn't, but that is not the problem here.
> 
> When an application keeps calling malloc() it sooner or later is 
> confronted with an OOM killer.
> 
> But when it keeps allocating, for example, OpenGL textures, the 
> expectation is that this sooner or later starts to fail because we run 
> out of memory, rather than triggering the OOM killer.

There is nothing like running out of memory and not triggering the OOM killer. You can make a _particular_ allocation bail out without the oom killer. Just use __GFP_NORETRY. But that doesn't make much difference when you have already depleted your memory and live with the bare remains. Any desperate soul trying to get its memory will simply trigger the OOM.
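
A minimal illustration of that point, using a plain order-0 page allocation:

#include <linux/gfp.h>

/*
 * With __GFP_NORETRY this particular allocation simply returns NULL
 * under pressure instead of retrying until the OOM killer fires;
 * other allocations in the system can of course still trigger OOM.
 */
static struct page *try_page_no_oom(void)
{
	return alloc_pages(GFP_KERNEL | __GFP_NORETRY, 0);
}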

> So what we do is to allow the application to use all of video memory + 
> a certain amount of system memory + swap space as last resort fallback (e.g.
> when you Alt+Tab from your full screen game back to your browser).
> 
> The problem we try to solve is that we haven't limited the use of swap 
> space somehow.

I do think you should completely ignore the size of the swap space. IMHO you should forbid further allocations when your current buffer storage cannot be reclaimed. So you need some form of feedback mechanism that would tell you: "Your buffers have grown too much". If you cannot do that then simply assume that you cannot swap at all rather than rely on having some portion of it for yourself. There are many other users of memory outside of your subsystem. Any scaling based on the 50% of resource belonging to me is simply broken.
--
Michal Hocko
SUSE Labs
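
One existing form of the feedback mechanism suggested above is the shrinker interface Christian already referred to; a minimal registration sketch, where ttm_swappable_pages() and ttm_swap_out_pages() are hypothetical helpers standing in for TTM's real bookkeeping:

#include <linux/shrinker.h>

/* Hypothetical helpers, not existing symbols. */
extern unsigned long ttm_swappable_pages(void);
extern unsigned long ttm_swap_out_pages(unsigned long nr);

static unsigned long ttm_shrink_count(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	/* Tell the mm how many pages we could give back if asked. */
	return ttm_swappable_pages();
}

static unsigned long ttm_shrink_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	/* Push buffers out and report how many pages were freed. */
	return ttm_swap_out_pages(sc->nr_to_scan);
}

static struct shrinker ttm_shrinker = {
	.count_objects	= ttm_shrink_count,
	.scan_objects	= ttm_shrink_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* At init time: register_shrinker(&ttm_shrinker); */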

