[Intel-gfx] [PATCH v3] mm, drm/i915: mark pinned shmemfs pages as unevictable

Vovo Yang vovoy at chromium.org
Thu Nov 1 11:28:46 UTC 2018


On Thu, Nov 1, 2018 at 12:42 AM Michal Hocko <mhocko at kernel.org> wrote:
> On Wed 31-10-18 07:40:14, Dave Hansen wrote:
> > Didn't we create the unevictable lists in the first place because
> > scanning alone was observed to be so expensive in some scenarios?
>
> Yes, that is the case. I might have just misunderstood the code: I thought
> those pages were already on the LRU when the unevictable flag was set, and
> that we would only move these pages to the unevictable list lazily during
> reclaim. If the flag is set at the time the page is added to the LRU, then
> it should get to the proper LRU list right away. But then I do not
> understand the test results from the previous run at all.

"gem_syslatency -t 120 -b -m" allocates a lot of anon pages, it consists of
these looping threads:
  * ncpu threads to alloc i915 shmem buffers, these buffers are freed by i915
shrinker.
  * ncpu threads to mmap, write, munmap an 2 MiB mapping.
  * 1 thread to cat all files to /dev/null
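The anon-pressure part of the workload can be sketched roughly as below. This is a minimal, hypothetical Python illustration of one loop iteration, not the actual gem_syslatency source (which is C in igt-gpu-tools); the function name and the one-byte-per-page touch are my assumptions.

```python
import mmap

MAP_SIZE = 2 * 1024 * 1024  # 2 MiB, matching the mapping size above

def anon_pressure_pass() -> int:
    """mmap, dirty, and munmap a 2 MiB anonymous mapping.

    Each pass puts freshly dirtied anon pages on the LRU, which is
    what drives the anon pgscan counters shown below.
    Returns the number of pages touched.
    """
    m = mmap.mmap(-1, MAP_SIZE)  # fd=-1 -> anonymous read/write mapping
    touched = 0
    for off in range(0, MAP_SIZE, mmap.PAGESIZE):
        m[off] = 0xA5  # write one byte per page to fault it in dirty
        touched += 1
    m.close()  # munmap; the dirtied anon pages become reclaimable
    return touched

anon_pressure_pass()
```

The real test runs ncpu such threads in a tight loop for the whole 120-second run, so the anon LRU stays large, as the meminfo output below shows.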

Without the unevictable patch, after rebooting and running
"gem_syslatency -t 120 -b -m", I got these custom vmstat counters:
  pgsteal_kswapd_anon 29261
  pgsteal_kswapd_file 1153696
  pgsteal_direct_anon 255
  pgsteal_direct_file 13050
  pgscan_kswapd_anon 14524536
  pgscan_kswapd_file 1488683
  pgscan_direct_anon 1702448
  pgscan_direct_file 25849

And meminfo shows a large anon LRU size during the test:
  # cat /proc/meminfo | grep -i "active("
  Active(anon):     377760 kB
  Inactive(anon):  3195392 kB
  Active(file):      19216 kB
  Inactive(file):    16044 kB

With this patch, the custom vmstat counters after the test:
  pgsteal_kswapd_anon 74962
  pgsteal_kswapd_file 903588
  pgsteal_direct_anon 4434
  pgsteal_direct_file 14969
  pgscan_kswapd_anon 2814791
  pgscan_kswapd_file 1113676
  pgscan_direct_anon 526766
  pgscan_direct_file 32432

The anon pgscan counts (kswapd + direct) are reduced to roughly a fifth of
their previous total.
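The reduction can be quantified directly from the counters quoted above; a small sketch summing the anon pgscan values from the two runs:

```python
# Anon pgscan counters copied from the two vmstat dumps above.
before = {"pgscan_kswapd_anon": 14524536, "pgscan_direct_anon": 1702448}
after = {"pgscan_kswapd_anon": 2814791, "pgscan_direct_anon": 526766}

total_before = sum(before.values())  # 16226984
total_after = sum(after.values())    # 3341557
reduction = 1 - total_after / total_before
print(f"anon pgscan: {total_before} -> {total_after} "
      f"({reduction:.0%} fewer pages scanned)")
```

So marking the pinned shmemfs pages unevictable saves roughly four out of every five anon LRU scans in this workload.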
