[Intel-gfx] [PATCH v3] mm, drm/i915: mark pinned shmemfs pages as unevictable

Vovo Yang vovoy at chromium.org
Fri Nov 2 12:35:11 UTC 2018


On Thu, Nov 1, 2018 at 9:10 PM Michal Hocko <mhocko at kernel.org> wrote:
> OK, so that explain my question about the test case. Even though you
> generate a lot of page cache, the amount is still too small to trigger
> pagecache mostly reclaim and anon LRUs are scanned as well.
>
> Now to the difference with the previous version which simply set the
> UNEVICTABLE flag on mapping. Am I right assuming that pages are already
> at LRU at the time? Is there any reason the mapping cannot have the flag
> set before they are added to the LRU?

I checked again. When I run gem_syslatency, it sets the unevictable flag
first and then adds the pages to the LRU, so my explanation of the
previous test result was wrong. It should not be necessary to explicitly
move these pages to the unevictable list for this test case. The
performance improvement of this patch on kbl might instead come from not
calling shmem_unlock_mapping.

The perf result of a shmem lock test shows that find_get_entries is the
most expensive part of shmem_unlock_mapping:
85.32%--ksys_shmctl
        shmctl_do_lock
         --85.29%--shmem_unlock_mapping
                   |--45.98%--find_get_entries
                   |           --10.16%--radix_tree_next_chunk
                   |--16.78%--check_move_unevictable_pages
                   |--16.07%--__pagevec_release
                   |           --15.67%--release_pages
                   |                      --4.82%--free_unref_page_list
                   |--4.38%--pagevec_remove_exceptionals
                    --0.59%--_cond_resched
