[PATCH v9 03/10] mm/rmap: Split try_to_munlock from try_to_unmap
Liam Howlett
liam.howlett at oracle.com
Fri Jun 4 20:49:39 UTC 2021
* Shakeel Butt <shakeelb at google.com> [210525 19:45]:
> On Tue, May 25, 2021 at 11:40 AM Liam Howlett <liam.howlett at oracle.com> wrote:
> >
> [...]
> > >
> > > +/*
> > > + * Walks the vmas mapping a page and mlocks the page if any locked vmas are
> > > + * found. Once one is found the page is locked and the scan can be terminated.
> > > + */
> >
> > Can you please add to the comments that this requires the
> > mmap_sem() lock?
> >
>
> Why does this require mmap_sem() lock? Also mmap_sem() lock of which mm_struct?
Doesn't mlock_vma_page() require the mmap_sem() for reading?  The
mm_struct is vma->vm_mm.

From what I can see, at least the following paths have mmap_lock held
for writing (a simplified sketch follows the list):

munlock_vma_pages_range() from __do_munmap()
munlock_vma_pages_range() from remap_file_pages()
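To make that concrete, here is a minimal sketch of the munmap() side,
trimmed down to the locking.  It is not the exact kernel source and the
function name is made up for illustration; mmap_write_lock_killable(),
mmap_write_unlock() and __do_munmap() are the real helpers involved:

/*
 * Simplified sketch (not the exact kernel source): the syscall side
 * takes the mmap_lock for writing before __do_munmap() walks down to
 * munlock_vma_pages_range() for any VM_LOCKED vmas in the range.
 */
static int vm_munmap_sketch(unsigned long start, size_t len)
{
	struct mm_struct *mm = current->mm;
	LIST_HEAD(uf);
	int ret;

	if (mmap_write_lock_killable(mm))
		return -EINTR;

	/* may reach munlock_vma_pages_range() for VM_LOCKED vmas */
	ret = __do_munmap(mm, start, len, &uf, false);

	mmap_write_unlock(mm);
	return ret;
}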
>
> > > +static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
> > > + unsigned long address, void *unused)
> > > +{
> > > + struct page_vma_mapped_walk pvmw = {
> > > + .page = page,
> > > + .vma = vma,
> > > + .address = address,
> > > + };
> > > +
> > > + /* An un-locked vma doesn't have any pages to lock, continue the scan */
> > > + if (!(vma->vm_flags & VM_LOCKED))
> > > + return true;
> > > +
> > > + while (page_vma_mapped_walk(&pvmw)) {
> > > + /* PTE-mapped THP are never mlocked */
> > > + if (!PageTransCompound(page))
> > > + mlock_vma_page(page);
> > > + page_vma_mapped_walk_done(&pvmw);
> > > +
> > > + /*
> > > + * no need to continue scanning other vma's if the page has
> > > + * been locked.
> > > + */
> > > + return false;
> > > + }
> > > +
> > > + return true;
> > > +}
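For context, a helper like page_mlock_one() is presumably driven
through the rmap walk machinery.  Below is a minimal sketch of how it
would typically be wired up; page_mlock_sketch() is a made-up name and
the exact caller in this patch may differ, but rmap_walk(),
rmap_walk_control, page_not_mapped() and page_lock_anon_vma_read() are
the existing hooks in mm/rmap.c:

/*
 * Sketch only: visit every vma mapping @page until page_mlock_one()
 * returns false (i.e. the page has been mlocked) or the walk runs out
 * of vmas.
 */
static void page_mlock_sketch(struct page *page)
{
	struct rmap_walk_control rwc = {
		.rmap_one = page_mlock_one,	/* returns false to stop the scan */
		.done = page_not_mapped,
		.anon_lock = page_lock_anon_vma_read,
	};

	VM_BUG_ON_PAGE(!PageLocked(page), page);

	rmap_walk(page, &rwc);
}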
The munlock_vma_pages_range() comments still reference try_to_{munlock|unmap}.