[PATCH v8 5/8] mm: Device exclusive memory access

Alistair Popple apopple at nvidia.com
Wed May 19 12:46:14 UTC 2021


On Wednesday, 19 May 2021 10:24:27 PM AEST Peter Xu wrote:
> 
> On Wed, May 19, 2021 at 08:49:01PM +1000, Alistair Popple wrote:
> > On Wednesday, 19 May 2021 7:16:38 AM AEST Peter Xu wrote:
> > > 
> > > On Wed, Apr 07, 2021 at 06:42:35PM +1000, Alistair Popple wrote:
> > > 
> > > [...]
> > > 
> > > > +static bool try_to_protect(struct page *page, struct mm_struct *mm,
> > > > +                        unsigned long address, void *arg)
> > > > +{
> > > > +     struct ttp_args ttp = {
> > > > +             .mm = mm,
> > > > +             .address = address,
> > > > +             .arg = arg,
> > > > +             .valid = false,
> > > > +     };
> > > > +     struct rmap_walk_control rwc = {
> > > > +             .rmap_one = try_to_protect_one,
> > > > +             .done = page_not_mapped,
> > > > +             .anon_lock = page_lock_anon_vma_read,
> > > > +             .arg = &ttp,
> > > > +     };
> > > > +
> > > > +     /*
> > > > +      * Restrict to anonymous pages for now to avoid potential writeback
> > > > +      * issues.
> > > > +      */
> > > > +     if (!PageAnon(page))
> > > > +             return false;
> > > > +
> > > > +     /*
> > > > +      * During exec, a temporary VMA is setup and later moved.
> > > > +      * The VMA is moved under the anon_vma lock but not the
> > > > +      * page tables leading to a race where migration cannot
> > > > +      * find the migration ptes. Rather than increasing the
> > > > +      * locking requirements of exec(), migration skips
> > > > +      * temporary VMAs until after exec() completes.
> > > > +      */
> > > > +     if (!PageKsm(page) && PageAnon(page))
> > > > +             rwc.invalid_vma = invalid_migration_vma;
> > > > +
> > > > +     rmap_walk(page, &rwc);
> > > > +
> > > > +     return ttp.valid && !page_mapcount(page);
> > > > +}
> > > 
> > > I raised a question in the other thread regarding fork():
> > > 
> > > https://lore.kernel.org/lkml/YKQjmtMo+YQGx%2FwZ@t490s/
> > > 
> > > However, I suddenly noticed that we may have similar issues even if
> > > we fork() before creating the ptes.
> > > 
> > > In that case, we may see multiple read-only ptes pointing to the same
> > > page. We will convert all of them into device exclusive read ptes in
> > > the rmap_walk() above; however, how do we guarantee that after all the
> > > COW is done in the parent and the child processes, the device-owned
> > > page will be returned to the parent?
> > 
> > I assume you are talking about a fork() followed by a call to
> > make_device_exclusive()? I think this should be ok because
> > make_device_exclusive() always calls GUP with FOLL_WRITE, both because a
> > device performing atomic operations needs to write to the page and
> > because faulting for write breaks COW. I suppose a comment here
> > highlighting the need to break COW to avoid this scenario would be
> > useful, though.
> 
> Indeed, sorry for the false alarm! Yes, it would be great to mention
> that too.

No problem! Thanks for the comments.
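
For reference, the comment I have in mind would sit next to the GUP call
in make_device_exclusive(). A rough sketch (the call site and the flags
other than FOLL_WRITE here are from memory, so the final version may
differ slightly):

	/*
	 * Fault in the page writably even for read-only mappings. A
	 * device granted exclusive access will write to the page, and
	 * faulting for write also breaks COW, which guarantees the page
	 * is not shared with another process (eg. after a fork()) by
	 * the time the device exclusive entries are installed.
	 */
	npages = get_user_pages_remote(mm, start, npages,
				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
				       pages, NULL, NULL);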

> --
> Peter Xu