[RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

Kasireddy, Vivek vivek.kasireddy at intel.com
Fri Aug 4 21:53:24 UTC 2023


Hi David,

> >
> >>>>>>>> Right, the "the zero pages are changed into writable pages" in your
> >>>>>>>> above comment just might not apply, because there won't be any
> >>>>>>>> page replacement (hopefully :) ).
> >>>>>>
> >>>>>>> If the page replacement does not happen when there are new writes
> >>>>>>> to the area where the hole previously existed, then would we still
> >>>>>>> get an invalidate when this happens? Is there any other way to get
> >>>>>>> notified when the zeroed page is written to if the invalidate does
> >>>>>>> not get triggered?
> >>>>>>
> >>>>>> What David is saying is that memfd does not use the zero page
> >>>>>> optimization for hole punches. Any access to the memory, including
> >>>>>> read-only access through hmm_range_fault() will allocate unique
> >>>>>> pages. Since there is no zero page and no zero-page replacement
> >>>>>> there is no issue with invalidations.
> >>>>
> >>>>> It looks like even with hmm_range_fault(), the invalidate does not get
> >>>>> triggered when the hole is refilled with new pages because of writes.
> >>>>> This is probably because hmm_range_fault() does not fault in any
> >>>>> pages that get invalidated later when writes occur.
> >>>> hmm_range_fault() returns the current content of the VMAs, or it
> >>>> faults. If it returns pages then it came from one of these two places.
> >>>> If your VMA is incoherent with what you are doing then you have
> >>>> bigger problems, or maybe you found a bug.
> >>
> >> Note it will only fault in pages if HMM_PFN_REQ_FAULT is specified. You
> >> are setting that however you aren't setting HMM_PFN_REQ_WRITE which is
> >> what would trigger a fault to bring in the new pages. Does setting that
> >> fix the issue you are seeing?
> > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the issue.
> > Although I do not have THP enabled (or built-in), shmem does not evict
> > the pages after hole punch as noted in the comment in shmem_fallocate():
> >                  if ((u64)unmap_end > (u64)unmap_start)
> >                          unmap_mapping_range(mapping, unmap_start,
> >                                              1 + unmap_end - unmap_start, 0);
> >                  shmem_truncate_range(inode, offset, offset + len - 1);
> >                  /* No need to unmap again: hole-punching leaves COWed pages */
> >
> > As a result, the pfn is still valid and the pte is pte_present() and pte_write().
> > This is the reason why adding in HMM_PFN_REQ_WRITE does not help;
> 
> Just to understand your setup: you are definitely using a MAP_SHARED
> shmem mapping, and not accidentally a MAP_PRIVATE mapping?
In terms of setup, I am just running the udmabuf selftest (shmem-based)
introduced in patch #3 of this series:
https://lore.kernel.org/all/20230718082858.1570809-4-vivek.kasireddy@intel.com/

And it indeed uses a MAP_SHARED mapping.
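
To make the scenario concrete, what the selftest exercises is roughly the
following (a simplified sketch, not the exact selftest code; error handling
and the udmabuf creation from the memfd are elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SIZE    (2UL * 1024 * 1024)

int main(void)
{
        int fd = memfd_create("udmabuf-test", 0);
        char *addr;

        ftruncate(fd, SIZE);
        addr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* Populate the pages; the udmabuf is created from this memfd. */
        memset(addr, 0xaa, SIZE);

        /* Punch a hole over the whole range. */
        fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, SIZE);

        /*
         * New writes refill the hole with new pages; this is the point
         * where I would like to get notified.
         */
        memset(addr, 0xbb, SIZE);

        munmap(addr, SIZE);
        close(fd);
        return 0;
}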
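
And for reference, the kind of hmm_range_fault() loop discussed above, with
HMM_PFN_REQ_WRITE added, would look roughly like this (again only a sketch
and not the exact code from the patch; the helper name is made up, the
notifier is assumed to have been registered with
mmu_interval_notifier_insert(), and the driver-side locking around
mmu_interval_read_retry() is omitted):

static int fault_in_writable_pages(struct mmu_interval_notifier *notifier,
                                   struct mm_struct *mm,
                                   unsigned long start, unsigned long size,
                                   unsigned long *pfns)
{
        struct hmm_range range = {
                .notifier      = notifier,
                .start         = start,
                .end           = start + size,
                .hmm_pfns      = pfns,
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        int ret;

        while (true) {
                range.notifier_seq = mmu_interval_read_begin(notifier);

                mmap_read_lock(mm);
                ret = hmm_range_fault(&range);
                mmap_read_unlock(mm);

                if (ret == -EBUSY)
                        continue;
                if (ret)
                        return ret;

                /* The driver lock would normally be taken here. */
                if (!mmu_interval_read_retry(notifier, range.notifier_seq))
                        break;
        }
        /* pfns[] now describes the pages backing [start, start + size). */
        return 0;
}

But as noted above, since the PTEs are still present and writable after the
hole punch, adding HMM_PFN_REQ_WRITE did not make a difference in my testing.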

Thanks,
Vivek

> 
> --
> Cheers,
> 
> David / dhildenb


