[RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

Jason Gunthorpe jgg at nvidia.com
Thu Aug 3 12:14:59 UTC 2023


On Thu, Aug 03, 2023 at 07:35:51AM +0000, Kasireddy, Vivek wrote:
> Hi Jason,
> 
> > > > Right, the "the zero pages are changed into writable pages" in your
> > > > above comment just might not apply, because there won't be any page
> > > > replacement (hopefully :) ).
> > 
> > > If the page replacement does not happen when there are new writes to the
> > > area where the hole previously existed, then would we still get an
> > > invalidate when this happens? Is there any other way to get notified
> > > when the zeroed page is written to if the invalidate does not get
> > > triggered?
> > 
> > What David is saying is that memfd does not use the zero page
> > optimization for hole punches. Any access to the memory, including
> > read-only access through hmm_range_fault() will allocate unique
> > pages. Since there is no zero page and no zero-page replacement there
> > is no issue with invalidations.

> It looks like even with hmm_range_fault(), the invalidate does not get
> triggered when the hole is refilled with new pages because of writes.
> This is probably because the pages that hmm_range_fault() faults in are
> not the ones that get invalidated later when the writes occur.

hmm_range_fault() returns the current content of the VMAs, or it
faults. If it returns pages then it came from one of these two places.

If your VMA is incoherent with what you are doing then you have bigger
problems, or maybe you found a bug.
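
For reference, the driver-side pattern that keeps hmm_range_fault() results coherent with invalidations (documented in Documentation/mm/hmm.rst) pairs the fault with an mmu_interval_notifier sequence check. A simplified, non-runnable kernel-side sketch, where `driver_lock` is a hypothetical per-driver lock taken by the driver's invalidate callback:

```c
/* Sketch only: names follow Documentation/mm/hmm.rst. */
static int dev_fault_range(struct mmu_interval_notifier *interval_sub,
			   struct mm_struct *mm, unsigned long addr,
			   unsigned long size, unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier = interval_sub,
		.start = addr,
		.end = addr + size,
		.hmm_pfns = pfns,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(interval_sub);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)
			goto again;	/* collided with an invalidation */
		return ret;
	}

	mutex_lock(&driver_lock);
	if (mmu_interval_read_retry(interval_sub, range.notifier_seq)) {
		/* An invalidation ran between fault and use: retry. */
		mutex_unlock(&driver_lock);
		goto again;
	}
	/* pfns[] is now coherent; program the device page table here. */
	mutex_unlock(&driver_lock);
	return 0;
}
```

If the retry check is skipped, pfns obtained before a hole punch can be consumed after the invalidation, which matches the stale-pfn symptom described in this thread.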

> The above log messages are seen immediately after the hole is punched. As
> you can see, hmm_range_fault() returns the pfns of old pages and not zero
> pages. And, I see the below messages (with patch #2 in this series applied)
> as the hole is refilled after writes:

I don't know what you are doing, but it is something wrong or you've
found a bug in the memfds.

Jason


More information about the dri-devel mailing list