[RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

Kasireddy, Vivek vivek.kasireddy at intel.com
Tue Jul 25 22:44:09 UTC 2023


Hi Jason,

> > >
> > > > > I'm not at all familiar with the udmabuf use case but that sounds
> > > > > brittle and effectively makes this notifier udmabuf specific right?
> > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics
> > > > components
> > > > (such as Spice, Gstreamer, UI, etc) zero-copy access to Guest created
> > > > buffers. In other words, from a core mm standpoint, udmabuf just
> > > > collects a bunch of pages (associated with buffers) scattered inside
> > > > the memfd (Guest ram backed by shmem or hugetlbfs) and wraps
> > > > them in a dmabuf fd. And, since we provide zero-copy access, we
> > > > use DMA fences to ensure that the components on the Host and
> > > > Guest do not access the buffer simultaneously.
> > >
> > > So why do you need to track updates proactively like this?
> > As David noted in the earlier series, if Qemu punches a hole in its memfd
> > that goes through pages that are registered against a udmabuf fd, then
> > udmabuf needs to update its list with new pages when the hole gets
> > filled after (guest) writes. Otherwise, we'd run into the coherency
> > problem (between udmabuf and memfd) as demonstrated in the selftest
> > (patch #3 in this series).
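
Just to make the failure mode concrete, the sequence that exposes it looks
roughly like the below (a simplified userspace sketch, not the actual
selftest from patch #3; the single-page buffer is made up and error
handling/cleanup are elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/falloc.h>
#include <linux/udmabuf.h>

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);

	/* memfd standing in for Guest ram; udmabuf requires F_SEAL_SHRINK */
	int memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING);
	ftruncate(memfd, psize);
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	/* wrap the page at offset 0 of the memfd in a dmabuf */
	struct udmabuf_create create = {
		.memfd = memfd, .offset = 0, .size = psize,
	};
	int devfd = open("/dev/udmabuf", O_RDWR);
	int dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

	/* punch a hole through the page that udmabuf has collected ... */
	fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		  0, psize);

	/* ... and write to it: the fault installs a new page in the memfd,
	 * but udmabuf still holds the old one, so reads through dmabuf_fd
	 * no longer match what the memfd contains.
	 */
	char *addr = mmap(NULL, psize, PROT_READ | PROT_WRITE, MAP_SHARED,
			  memfd, 0);
	memset(addr, 0xaa, psize);
	return 0;
}
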
> 
> Holes created in VMA are tracked by invalidation, you haven't
> explained why this needs to also see change.
Oh, the invalidation part is ok and does not need any changes. My concern
(and the reason for this new notifier patch) is only about the lack of a
notification when a PTE is updated because of a fault (new page). In other
words, if something like change_pte() were called after handle_pte_fault()
or hugetlb_fault(), this patch would not be needed.
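
For reference, the shape I have in mind is just an extra mmu_notifier_ops
callback that runs once the fault handler has installed the new PTE; the
callback name and arguments below are only illustrative (and the udmabuf
helpers are placeholders), not a final interface:

/* illustrative only: a post-fault callback alongside the existing ops */
static void udmabuf_update_mapping(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address,
				   unsigned long pfn)
{
	/* look up which udmabuf slot maps 'address' and swap in the
	 * newly faulted-in page identified by 'pfn'
	 */
}

static const struct mmu_notifier_ops udmabuf_notifier_ops = {
	.invalidate_range_start	= udmabuf_invalidate_range_start,
	/* proposed: invoked after handle_pte_fault()/hugetlb_fault()
	 * installs a new PTE, similar in spirit to change_pte()
	 */
	.update_mapping		= udmabuf_update_mapping,
};
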

> 
> BTW it is very jarring to hear you talk about files when working with
> mmu notifiers. MMU notifiers do not track hole punches or memfds, they
> track VMAs and PTEs. Punching a hole in a mmapped memfd will
> invalidate the covering PTEs.
I figured describing the problem in terms of memfds or hole punches would
provide more context; but, ok, I'll refrain from mentioning memfds or holes
and limit the discussion of this patch to VMAs and PTEs. 

> 
> > > Trigger a move when the backing memory changes and re-acquire it with
> > AFAICS, without this patch or adding new change_pte calls, there is
> > no way to get notified when a new page is mapped into the backing
> > memory of a memfd (backed by shmem or hugetlbfs) which happens after
> > a hole punch followed by writes.
> 
> Yes, we have never wanted to do this because it is racy.
> 
> If you still need the memory mapped then you re-call hmm_range_fault
> and re-obtain it. hmm_range_fault will resolve all the races and you
> get new pages.
IIUC, for my udmabuf use-case, it looks like calling hmm_range_fault()
immediately after an invalidate (range notification) would preemptively
fault in new pages before a write occurs. The problem with that is that if
a read happens on those new pages, the data would be incorrect, as the
write may not have occurred yet. Ideally, what I am looking for is to get
the new pages at the time of (or after) a write; until then, it is ok to
keep using the old pages, given my use-case.
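
And just to confirm I am reading your suggestion right, the re-acquire
pattern would be roughly the canonical hmm_range_fault() loop below
(simplified; error handling and actual consumption of the pfns omitted),
where the fault is triggered by hmm_range_fault() itself right after the
invalidation rather than by the Guest's write:

	struct hmm_range range = {
		.notifier	= &udmabuf_interval_notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT, /* fault in missing pages */
	};
	int ret;

	do {
		range.notifier_seq = mmu_interval_read_begin(range.notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret == -EBUSY)
			continue;	/* seq went stale, start over */
		if (ret)
			return ret;
	} while (mmu_interval_read_retry(range.notifier, range.notifier_seq));

	/* pfns[] now refers to the new pages, but they were faulted in
	 * here, before the Guest actually writes to them
	 */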

> 
> > We can definitely get notified when a hole is punched via the
> > invalidate notifiers though, but as I described earlier this is not very helpful
> > for the udmabuf use-case.
> 
> I still don't understand why, or what makes udmabuf so special
> compared to all the other places tracking VMA changes and using
> hmm_range_fault.
I think the difference comes down to whether we (the udmabuf driver) grab
the new pages after being notified about a PTE update caused by a
write-triggered fault, versus proactively obtaining the new pages by
triggering the fault ourselves (since hmm_range_fault() appears to call
handle_mm_fault()) before a potential write.

Thanks,
Vivek

> 
> Jason

