[PATCH v6 3/5] mm/gup: Introduce memfd_pin_user_pages() for pinning memfd pages (v6)

Jason Gunthorpe jgg at nvidia.com
Thu Dec 7 13:05:32 UTC 2023


On Thu, Dec 07, 2023 at 10:44:14AM +0100, David Hildenbrand wrote:

> > > If you always want to return folios, then it's better to name it
> > > "memfd_pin_user_folios" (or just "memfd_pin_folios") and pass in a range
> > > (instead of a nr_pages parameter), and somehow indicate to the caller
> > > how many folios were in that range, and whether that range was fully covered.
> > I think it makes sense to return folios from this interface; and considering my
> > use-case, I'd like to have this API return an error if it cannot pin (or allocate)
> > the exact number of folios the caller requested.
> 
> Okay, then better use folios.
> 
> Assuming a caller puts in "start = X" and gets some large folio back, how is
> the caller supposed to know at which offset to look into that folio (IOW,
> which subpage)? For "pages" it was obvious (you get the actual subpages),
> but as soon as we return a large folio, some information is missing for the
> caller.
> 
> How can the caller figure that out?

This can only work if the memfd is required to contain only full folios at
aligned locations. Under that restriction, computing the offset into the
first folio is easy enough:

  folio offset = (start % folio size)

But is that true for the memfds here?
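
For illustration only, the caller side would then need nothing more than
something like this (untested sketch, assuming that alignment restriction
holds, that 'start' is a byte offset, and that the proposed
memfd_pin_folios() returned 'folio' as the first folio covering 'start'):

  static unsigned long first_subpage_offset(struct folio *folio,
                                            unsigned long start)
  {
          /* byte offset of 'start' within the naturally aligned large folio */
          return start & (folio_size(folio) - 1);
  }

If the caller wants the subpage itself rather than a byte offset, that is
just folio_page(folio, offset / PAGE_SIZE).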

> > I can make the udmabuf driver use folios instead of pages too, but the function
> > check_and_migrate_movable_pages() in GUP still takes a list of pages. Do you
> > think it is ok to use a local variable to collect all the head pages for this?
> 
> I think you can simply pass in the head page, because only whole folios can
> be converted. At some point we should convert that one to use folios as
> well.

It is like that because it processes the output from GUP in place, which is
a page list.
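
As an interim step, the local-array idea above could look roughly like this
(sketch only; check_and_migrate_movable_pages() is internal to mm/gup.c and
its current signature may not match exactly):

  struct page **head_pages;
  long rc, i;

  head_pages = kmalloc_array(nr_folios, sizeof(*head_pages), GFP_KERNEL);
  if (!head_pages)
          return -ENOMEM;

  /* only the head page matters, since whole folios get migrated */
  for (i = 0; i < nr_folios; i++)
          head_pages[i] = folio_page(folios[i], 0);

  rc = check_and_migrate_movable_pages(nr_folios, head_pages);
  kfree(head_pages);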

Probably what we need to do is make the migration checks happen while
accumulating the pages, so we don't need to scan the output list.
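
Roughly, inside the accumulation loop that would amount to something like
this (very much a sketch; the error label and the actual migration step are
hand-waved):

  if (!folio_is_longterm_pinnable(folio)) {
          /* migrate the folio here, or give up and unpin what we pinned */
          rc = -EAGAIN;
          goto err_unpin;
  }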

Jason

