[PATCH v2 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages

Alistair Popple apopple at nvidia.com
Wed Jul 23 04:10:01 UTC 2025


On Wed, Jul 23, 2025 at 12:51:42AM -0300, Jason Gunthorpe wrote:
> On Tue, Jul 22, 2025 at 10:49:10AM +1000, Alistair Popple wrote:
> > > So what is it?
> > 
> > IMHO a hack, because obviously we shouldn't require real physical addresses for
> > something the CPU can't actually address anyway, and this causes real
> > problems.
> 
> IMHO what DEVICE PRIVATE really boils down to is a way to have swap
> entries that point to some kind of opaque driver-managed memory.
> 
> We have a lot of assumptions all over about pfn/phys-to-page
> relationships, so anything that has a struct page also has to come with
> a fake PFN today.

Hmm ... maybe. To get that PFN, though, we have to come from either a special
swap entry, which we already have special cases for, or a struct page (which is
a device-private page), which we mostly have to handle specially anyway. I'm not
sure there are many places that can sensibly handle a fake PFN without somehow
already knowing it is a device-private PFN.
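For reference, the path today looks roughly like the sketch below. This is only
an illustration, not code from this series; the helper names follow current
mainline swapops.h, but older kernels spell some of them differently and the
details are elided:

#include <linux/mm.h>
#include <linux/swapops.h>

/*
 * Sketch: how a non-present PTE gets back to its device-private struct
 * page today.  The offset carried in the swap entry is effectively a
 * real PFN, which is why every device-private page still needs one.
 */
static struct page *sketch_device_private_to_page(pte_t pte)
{
	swp_entry_t entry;

	if (pte_present(pte))
		return NULL;		/* not a special swap entry at all */

	entry = pte_to_swp_entry(pte);
	if (!is_device_private_entry(entry))
		return NULL;		/* some other kind of swap entry */

	/* Conceptually just pfn_to_page() on the offset stored in the entry. */
	return pfn_swap_entry_to_page(entry);
}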

> > (e.g. it doesn't actually work on anything other than x86_64). There's no reason
> > the "PFN" we store in device-private entries couldn't instead just be an index
> > into some data structure holding pointers to the struct pages. So instead of
> > using pfn_to_page()/page_to_pfn() we would use device_private_index_to_page()
> > and page_to_device_private_index().
> 
> It could work, but any of the pfn conversions would have to be tracked
> down. Could be troublesome.

I looked at this a while back, and I'm reasonably optimistic that it is doable
because we already have to treat these entries specially everywhere anyway. The
proof will be in writing the patches, of course.
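To make the quoted proposal concrete, the lookup could be something as simple
as the sketch below. The two function names are the ones from the proposal
above; the xarray backing store is purely an assumption for illustration, not
anything that has been posted:

#include <linux/gfp.h>
#include <linux/xarray.h>

/*
 * Sketch of the proposed index-based lookup: device-private "PFNs" become
 * indices into a table of struct page pointers.  The xarray used here is
 * only an assumption for illustration.
 */
static DEFINE_XARRAY_ALLOC(device_private_pages);

static int page_to_device_private_index(struct page *page, u32 *index)
{
	/* Allocate a free slot; the slot number replaces the fake PFN. */
	return xa_alloc(&device_private_pages, index, page, xa_limit_32b,
			GFP_KERNEL);
}

static struct page *device_private_index_to_page(u32 index)
{
	return xa_load(&device_private_pages, index);
}

The device-private swap entry would then carry the index instead of a PFN, and
the pfn_to_page()/page_to_pfn() conversions on this path would switch over to
these helpers.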

 - Alistair

> Jason

