[PATCH v2 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages

Jason Gunthorpe jgg at nvidia.com
Fri Aug 1 16:52:59 UTC 2025


On Sun, Jul 20, 2025 at 11:59:10PM -0700, Christoph Hellwig wrote:
> > +	/*
> > +	 * Don't fault in device private pages owned by the caller,
> > +	 * just report the PFN.
> > +	 */
> > +	if (pgmap->owner == range->dev_private_owner) {
> > +		*hmm_pfn = swp_offset_pfn(entry);
> > +		goto found;
> 
> This is dangerous because it mixes actual DMAable alias PFNs with the
> device private fake PFNs.  Maybe your hardware / driver can handle
> it, but just leaking this out is not a good idea.

For better or worse, that is how the hmm API works today.

Recall that the result is an array of unsigned longs, each encoding a pfn and flags:

enum hmm_pfn_flags {
	/* Output fields and flags */
	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),

The only promise is that every pfn has a struct page behind it.

If the caller specifies dev_private_owner, then it must also look at
the struct page of every returned pfn to see whether it is device
private or not.
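
For illustration, an untested caller-side sketch of that check (not
taken from any in-tree driver; handle_private()/handle_normal() are
made-up stand-ins for whatever the driver actually does with each
page):

static void consume_hmm_pfns(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; i++) {
		unsigned long hmm_pfn = range->hmm_pfns[i];
		struct page *page;

		if (!(hmm_pfn & HMM_PFN_VALID))
			continue;

		page = hmm_pfn_to_page(hmm_pfn);
		if (is_device_private_page(page))
			handle_private(page);	/* fake pfn owned by dev_private_owner */
		else
			handle_normal(page);	/* normal or P2P DMA'able pfn */
	}
}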

hmm_dma_map_pfn() already unconditionally calls pci_p2pdma_state(),
which checks for P2P struct pages.
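
Roughly, and only as a sketch of the shape of that check (not the
literal mm/hmm.c code; hmm_pfn_is_p2p() is a made-up helper name):

static bool hmm_pfn_is_p2p(struct device *dev,
			   struct pci_p2pdma_map_state *p2pdma_state,
			   unsigned long hmm_pfn)
{
	struct page *page = hmm_pfn_to_page(hmm_pfn);

	switch (pci_p2pdma_state(p2pdma_state, dev, page)) {
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* peer memory reached through the PCI bus address */
		return true;
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* peer memory that still takes the normal DMA path */
		return true;
	default:
		/* ordinary host memory, or P2P not possible */
		return false;
	}
}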

It does sound like a good improvement to return the type of the pfn
(normal, p2p, private) in the flag bits as well, to optimize away
these extra struct page lookups.
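
Purely as an illustration of the idea (these flag names are made up
and do not exist in hmm.h today), the enum above could grow something
like:

	/* Hypothetical output flags, for illustration only */
	HMM_PFN_P2P            = 1UL << (BITS_PER_LONG - 4),
	HMM_PFN_DEVICE_PRIVATE = 1UL << (BITS_PER_LONG - 5),

so callers could branch on the flags without touching the struct page.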

But this is a different project...

Jason
