[PATCH v2 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages
Jason Gunthorpe
jgg at ziepe.ca
Tue Aug 5 14:09:25 UTC 2025
On Mon, Aug 04, 2025 at 11:51:38AM +1000, Alistair Popple wrote:
> On Fri, Aug 01, 2025 at 01:57:49PM -0300, Jason Gunthorpe wrote:
> > On Fri, Aug 01, 2025 at 06:50:18PM +0200, David Hildenbrand wrote:
> > > On 01.08.25 18:40, Jason Gunthorpe wrote:
> > > > On Fri, Jul 25, 2025 at 10:31:25AM +1000, Alistair Popple wrote:
> > > >
> > > > > The only issue would be if there were generic code paths that somehow have a
> > > > > raw pfn obtained from neither a page-table walk nor a struct page. My assumption
> > > > > (yet to be proven/tested) is that these paths don't exist.
> > > >
> > > > hmm does it: it encodes the device private page into a pfn and expects
> > > > the caller to do pfn to page.
>
> Which callers need to do a pfn-to-page conversion when finding a device
> private pfn via hmm_range_fault()? GPU drivers don't; they tend to just use
> the pfn as an offset from the start of the pgmap to find whatever data
> structure they are using to track device memory allocations.
All drivers today must. You have no idea whether the PFN returned refers
to a device private page or an ordinary CPU page. The only way to know is
to look inside the struct page and check its type.
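Roughly this, in every driver (a minimal sketch; the helper name is made
up and error handling is elided):

#include <linux/hmm.h>
#include <linux/memremap.h>

/*
 * Convert the hmm pfn back to a struct page and inspect its type,
 * which today is the only way to tell a device private entry apart
 * from an ordinary CPU page.
 */
static bool my_pfn_is_device_private(unsigned long hmm_pfn)
{
	struct page *page;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return false;

	page = hmm_pfn_to_page(hmm_pfn);
	return is_device_private_page(page);
}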
> So other than adding an HMM_PFN flag to say this is really a device index, I
> don't see too many issues here.
Christoph suggested exactly this, and it would solve the issue. Seems
quite easy too. Let's do it.
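Something like this, say (just a sketch: the flag name, the bit position
and the handle_device_index() helper are placeholders, not a settled API):

/* New output flag in include/linux/hmm.h alongside
 * HMM_PFN_VALID/WRITE/ERROR; the real bit would have to be carved
 * out of the existing flag/order layout in the high bits: */
	HMM_PFN_DEVICE_PRIVATE	= 1UL << (BITS_PER_LONG - 4),

/* Callers then test the flag instead of poking at the struct page,
 * and treat the low bits as a device index: */
	if (range->hmm_pfns[i] & HMM_PFN_DEVICE_PRIVATE)
		handle_device_index(range->hmm_pfns[i] & ~HMM_PFN_FLAGS);
	else
		page = hmm_pfn_to_page(range->hmm_pfns[i]);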
Jason