[PATCH v2 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages
Yonatan Maman
ymaman at nvidia.com
Tue Jul 22 05:42:30 UTC 2025
On 21/07/2025 9:59, Christoph Hellwig wrote:
> On Fri, Jul 18, 2025 at 02:51:08PM +0300, Yonatan Maman wrote:
>> From: Yonatan Maman <Ymaman at Nvidia.com>
>>
>> hmm_range_fault() by default triggers a page fault on device private
>> pages when the HMM_PFN_REQ_FAULT flag is set, migrating them to RAM. In some
>> cases, such as with RDMA devices, the migration overhead between the
>> device (e.g., GPU) and the CPU, and vice-versa, significantly degrades
>> performance. Thus, enabling Peer-to-Peer (P2P) DMA access for device
>> private pages might be crucial for minimizing data transfer overhead.
>
> You don't enable DMA for device private pages. You allow discovering
> a DMAable alias for device private pages.
>
> Also absolutely nothing GPU specific here.
>
Ok, understood, I will change it (v3).
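For v3, the reworked flow could look roughly like this - a minimal
sketch, assuming a pgmap-level callback along the lines of what this
series proposes (the callback name get_dma_pfn_for_device and the
helper hmm_dma_pfn_alias() are placeholder spellings here, not final
API):

	/*
	 * Ask the owning driver for a DMAable alias (e.g. a PFN backed
	 * by a PCI BAR) of a device private page, instead of handing
	 * out the device private fake PFN itself.
	 */
	static int hmm_dma_pfn_alias(struct dev_pagemap *pgmap,
				     struct page *private_page,
				     unsigned long *dma_pfn)
	{
		if (!pgmap->ops || !pgmap->ops->get_dma_pfn_for_device)
			return -EOPNOTSUPP;

		return pgmap->ops->get_dma_pfn_for_device(private_page,
							  dma_pfn);
	}

So both the commit message and the code would talk about discovering a
DMAable alias for a device private page, rather than enabling DMA on it.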
>> +	/*
>> +	 * Don't fault in device private pages owned by the caller,
>> +	 * just report the PFN.
>> +	 */
>> +	if (pgmap->owner == range->dev_private_owner) {
>> +		*hmm_pfn = swp_offset_pfn(entry);
>> +		goto found;
>
> This is dangerous because it mixes actual DMAable alias PFNs with the
> device private fake PFNs. Maybe your hardware / driver can handle
> it, but just leaking this out is not a good idea.
>
In the current implementation, regular pci_p2p pages are returned as-is
from hmm_range_fault(): for a virtual address backed by a pci_p2p page,
it returns the corresponding PFN.
That said, we can mark these via the hmm_pfn output flags so the caller
can handle them appropriately.
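Roughly like this, reusing the alias helper sketched above
(HMM_PFN_P2PDMA is an illustrative output flag name, not something this
patch defines):

	/* In hmm_vma_handle_pte(), for a caller-owned device private entry: */
	if (pgmap->owner == range->dev_private_owner) {
		struct page *page = pfn_swap_entry_to_page(entry);
		unsigned long dma_pfn;

		if (!hmm_dma_pfn_alias(pgmap, page, &dma_pfn)) {
			/* Flag the PFN so the caller can tell it is a P2P alias */
			*hmm_pfn = dma_pfn | HMM_PFN_P2PDMA;
			goto found;
		}
		/* No alias available: fall back to faulting/migration. */
	}

This keeps ordinary system memory PFNs, P2P aliases, and device private
fake PFNs distinguishable from the caller's point of view.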
>> + hmm_handle_device_private(range, pfn_req_flags, entry, hmm_pfn))
>
> Overly long line here.
>
Will be fixed (v3).
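For reference, assuming the call sits inside the
is_device_private_entry() check, the wrapped version would be something
like:

		if (is_device_private_entry(entry) &&
		    hmm_handle_device_private(range, pfn_req_flags,
					      entry, hmm_pfn))

which keeps it within the usual 80-column limit.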