[RFC PATCH 0/5] Support DEVICE_GENERIC memory in migrate_vma_*

Felix Kuehling felix.kuehling at amd.com
Fri May 28 15:56:36 UTC 2021


On 2021-05-28 at 9:08 a.m., Jason Gunthorpe wrote:
> On Thu, May 27, 2021 at 07:08:04PM -0400, Felix Kuehling wrote:
>> Now we're trying to migrate data to and from that memory using the
>> migrate_vma_* helpers so we can support page-based migration in our
>> unified memory allocations, while also supporting CPU access to those
>> pages.
> So you have completely coherent and indistinguishable GPU and CPU
> memory and the need for migration is basically a lot like a NUMA policy
> choice - get better access locality?

Yes. For a typical GPU compute application it means the GPU gets the
best bandwidth/latency, and the CPU can coherently access the results
without page faults and migrations. That's especially valuable for
applications with persistent compute kernels that want to exploit
concurrency between CPU and GPU.
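
To make that concrete, here is a rough sketch of the three-phase
migrate_vma_* flow for pushing a range of system pages into device
memory. alloc_device_page() and device_copy() are placeholders for
driver-specific allocation and DMA, not real kernel APIs, and the
MIGRATE_PFN_LOCKED handling reflects 5.13-era kernels:

#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/slab.h>

/* Placeholders for driver-specific device page allocation and DMA copy. */
static struct page *alloc_device_page(void);
static void device_copy(struct page *dst, struct page *src);

static int migrate_range_to_device(struct vm_area_struct *vma,
				   unsigned long start, unsigned long end)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	struct migrate_vma mig = {};
	unsigned long *src, *dst;
	unsigned long i;
	int ret = -ENOMEM;

	src = kvcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kvcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst)
		goto out;

	mig.vma = vma;
	mig.start = start;
	mig.end = end;
	mig.src = src;
	mig.dst = dst;
	mig.flags = MIGRATE_VMA_SELECT_SYSTEM;	/* collect system RAM pages */

	ret = migrate_vma_setup(&mig);		/* phase 1: isolate and unmap */
	if (ret)
		goto out;

	for (i = 0; i < npages; i++) {
		struct page *dpage;

		if (!(mig.src[i] & MIGRATE_PFN_MIGRATE))
			continue;		/* this page cannot be migrated */

		dpage = alloc_device_page();	/* placeholder, driver-specific */
		if (!dpage)
			continue;		/* dst[i] stays 0: page is skipped */

		lock_page(dpage);
		device_copy(dpage, migrate_pfn_to_page(mig.src[i])); /* placeholder */
		mig.dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
	}

	migrate_vma_pages(&mig);		/* phase 2: install the new pages */
	migrate_vma_finalize(&mig);		/* phase 3: restore CPU mappings */
out:
	kvfree(src);
	kvfree(dst);
	return ret;
}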


>  
>> This patch series makes a few changes to make MEMORY_DEVICE_GENERIC pages
>> behave correctly in the migrate_vma_* helpers. We are looking for feedback
>> about this approach. If we're close, what's needed to make our patches
>> acceptable upstream? If we're not close, any suggestions how else to
>> achieve what we are trying to do (i.e. page migration and coherent CPU
>> access to VRAM)?
> I'm not an expert in migrate, but it doesn't look outrageous.
>
> Have you thought about allowing MEMORY_DEVICE_GENERIC to work with
> hmm_range_fault() so you can have nice uniform RDMA?

Yes. That's our plan for RDMA to unified memory on this system. My
understanding was that DEVICE_GENERIC pages should already work with
hmm_range_fault(). But maybe I'm missing something.
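
To sketch what that would look like on the RDMA side, here is a minimal
hmm_range_fault() loop following the pattern in Documentation/vm/hmm.rst.
It assumes the caller has already registered an mmu_interval_notifier
for the range; the device page-table locking that must bracket the retry
check is only hinted at in a comment:

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mm.h>

static int mirror_range(struct mm_struct *mm,
			struct mmu_interval_notifier *notifier,
			unsigned long start, unsigned long end,
			unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier	= notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
		/*
		 * NULL: we own no DEVICE_PRIVATE memory, so any such pages
		 * are migrated back to system RAM for us.  DEVICE_GENERIC
		 * pages have normal struct pages and should simply be
		 * returned like system memory.
		 */
		.dev_private_owner = NULL,
	};
	int ret;

	while (1) {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret) {
			if (ret == -EBUSY)
				continue;	/* collided with an invalidation, retry */
			return ret;
		}
		/*
		 * A real driver takes its device page-table lock here so
		 * that programming the device and concurrent invalidations
		 * are serialized around the retry check below.
		 */
		if (mmu_interval_read_retry(notifier, range.notifier_seq))
			continue;
		break;
	}

	/* pfns[] now holds HMM_PFN_VALID-encoded pfns for the whole range. */
	return 0;
}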


>
> People have wanted to do that with MEMORY_DEVICE_PRIVATE but nobody
> finished the work

Yeah, for DEVICE_PRIVATE it seems trickier because the peer device is
not the owner of the pages and would need help from the actual owner to
get proper DMA addresses.

Regards,
  Felix


>
> Jason

