[PATCH v1 03/14] mm: add iomem vma selection for memory migration

Felix Kuehling felix.kuehling at amd.com
Wed Sep 1 15:40:43 UTC 2021


On 2021-09-01 4:29 a.m., Christoph Hellwig wrote:
> On Mon, Aug 30, 2021 at 01:04:43PM -0400, Felix Kuehling wrote:
>>>> driver code is not really involved in updating the CPU mappings. Maybe
>>>> it's something we need to do in the migration helpers.
>>> It looks like I'm totally misunderstanding what you are adding here
>>> then.  Why do we need any special treatment at all for memory that
>>> has normal struct pages and is part of the direct kernel map?
>> The pages are like normal memory for purposes of mapping them in CPU
>> page tables and for coherent access from the CPU.
> That's the user page tables.  What about the kernel direct map?
> If there is a normal kernel struct page backing there really should
> be no need for the pgmap.

I'm not sure. The physical address ranges are in the UEFI system address
map as special-purpose memory. Does Linux create the struct pages and
kernel direct map for such ranges without a driver registering a pgmap? I
didn't see that the last time I went digging through that code.
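
For reference, this is roughly how the struct pages get created today when
the driver does register a pgmap, in the DEVICE_PRIVATE case. A simplified
sketch, not our actual registration code; the my_* names are placeholders:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/memremap.h>
#include <linux/mm.h>

/* Driver callbacks, defined elsewhere (migrate_to_ram is sketched below): */
static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf);
static void my_page_free(struct page *page);

static const struct dev_pagemap_ops my_pgmap_ops = {
	.migrate_to_ram	= my_migrate_to_ram,	/* CPU faulted on a device page */
	.page_free	= my_page_free,		/* page released back to the driver */
};

static int my_register_device_memory(struct device *dev,
				     resource_size_t base,
				     resource_size_t size,
				     struct dev_pagemap *pgmap)
{
	void *addr;

	pgmap->type = MEMORY_DEVICE_PRIVATE;	/* a coherent type would go here instead */
	pgmap->range.start = base;
	pgmap->range.end = base + size - 1;
	pgmap->nr_range = 1;
	pgmap->ops = &my_pgmap_ops;
	pgmap->owner = dev;			/* matched against pgmap_owner in migrate_vma */

	/* Creates the struct pages covering [base, base + size) */
	addr = devm_memremap_pages(dev, pgmap);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}

Without something along those lines I don't see where the struct pages for
these special-purpose ranges would come from.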


>
>> From an application
>> perspective, we want file-backed and anonymous mappings to be able to
>> use DEVICE_PUBLIC pages with coherent CPU access. The goal is to
>> optimize performance for GPU heavy workloads while minimizing the need
>> to migrate data back-and-forth between system memory and device memory.
> I don't really understand that part.  File-backed pages are always
> allocated by the file system using the pagecache helpers, that is
> using the page allocator.  Anonymous memory also always comes from
> the page allocator.

I'm coming at this from my experience with DEVICE_PRIVATE. Both
anonymous and file-backed pages should be migratable to DEVICE_PRIVATE
memory by the migrate_vma_* helpers for more efficient access by our
GPU. (*) It's part of the basic premise of HMM as I understand it. I
would expect the same thing to work for DEVICE_PUBLIC memory.

(*) I believe migrating file-backed pages to DEVICE_PRIVATE doesn't
currently work, but that's something I'm hoping to fix at some point.
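
To make the flow concrete, here is roughly what migrating a range of an
anonymous VMA into device memory looks like with the current helpers. This
is a compressed sketch loosely following lib/test_hmm.c and the nouveau
DEVICE_PRIVATE path, not our actual code; my_alloc_device_page() and
my_copy_to_device() are placeholders for driver-specific allocation and
DMA copies:

#include <linux/kernel.h>
#include <linux/migrate.h>
#include <linux/mm.h>

/* Hypothetical stand-ins for the TTM allocation and copy-engine specifics: */
static struct page *my_alloc_device_page(void);
static void my_copy_to_device(struct page *dpage, struct page *spage);

static int my_migrate_range_to_device(struct vm_area_struct *vma,
				      unsigned long start, unsigned long end,
				      void *pgmap_owner)
{
	unsigned long src[64] = { 0 }, dst[64] = { 0 };	/* real code sizes this properly */
	struct migrate_vma migrate = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.pgmap_owner	= pgmap_owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,	/* collect system pages */
	};
	unsigned long i, npages = (end - start) >> PAGE_SHIFT;
	int ret;

	if (npages > ARRAY_SIZE(src))
		return -EINVAL;

	ret = migrate_vma_setup(&migrate);	/* unmaps and collects source pages */
	if (ret)
		return ret;

	for (i = 0; i < npages; i++) {
		struct page *spage = migrate_pfn_to_page(migrate.src[i]);
		struct page *dpage;

		if (!(migrate.src[i] & MIGRATE_PFN_MIGRATE))
			continue;	/* this page cannot be migrated right now */

		dpage = my_alloc_device_page();
		if (!dpage)
			continue;	/* dst[i] stays 0: page remains in system memory */
		lock_page(dpage);
		if (spage)
			my_copy_to_device(dpage, spage);
		/* spage == NULL means an empty PTE; a real driver clears dpage */
		migrate.dst[i] = migrate_pfn(page_to_pfn(dpage));
	}

	migrate_vma_pages(&migrate);	/* replaces the old PTEs with device-page entries */
	migrate_vma_finalize(&migrate);	/* restores mappings for pages that didn't move */
	return 0;
}

Our GPU page-fault and prefetch paths drive essentially this sequence; the
device side only supplies the destination pages and the copy engine.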


>
>> The pages are special in two ways:
>>
>>  1. The memory is managed not by the Linux buddy allocator, but by the
>>     GPU driver's TTM memory manager
> Why?

It's a system architecture decision: based on the access latency to this
memory and the expected use cases, we do not want the GPU driver and the
Linux buddy allocator and VM subsystem competing for the same device
memory.


>
>>  2. We want to migrate data in response to GPU page faults and
>>     application hints using the migrate_vma helpers
> Why? 

Device memory has much higher bandwidth and much lower latency than
regular system memory for the GPU to access. Keeping hot data in device
memory is essential for enabling good GPU application performance.
Page-based memory migration enables
good performance with more intuitive programming models such as
managed/unified memory in HIP or unified shared memory in OpenMP. We do
this on our discrete GPUs with DEVICE_PRIVATE memory.
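
For the reverse direction with DEVICE_PRIVATE, a CPU access faults and the
pgmap's migrate_to_ram callback copies the page back to system memory;
that round trip is exactly what coherent CPU access would avoid. Again a
sketch patterned on nouveau and lib/test_hmm.c rather than our driver,
with my_copy_from_device() as a placeholder; this would be the
my_migrate_to_ram callback wired into the pgmap sketch earlier:

#include <linux/migrate.h>
#include <linux/mm.h>

static void my_copy_from_device(struct page *spage, struct page *dpage);

static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
{
	struct page *dpage = vmf->page;		/* the device page being touched */
	unsigned long src = 0, dst = 0;
	vm_fault_t ret = 0;
	struct page *spage;
	struct migrate_vma migrate = {
		.vma		= vmf->vma,
		.start		= vmf->address,
		.end		= vmf->address + PAGE_SIZE,
		.src		= &src,
		.dst		= &dst,
		.pgmap_owner	= dpage->pgmap->owner,
		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
	};

	if (migrate_vma_setup(&migrate) < 0)
		return VM_FAULT_SIGBUS;
	if (!migrate.cpages)		/* raced with another migration, nothing to do */
		return 0;

	if (src & MIGRATE_PFN_MIGRATE) {
		spage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vmf->vma, vmf->address);
		if (spage) {
			lock_page(spage);
			my_copy_from_device(spage, dpage);	/* placeholder copy back */
			dst = migrate_pfn(page_to_pfn(spage));
			migrate_vma_pages(&migrate);
		} else {
			ret = VM_FAULT_OOM;
		}
	}

	migrate_vma_finalize(&migrate);
	return ret;
}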

I see DEVICE_PUBLIC as an improved version of DEVICE_PRIVATE that allows
the CPU to map the device memory coherently to minimize the need for
migrations when the CPU and GPU access the same memory concurrently or
alternately. But we're not going as far as putting that memory
entirely under the management of the Linux memory manager and VM
subsystem. Our (and HPE's) system architects decided that this memory is
not suitable to be used like regular NUMA system memory by the Linux
memory manager.

Regards,
  Felix



