[PATCH v1 03/14] mm: add iomem vma selection for memory migration
Felix Kuehling
felix.kuehling at amd.com
Mon Aug 30 17:04:43 UTC 2021
On 2021-08-30 at 4:28 a.m., Christoph Hellwig wrote:
> On Thu, Aug 26, 2021 at 06:27:31PM -0400, Felix Kuehling wrote:
>> I think we're missing something here. As far as I can tell, all the work
>> we did first with DEVICE_GENERIC and now DEVICE_PUBLIC always used
>> normal pages. Are we missing something in our driver code that would
>> make these PTEs special? I don't understand how that can be, because
>> driver code is not really involved in updating the CPU mappings. Maybe
>> it's something we need to do in the migration helpers.
> It looks like I'm totally misunderstanding what you are adding here
> then. Why do we need any special treatment at all for memory that
> has normal struct pages and is part of the direct kernel map?
The pages are like normal memory for purposes of mapping them in CPU
page tables and for coherent access from the CPU. From an application
perspective, we want file-backed and anonymous mappings to be able to
use DEVICE_PUBLIC pages with coherent CPU access. The goal is to
optimize performance for GPU heavy workloads while minimizing the need
to migrate data back-and-forth between system memory and device memory.
The pages are special in two ways:
1. The memory is managed not by the Linux buddy allocator, but by the
GPU driver's TTM memory manager
2. We want to migrate data in response to GPU page faults and
application hints using the migrate_vma helpers
It's the second part that we're really trying to address with this patch
series.
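
For context, the migrate_vma flow this series builds on looks roughly like the sketch below. This is a simplified illustration, not code from the patches: my_device_copy() is a hypothetical driver helper, the src/dst PFN arrays are assumed to be caller-allocated, and error handling is omitted.

```c
/* Hedged sketch of a driver-initiated migration using the migrate_vma
 * helpers. my_device_copy() is a placeholder for the driver's DMA copy
 * step; src_pfns/dst_pfns are caller-allocated arrays sized for the range. */
struct migrate_vma migrate = {
	.vma         = vma,
	.start       = start,
	.end         = end,
	.src         = src_pfns,
	.dst         = dst_pfns,
	.pgmap_owner = owner,	/* matched against the page's pgmap owner */
	.flags       = MIGRATE_VMA_SELECT_SYSTEM,
};

if (migrate_vma_setup(&migrate))
	return -EBUSY;

/* Driver copies data from the collected source pages into device
 * memory and fills migrate.dst with the new device PFNs. */
my_device_copy(&migrate);	/* hypothetical helper */

migrate_vma_pages(&migrate);
migrate_vma_finalize(&migrate);
```

The point of the patch is the selection step: migrate_vma_setup() decides which pages in the range are candidates based on the flags, and that selection logic needs to understand coherent device memory.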
Regards,
Felix