[PATCH] mm/memremap: Introduce pgmap_request_folio() using pgmap offsets

Alistair Popple apopple at nvidia.com
Mon Oct 24 01:44:34 UTC 2022


Felix Kuehling <felix.kuehling at amd.com> writes:

> On 2022-10-20 19:17, Dan Williams wrote:
>> Felix Kuehling wrote:
>>> Am 2022-10-20 um 17:56 schrieb Dan Williams:
>>>>
>>>> For now this only converts the callers to look up the pgmap and
>>>> generate the pgmap offset, but it does not do the deeper cleanup of
>>>> teaching those call sites to generate those arguments without
>>>> walking the page metadata. As a next step, it appears the
>>>> DEVICE_PRIVATE implementations could plumb the pgmap into the
>>>> necessary call sites and switch to using gen_pool_alloc() to track
>>>> which offsets of a pgmap are allocated.

That's an interesting idea. I might take a look at converting hmm-tests
to do this (and probably, by extension, Nouveau, since the allocator is
basically the same), roughly along the lines of the sketch below.
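
A minimal sketch of what I mean, written against the
(pgmap, pgmap_offset, order) signature I'm assuming from this patch;
the dmirror_device fields and helper names are made up for
illustration, not the current hmm-test code:

  #include <linux/genalloc.h>
  #include <linux/memremap.h>

  /* One-time setup: one allocation unit per page of the pgmap */
  static int dmirror_offset_pool_init(struct dmirror_device *mdevice,
                                      unsigned long nr_pages)
  {
          mdevice->offset_pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
          if (!mdevice->offset_pool)
                  return -ENOMEM;

          /*
           * gen_pool_alloc() returns 0 on failure, so track offsets
           * with a +PAGE_SIZE bias to keep offset 0 allocatable.
           */
          return gen_pool_add(mdevice->offset_pool, PAGE_SIZE,
                              nr_pages << PAGE_SHIFT, NUMA_NO_NODE);
  }

  /* Pick a free pgmap offset and put that page into service */
  static struct folio *dmirror_folio_alloc(struct dmirror_device *mdevice)
  {
          unsigned long biased;

          biased = gen_pool_alloc(mdevice->offset_pool, PAGE_SIZE);
          if (!biased)
                  return NULL;

          /* gen_pool_free(pool, biased, PAGE_SIZE) undoes this */
          return pgmap_request_folio(&mdevice->pagemap,
                                     (biased - PAGE_SIZE) >> PAGE_SHIFT, 0);
  }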

Feel free to also add:

Reviewed-by: Alistair Popple <apopple at nvidia.com>

For the memremap/nouveau/hmm-test parts.

>>> Wouldn't that duplicate whatever device memory allocator we already
>>> have in our driver? Couldn't I just take the memory allocation from
>>> our TTM allocator and make the necessary pgmap_request_folio() calls
>>> to allocate the corresponding pages from the pgmap?
>> I think you could, as long as the output from that allocator is a
>> pgmap_offset rather than a pfn.
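
For a single-range pgmap that conversion is a one-liner either way.
Just to make the pfn vs. pgmap_offset distinction concrete (the helper
name here is hypothetical):

  static pgoff_t my_pfn_to_pgmap_offset(struct dev_pagemap *pgmap,
                                        unsigned long pfn)
  {
          /*
           * Assumes nr_range == 1; an allocator that hands out
           * pgmap offsets directly avoids this round-trip entirely.
           */
          return pfn - PHYS_PFN(pgmap->range.start);
  }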
>>
>>> Or does the pgmap allocation need a finer granularity than the device
>>> memory allocation?
>> I would say the pgmap *allocation* happens at memremap_pages() time.
>> pgmap_request_folio() is a request to put a pgmap page into service.
>>
>> So, yes, I think you can bring your own allocator to decide which
>> offsets are in or out of service in pgmap space.
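
To spell out the two phases for anyone following along (field and
callback names are illustrative, and the pgmap_request_folio()
signature is again my assumption):

  /* 1. Allocation: the whole pgmap range comes into existence here */
  pgmap->type = MEMORY_DEVICE_PRIVATE;
  pgmap->range.start = res->start;
  pgmap->range.end = res->end;
  pgmap->nr_range = 1;
  pgmap->ops = &my_devmem_ops;
  pgmap->owner = my_driver;
  addr = memremap_pages(pgmap, numa_node_id());
  if (IS_ERR(addr))
          return PTR_ERR(addr);

  /*
   * 2. Service: the driver's own allocator (TTM, genpool, ...) picks
   * a free offset, and only then is that page put into service.
   */
  folio = pgmap_request_folio(pgmap, pgmap_offset, 0);
  if (!folio)
          return -ENOMEM;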
>
> Thank you for the explanation. The patch is
>
> Acked-by: Felix Kuehling <Felix.Kuehling at amd.com>

