[PATCH RFC 10/35] mm/hugetlb: cleanup hugetlb_folio_init_tail_vmemmap()
Mika Penttilä
mpenttil at redhat.com
Fri Aug 22 04:09:17 UTC 2025
On 8/21/25 23:06, David Hildenbrand wrote:
> All pages were already initialized and set to PageReserved() with a
> refcount of 1 by MM init code.
Just to be sure: how does this work with MEMBLOCK_RSRV_NOINIT, where MM is not
supposed to initialize the struct pages?
> In fact, by using __init_single_page(), we will be setting the refcount to
> 1 just to freeze it again immediately afterwards.
>
> So drop the __init_single_page() and use __ClearPageReserved() instead.
> Adjust the comments to highlight that we are dealing with an open-coded
> prep_compound_page() variant.
>
> Further, as we can now safely iterate over all pages in a folio, let's
> avoid the page-pfn dance and just iterate the pages directly.
>
> Note that the current code was likely problematic, but we never ran into
> it: prep_compound_tail() would have been called with an offset that might
> exceed a memory section, and prep_compound_tail() would have simply
> added that offset to the page pointer -- which would not have done the
> right thing on sparsemem without vmemmap.
>
> Signed-off-by: David Hildenbrand <david at redhat.com>
> ---
> mm/hugetlb.c | 21 ++++++++++-----------
> 1 file changed, 10 insertions(+), 11 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d12a9d5146af4..ae82a845b14ad 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3235,17 +3235,14 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
> unsigned long start_page_number,
> unsigned long end_page_number)
> {
> - enum zone_type zone = zone_idx(folio_zone(folio));
> - int nid = folio_nid(folio);
> - unsigned long head_pfn = folio_pfn(folio);
> - unsigned long pfn, end_pfn = head_pfn + end_page_number;
> + struct page *head_page = folio_page(folio, 0);
> + struct page *page = folio_page(folio, start_page_number);
> + unsigned long i;
> int ret;
>
> - for (pfn = head_pfn + start_page_number; pfn < end_pfn; pfn++) {
> - struct page *page = pfn_to_page(pfn);
> -
> - __init_single_page(page, pfn, zone, nid);
> - prep_compound_tail((struct page *)folio, pfn - head_pfn);
> + for (i = start_page_number; i < end_page_number; i++, page++) {
> + __ClearPageReserved(page);
> + prep_compound_tail(head_page, i);
> ret = page_ref_freeze(page, 1);
> VM_BUG_ON(!ret);
> }
> @@ -3257,12 +3254,14 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
> {
> int ret;
>
> - /* Prepare folio head */
> + /*
> + * This is an open-coded prep_compound_page() whereby we avoid
> + * walking pages twice by preparing+freezing them in the same go.
> + */
> __folio_clear_reserved(folio);
> __folio_set_head(folio);
> ret = folio_ref_freeze(folio, 1);
> VM_BUG_ON(!ret);
> - /* Initialize the necessary tail struct pages */
> hugetlb_folio_init_tail_vmemmap(folio, 1, nr_pages);
> prep_compound_head((struct page *)folio, huge_page_order(h));
> }
--Mika