[Nouveau] [PATCH] mm: Take a page reference when removing device exclusive entries
Matthew Wilcox
willy at infradead.org
Wed Mar 29 03:16:25 UTC 2023
On Tue, Mar 28, 2023 at 01:14:34PM +1100, Alistair Popple wrote:
> +++ b/mm/memory.c
> @@ -3623,8 +3623,19 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
> struct vm_area_struct *vma = vmf->vma;
> struct mmu_notifier_range range;
>
> - if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
> + /*
> + * We need a page reference to lock the page because we don't
> + * hold the PTL so a racing thread can remove the
> + * device-exclusive entry and unmap the page. If the page is
> + * free the entry must have been removed already.
> + */
> + if (!get_page_unless_zero(vmf->page))
> + return 0;
From a folio point of view: what the hell are you doing here? Tail
pages don't have individual refcounts; all the refcounts are actually
taken on the folio. So this should be:
if (!folio_try_get(folio))
return 0;
(you can fix up the comment yourself)
> + if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
> + put_page(vmf->page);
folio_put(folio);
> return VM_FAULT_RETRY;
> + }
> mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
> vma->vm_mm, vmf->address & PAGE_MASK,
> (vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
> @@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> folio_unlock(folio);
> + put_page(vmf->page);
folio_put(folio);
There, I just saved you 3 calls to compound_head(), saving roughly 150
bytes of kernel text.
> mmu_notifier_invalidate_range_end(&range);
> return 0;
> --
> 2.39.2
>
>