[PATCH 1/9] mm: Allow the [page|pfn]_mkwrite callbacks to drop the mmap_sem

Thomas Hellstrom thellstrom at vmware.com
Wed Apr 17 10:58:42 UTC 2019


Hi, Souptick,

On Sat, 2019-04-13 at 20:41 +0530, Souptick Joarder wrote:
> On Fri, Apr 12, 2019 at 9:34 PM Thomas Hellstrom <thellstrom at vmware.com> wrote:
> > Driver fault callbacks are allowed to drop the mmap_sem when expecting
> > long hardware waits to avoid blocking other mm users. Allow the mkwrite
> > callbacks to do the same by returning early on VM_FAULT_RETRY.
> > 
> > In particular we want to be able to drop the mmap_sem when waiting for
> > a reservation object lock on a GPU buffer object. These locks may be
> > held while waiting for the GPU.
> > 
> > Cc: Andrew Morton <akpm at linux-foundation.org>
> > Cc: Matthew Wilcox <willy at infradead.org>
> > Cc: Will Deacon <will.deacon at arm.com>
> > Cc: Peter Zijlstra <peterz at infradead.org>
> > Cc: Rik van Riel <riel at surriel.com>
> > Cc: Minchan Kim <minchan at kernel.org>
> > Cc: Michal Hocko <mhocko at suse.com>
> > Cc: Huang Ying <ying.huang at intel.com>
> > Cc: Souptick Joarder <jrdr.linux at gmail.com>
> > Cc: "Jérôme Glisse" <jglisse at redhat.com>
> > Cc: linux-mm at kvack.org
> > Cc: linux-kernel at vger.kernel.org
> > 
> > Signed-off-by: Thomas Hellstrom <thellstrom at vmware.com>
> > ---
> >  mm/memory.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> > 
> > diff --git a/mm/memory.c b/mm/memory.c
> > index e11ca9dd823f..a95b4a3b1ae2 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2144,7 +2144,7 @@ static vm_fault_t do_page_mkwrite(struct vm_fault *vmf)
> >         ret = vmf->vma->vm_ops->page_mkwrite(vmf);
> >         /* Restore original flags so that caller is not surprised */
> >         vmf->flags = old_flags;
> > -       if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))
> > +       if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_RETRY | VM_FAULT_NOPAGE)))
> 
> With this patch there will be multiple instances of (VM_FAULT_ERROR |
> VM_FAULT_RETRY | VM_FAULT_NOPAGE) in mm/memory.c. Does it make sense
> to wrap them in a macro and use that?

Even though the code will look neater, it might be trickier to follow a
particular error path. Could we perhaps postpone to a follow-up patch?

Thomas



> 
> >                 return ret;
> >         if (unlikely(!(ret & VM_FAULT_LOCKED))) {
> >                 lock_page(page);
> > @@ -2419,7 +2419,7 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
> >                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> >                 vmf->flags |= FAULT_FLAG_MKWRITE;
> >                 ret = vma->vm_ops->pfn_mkwrite(vmf);
> > -               if (ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))
> > +               if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY | VM_FAULT_NOPAGE))
> >                         return ret;
> >                 return finish_mkwrite_fault(vmf);
> >         }
> > @@ -2440,7 +2440,8 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
> >                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> >                 tmp = do_page_mkwrite(vmf);
> >                 if (unlikely(!tmp || (tmp &
> > -                                     (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
> > +                                     (VM_FAULT_ERROR | VM_FAULT_RETRY |
> > +                                      VM_FAULT_NOPAGE)))) {
> >                         put_page(vmf->page);
> >                         return tmp;
> >                 }
> > @@ -3494,7 +3495,8 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
> >                 unlock_page(vmf->page);
> >                 tmp = do_page_mkwrite(vmf);
> >                 if (unlikely(!tmp ||
> > -                               (tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
> > +                               (tmp & (VM_FAULT_ERROR | VM_FAULT_RETRY |
> > +                                       VM_FAULT_NOPAGE)))) {
> >                         put_page(vmf->page);
> >                         return tmp;
> >                 }
> > --
> > 2.20.1
> > 


More information about the dri-devel mailing list