[Intel-gfx] [PATCH 1/5] drm/i915/userptr: Beware recursive lock_page()
Chris Wilson
chris at chris-wilson.co.uk
Wed Jul 17 14:06:54 UTC 2019
Quoting Tvrtko Ursulin (2019-07-17 14:46:15)
>
> On 17/07/2019 14:35, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-07-17 14:23:55)
> >>
> >> On 17/07/2019 14:17, Chris Wilson wrote:
> >>> Quoting Tvrtko Ursulin (2019-07-17 14:09:00)
> >>>>
> >>>> On 16/07/2019 16:37, Chris Wilson wrote:
> >>>>> Quoting Tvrtko Ursulin (2019-07-16 16:25:22)
> >>>>>>
> >>>>>> On 16/07/2019 13:49, Chris Wilson wrote:
> >>>>>>> Following a try_to_unmap() we may want to remove the userptr and so call
> >>>>>>> put_pages(). However, try_to_unmap() acquires the page lock and so we
> >>>>>>> must avoid recursively locking the pages ourselves -- which means that
> >>>>>>> we cannot safely acquire the lock around set_page_dirty(). Since we
> >>>>>>> can't be sure of the lock, we either have to risk skipping the
> >>>>>>> dirtying of the page, or risk calling set_page_dirty() without the
> >>>>>>> lock and so risk fs corruption.
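> >>>>>>>
> >>>>>>> That is, in put_pages, something roughly of this shape (a sketch,
> >>>>>>> not the exact patch):
> >>>>>>>
> >>>>>>>   if (obj->mm.dirty && trylock_page(page)) {
> >>>>>>>           /* lock_page() could deadlock if we are already beneath
> >>>>>>>            * try_to_unmap(), which holds the page lock; so trylock
> >>>>>>>            * and accept we may skip the dirty on contention.
> >>>>>>>            */
> >>>>>>>           set_page_dirty(page);
> >>>>>>>           unlock_page(page);
> >>>>>>>   }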
> >>>>>>
> >>>>>> So if the trylock randomly fails we get data corruption in whatever
> >>>>>> data set the application is working on, which is what the original
> >>>>>> patch was trying to avoid? Are we able to detect the backing store
> >>>>>> type so that at least we don't risk skipping set_page_dirty with
> >>>>>> anonymous/shmemfs?
> >>>>>
> >>>>> page->mapping???
> >>>>
> >>>> Would page->mapping work? What is it telling us?
> >>>
> >>> It basically tells us if there is a fs around; anything that is the most
> >>> basic of malloc (even tmpfs/shmemfs has page->mapping).
> >>
> >> Normal malloc, so anonymous pages? Or did you mean everything _apart_
> >> from the most basic malloc?
> >
> > Aye missed the not.
> >
> >>>>> We still have the issue that if there is a mapping we should be taking
> >>>>> the lock, and we may have both a mapping and be inside try_to_unmap().
> >>>>
> >>>> Is this a problem? On a path with mappings we trylock and so solve
> >>>> both the set_page_dirty-under-lock and the recursive deadlock issues,
> >>>> and with no mappings we always dirty the page and avoid data
> >>>> corruption.
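> >>>>
> >>>> Something along these lines (untested sketch; using page_mapping()
> >>>> rather than raw page->mapping, since the latter is overloaded for
> >>>> anonymous pages):
> >>>>
> >>>>   if (!page_mapping(page)) {
> >>>>           /* anonymous: no filesystem to corrupt */
> >>>>           set_page_dirty(page);
> >>>>   } else if (trylock_page(page)) {
> >>>>           set_page_dirty(page);
> >>>>           unlock_page(page);
> >>>>   }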
> >>>
> >>> The problem as I see it is that !page->mapping pages are likely an
> >>> insignificant minority of userptr; I think even memfd is essentially
> >>> shmemfs (or hugetlbfs) and so has a mapping.
> >>
> >> Better than nothing, no? If easy to do...
> >
> > Actually, I'm erring on the opposite side. Peeking at mm/ internals does
> > not inspire confidence and feels indefensible. I'd much rather throw my
> > hands up and say "this is the best we can do with the API provided,
> > please tell us what we should have done." To which the answer is
> > probably not to have used gup in the first place :|
>
> """
> /*
>  * set_page_dirty() is racy if the caller has no reference against
>  * page->mapping->host, and if the page is unlocked. This is because another
>  * CPU could truncate the page off the mapping and then free the mapping.
>  *
>  * Usually, the page _is_ locked, or the caller is a user-space process which
>  * holds a reference on the inode by having an open file.
>  *
>  * In other cases, the page should be locked before running set_page_dirty().
>  */
> int set_page_dirty_lock(struct page *page)
> """
>
> Could we hold a reference to page->mapping->host while we have the pages,
> and then would it be okay to call plain set_page_dirty()?
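>
> Untested sketch of what I mean (igrab() returns NULL if the inode is
> already being torn down, so it has to be checked):
>
>         struct inode *host = NULL;
>
>         if (page_mapping(page))
>                 host = igrab(page_mapping(page)->host);
>
>         if (host) {
>                 /* inode pinned: the mapping cannot be freed under us */
>                 set_page_dirty(page);
>                 iput(host);
>         }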
We would then be hitting the warnings in ext4 for unlocked pages again.
Essentially the argument is whether or not that warn is valid, and
answering that, I think, requires inner knowledge of vfs + ext4. To hold
a reference on the host would require us tracking page->mapping
(reasonable since we have already hooked into the mmu-notifier and so
will get an invalidate + fresh gup on any changes), plus iterating over
all the pages to acquire the extra reference if applicable -- and I have
no idea what the side-effects of that would be.
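
Handwaving, at get_pages time that would be something like (pure
sketch; userptr.host is a made-up field here, and it assumes all the
pages share a single backing inode, which gup does not guarantee):

	/* Hypothetical: pin the backing inode so that a later unlocked
	 * set_page_dirty() cannot race against the mapping being freed.
	 */
	struct address_space *mapping = page_mapping(pvec[0]);

	obj->userptr.host = mapping ? igrab(mapping->host) : NULL;

with the matching iput() in put_pages.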
Could well be positive side-effects. Just feels like wandering even
further off the beaten path without a map. Good news is HMM is just
around the corner (which will probably prohibit this use-case) :|
-Chris