[Intel-gfx] [PATCH i-g-t 1/3] i915/gem_userptr_blits: Only mlock the memfd once, not the arena

Chris Wilson chris at chris-wilson.co.uk
Wed Jan 16 10:46:16 UTC 2019


Quoting Mika Kuoppala (2019-01-16 10:35:59)
> Chris Wilson <chris at chris-wilson.co.uk> writes:
> 
> > Quoting Mika Kuoppala (2019-01-16 09:47:27)
> >> Chris Wilson <chris at chris-wilson.co.uk> writes:
> >> 
> >> > We multiply the memfd 64k to create a 2G arena which we then attempt to
> >> > write into after marking read-only. Howver, when it comes to unlock the
> >> 
> >> s/Howver/However
> >> 
> >> > arena after the test, performance tanks as the kernel tries to resolve
> >> > the 64k repeated mappings onto the same set of pages. (Must not be a
> >> > very common operation!) We can get away with just mlocking the backing
> >> > store to prevent its eviction, which should prevent the arena mapping
> >> > from being freed as well.
> >> 
> >> hmm should. How are they bound?
> >
> > All I'm worried about are the allocs for the pud/pmd etc, which aiui are
> > not freed until the pte are removed and the pte shouldn't be reaped
> > because the struct page are locked. However, I haven't actually verified
> > that mlocking the underlying pages is enough to be sure that the page
> > tables of the various mappings are safe from eviction. On the other
> > hand, munlock_vma_range doesn't scale to the abuse we put it to, and
> > that is causing issues for CI!
> 
> If we can dodge it with this, great.

To be fair, it is just an optimisation to make sure we can use the whole
arena (checking against available address space) and that it won't
change (and be refaulted, so the results should be transparent although
expensive) under testing. I can't recall any other reason for sticking
mlock in there.
-Chris
