[PATCH v7 3/6] mm: Introduce VM_LOCKONFAULT
Michal Hocko
mhocko at kernel.org
Tue Aug 25 07:29:15 PDT 2015
On Tue 25-08-15 15:55:46, Vlastimil Babka wrote:
> On 08/25/2015 03:41 PM, Michal Hocko wrote:
[...]
> >So what we have as a result is that partially populated ranges are
> >preserved and fully populated ones work in the best effort mode the same
> >way as they are now.
> >
> >Does that sound at least remotely reasonable?
>
> I'll basically repeat what I said earlier:
>
> - mremap scanning existing pte's to figure out the population would slow it
> down for no good reason
So do we really need to populate the enlarged range? All the man page
says is that the lock is maintained, which will still be the case. It is
true that a population failure is unlikely (unless you are running in a
memcg), but you cannot rely on the full mlock semantics anyway, so what
would be the problem?
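
To make sure we are talking about the same scenario, here is a rough
userspace sketch of what I have in mind (just an illustration I made up,
not something from the series; the sizes are arbitrary):

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t old_len = 16 * 4096, new_len = 32 * 4096;

	char *buf = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/* populates and locks the original range */
	if (mlock(buf, old_len))
		return 1;

	/*
	 * The enlarged tail inherits VM_LOCKED, so it cannot be
	 * reclaimed once instantiated, but in what I am proposing it
	 * would be populated lazily on the first fault rather than
	 * eagerly here in mremap.
	 */
	buf = mremap(buf, old_len, new_len, MREMAP_MAYMOVE);
	if (buf == MAP_FAILED)
		return 1;

	/* first touch faults the new pages in, already locked */
	memset(buf + old_len, 0, new_len - old_len);
	return 0;
}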
> - it would be unreliable anyway:
> - example: was the area completely populated because MLOCK_ONFAULT was not
> used or because the process faulted it already
OK, I see this as being a problem, especially if the buffer is increased
to 2*original_len.
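
Just to spell the ambiguity out, the two cases below end up with a vma
which looks exactly the same to a pte walk (a sketch which assumes the
mlock2()/MLOCK_ONFAULT interface from this series is already exposed by
the libc headers; today it would be a raw syscall, and error handling is
omitted):

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Case A: plain mlock(), caller relies on full mlock semantics */
	mlock(a, len);			/* populates the whole range */

	/*
	 * Case B: the caller explicitly asked for lock-on-fault but then
	 * happened to touch every page anyway.
	 */
	mlock2(b, len, MLOCK_ONFAULT);
	memset(b, 0, len);		/* now fully populated as well */

	return 0;
}

Both vmas end up fully populated and VM_LOCKED, so scanning the ptes in
mremap cannot tell whether the caller wanted the enlarged part populated
eagerly (A) or not (B).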
> - example: was the area not completely populated because MLOCK_ONFAULT was
> used, or because mmap(MAP_LOCKED) failed to populate it fully?
What would be the difference? Both are ONFAULT now.
> I think the first point is a pointless regression for workloads that use
> just plain mlock() and don't want the onfault semantics. Unless there's some
> shortcut? Does vma have a counter of how much is populated? (I don't think
> so?)
--
Michal Hocko
SUSE Labs