[PATCH v2 hmm 05/11] mm/hmm: Remove duplicate condition test before wait_event_timeout
Ralph Campbell
rcampbell at nvidia.com
Fri Jun 7 20:21:12 UTC 2019
On 6/7/19 12:13 PM, Jason Gunthorpe wrote:
> On Fri, Jun 07, 2019 at 12:01:45PM -0700, Ralph Campbell wrote:
>>
>> On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
>>> From: Jason Gunthorpe <jgg at mellanox.com>
>>>
>>> The wait_event_timeout macro already tests the condition as its first
>>> action, so there is no reason to open code another version of this; all
>>> that does is skip the might_sleep() debugging in common cases, which is
>>> not helpful.
>>>
>>> Further, based on prior patches, we can now simplify the required condition
>>> test:
>>> - If range is valid memory then so is range->hmm
>>> - If hmm_release() has run then range->valid is set to false
>>> at the same time as dead, so no reason to check both.
>>> - A valid hmm has a valid hmm->mm.
>>>
>>> Also, add the READ_ONCE for range->valid as there is no lock held here.
>>>
>>> Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
>>> Reviewed-by: Jérôme Glisse <jglisse at redhat.com>
>>> ---
>>>  include/linux/hmm.h | 12 ++----------
>>>  1 file changed, 2 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
>>> index 4ee3acabe5ed22..2ab35b40992b24 100644
>>> --- a/include/linux/hmm.h
>>> +++ b/include/linux/hmm.h
>>> @@ -218,17 +218,9 @@ static inline unsigned long hmm_range_page_size(const struct hmm_range *range)
>>>  static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
>>>  					       unsigned long timeout)
>>>  {
>>> -	/* Check if mm is dead ? */
>>> -	if (range->hmm == NULL || range->hmm->dead || range->hmm->mm == NULL) {
>>> -		range->valid = false;
>>> -		return false;
>>> -	}
>>> -	if (range->valid)
>>> -		return true;
>>> -	wait_event_timeout(range->hmm->wq, range->valid || range->hmm->dead,
>>> +	wait_event_timeout(range->hmm->wq, range->valid,
>>>  			   msecs_to_jiffies(timeout));
>>> -	/* Return current valid status just in case we get lucky */
>>> -	return range->valid;
>>> +	return READ_ONCE(range->valid);
>>>  }
>>>  
>>>  /*
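(For reference, re-assembling the hunk above, the helper after this patch reads
roughly as follows; the comments are added here and only restate the commit
message, this is not new behavior.)

static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
					      unsigned long timeout)
{
	/* wait_event_timeout() checks range->valid before it ever sleeps */
	wait_event_timeout(range->hmm->wq, range->valid,
			   msecs_to_jiffies(timeout));
	/* No lock is held here, so re-read range->valid with READ_ONCE() */
	return READ_ONCE(range->valid);
}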
>>
>> Since we are simplifying things, perhaps we should consider merging
>> hmm_range_wait_until_valid() into hmm_range_register() and
>> removing hmm_range_wait_until_valid() since the pattern
>> is to always call the two together.
>
> ? The hmm.rst example shows hmm_range_wait_until_valid() being called in the
> (ret == -EAGAIN) path. It is confusing because it should really just
> have the again: label moved up above hmm_range_wait_until_valid(), since
> even if we get the driver lock it could still be a long wait for the
> colliding invalidation to clear.
>
> What I want to get to is a pattern like this:
>
> pagefault():
>
>      hmm_range_register(&range);
> again:
>      /*
>       * On the slow path, if we appear to be live locked then we get
>       * the write side of mmap_sem which will break the live lock,
>       * otherwise this gets the read lock.
>       */
>      if (hmm_range_start_and_lock(&range))
>           goto err;
>
>      lockdep_assert_held(range->mm->mmap_sem);
>
>      // Optional: Avoid useless expensive work
>      if (hmm_range_needs_retry(&range))
>           goto again;
>      hmm_range_(touch vmas)
>
>      take_lock(driver->update);
>      if (hmm_range_end(&range)) {
>           release_lock(driver->update);
>           goto again;
>      }
>      // Finish driver updates
>      release_lock(driver->update);
>
>      // Releases mmap_sem
>      hmm_range_unregister_and_unlock(&range);
>
> What do you think?
>
> Is it clear?
>
> Jason
>
Are you talking about acquiring mmap_sem in hmm_range_start_and_lock()?
Usually, the fault code has to lock mmap_sem for read in order to
call find_vma() so it can set range.vma.
If HMM drops mmap_sem (which I don't think it should; it should just
return an error telling the caller to drop mmap_sem and retry), then
the find_vma() will need to be repeated as well.
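A rough sketch of the flow being described, using a hypothetical
dev_pagefault() helper (not real driver code): it relies on the
range->vma and range->valid fields mentioned above and the
hmm_range_wait_until_valid() helper from the patch, the actual
snapshot/fault step is left as a placeholder, the timeout value is
arbitrary, and driver-side lock synchronization is omitted.

#include <linux/mm.h>
#include <linux/hmm.h>

static int dev_pagefault(struct hmm_range *range, struct mm_struct *mm,
			 unsigned long addr)
{
again:
	/* Wait out any pending invalidation before touching the range */
	if (!hmm_range_wait_until_valid(range, 1000 /* ms, arbitrary */))
		return -EBUSY;	/* let the caller decide how to back off */

	down_read(&mm->mmap_sem);
	range->vma = find_vma(mm, addr);
	if (!range->vma || addr < range->vma->vm_start) {
		up_read(&mm->mmap_sem);
		return -EFAULT;
	}

	/* ... snapshot/fault the range here, e.g. via hmm_range_fault() ... */

	if (!READ_ONCE(range->valid)) {
		/*
		 * Collided with an invalidation: drop mmap_sem and retry,
		 * repeating find_vma() since the VMA may have changed.
		 */
		up_read(&mm->mmap_sem);
		goto again;
	}

	up_read(&mm->mmap_sem);
	return 0;
}
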
I'm also not sure about acquiring the mmap_sem for write as a way to
mitigate thrashing. It seems to me that if a device and a CPU are
both faulting on the same page, some sort of backoff delay is needed
to let one side or the other make some progress.
Thrashing mitigation, and how migrate_vma() plays into this, is a
deep topic for thought.