[PATCH 1/2] mm/vmscan: add sync_shrinkers function v2
Christian König
ckoenig.leichtzumerken at gmail.com
Thu Aug 26 14:58:06 UTC 2021
Am 26.08.21 um 15:28 schrieb Daniel Vetter:
> On Thu, Aug 26, 2021 at 03:27:30PM +0200, Daniel Vetter wrote:
>> On Fri, Aug 20, 2021 at 02:05:27PM +0200, Christian König wrote:
>>> From: Christian König <ckoenig.leichtzumerken at gmail.com>
>>>
>>> While unplugging a device the TTM shrinker implementation
>>> needs a barrier to make sure that all concurrent shrink
>>> operations are done and no other CPU is referring to a
>>> device-specific pool any more.
>>>
>>> Taking and releasing the shrinker semaphore on the write
>>> side after unmapping and freeing all pages from the device
>>> pool should make sure that no shrinker is running in
>>> parallel.
>>>
>>> This allows us to avoid the contended mutex in the TTM pool
>>> implementation for every alloc/free operation.
>>>
>>> v2: rework the commit message to make clear why we need this
>>>
>>> Signed-off-by: Christian König <christian.koenig at amd.com>
>>> Acked-by: Huang Rui <ray.huang at amd.com>
>>> Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>
>>> ---
>>> include/linux/shrinker.h | 1 +
>>> mm/vmscan.c | 10 ++++++++++
>>> 2 files changed, 11 insertions(+)
>>>
>>> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
>>> index 9814fff58a69..1de17f53cdbc 100644
>>> --- a/include/linux/shrinker.h
>>> +++ b/include/linux/shrinker.h
>>> @@ -93,4 +93,5 @@ extern void register_shrinker_prepared(struct shrinker *shrinker);
>>> extern int register_shrinker(struct shrinker *shrinker);
>>> extern void unregister_shrinker(struct shrinker *shrinker);
>>> extern void free_prealloced_shrinker(struct shrinker *shrinker);
>>> +extern void sync_shrinkers(void);
>>> #endif
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 4620df62f0ff..fde1aabcfa7f 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -638,6 +638,16 @@ void unregister_shrinker(struct shrinker *shrinker)
>>> }
>>> EXPORT_SYMBOL(unregister_shrinker);
>>>
>>> +/**
>>> + * sync_shrinkers - Wait for all running shrinkers to complete.
>> I think it would be good to add a bit more text here maybe:
>>
>> "This is equivalent to calling unregister_shrink() and
>> register_shrinker(), but atomically and with less overhead. This is useful
>> to guarantee that all shrinker invocations have seen an update, before
>> freeing memory, similar to rcu."
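Spelled out as code, the pattern that text describes would look something
like this (purely illustrative; example_pool, example_pool_lock and
example_pool_free_pages are made-up names and not part of this series):

/* Hypothetical driver pool teardown illustrating the documented pattern. */
static void example_pool_fini(struct example_pool *pool)
{
	/* Unpublish the pool so new shrinker invocations can't find it. */
	spin_lock(&example_pool_lock);
	list_del(&pool->link);
	spin_unlock(&example_pool_lock);

	/*
	 * Wait for shrinker invocations which might still hold a
	 * reference picked up before the list_del(), analogous to
	 * synchronize_rcu() before freeing an RCU protected object.
	 */
	sync_shrinkers();

	example_pool_free_pages(pool);
	kfree(pool);
}
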
>>
>> Also a bit of a bikeshed, but the equivalent in irq land is called
>> synchronize_irq() and synchronize_hardirq(). I think it'd be good to
>> follow that naming for more conceptual consistency.
> Oh, and the synchronize_*rcu* functions also spell the name out in full,
> so even more reason to do the same.
I will just go with the explanation above.
The synchronize_rcu() documentation is so extensive that most people will
probably stop reading after the first paragraph.
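
For completeness, the callback side this relies on looks roughly like the
sketch below (again with made-up names, not the actual TTM code from patch
2/2). shrink_slab() invokes the callbacks with shrinker_rwsem held for
read, which is why the down_write()/up_write() pair in sync_shrinkers()
acts as the barrier:

/* Hypothetical scan_objects() callback of a driver pool shrinker. */
static unsigned long example_shrink_scan(struct shrinker *shrink,
					 struct shrink_control *sc)
{
	struct example_pool *pool;
	unsigned long freed;

	/* Pick one pool from the global list. */
	spin_lock(&example_pool_lock);
	pool = list_first_entry_or_null(&example_pool_list,
					struct example_pool, link);
	spin_unlock(&example_pool_lock);

	if (!pool)
		return SHRINK_STOP;

	/*
	 * The pool is still dereferenced here after the lock was
	 * dropped. Since this callback runs under the read side of
	 * shrinker_rwsem, sync_shrinkers() can't return before it
	 * has finished, so the unplug path may free the pool right
	 * after sync_shrinkers() returns.
	 */
	freed = example_pool_free_one_page(pool);

	return freed ? freed : SHRINK_STOP;
}
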
Thanks,
Christian.
> -Daniel
>
>>> + */
>>> +void sync_shrinkers(void)
>>> +{
>>> + down_write(&shrinker_rwsem);
>>> + up_write(&shrinker_rwsem);
>>> +}
>>> +EXPORT_SYMBOL(sync_shrinkers);
>>> +
>>> #define SHRINK_BATCH 128
>>>
>>> static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>>> --
>>> 2.25.1
>>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch