[RFC][PATCH v6 1/7] drm: Add a sharable drm page-pool implementation

Christian König christian.koenig at amd.com
Tue Feb 9 12:11:13 UTC 2021



On 05.02.21 at 21:46, John Stultz wrote:
> On Fri, Feb 5, 2021 at 12:47 AM Christian König
> <christian.koenig at amd.com> wrote:
>> On 05.02.21 at 09:06, John Stultz wrote:
>>> diff --git a/drivers/gpu/drm/page_pool.c b/drivers/gpu/drm/page_pool.c
>>> new file mode 100644
>>> index 000000000000..2139f86e6ca7
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/page_pool.c
>>> @@ -0,0 +1,220 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>> Please use a BSD/MIT compatible license if you want to copy this from
>> the TTM code.
> Hrm. This may be problematic, as it's not just TTM code, but some of
> the TTM logic integrated into a page-pool implementation I wrote based
> on logic from the ION code (which was GPL-2.0 before it was dropped).
> So I don't think I can just make it MIT.  Any extra context on the
> need for MIT, or suggestions on how to best resolve this?

Most of the DRM/TTM code is also licensable under the BSD/MIT style 
license, since we want to enable the BSD guys to port it over as well.

What do you need from the ION code that you can't just write 
yourself? As far as I can see, this is ~99% code from the TTM pool.

Christian.

>
>>> +int drm_page_pool_get_size(struct drm_page_pool *pool)
>>> +{
>>> +     int ret;
>>> +
>>> +     spin_lock(&pool->lock);
>>> +     ret = pool->count;
>>> +     spin_unlock(&pool->lock);
>> Maybe use an atomic for the count instead?
>>
> I can do that, but am curious as to the benefit? We are mostly using
> count where we already have to take the pool->lock anyway, and this
> drm_page_pool_get_size() is only used for debugfs output so far, so I
> don't expect it to be a hot path.

Yeah, it's not really important. IIRC I even walked over the linked list 
to count how many pages we had.

Christian.
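
For illustration, a minimal sketch of the atomic variant (assuming
pool->count were changed to an atomic_t; everything else as in the
quoted patch):

int drm_page_pool_get_size(struct drm_page_pool *pool)
{
	/* a plain snapshot of the count needs no pool->lock */
	return atomic_read(&pool->count);
}

The add/remove paths would pair this with atomic_inc()/atomic_dec()
inside their existing pool->lock critical sections.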

>
>
>>> +void drm_page_pool_add(struct drm_page_pool *pool, struct page *page)
>>> +{
>>> +     spin_lock(&pool->lock);
>>> +     list_add_tail(&page->lru, &pool->items);
>>> +     pool->count++;
>>> +     atomic_long_add(1 << pool->order, &total_pages);
>>> +     spin_unlock(&pool->lock);
>>> +
>>> +     mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
>>> +                         1 << pool->order);
>> Huh? What is that supposed to be good for?
> This is a carryover from the ION page pool implementation:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/android/ion/ion_page_pool.c?h=v5.10#n28
>
> My sense is it helps with the vmstat/meminfo accounting so folks can
> see the cached pages are shrinkable/freeable. This maybe falls under
> other dmabuf accounting/stats discussions, so I'm happy to remove it
> for now, or let the drivers using the shared page pool logic handle
> the accounting themselves?
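
If the accounting stays in the shared pool, the remove/free path has
to balance the increment done in drm_page_pool_add(); a sketch (the
helper name is hypothetical, not from the patch):

static void drm_page_pool_stat_sub(struct drm_page_pool *pool,
				   struct page *page)
{
	/* undo the NR_KERNEL_MISC_RECLAIMABLE increment from
	 * drm_page_pool_add() */
	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
			    -(1 << pool->order));
}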
>
>
>>> +static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
>>> +{
>>> +     struct page *page;
>>> +
>>> +     if (!pool->count)
>>> +             return NULL;
>> Better use list_first_entry_or_null instead of checking the count.
>>
>> This way you can also pull the lock into the function.
> Yea, that cleans a number of things up nicely. Thank you!
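
A sketch of that rework (field names taken from the quoted patch; the
count/total_pages bookkeeping is an assumption about the unquoted
parts of the function):

static struct page *drm_page_pool_remove(struct drm_page_pool *pool)
{
	struct page *page;

	spin_lock(&pool->lock);
	page = list_first_entry_or_null(&pool->items, struct page, lru);
	if (page) {
		list_del(&page->lru);
		pool->count--;
		atomic_long_sub(1 << pool->order, &total_pages);
	}
	spin_unlock(&pool->lock);

	return page;
}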
>
>
>>> +struct drm_page_pool *drm_page_pool_create(unsigned int order,
>>> +                                        int (*free_page)(struct page *p, unsigned int order))
>>> +{
>>> +     struct drm_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
>> Why not making this an embedded object? We should not see much dynamic
>> pool creation.
> Yeah, it felt cleaner that way at the time, but I think I'll need to
> switch to an embedded object in order to resolve the memory usage
> issue you pointed out with growing the ttm_pool_dma, so thank you for
> the suggestion!
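
A sketch of the embedded-object style (the container struct and the
init function are hypothetical; the field assignments assume the
layout implied by drm_page_pool_create()):

struct my_driver_pools {		/* hypothetical container */
	struct drm_page_pool pool;	/* embedded, no kmalloc */
	/* ... other driver state ... */
};

void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
			int (*free_page)(struct page *p, unsigned int order))
{
	spin_lock_init(&pool->lock);
	INIT_LIST_HEAD(&pool->items);
	pool->count = 0;
	pool->order = order;
	pool->free_page = free_page;

	mutex_lock(&pool_list_lock);
	list_add_tail(&pool->list, &pool_list);
	mutex_unlock(&pool_list_lock);
}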
>
>
>>> +void drm_page_pool_destroy(struct drm_page_pool *pool)
>>> +{
>>> +     struct page *page;
>>> +
>>> +     /* Remove us from the pool list */
>>> +     mutex_lock(&pool_list_lock);
>>> +     list_del(&pool->list);
>>> +     mutex_unlock(&pool_list_lock);
>>> +
>>> +     /* Free any remaining pages in the pool */
>>> +     spin_lock(&pool->lock);
>> Locking should be unnecessary when the pool is destroyed anyway.
> I guess if we've already pruned ourselves from the pool list, then
> you're right, we can't race with the shrinker and it's maybe not necessary.
> But it also seems easier to consistently follow the locking rules in a
> very unlikely path rather than leaning on subtlety.  Either way, I
> think this becomes moot if I make the improvements you suggest to
> drm_page_pool_remove().
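
For reference, destroy might then collapse to something like this (a
sketch, assuming drm_page_pool_remove() takes pool->lock itself after
the rework above):

void drm_page_pool_destroy(struct drm_page_pool *pool)
{
	struct page *page;

	/* Once off the pool list, the shrinker can no longer reach us */
	mutex_lock(&pool_list_lock);
	list_del(&pool->list);
	mutex_unlock(&pool_list_lock);

	/* Drain any remaining pages */
	while ((page = drm_page_pool_remove(pool)))
		drm_page_pool_free_pages(pool, page);
}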
>
>>> +static int drm_page_pool_shrink_one(void)
>>> +{
>>> +     struct drm_page_pool *pool;
>>> +     struct page *page;
>>> +     int nr_freed = 0;
>>> +
>>> +     mutex_lock(&pool_list_lock);
>>> +     pool = list_first_entry(&pool_list, typeof(*pool), list);
>>> +
>>> +     spin_lock(&pool->lock);
>>> +     page = drm_page_pool_remove(pool);
>>> +     spin_unlock(&pool->lock);
>>> +
>>> +     if (page)
>>> +             nr_freed = drm_page_pool_free_pages(pool, page);
>>> +
>>> +     list_move_tail(&pool->list, &pool_list);
>> Better to move this up, directly after the list_first_entry().
> Sounds good!
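
A sketch with the rotation moved up as suggested (the unquoted tail of
the function, including where pool_list_lock is dropped, is an
assumption; this also uses the reworked self-locking
drm_page_pool_remove()):

static int drm_page_pool_shrink_one(void)
{
	struct drm_page_pool *pool;
	struct page *page;
	int nr_freed = 0;

	mutex_lock(&pool_list_lock);
	pool = list_first_entry(&pool_list, typeof(*pool), list);
	/* rotate immediately so the next call picks another pool */
	list_move_tail(&pool->list, &pool_list);

	page = drm_page_pool_remove(pool);
	if (page)
		nr_freed = drm_page_pool_free_pages(pool, page);
	mutex_unlock(&pool_list_lock);

	return nr_freed;
}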
>
> Thanks so much for your review and feedback! I'll try to get some of
> the easy suggestions integrated, and will have to figure out what to
> do about the re-licensing request.
>
> thanks
> -john


