page pools, was Re: [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation

Christian König christian.koenig at amd.com
Wed Jul 7 09:32:26 UTC 2021



On 07.07.21 at 09:14, Christoph Hellwig wrote:
> On Wed, Jul 07, 2021 at 09:10:26AM +0200, Christian König wrote:
>> Well, the original code this is all based on already carried a comment that
>> this really belongs in the page allocator.
>>
>> The key point is that traditionally only GPUs used uncached and write-combined
>> memory in quantities large enough that having a pool for them makes sense.
>>
>> Because of this we kept the pool separate, to spare the page allocator the
>> added complexity of handling yet another set of allocation types.
>>
>> The upside is that for those use cases it means huge performance improvements
>> for the drivers. See the numbers John provided in the cover letter.
>>
>> Essentially I would be totally fine with moving this into the page allocator,
>> and moving it out of TTM already helps with that goal. So this patch set is
>> certainly a step in the right direction.
> Unless I'm badly misreading the patch and this series, there is nothing
> about cache attributes in this code.  It just allocates pages, zeroes
> them, eventually hands them out to a consumer, and registers a shrinker
> for its freelist.
>
> If OTOH it actually dealt with cacheability, that should be documented
> in the commit log and probably also in the naming of the implementation.

Mhm, good point. In that case all that's left is moving the clearing 
overhead off the allocation hot path and onto the free path, and I'm not 
really sure why the core page allocator shouldn't do that as well.
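For illustration, the mechanism under discussion (a pool that keeps freed pages on a freelist trimmed by a shrinker, and clears pages on the free path so the allocation hot path can hand out already-zeroed memory) might be sketched in plain userspace C roughly as follows. This is only a sketch of the idea; every name here (page_pool, pool_alloc, pool_free, pool_shrink) is hypothetical and not the actual drm/ttm API:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define POOL_PAGE_SIZE 4096

/* One cached page; pages on the freelist are already zeroed. */
struct pool_page {
	struct pool_page *next;
	unsigned char data[POOL_PAGE_SIZE];
};

struct page_pool {
	struct pool_page *freelist;	/* zeroed, ready to hand out */
	size_t nr_free;
};

/* Hand out a page; recycled pages were cleared when they were freed,
 * so the allocation hot path does no memset for them. */
static void *pool_alloc(struct page_pool *pool)
{
	struct pool_page *p = pool->freelist;

	if (p) {
		pool->freelist = p->next;
		pool->nr_free--;
		return p->data;		/* already zeroed on the free path */
	}
	p = malloc(sizeof(*p));
	if (!p)
		return NULL;
	memset(p->data, 0, POOL_PAGE_SIZE);	/* first use: clear here */
	return p->data;
}

/* Clear on free, moving the zeroing overhead off the allocation path. */
static void pool_free(struct page_pool *pool, void *mem)
{
	struct pool_page *p = (struct pool_page *)
		((char *)mem - offsetof(struct pool_page, data));

	memset(p->data, 0, POOL_PAGE_SIZE);
	p->next = pool->freelist;
	pool->freelist = p;
	pool->nr_free++;
}

/* Shrinker analogue: under memory pressure, release up to nr cached
 * pages back to the system and report how many were freed. */
static size_t pool_shrink(struct page_pool *pool, size_t nr)
{
	size_t freed = 0;

	while (pool->freelist && freed < nr) {
		struct pool_page *p = pool->freelist;

		pool->freelist = p->next;
		pool->nr_free--;
		free(p);
		freed++;
	}
	return freed;
}
```

In the kernel the shrink step would be driven by a registered shrinker callback rather than called directly, and the real pools also track allocation orders and (in TTM) caching attributes, but the freelist-plus-clear-on-free shape is the part being debated here.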

Regards,
Christian.
