[PATCH] drm/ttm: use dma_alloc_pages for the page pool

Christoph Hellwig hch at lst.de
Tue May 11 08:50:11 UTC 2021


On Tue, May 11, 2021 at 09:35:20AM +0200, Christian König wrote:
> We're certainly going to need drm_need_swiotlb() for userptr support 
> (unless we add some approach for drivers to opt out of swiotlb).

swiotlb use is driven by three things:

 1) addressing limitations of the device
 2) addressing limitations of the interconnect
 3) virtualization modes that require it

I'm not sure how a driver could opt out.  What is the problem with
userptr support?
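
For case 1) the generic DMA API already gives a per-device answer, so a
driver check would look roughly like the sketch below rather than an
opt-out.  my_dev_needs_bounce() is a made-up name for illustration,
dma_addressing_limited() is the real helper:

#include <linux/dma-mapping.h>

/* sketch only: asks whether the device's DMA mask falls short of system
 * memory, which is the main reason dma-direct bounces through swiotlb */
static bool my_dev_needs_bounce(struct device *dev)
{
	return dma_addressing_limited(dev);
}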

> Then while I really want to get rid of GFP_DMA32 as well I'm not 100% sure 
> if we can handle this without the flag.

Note that this still uses GFP_DMA32 underneath where required, just in
a layer that can make that decision sensibly.
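
To illustrate, a pool allocation through dma_alloc_pages() ends up
looking roughly like this (pool_alloc_one() is a made-up name); the
caller never passes GFP_DMA32, dma-direct picks the right zone from the
device's mask:

#include <linux/dma-mapping.h>

/* sketch only: one page allocated and mapped for the device; the zone
 * is chosen by the DMA layer from dev->dma_mask instead of a
 * hardcoded GFP_DMA32 */
static struct page *pool_alloc_one(struct device *dev, dma_addr_t *dma)
{
	return dma_alloc_pages(dev, PAGE_SIZE, dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
}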

> And last we need something better to store the DMA address and order than 
> allocating a separate memory object for each page.

Yeah.  If you use __GFP_COMP for the allocations we can find the order
from the page itself, which might be useful.  On 64-bit platforms the
dma address could be stored in page->private, or, depending on how the
page gets used, in the dma_addr field in struct page that overlays the
lru field and is used by the networking page pool.
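
Roughly, assuming the alloc_pages() + dma_map_page() path and with
made-up names, that bookkeeping could look like:

#include <linux/mm.h>
#include <linux/dma-mapping.h>

/* sketch only: a compound allocation lets compound_order() recover the
 * order later, and page->private holds the dma address (64-bit only) */
static struct page *pool_alloc_mapped(struct device *dev,
				      unsigned int order)
{
	struct page *page;
	dma_addr_t dma;

	page = alloc_pages(GFP_KERNEL | __GFP_COMP, order);
	if (!page)
		return NULL;

	dma = dma_map_page(dev, page, 0, PAGE_SIZE << order,
			   DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, dma)) {
		__free_pages(page, order);
		return NULL;
	}

	set_page_private(page, (unsigned long)dma);
	return page;
}

static void pool_free_mapped(struct device *dev, struct page *page)
{
	unsigned int order = compound_order(page);

	dma_unmap_page(dev, (dma_addr_t)page_private(page),
		       PAGE_SIZE << order, DMA_BIDIRECTIONAL);
	set_page_private(page, 0);
	__free_pages(page, order);
}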

Maybe we could even have a common page pool between net and drm, but
I don't want to go there myself, not being an expert on either subsystem.
