[PATCH] TTM DMA pool v1.8
thellstrom at vmware.com
Thu Sep 29 23:59:52 PDT 2011
I'm really sorry for taking so long to review this.
I'd like to go through a couple of high-level things first before
reviewing the coding itself.
1) The page_alloc_func structure looks nice, but I'd like to have it per
backend: we would just need to make sure that the backend is alive when we
alloc / free pages.
The reason for this is that there may be backends that want to allocate
dma memory running simultaneously with those who don't. When the backend
fires up, it can determine whether to use DMA memory or not.
This also eliminates the need for patch 3/9, and is in line with patch 8/9.
2) Memory accounting: if the number of DMA pages is limited in a way that
the ttm global memory accounting routines are not aware of, how do we handle
memory accounting? (How do we avoid exhausting IOMMU space?)
3) Page swapping. Currently we just copy pages to shmem pages and then
free device pages. In the future we'd probably like to insert non-dma
pages directly into the swap cache. Is it possible to differentiate dma
pages from pages that are directly insertable?
On 09/29/2011 10:33 PM, Konrad Rzeszutek Wilk wrote:
> [.. and this is what I said in v1 post]:
> Way back in January this patchset:
> was merged in, but pieces of it had to be reverted b/c they did not
> work properly under PowerPC, ARM, and when swapping out pages to disk.
> After a bit of discussion on the mailing list
> http://marc.info/?i=4D769726.email@example.com I started working on it, but
> got waylaid by other things .. and finally I am able to post the RFC patches.
> There was a lot of discussion about it and I am not sure if I captured
> everybody's thoughts - if I did not - that is _not_ intentional - it has just
> been quite some time..
> Anyhow .. the patches explore what "lib/dmapool.c" does - which is to have a
> DMA pool associated with the device. I kind of married that code
> along with drivers/gpu/drm/ttm/ttm_page_alloc.c to create a TTM DMA pool code.
> The end result is DMA pool with extra features: can do write-combine, uncached,
> writeback (and tracks them and sets back to WB when freed); tracks "cached"
> pages that don't really need to be returned to a pool; and hooks up to
> the shrinker code so that the pools can be shrunk.
> If you guys think this set of patches make sense - my future plans were
> 1) Get this in front of a large crowd of testers .. and if it works for a kernel release
> 2) to move a bulk of this in the lib/dmapool.c (I spoke with Matthew Wilcox
> about it and he is OK as long as I don't introduce performance regressions).
> But before I do any of that a second set of eyes taking a look at these
> patches would be most welcome.
> In regards to testing, I've been running them non-stop for the last month
> (and found some issues which I've fixed up) - and been quite happy with how
> they work.
> Michel (thanks!) took a spin of the patches on his PowerPC and they did not
> cause any regressions (wheew).
> The patches are also located in a git tree:
> git://oss.oracle.com/git/kwilk/xen.git devel/ttm.dma_pool.v1.8
> drivers/gpu/drm/nouveau/nouveau_mem.c | 8 +-
> drivers/gpu/drm/nouveau/nouveau_sgdma.c | 3 +-
> drivers/gpu/drm/radeon/radeon_device.c | 6 +
> drivers/gpu/drm/radeon/radeon_gart.c | 4 +-
> drivers/gpu/drm/radeon/radeon_ttm.c | 3 +-
> drivers/gpu/drm/ttm/Makefile | 3 +
> drivers/gpu/drm/ttm/ttm_bo.c | 4 +-
> drivers/gpu/drm/ttm/ttm_memory.c | 5 +
> drivers/gpu/drm/ttm/ttm_page_alloc.c | 63 ++-
> drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 1317 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/ttm/ttm_tt.c | 5 +-
> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 +-
> drivers/xen/swiotlb-xen.c | 2 +-
> include/drm/ttm/ttm_bo_driver.h | 7 +-
> include/drm/ttm/ttm_page_alloc.h | 100 +++-
> include/linux/swiotlb.h | 7 +-
> lib/swiotlb.c | 5 +-
> 17 files changed, 1516 insertions(+), 30 deletions(-)