[PATCH 06/13] drm/etnaviv: Use synchronized interface of the IOMMU-API

Joerg Roedel joro at 8bytes.org
Thu Aug 17 12:56:29 UTC 2017


From: Joerg Roedel <jroedel at suse.de>

The map and unmap functions of the IOMMU-API changed their
semantics: they no longer guarantee that the hardware
TLBs are synchronized with the page-table updates they made.

To make the conversion easier, new synchronized functions
have been introduced that give these guarantees again, until
the code is converted to the new TLB-flush interface of the
IOMMU-API, which allows certain optimizations.

For now, just convert this code to the synchronized
functions so that it behaves as before.

Cc: Lucas Stach <l.stach at pengutronix.de>
Cc: Russell King <linux+etnaviv at armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner at gmail.com>
Cc: David Airlie <airlied at linux.ie>
Cc: etnaviv at lists.freedesktop.org
Cc: dri-devel at lists.freedesktop.org
Signed-off-by: Joerg Roedel <jroedel at suse.de>
---
 drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index f103e78..ae0247c 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -47,7 +47,7 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova,
 
 		VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes);
 
-		ret = iommu_map(domain, da, pa, bytes, prot);
+		ret = iommu_map_sync(domain, da, pa, bytes, prot);
 		if (ret)
 			goto fail;
 
@@ -62,7 +62,7 @@ int etnaviv_iommu_map(struct etnaviv_iommu *iommu, u32 iova,
 	for_each_sg(sgt->sgl, sg, i, j) {
 		size_t bytes = sg_dma_len(sg) + sg->offset;
 
-		iommu_unmap(domain, da, bytes);
+		iommu_unmap_sync(domain, da, bytes);
 		da += bytes;
 	}
 	return ret;
@@ -80,7 +80,7 @@ int etnaviv_iommu_unmap(struct etnaviv_iommu *iommu, u32 iova,
 		size_t bytes = sg_dma_len(sg) + sg->offset;
 		size_t unmapped;
 
-		unmapped = iommu_unmap(domain, da, bytes);
+		unmapped = iommu_unmap_sync(domain, da, bytes);
 		if (unmapped < bytes)
 			return unmapped;
 
@@ -338,7 +338,7 @@ int etnaviv_iommu_get_suballoc_va(struct etnaviv_gpu *gpu, dma_addr_t paddr,
 			mutex_unlock(&mmu->lock);
 			return ret;
 		}
-		ret = iommu_map(mmu->domain, vram_node->start, paddr, size,
+		ret = iommu_map_sync(mmu->domain, vram_node->start, paddr, size,
 				IOMMU_READ);
 		if (ret < 0) {
 			drm_mm_remove_node(vram_node);
@@ -362,7 +362,7 @@ void etnaviv_iommu_put_suballoc_va(struct etnaviv_gpu *gpu,
 
 	if (mmu->version == ETNAVIV_IOMMU_V2) {
 		mutex_lock(&mmu->lock);
-		iommu_unmap(mmu->domain,iova, size);
+		iommu_unmap_sync(mmu->domain,iova, size);
 		drm_mm_remove_node(vram_node);
 		mutex_unlock(&mmu->lock);
 	}
-- 
2.7.4


