[Freedreno] [PATCH 04/16] drm: msm: Flush the cache immediately after allocating pages

Jordan Crouse jcrouse at codeaurora.org
Fri Nov 4 22:44:45 UTC 2016


For reasons that are not entirely understood, using dma_map_sg() for
noncached/write-combine buffers doesn't always successfully flush the
cache after the memory is zeroed somewhere deep in the bowels of the
shmem code.  My working theory is that the cache maintenance done on
the swiotlb bounce buffer address isn't always flushing the memory we
actually need flushed.
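
To make the suspect path concrete, the flow in get_pages() today
(simplified from the hunk removed below; the comment reflects the
working theory above) is roughly:

	/* shmem-backed allocation: the pages are zeroed via the CPU */
	p = drm_gem_get_pages(obj);

	msm_obj->sgt = drm_prime_pages_to_sg(p, npages);

	/* For WC/uncached buffers this was expected to clean the CPU
	 * cache, but when swiotlb is involved the maintenance may be
	 * done on the bounce buffer address rather than on the pages
	 * themselves.
	 */
	if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
		dma_map_sg(dev->dev, msm_obj->sgt->sgl,
				msm_obj->sgt->nents, DMA_BIDIRECTIONAL);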

Instead of using dma_map_sg(), kmap and flush each page directly at
allocation time.  We could do an invalidate + clean, or just an
invalidate, but on ARM64 a full flush is safer and not much slower for
what we are trying to do.
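
Concretely, the loop added below walks the freshly allocated pages and
flushes them one at a time, before the sg table is built:

	/* Make sure the zeroed pages are visible to the non-coherent
	 * GPU and display controller before they are handed out.
	 */
	for (i = 0; i < npages; i++) {
		void *addr = kmap_atomic(p[i]);

		__dma_flush_range(addr, addr + PAGE_SIZE);
		kunmap_atomic(addr);
	}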

Hopefully someday I'll more clearly understand the relationship
between shmem, kmap, vmap and the swiotlb bounce buffer, and we can be
smarter about when and how we invalidate the caches.

Signed-off-by: Jordan Crouse <jcrouse at codeaurora.org>
---
 drivers/gpu/drm/msm/msm_gem.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 85f3047..29f5a30 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -79,6 +79,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		struct drm_device *dev = obj->dev;
 		struct page **p;
 		int npages = obj->size >> PAGE_SHIFT;
+		int i;
 
 		if (use_pages(obj))
 			p = drm_gem_get_pages(obj);
@@ -91,6 +92,13 @@ static struct page **get_pages(struct drm_gem_object *obj)
 			return p;
 		}
 
+		for (i = 0; i < npages; i++) {
+			void *addr = kmap_atomic(p[i]);
+
+			__dma_flush_range(addr, addr + PAGE_SIZE);
+			kunmap_atomic(addr);
+		}
+
 		msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
 		if (IS_ERR(msm_obj->sgt)) {
 			dev_err(dev->dev, "failed to allocate sgt\n");
@@ -98,13 +106,6 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		}
 
 		msm_obj->pages = p;
-
-		/* For non-cached buffers, ensure the new pages are clean
-		 * because display controller, GPU, etc. are not coherent:
-		 */
-		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			dma_map_sg(dev->dev, msm_obj->sgt->sgl,
-					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	}
 
 	return msm_obj->pages;
@@ -115,12 +116,6 @@ static void put_pages(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	if (msm_obj->pages) {
-		/* For non-cached buffers, ensure the new pages are clean
-		 * because display controller, GPU, etc. are not coherent:
-		 */
-		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
-			dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
-					msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 		sg_free_table(msm_obj->sgt);
 		kfree(msm_obj->sgt);
 
-- 
1.9.1


