[PATCH 1/2] drm/prime: Iterate SG DMA addresses separately

Christian König christian.koenig at amd.com
Wed Apr 11 18:26:31 UTC 2018


Am 11.04.2018 um 19:11 schrieb Robin Murphy:
> For dma_map_sg(), DMA API implementations are free to merge consecutive
> segments into a single DMA mapping if conditions are suitable, thus the
> resulting DMA addresses may be packed into fewer entries than
> ttm->sg->nents implies.
>
> drm_prime_sg_to_page_addr_arrays() does not account for this, meaning
> its callers either have to reject the 0 < count < nents case or risk
> getting bogus addresses back later. Fortunately this is relatively easy
> to deal with without having to rejig structures to also store the mapped count,
> since the total DMA length should still be equal to the total buffer
> length. All we need is a separate scatterlist cursor to iterate the DMA
> addresses separately from the CPU addresses.
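
For illustration, the mapping step referred to above behaves roughly as in the
sketch below; the call site and the dev/sgt variables are hypothetical, only the
DMA API behaviour (a returned segment count that may be smaller than nents) is
the documented part:

	int mapped, i;
	struct scatterlist *sg;

	/* dma_map_sg() may coalesce entries; only its return value says
	 * how many DMA segments actually exist. */
	mapped = dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
	if (mapped == 0)
		return -ENOMEM;

	/* Only the first 'mapped' entries carry valid DMA addresses/lengths. */
	for_each_sg(sgt->sgl, sg, mapped, i)
		pr_debug("DMA segment %d: %pad + %u\n",
			 i, &sg_dma_address(sg), sg_dma_len(sg));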

Mhm, I think I like Sinan's approach better.

See, the hardware actually needs the dma_address on a page-by-page basis.

Joining multiple consecutive pages into one entry is just additional 
overhead which we don't need.

Regards,
Christian.

>
> Signed-off-by: Robin Murphy <robin.murphy at arm.com>
> ---
>
> Off the back of Sinan's proposal for a workaround, I took a closer look
> and this jumped out - I have no hardware to test it, nor do I really
> know my way around this code, so I'm probably missing something, but at
> face value this seems like the only obvious problem, and worth fixing
> either way.
>
> These patches are based on drm-next, and compile-tested (for arm64) only.
>
> Robin.
>
>   drivers/gpu/drm/drm_prime.c | 14 +++++++++++---
>   1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 7856a9b3f8a8..db3dc8489afc 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -933,16 +933,18 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
>   				     dma_addr_t *addrs, int max_entries)
>   {
>   	unsigned count;
> -	struct scatterlist *sg;
> +	struct scatterlist *sg, *dma_sg;
>   	struct page *page;
> -	u32 len, index;
> +	u32 len, dma_len, index;
>   	dma_addr_t addr;
>   
>   	index = 0;
> +	dma_sg = sgt->sgl;
> +	dma_len = sg_dma_len(dma_sg);
> +	addr = sg_dma_address(dma_sg);
>   	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
>   		len = sg->length;
>   		page = sg_page(sg);
> -		addr = sg_dma_address(sg);
>   
>   		while (len > 0) {
>   			if (WARN_ON(index >= max_entries))
> @@ -957,6 +959,12 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
>   			len -= PAGE_SIZE;
>   			index++;
>   		}
> +
> +		if (dma_len == 0) {
> +			dma_sg = sg_next(dma_sg);
> +			dma_len = sg_dma_len(dma_sg);
> +			addr = sg_dma_address(dma_sg);
> +		}
>   	}
>   	return 0;
>   }
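
For reference, piecing the quoted hunks together with the unchanged context
lines, the resulting loop would look roughly like the sketch below. Two details
are additions for the sketch and are not visible in the hunks above: the
per-page dma_len decrement and the sg_is_last() guard, without which the DMA
cursor would never advance, or could step past the end of the list.

	index = 0;
	dma_sg = sgt->sgl;
	dma_len = sg_dma_len(dma_sg);
	addr = sg_dma_address(dma_sg);
	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
		len = sg->length;
		page = sg_page(sg);

		while (len > 0) {
			if (WARN_ON(index >= max_entries))
				return -1;
			if (pages)
				pages[index] = page;
			if (addrs)
				addrs[index] = addr;

			page++;
			addr += PAGE_SIZE;
			dma_len -= PAGE_SIZE;	/* assumed: consume the current DMA segment */
			len -= PAGE_SIZE;
			index++;
		}

		/* assumed guard: only advance while another segment exists */
		if (dma_len == 0 && !sg_is_last(dma_sg)) {
			dma_sg = sg_next(dma_sg);
			dma_len = sg_dma_len(dma_sg);
			addr = sg_dma_address(dma_sg);
		}
	}
	return 0;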


