[PATCH 2/3] drm/etnaviv: fix dma configuration of the virtual device

Robin Murphy robin.murphy at arm.com
Thu Aug 26 15:00:15 UTC 2021


On 2021-08-26 13:10, Michael Walle wrote:
> The DMA configuration of the virtual device is inherited from the first
> actual etnaviv device. Unfortunately, this doesn't work with an IOMMU:
> 
> [    5.191008] Failed to set up IOMMU for device (null); retaining platform DMA ops
> 
> This is because there is no associated iommu_group with the device. The
> group is set in iommu_group_add_device() which is eventually called by
> device_add() via the platform bus:
>    device_add()
>      blocking_notifier_call_chain()
>        iommu_bus_notifier()
>          iommu_probe_device()
>            __iommu_probe_device()
>              iommu_group_get_for_dev()
>                iommu_group_add_device()
> 
> Move of_dma_configure() into the probe function, which is called after
> device_add(). Normally, the platform code will already call it itself
> if .of_node is set. Unfortunately, this isn't the case here.
> 
> Also move the dma mask assignments to probe() to keep all DMA related
> settings together.

I assume the driver must already keep track of the real GPU platform 
device in order to map registers, request interrupts, etc. correctly - 
can't it also correctly use that device for DMA API calls and avoid the 
need for these shenanigans altogether?
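
For illustration, that alternative might look roughly like this - a
hypothetical sketch only, the field and function names here are not
etnaviv's actual code:

```c
/* Hypothetical sketch: do DMA API calls against the first real GPU
 * core's struct device instead of the virtual component-master device.
 * The real platform device already had its IOMMU/DMA ops configured by
 * the bus notifier at device_add() time, so no manual
 * of_dma_configure() call is needed anywhere. */
struct etnaviv_drm_private {
	/* ... existing members ... */
	struct device *dma_dev;	/* first real GPU core, saved at bind time */
};

static void *etnaviv_alloc_cmdbuf(struct etnaviv_drm_private *priv,
				  size_t size, dma_addr_t *dma)
{
	/* Mappings are created through the real device, so they honour
	 * whatever IOMMU and DMA mask that device was probed with. */
	return dma_alloc_wc(priv->dma_dev, size, dma, GFP_KERNEL);
}
```

The virtual device would then exist purely for the component framework,
with no DMA configuration of its own to keep in sync.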

FYI, IOMMU configuration is really supposed to run *only* at 
add_device() time, as in the call chain above - the fact that it can 
currently be retriggered by of_dma_configure() on DT platforms turns 
out to cause various issues within the IOMMU API, and the plan to 
change that is slowly climbing up my to-do list.
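
In other words, a probe-time call like the one this patch adds is 
exactly the retrigger path in question - a simplified sketch of the 
problem (illustrative only, not the exact kernel source):

```c
/* Simplified sketch: why calling of_dma_configure() from a driver's
 * probe path is problematic on DT platforms. */
static int some_driver_probe(struct platform_device *pdev)
{
	/* device_add() has already fired the bus notifier, so by now the
	 * device either got its IOMMU group or legitimately has none. */
	of_dma_configure(&pdev->dev, pdev->dev.of_node, true);
	/* ^ on DT platforms this ends up re-running IOMMU configuration
	 *   (via of_iommu_configure()) for an already-probed device -
	 *   the late/duplicate setup that causes trouble in the IOMMU
	 *   API. */
	return 0;
}
```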

Robin.

> Signed-off-by: Michael Walle <michael at walle.cc>
> ---
>   drivers/gpu/drm/etnaviv/etnaviv_drv.c | 24 +++++++++++++++---------
>   1 file changed, 15 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
> index 2509b3e85709..ff6425f6ebad 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
> @@ -589,6 +589,7 @@ static int compare_str(struct device *dev, void *data)
>   static int etnaviv_pdev_probe(struct platform_device *pdev)
>   {
>   	struct device *dev = &pdev->dev;
> +	struct device_node *first_node = NULL;
>   	struct component_match *match = NULL;
>   
>   	if (!dev->platform_data) {
> @@ -598,6 +599,9 @@ static int etnaviv_pdev_probe(struct platform_device *pdev)
>   			if (!of_device_is_available(core_node))
>   				continue;
>   
> +			if (!first_node)
> +				first_node = core_node;
> +
>   			drm_of_component_match_add(&pdev->dev, &match,
>   						   compare_of, core_node);
>   		}
> @@ -609,6 +613,17 @@ static int etnaviv_pdev_probe(struct platform_device *pdev)
>   			component_match_add(dev, &match, compare_str, names[i]);
>   	}
>   
> +	pdev->dev.coherent_dma_mask = DMA_BIT_MASK(40);
> +	pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
> +
> +	/*
> +	 * Apply the same DMA configuration to the virtual etnaviv
> +	 * device as the GPU we found. This assumes that all Vivante
> +	 * GPUs in the system share the same DMA constraints.
> +	 */
> +	if (first_node)
> +		of_dma_configure(&pdev->dev, first_node, true);
> +
>   	return component_master_add_with_match(dev, &etnaviv_master_ops, match);
>   }
>   
> @@ -659,15 +674,6 @@ static int __init etnaviv_init(void)
>   			of_node_put(np);
>   			goto unregister_platform_driver;
>   		}
> -		pdev->dev.coherent_dma_mask = DMA_BIT_MASK(40);
> -		pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
> -
> -		/*
> -		 * Apply the same DMA configuration to the virtual etnaviv
> -		 * device as the GPU we found. This assumes that all Vivante
> -		 * GPUs in the system share the same DMA constraints.
> -		 */
> -		of_dma_configure(&pdev->dev, np, true);
>   
>   		ret = platform_device_add(pdev);
>   		if (ret) {
> 

