[PATCH] iommu/amd: flush IOTLB for specific domains only (v3)

Joerg Roedel joro at 8bytes.org
Mon May 29 14:38:31 UTC 2017


Hi Arindam,

I met Tom Lendacky in Nuremberg last week and he told me he is working
on the same area of the code that this patch is for. His reason for
touching this code was to solve some locking problems. Maybe you two
can work together on a joint approach to improve this?

On Mon, May 22, 2017 at 01:18:01PM +0530, arindam.nath at amd.com wrote:
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 63cacf5..1edeebec 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -2227,15 +2227,26 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
>  
>  static void __queue_flush(struct flush_queue *queue)
>  {
> -	struct protection_domain *domain;
> -	unsigned long flags;
>  	int idx;
>  
> -	/* First flush TLB of all known domains */
> -	spin_lock_irqsave(&amd_iommu_pd_lock, flags);
> -	list_for_each_entry(domain, &amd_iommu_pd_list, list)
> -		domain_flush_tlb(domain);
> -	spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
> +	/* First flush TLB of all domains which were added to flush queue */
> +	for (idx = 0; idx < queue->next; ++idx) {
> +		struct flush_queue_entry *entry;
> +
> +		entry = queue->entries + idx;
> +
> +		/*
> +		 * There might be cases where multiple IOVA entries for the
> +		 * same domain are queued in the flush queue. To avoid
> +		 * flushing the same domain again, we check whether the
> +		 * flag is set or not. This improves the efficiency of
> +		 * the flush operation.
> +		 */
> +		if (!entry->dma_dom->domain.already_flushed) {
> +			entry->dma_dom->domain.already_flushed = true;
> +			domain_flush_tlb(&entry->dma_dom->domain);
> +		}
> +	}

There is also a race condition here that I have to look into. It is not
introduced by your patch, but it needs fixing anyway, so I'll take care
of that as well.
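
To make the idea in the quoted comment concrete: the flush queue can
contain several entries that point at the same domain, and the
already_flushed flag turns the per-entry flush into a per-domain flush.
Below is a minimal stand-alone sketch of that pattern; all names and the
placement of the flag reset are made up for illustration, this is not
the amd_iommu.c code itself.

#include <stdbool.h>
#include <stdio.h>

struct domain {
	int id;
	bool already_flushed;		/* dedup flag, as in the patch */
};

struct queue_entry {
	struct domain *dom;
};

static void flush_domain(struct domain *dom)
{
	printf("flushing domain %d\n", dom->id);
}

/* Flush every domain referenced by the queue, but each one only once. */
static void drain_queue(struct queue_entry *entries, int next)
{
	int idx;

	for (idx = 0; idx < next; ++idx) {
		struct domain *dom = entries[idx].dom;

		if (!dom->already_flushed) {
			dom->already_flushed = true;
			flush_domain(dom);
		}
	}

	/*
	 * The flag has to be cleared again before the next drain,
	 * otherwise these domains would never be flushed again. In the
	 * patch itself that reset presumably happens outside the
	 * quoted hunk.
	 */
	for (idx = 0; idx < next; ++idx)
		entries[idx].dom->already_flushed = false;
}

int main(void)
{
	struct domain d0 = { .id = 0 }, d1 = { .id = 1 };
	/* two entries for domain 0 -> it is still only flushed once */
	struct queue_entry q[] = { { &d0 }, { &d1 }, { &d0 } };

	drain_queue(q, 3);
	return 0;
}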


Regards,

	Joerg


