[PATCH] iommu/amd: flush IOTLB for specific domains only (v3)
Deucher, Alexander
Alexander.Deucher at amd.com
Tue May 23 18:24:57 UTC 2017
> -----Original Message-----
> From: Arindam Nath [mailto:anath.amd at gmail.com] On Behalf Of
> arindam.nath at amd.com
> Sent: Monday, May 22, 2017 3:48 AM
> To: iommu at lists.linux-foundation.org
> Cc: amd-gfx at lists.freedesktop.org; Joerg Roedel; Deucher, Alexander;
> Bridgman, John; drake at endlessm.com; Suthikulpanit, Suravee;
> linux at endlessm.com; Craig Stein; michel at daenzer.net; Kuehling, Felix;
> stable at vger.kernel.org; Nath, Arindam
> Subject: [PATCH] iommu/amd: flush IOTLB for specific domains only (v3)
>
> From: Arindam Nath <arindam.nath at amd.com>
>
> Change History
> --------------
>
> v3:
> - add Fixes and CC tags
> - add link to Bugzilla
>
> v2: changes suggested by Joerg
> - add flush flag to improve efficiency of flush operation
>
> v1:
> - The idea behind flush queues is to defer the IOTLB flushing
> for domains for which the mappings are no longer valid. We
> add such domains in queue_add(), and when the queue size
> reaches FLUSH_QUEUE_SIZE, we perform __queue_flush().
>
> Since we have already taken the lock before __queue_flush()
> is called, we need to make sure the IOTLB flushing is
> performed as quickly as possible.
>
> In the current implementation, we perform IOTLB flushing
> for all domains, irrespective of which ones were actually
> added to the flush queue initially. This can be quite
> expensive, especially for domains for which unmapping is
> not required at this point in time.
>
> This patch makes use of the domain information in
> 'struct flush_queue_entry' to make sure we only flush
> IOTLBs for domains which need it, skipping the others
> (a simplified sketch of the idea follows below).
>
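> For reference, here is a minimal, self-contained sketch of the
> idea (not the driver code itself): the names below are simplified
> stand-ins for the kernel's struct protection_domain, struct
> flush_queue_entry and domain_flush_tlb(), and the per-CPU queues
> and locking of the real driver are omitted.
>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define FLUSH_QUEUE_SIZE 256
>
> /* stand-in for struct protection_domain */
> struct domain {
>         int id;
>         bool already_flushed;   /* set once per __queue_flush() run */
> };
>
> /* stand-in for struct flush_queue_entry */
> struct queue_entry {
>         struct domain *dom;     /* domain whose IOVA range was unmapped */
> };
>
> struct flush_queue {
>         int next;               /* number of queued entries */
>         struct queue_entry entries[FLUSH_QUEUE_SIZE];
> };
>
> static void domain_flush_tlb(struct domain *dom)
> {
>         printf("flushing IOTLB of domain %d\n", dom->id);
> }
>
> /* Walk only the queued entries and flush each distinct domain once,
>  * instead of flushing every known domain in the system. */
> static void __queue_flush(struct flush_queue *queue)
> {
>         int idx;
>
>         for (idx = 0; idx < queue->next; ++idx) {
>                 struct domain *dom = queue->entries[idx].dom;
>
>                 /* several entries may reference the same domain */
>                 if (!dom->already_flushed) {
>                         dom->already_flushed = true;
>                         domain_flush_tlb(dom);
>                 }
>         }
>         queue->next = 0;
> }
>
> static void queue_add(struct flush_queue *queue, struct domain *dom)
> {
>         /* re-arm the flag so the next __queue_flush() flushes us */
>         dom->already_flushed = false;
>
>         if (queue->next == FLUSH_QUEUE_SIZE)
>                 __queue_flush(queue);
>         queue->entries[queue->next++].dom = dom;
> }
>
> int main(void)
> {
>         struct flush_queue q = { .next = 0 };
>         struct domain d0 = { .id = 0 }, d1 = { .id = 1 };
>
>         /* two entries for d0, one for d1: d0 is flushed only once */
>         queue_add(&q, &d0);
>         queue_add(&q, &d0);
>         queue_add(&q, &d1);
>         __queue_flush(&q);
>         return 0;
> }
>
> The real patch stores the flag in struct protection_domain and
> resets it in queue_add(), exactly as in the diff below.
>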
> Bugzilla: https://bugs.freedesktop.org/101029
> Fixes: b1516a14657a ("iommu/amd: Implement flush queue")
> Cc: stable at vger.kernel.org
> Suggested-by: Joerg Roedel <joro at 8bytes.org>
> Signed-off-by: Arindam Nath <arindam.nath at amd.com>
Acked-by: Alex Deucher <alexander.deucher at amd.com>
> ---
>  drivers/iommu/amd_iommu.c       | 27 ++++++++++++++++++++-------
>  drivers/iommu/amd_iommu_types.h |  2 ++
>  2 files changed, 22 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 63cacf5..1edeebec 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -2227,15 +2227,26 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
>
>  static void __queue_flush(struct flush_queue *queue)
>  {
> -        struct protection_domain *domain;
> -        unsigned long flags;
>          int idx;
>
> -        /* First flush TLB of all known domains */
> -        spin_lock_irqsave(&amd_iommu_pd_lock, flags);
> -        list_for_each_entry(domain, &amd_iommu_pd_list, list)
> -                domain_flush_tlb(domain);
> -        spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
> +        /* First flush TLB of all domains which were added to flush queue */
> +        for (idx = 0; idx < queue->next; ++idx) {
> +                struct flush_queue_entry *entry;
> +
> +                entry = queue->entries + idx;
> +
> +                /*
> +                 * There might be cases where multiple IOVA entries for the
> +                 * same domain are queued in the flush queue. To avoid
> +                 * flushing the same domain again, we check whether the
> +                 * flag is set or not. This improves the efficiency of
> +                 * flush operation.
> +                 */
> +                if (!entry->dma_dom->domain.already_flushed) {
> +                        entry->dma_dom->domain.already_flushed = true;
> +                        domain_flush_tlb(&entry->dma_dom->domain);
> +                }
> +        }
>
>          /* Wait until flushes have completed */
>          domain_flush_complete(NULL);
> @@ -2289,6 +2300,8 @@ static void queue_add(struct dma_ops_domain *dma_dom,
>          pages     = __roundup_pow_of_two(pages);
>          address >>= PAGE_SHIFT;
>
> +        dma_dom->domain.already_flushed = false;
> +
>          queue = get_cpu_ptr(&flush_queue);
>          spin_lock_irqsave(&queue->lock, flags);
>
> diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
> index 4de8f41..4f5519d 100644
> --- a/drivers/iommu/amd_iommu_types.h
> +++ b/drivers/iommu/amd_iommu_types.h
> @@ -454,6 +454,8 @@ struct protection_domain {
>          bool updated;           /* complete domain flush required */
>          unsigned dev_cnt;       /* devices assigned to this domain */
>          unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
> +        bool already_flushed;   /* flag to avoid flushing the same domain again
> +                                   in a single invocation of __queue_flush() */
>  };
>
>  /*
> --
> 2.7.4