[PATCH] iommu/amd: flush IOTLB for specific domains only (v3)
Nath, Arindam
Arindam.Nath at amd.com
Tue May 30 07:38:29 UTC 2017
>-----Original Message-----
>From: Joerg Roedel [mailto:joro at 8bytes.org]
>Sent: Monday, May 29, 2017 8:09 PM
>To: Nath, Arindam <Arindam.Nath at amd.com>; Lendacky, Thomas
><Thomas.Lendacky at amd.com>
>Cc: iommu at lists.linux-foundation.org; amd-gfx at lists.freedesktop.org;
>Deucher, Alexander <Alexander.Deucher at amd.com>; Bridgman, John
><John.Bridgman at amd.com>; drake at endlessm.com; Suthikulpanit, Suravee
><Suravee.Suthikulpanit at amd.com>; linux at endlessm.com; Craig Stein
><stein12c at gmail.com>; michel at daenzer.net; Kuehling, Felix
><Felix.Kuehling at amd.com>; stable at vger.kernel.org
>Subject: Re: [PATCH] iommu/amd: flush IOTLB for specific domains only (v3)
>
>Hi Arindam,
>
>I met Tom Lendacky in Nuremberg last week and he told me he is
>working on the same area of the code that this patch is for. His reason
>for touching this code was to solve some locking problems. Maybe you two
>can work together on a joint approach to improve this?
Sure Joerg, I will work with Tom.
Thanks.
>
>On Mon, May 22, 2017 at 01:18:01PM +0530, arindam.nath at amd.com wrote:
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index 63cacf5..1edeebec 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -2227,15 +2227,26 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
>>
>>  static void __queue_flush(struct flush_queue *queue)
>>  {
>> -	struct protection_domain *domain;
>> -	unsigned long flags;
>>  	int idx;
>>
>> -	/* First flush TLB of all known domains */
>> -	spin_lock_irqsave(&amd_iommu_pd_lock, flags);
>> -	list_for_each_entry(domain, &amd_iommu_pd_list, list)
>> -		domain_flush_tlb(domain);
>> -	spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
>> +	/* First flush TLB of all domains which were added to flush queue */
>> +	for (idx = 0; idx < queue->next; ++idx) {
>> +		struct flush_queue_entry *entry;
>> +
>> +		entry = queue->entries + idx;
>> +
>> +		/*
>> +		 * There might be cases where multiple IOVA entries for the
>> +		 * same domain are queued in the flush queue. To avoid
>> +		 * flushing the same domain again, we check whether the
>> +		 * flag is set or not. This improves the efficiency of
>> +		 * flush operation.
>> +		 */
>> +		if (!entry->dma_dom->domain.already_flushed) {
>> +			entry->dma_dom->domain.already_flushed = true;
>> +			domain_flush_tlb(&entry->dma_dom->domain);
>> +		}
>> +	}
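To spell out the idea in the hunk above for anyone skimming the thread: instead of
flushing every known domain, we walk only the entries that were actually queued and
flush each distinct domain once, using already_flushed as a per-domain marker so that
several IOVA entries pointing at the same domain do not trigger repeated flushes.
Below is a minimal, self-contained sketch of that dedup pattern in plain C. The names
toy_domain, toy_flush_entry, flush_domain and queue_flush_once are made up purely for
illustration (they are not the real amd_iommu structures), and the marker reset is done
at the start of each pass here only to keep the example standalone.

/*
 * Standalone illustration only -- hypothetical toy types, not the
 * kernel's protection_domain / dma_ops_domain structures.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_domain {
	int id;
	bool already_flushed;
};

struct toy_flush_entry {
	struct toy_domain *dom;
};

/* Stand-in for domain_flush_tlb(). */
static void flush_domain(struct toy_domain *dom)
{
	printf("flushing IOTLB for domain %d\n", dom->id);
}

static void queue_flush_once(struct toy_flush_entry *entries, int next)
{
	int idx;

	/* Reset the per-domain markers (only to keep this sketch self-contained). */
	for (idx = 0; idx < next; ++idx)
		entries[idx].dom->already_flushed = false;

	/* Flush each distinct domain once, even if it is queued multiple times. */
	for (idx = 0; idx < next; ++idx) {
		struct toy_domain *dom = entries[idx].dom;

		if (!dom->already_flushed) {
			dom->already_flushed = true;
			flush_domain(dom);
		}
	}
}

int main(void)
{
	struct toy_domain a = { .id = 1, .already_flushed = false };
	struct toy_domain b = { .id = 2, .already_flushed = false };
	struct toy_flush_entry queue[] = { { &a }, { &b }, { &a } };

	queue_flush_once(queue, 3);	/* domains 1 and 2 each flushed once */
	return 0;
}

Running the sketch flushes domain 1 and domain 2 exactly once, even though domain 1
is queued twice.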
>
>There is also a race condition here that is not introduced by your
>patch but needs fixing anyway. I'll look into this too.
>
>
>Regards,
>
> Joerg