[PATCH 5/5] drm/msm: Skip tlbinv on unmap from non-current pgtables

Akhil P Oommen quic_akhilpo at quicinc.com
Thu Aug 25 18:12:14 UTC 2022


On 8/25/2022 12:32 AM, Rob Clark wrote:
> On Wed, Aug 24, 2022 at 10:46 AM Akhil P Oommen
> <quic_akhilpo at quicinc.com> wrote:
>> On 8/21/2022 11:49 PM, Rob Clark wrote:
>>> From: Rob Clark <robdclark at chromium.org>
>>>
>>> We can rely on the tlbinv done by CP_SMMU_TABLE_UPDATE in this case.
>>>
>>> Signed-off-by: Rob Clark <robdclark at chromium.org>
>>> ---
>>>    drivers/gpu/drm/msm/adreno/a6xx_gpu.c |  6 ++++++
>>>    drivers/gpu/drm/msm/msm_iommu.c       | 29 +++++++++++++++++++++++++++
>>>    2 files changed, 35 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>> index c8ad8aeca777..1ba0ed629549 100644
>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>> @@ -1180,6 +1180,12 @@ static int hw_init(struct msm_gpu *gpu)
>>>        /* Always come up on rb 0 */
>>>        a6xx_gpu->cur_ring = gpu->rb[0];
>>>
>>> +     /*
>>> +      * Note, we cannot assume anything about the state of the SMMU when
>>> +      * coming back from power collapse, so force a CP_SMMU_TABLE_UPDATE
>>> +      * on the first submit.  Also, msm_iommu_pagetable_unmap() relies on
>>> +      * this behavior.
>>> +      */
>>>        gpu->cur_ctx_seqno = 0;
>>>
>>>        /* Enable the SQE to start the CP engine */
>>> diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
>>> index 94c8c09980d1..218074a58081 100644
>>> --- a/drivers/gpu/drm/msm/msm_iommu.c
>>> +++ b/drivers/gpu/drm/msm/msm_iommu.c
>>> @@ -45,8 +45,37 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
>>>                size -= 4096;
>>>        }
>>>
>>> +     /*
>>> +      * A CP_SMMU_TABLE_UPDATE is always sent for the first
>>> +      * submit after resume, and that does a TLB invalidate.
>>> +      * So we can skip that if the device is not currently
>>> +      * powered.
>>> +      */
>>> +     if (!pm_runtime_get_if_in_use(pagetable->parent->dev))
>>> +             goto out;
>>> +
>>> +     /*
>>> +      * If we are not the current pgtables, we can rely on the
>>> +      * TLB invalidate done by CP_SMMU_TABLE_UPDATE.
>>> +      *
>>> +      * We'll always be racing with the GPU updating ttbr0,
>>> +      * but there are only two cases:
>>> +      *
>>> +      *  + either we are not the current pgtables and there
>>> +      *    will be a tlbinv done by the GPU before we are again
>>> +      *
>>> +      *  + or we are.. there might have already been a tlbinv
>>> +      *    if we raced with the GPU, but we have to assume the
>>> +      *    worst and do the tlbinv
>>> +      */
>>> +     if (adreno_smmu->get_ttbr0(adreno_smmu->cookie) != pagetable->ttbr)
>>> +             goto out_put;
>>> +
>>>        adreno_smmu->tlb_inv_by_id(adreno_smmu->cookie, pagetable->asid);
>>>
>>> +out_put:
>>> +     pm_runtime_put(pagetable->parent->dev);
>>> +out:
>>>        return (unmapped == size) ? 0 : -EINVAL;
>>>    }
>>>
>> Asking because it is a *security issue* if we get this wrong:
>> 1. Is there any measurable benefit with this patch? I believe TLB
>> invalidation doesn't contribute much to the unmap latency.
> It turned out not to make a huge difference.. although I expect the
> part about skipping the inv while runtime suspended is still useful
> from a power standpoint (but I don't have a great setup to measure that)
Agreed. Perhaps use the recently added 'suspended' flag instead of
pm_runtime_get_if_in_use().
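
Something like the below, perhaps (untested sketch against the hunk
above; 'gpu' here is a hypothetical back-pointer from the pagetable
that would need to be plumbed through, and the read of the flag would
need to be ordered against runtime resume):

	/*
	 * A CP_SMMU_TABLE_UPDATE is always sent for the first submit
	 * after resume, and that does a TLB invalidate, so while the
	 * GPU is suspended we can skip the inv without touching
	 * runtime PM at all.  (Reaching 'gpu' from the pagetable is
	 * an assumption of this sketch, not existing code.)
	 */
	if (READ_ONCE(gpu->suspended))
		goto out;

	if (adreno_smmu->get_ttbr0(adreno_smmu->cookie) != pagetable->ttbr)
		goto out;

	adreno_smmu->tlb_inv_by_id(adreno_smmu->cookie, pagetable->asid);
out:
	return (unmapped == size) ? 0 : -EINVAL;

That would also avoid the extra runtime-PM reference on every unmap.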

-Akhil.
>
> BR,
> -R
>
>> 2. We should at least insert a full memory barrier before reading the
>> ttbr0 register to ensure that everything we did prior to that is visible
>> to the SMMU. But then I guess the cost of the full barrier would be
>> similar to that of the TLB invalidation.
>>
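
To expand on point 2 above, what I had in mind is roughly the below
(illustrative only; mb() is the generic full barrier, and whether it is
actually sufficient to order our PTE updates against the SMMU's table
walk is exactly the open question):

	/* make prior PTE updates visible before sampling ttbr0 */
	mb();

	if (adreno_smmu->get_ttbr0(adreno_smmu->cookie) != pagetable->ttbr)
		goto out_put;

	adreno_smmu->tlb_inv_by_id(adreno_smmu->cookie, pagetable->asid);

And the cost of that barrier is why I doubt this buys much over just
doing the tlbinv unconditionally.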
>> Because it could lead to security issues or other very hard-to-debug
>> problems, I would prefer this optimization only if there is a
>> significant, measurable gain.
>>
>> -Akhil.
>>


