[Freedreno] [PATCH v13 1/4] iommu/arm-smmu: Add pm_runtime/sleep ops

Vivek Gautam vivek.gautam at codeaurora.org
Fri Jul 27 05:05:46 UTC 2018



On 7/26/2018 9:00 PM, Robin Murphy wrote:
> On 26/07/18 08:12, Vivek Gautam wrote:
>> On Wed, Jul 25, 2018 at 11:46 PM, Vivek Gautam
>> <vivek.gautam at codeaurora.org> wrote:
>>> On Tue, Jul 24, 2018 at 8:51 PM, Robin Murphy <robin.murphy at arm.com> 
>>> wrote:
>>>> On 19/07/18 11:15, Vivek Gautam wrote:
>>>>>
>>>>> From: Sricharan R <sricharan at codeaurora.org>
>>>>>
>>>>> The smmu needs to be functional only when the respective
>>>>> masters using it are active. The device_link feature
>>>>> helps to track such functional dependencies, so that the
>>>>> iommu gets powered on when a master device enables itself
>>>>> using pm_runtime. So by adapting the smmu driver for
>>>>> runtime pm, the above dependency can be addressed.
>>>>>
>>>>> This patch adds the pm runtime/sleep callbacks to the
>>>>> driver and also the functions to parse the smmu clocks
>>>>> from DT and enable them in resume/suspend.
>>>>>
>>>>> Also, while enabling runtime pm, add a pm sleep suspend
>>>>> callback that pushes the device to a low power state by
>>>>> turning the clocks off during system sleep, and add the
>>>>> corresponding clock enable path in the resume callback.
>>>>>
>>>>> Signed-off-by: Sricharan R <sricharan at codeaurora.org>
>>>>> Signed-off-by: Archit Taneja <architt at codeaurora.org>
>>>>> [vivek: rework for clock and pm ops]
>>>>> Signed-off-by: Vivek Gautam <vivek.gautam at codeaurora.org>
>>>>> Reviewed-by: Tomasz Figa <tfiga at chromium.org>
>>>>> ---
>>>>>
>>>>> Changes since v12:
>>>>>    - Added pm sleep .suspend callback. This disables the clocks.
>>>>>    - Added the corresponding change to enable clocks in the
>>>>>      .resume pm sleep callback.
>>>>>
>>>>>    drivers/iommu/arm-smmu.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++--
>>>>>    1 file changed, 73 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
>>>>> index c73cfce1ccc0..9138a6fffe04 100644
>>>>> --- a/drivers/iommu/arm-smmu.c
>>>>> +++ b/drivers/iommu/arm-smmu.c
>>
>> [snip]
>>
>>>>>    static void arm_smmu_device_shutdown(struct platform_device *pdev)
>>>>>    {
>>>>>          arm_smmu_device_remove(pdev);
>>>>>    }
>>>>>
>>>>> +static int __maybe_unused arm_smmu_runtime_resume(struct device *dev)
>>>>> +{
>>>>> +       struct arm_smmu_device *smmu = dev_get_drvdata(dev);
>>>>> +
>>>>> +       return clk_bulk_enable(smmu->num_clks, smmu->clks);
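
[For reference: the suspend side of the patch mirrors the hunk above, and
the new callbacks get wired up through a dev_pm_ops structure. A minimal
sketch, reusing the smmu->num_clks/smmu->clks fields from the hunk; the
arm_smmu_pm_suspend/arm_smmu_pm_resume names for the sleep callbacks are
assumptions here:

     static int __maybe_unused arm_smmu_runtime_suspend(struct device *dev)
     {
         struct arm_smmu_device *smmu = dev_get_drvdata(dev);

         /* Gate the bulk clocks parsed from DT (assumes v13 field names) */
         clk_bulk_disable(smmu->num_clks, smmu->clks);

         return 0;
     }

     static const struct dev_pm_ops arm_smmu_pm_ops = {
         SET_SYSTEM_SLEEP_PM_OPS(arm_smmu_pm_suspend, arm_smmu_pm_resume)
         SET_RUNTIME_PM_OPS(arm_smmu_runtime_suspend,
                            arm_smmu_runtime_resume, NULL)
     };
]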
>>>>
>>>>
>>>> If there's a power domain being automatically switched by genpd 
>>>> then we need
>>>> a reset here because we may have lost state entirely. Since I 
>>>> remembered the
>>>> otherwise-useless GPU SMMU on Juno is in a separate power domain, I 
>>>> gave it
>>>> a poking via sysfs with some debug stuff to dump sCR0 in these 
>>>> callbacks,
>>>> and the problem is clear:
>>>>
>>>> ...
>>>> [    4.625551] arm-smmu 2b400000.iommu: genpd_runtime_suspend()
>>>> [    4.631163] arm-smmu 2b400000.iommu: arm_smmu_runtime_suspend: 0x00201936
>>>> [    4.637897] arm-smmu 2b400000.iommu: suspend latency exceeded, 6733980 ns
>>>> [   21.566983] arm-smmu 2b400000.iommu: genpd_runtime_resume()
>>>> [   21.584796] arm-smmu 2b400000.iommu: arm_smmu_runtime_resume: 0x00220101
>>>> [   21.591452] arm-smmu 2b400000.iommu: resume latency exceeded, 6658020 ns
>>>> ...
>>>
>>> Qualcomm SoCs have retention enabled for SMMU registers so they don't
>>> lose state.
>>> ...
>>> [  256.013367] arm-smmu b40000.arm,smmu: arm_smmu_runtime_suspend SCR0 = 0x201e36
>>> [  256.019160] arm-smmu b40000.arm,smmu: arm_smmu_runtime_resume SCR0 = 0x201e36
>>> [  256.027368] arm-smmu b40000.arm,smmu: arm_smmu_runtime_suspend SCR0 = 0x201e36
>>> [  256.036786] arm-smmu b40000.arm,smmu: arm_smmu_runtime_resume SCR0 = 0x201e36
>>> ...
>>>
>>> However, after adding arm_smmu_device_reset() in runtime_resume() I
>>> observe some performance degradation when I kill an instance of
>>> 'kmscube' and start it again: the launch time is noticeably longer
>>> with the arm_smmu_device_reset() change.
>>> Could this be because of frequent TLB invalidation and sync?
>
> Probably. Plus the reset procedure is a big chunk of MMIO accesses, 
> which for a non-trivial SMMU configuration probably isn't negligible 
> in itself. Unfortunately, unless you know for absolutely certain that 
> you don't need to do that reset, you do.
>
>> Some more information that I gathered.
>> On Qcom SoCs, besides the register retention, the TCU invalidates the
>> TLB cache on a CX power-collapse exit, which is the system-wide
>> suspend case. The arm-smmu software is not aware of this CX power
>> collapse / auto-invalidation.
>>
>> So wouldn't doing explicit TLB invalidations during runtime resume be
>> detrimental to performance?
>
> Indeed it would be, but resuming with TLBs full of random 
> valid-looking junk is even more so.
>
>> I have one more doubt here -
>> We do a runtime power cycle around arm_smmu_map/unmap() too.
>> During map/unmap we selectively do TLB maintenance (either
>> tlb_sync or tlb_add_flush), but with runtime pm we want to do
>> TLBIALL*. Is that a problem?
>
> It's technically redundant to do both, true, but as we've covered in 
> previous rounds of discussion it's very difficult to know *which* one 
> is sufficient at any given time, so in order to make progress for now 
> I think we have to settle for doing both.
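
[For context on doing both: arm_smmu_device_reset() already issues a
global TLB invalidate as part of the reset sequence, on top of the
per-mapping tlb_add_flush/tlb_sync maintenance. Roughly, from the
arm-smmu.c of this era:

     void __iomem *gr0_base = ARM_SMMU_GR0(smmu);

     /* Invalidate the TLB, just in case */
     writel_relaxed(0, gr0_base + ARM_SMMU_GR0_TLBIALLH);
     writel_relaxed(0, gr0_base + ARM_SMMU_GR0_TLBIALLNSNH);
]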

Thanks Robin. I will respin the patches as Tomasz also suggested:

arm_smmu_pm_resume() (the system sleep resume callback) will look like:

     static int __maybe_unused arm_smmu_pm_resume(struct device *dev)
     {
         if (pm_runtime_suspended(dev))
             return 0;

         return arm_smmu_runtime_resume(dev);
     }

and,
arm_smmu_runtime_resume() will have arm_smmu_device_reset().
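
[A sketch of what the respun runtime resume path would then look like,
assuming arm_smmu_device_reset() keeps its current signature:

     static int __maybe_unused arm_smmu_runtime_resume(struct device *dev)
     {
         struct arm_smmu_device *smmu = dev_get_drvdata(dev);
         int ret;

         ret = clk_bulk_enable(smmu->num_clks, smmu->clks);
         if (ret)
             return ret;

         /* Registers/TLBs may have lost state across a genpd power cycle */
         arm_smmu_device_reset(smmu);

         return 0;
     }
]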

Best regards
Vivek
>
> Robin.


