[PATCH v2 1/2] drm/msm/adreno: Introduce ADRENO_QUIRK_NO_SYSCACHE
Rob Clark
robdclark at gmail.com
Wed Dec 4 21:50:36 UTC 2024
On Wed, Dec 4, 2024 at 1:47 PM Rob Clark <robdclark at gmail.com> wrote:
>
> On Wed, Dec 4, 2024 at 11:04 AM Akhil P Oommen <quic_akhilpo at quicinc.com> wrote:
> >
> > On 12/1/2024 10:06 PM, Rob Clark wrote:
> > > On Sat, Nov 30, 2024 at 12:30 PM Akhil P Oommen
> > > <quic_akhilpo at quicinc.com> wrote:
> > >>
> > >> On 11/30/2024 7:01 PM, Konrad Dybcio wrote:
> > >>> On 25.11.2024 5:33 PM, Akhil P Oommen wrote:
> > >>>> There are a few chipsets which don't have a system cache, a.k.a. LLC.
> > >>>> Currently, the assumption in the driver is that system cache
> > >>>> availability correlates with the presence of GMU or RPMH, which
> > >>>> is not true. For instance, Snapdragon 6 Gen 1 has RPMH and a GPU
> > >>>> with a full-blown GMU, but does not have a system cache. So,
> > >>>> introduce an Adreno quirk flag to check support for the system
> > >>>> cache instead of relying on the gmu_wrapper flag.
> > >>>>
> > >>>> Signed-off-by: Akhil P Oommen <quic_akhilpo at quicinc.com>
> > >>>> ---
> > >>>>  drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 3 ++-
> > >>>>  drivers/gpu/drm/msm/adreno/a6xx_gpu.c     | 7 +------
> > >>>>  drivers/gpu/drm/msm/adreno/adreno_gpu.h   | 1 +
> > >>>>  3 files changed, 4 insertions(+), 7 deletions(-)
> > >>>>
> > >>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
> > >>>> index 0c560e84ad5a..5e389f6b8b8a 100644
> > >>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
> > >>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
> > >>>> @@ -682,6 +682,7 @@ static const struct adreno_info a6xx_gpus[] = {
> > >>>>  		},
> > >>>>  		.gmem = (SZ_128K + SZ_4K),
> > >>>>  		.inactive_period = DRM_MSM_INACTIVE_PERIOD,
> > >>>> +		.quirks = ADRENO_QUIRK_NO_SYSCACHE,
> > >>>>  		.init = a6xx_gpu_init,
> > >>>>  		.zapfw = "a610_zap.mdt",
> > >>>>  		.a6xx = &(const struct a6xx_info) {
> > >>>> @@ -1331,7 +1332,7 @@ static const struct adreno_info a7xx_gpus[] = {
> > >>>>  		},
> > >>>>  		.gmem = SZ_128K,
> > >>>>  		.inactive_period = DRM_MSM_INACTIVE_PERIOD,
> > >>>> -		.quirks = ADRENO_QUIRK_HAS_HW_APRIV,
> > >>>> +		.quirks = ADRENO_QUIRK_HAS_HW_APRIV | ADRENO_QUIRK_NO_SYSCACHE,
> > >>>>  		.init = a6xx_gpu_init,
> > >>>>  		.zapfw = "a702_zap.mbn",
> > >>>>  		.a6xx = &(const struct a6xx_info) {
> > >>>
> > >>> +a619_holi
> > >>>
> > >>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > >>>> index 019610341df1..a8b928d0f320 100644
> > >>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > >>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > >>>> @@ -1863,10 +1863,6 @@ static void a7xx_llc_activate(struct a6xx_gpu *a6xx_gpu)
> > >>>>  
> > >>>>  static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu)
> > >>>>  {
> > >>>> -	/* No LLCC on non-RPMh (and by extension, non-GMU) SoCs */
> > >>>> -	if (adreno_has_gmu_wrapper(&a6xx_gpu->base))
> > >>>> -		return;
> > >>>> -
> > >>>>  	llcc_slice_putd(a6xx_gpu->llc_slice);
> > >>>>  	llcc_slice_putd(a6xx_gpu->htw_llc_slice);
> > >>>>  }
> > >>>> @@ -1876,8 +1872,7 @@ static void a6xx_llc_slices_init(struct platform_device *pdev,
> > >>>>  {
> > >>>>  	struct device_node *phandle;
> > >>>>  
> > >>>> -	/* No LLCC on non-RPMh (and by extension, non-GMU) SoCs */
> > >>>> -	if (adreno_has_gmu_wrapper(&a6xx_gpu->base))
> > >>>> +	if (a6xx_gpu->base.info->quirks & ADRENO_QUIRK_NO_SYSCACHE)
> > >>>>  		return;
> > >>>
> > >>> I think A612 is the "quirky" one here... it has some sort of a GMU,
> > >>> but we're choosing not to implement it. Maybe a check for
> > >>>
> > >>> if (adreno_has_gmu_wrapper && !adreno_is_a612)
> > >>>
> > >>> would be clearer here, with a comment noting that RGMU support is
> > >>> not implemented.
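> > >>>
> > >>> Spelled out, just as a sketch (adreno_is_a612() is assumed to come
> > >>> with this series):
> > >>>
> > >>> 	/*
> > >>> 	 * No LLCC on non-RPMh SoCs. A612 only reports as a GMU
> > >>> 	 * wrapper because its RGMU is not implemented; its SoC
> > >>> 	 * does have a system cache.
> > >>> 	 */
> > >>> 	if (adreno_has_gmu_wrapper(&a6xx_gpu->base) &&
> > >>> 	    !adreno_is_a612(&a6xx_gpu->base))
> > >>> 		return;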
> > >>>
> > >>>
> > >>>
> > >>> But going further, I'm a bit concerned about dt-bindings... If we
> > >>> implement RGMU on the driver side in the future, that will require
> > >>> DT changes, which will make the currently proposed description
> > >>> invalid.
> > >>>
> > >>> I think a better angle would be to add an adreno_has_rgmu() function
> > >>> with a qcom,adreno-rgmu compatible and plumb it correctly from the
> > >>> get-go.
> > >>>
> > >>> This way, we can avoid the syscache quirk as well.
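> > >>>
> > >>> A rough, untested sketch (assuming the RGMU would be referenced via
> > >>> the GPU node's existing qcom,gmu phandle and carry the proposed
> > >>> qcom,adreno-rgmu compatible):
> > >>>
> > >>> static bool adreno_has_rgmu(struct adreno_gpu *adreno_gpu)
> > >>> {
> > >>> 	struct device_node *node;
> > >>> 	bool ret;
> > >>>
> > >>> 	/* the (R)GMU node is pointed at by the GPU's qcom,gmu phandle */
> > >>> 	node = of_parse_phandle(adreno_gpu->base.pdev->dev.of_node,
> > >>> 				"qcom,gmu", 0);
> > >>> 	ret = of_device_is_compatible(node, "qcom,adreno-rgmu");
> > >>> 	of_node_put(node);
> > >>>
> > >>> 	return ret;
> > >>> }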
> > >>>
> > >>
> > >> I am aware of at least one GPU, Adreno 710, which doesn't have a
> > >> syscache but does have a proper GMU. And I don't see any reason why
> > >> there couldn't be another one in the future to save silicon area. So,
> > >> a quirk flag doesn't seem so bad in this case.
> > >>
> > >> The correct way to avoid this quirk flag is by making the LLCC driver
> > >> return a proper error so the absence of a syscache can be detected.
> > >> Currently, it just returns -EPROBE_DEFER, which puts the driver in an
> > >> infinite probe loop.
> > >
> > > Hmm, this seems solvable? llcc has a node in the dt, so it should be
> > > able to tell the difference between not existing and not being probed
> > > yet. Maybe something like: initialize drv_data to NULL instead of
> > > -EPROBE_DEFER, and then in the various entry points do
> > > "if (!drv_data) return not_probed_helper();", where the helper checks
> > > whether a compatible node exists in the dt?
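> > >
> > > Rough sketch of what I mean (untested; not_probed_helper() is a
> > > hypothetical name, and I'm assuming the driver's existing
> > > qcom_llcc_of_match table in llcc-qcom.c):
> > >
> > > static struct llcc_drv_data *drv_data;	/* NULL until probe */
> > >
> > > static int not_probed_helper(void)
> > > {
> > > 	struct device_node *np;
> > >
> > > 	/* no compatible node in dt -> this SoC simply has no LLCC */
> > > 	np = of_find_matching_node(NULL, qcom_llcc_of_match);
> > > 	if (!np)
> > > 		return -ENODEV;
> > >
> > > 	/* node exists, the driver just hasn't probed yet */
> > > 	of_node_put(np);
> > > 	return -EPROBE_DEFER;
> > > }
> > >
> > > struct llcc_slice_desc *llcc_slice_getd(u32 uid)
> > > {
> > > 	if (!drv_data)
> > > 		return ERR_PTR(not_probed_helper());
> > >
> > > 	/* ... existing slice lookup unchanged ... */
> > > }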
> >
> > Sounds like that would work. Can we explore that separately?
> >
> > I am a bit worried about adding another subsystem's patch to this
> > series. That might delay it by weeks.
>
> I don't think there is a dependency between the two, so it shouldn't
> delay anything. We can just merge the first patch in this series as
> it is and drop the second. And the llcc change is a legit bug fix;
> IMO, -EPROBE_DEFER is the incorrect return value when the device is
> not present.
Actually, that wasn't _quite_ correct, but the idea still stands.
Re-send the second patch, but without the ADRENO_QUIRK_NO_SYSCACHE
parts, and drop the first.
In parallel, send the llcc fix. There is no compile-time dependency,
so they can go through different trees.
BR,
-R
>
> BR,
> -R
>
> > -Akhil
> >
> > >
> > > BR,
> > > -R
> > >
> > >> Agree about the dt binding suggestion. I will define a new compatible
> > >> string for rgmu.
> > >>
> > >> -Akhil.
> > >>
> > >>> Konrad
> > >>
> >