[Freedreno] [PATCH] drm/msm: Only enable A6xx LLCC code on A6xx
Sai Prakash Ranjan
saiprakash.ranjan at codeaurora.org
Tue Jan 12 06:49:27 UTC 2021
Hi Jordan,
On 2021-01-11 21:41, Jordan Crouse wrote:
> On Mon, Jan 11, 2021 at 09:54:12AM +0530, Sai Prakash Ranjan wrote:
>> Hi Rob,
>>
>> On 2021-01-08 22:16, Rob Clark wrote:
>> >On Fri, Jan 8, 2021 at 6:05 AM Sai Prakash Ranjan
>> ><saiprakash.ranjan at codeaurora.org> wrote:
>> >>
>> >>On 2021-01-08 19:09, Konrad Dybcio wrote:
>> >>>> Konrad, can you please test the change below without your change?
>> >>>
>> >>> This makes no difference; a BUG still happens. We're still calling
>> >>> to_a6xx_gpu on ANY device that's probed! Too bad it won't turn my
>> >>> A330 into an A640...
>> >>>
>> >>> Also, relying on disabling LLCC in the config is out of the
>> >>> question, as it prevents the arm32 kernel from compiling with
>> >>> DRM/MSM and simply removes the functionality on devices with an
>> >>> a6xx (unless somebody removes the dependency on it, which in my
>> >>> opinion is even worse and will cause more problems for developers!).
>> >>>
>> >>
>> >>Disabling LLCC is not the suggestion; I was under the impression that
>> >>it was the cause of the SMMU bug here. Anyway, the check for the llc
>> >>slice when llcc is disabled is not correct either. I will send a patch
>> >>for that as well.
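
For reference, the slice check fix I have in mind looks roughly like the
below (untested sketch; it relies on llcc_slice_getd() returning an
ERR_PTR rather than NULL when LLCC is compiled out or the slice is not
described in the qcom llcc driver; the a6xx_llc_slices_init()/llc_mmio
names and the "cx_mem" region follow my local patch and may change,
while llcc_slice_getd()/LLCC_GPU/LLCC_GPUHTW are the qcom llcc API):

static void a6xx_llc_slices_init(struct platform_device *pdev,
                                 struct a6xx_gpu *a6xx_gpu)
{
        a6xx_gpu->llc_slice = llcc_slice_getd(LLCC_GPU);
        a6xx_gpu->htw_llc_slice = llcc_slice_getd(LLCC_GPUHTW);

        /*
         * llcc_slice_getd() hands back an ERR_PTR when LLCC is disabled
         * or the slice is not described, so a NULL check here would
         * never fire; test with IS_ERR() instead.
         */
        if (IS_ERR(a6xx_gpu->llc_slice) && IS_ERR(a6xx_gpu->htw_llc_slice))
                return;

        a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx");
}
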
>> >>
>> >>> The bigger question is: how and why did that piece of code ever make
>> >>> it into adreno_gpu.c and not a6xx_gpu.c?
>> >>>
>> >>
>> >>My mistake, I will move it.
>> >
>> >Thanks. Since we don't have kernel-CI coverage for the GPU, and there
>> >probably isn't one person who has all the different supported devices
>> >(or enough hours in the day to test them all), it is probably
>> >better/safer to keep things in the backend code that is specific to a
>> >given generation.
>> >
>>
>> Agreed, I will post this change soon and will also introduce a feature
>> check, because we will need it for the iommu prot flag as per the
>> discussion here -
>> https://lore.kernel.org/lkml/20210108181830.GA5457@willie-the-truck/
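
For the feature check, what I have in mind is roughly the following
(sketch only; ADRENO_LLCC as a feature bit, a new "features" field in
struct adreno_info, and IOMMU_LLC as the prot bit are placeholder names
pending the outcome of that thread):

/* adreno_gpu.h (sketch): per-generation feature bits */
#define ADRENO_LLCC     BIT(0)  /* GPU buffers may be cached in system LLCC */

/* a6xx_gpu.c (sketch): only ask for cacheable mappings when supported */
if (adreno_gpu->info->features & ADRENO_LLCC)
        prot |= IOMMU_LLC;      /* placeholder prot bit from the lore thread */
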
>>
>> >>> To solve it in a cleaner way, I propose moving it to an
>> >>> a6xx-specific file, or, if it's going to be used with next-gen GPUs,
>> >>> perhaps managing calls to this code via an adreno quirk/feature in
>> >>> adreno_device.c. Now that I think about it, the A5xx GPMU
>> >>> enable/disable could probably be managed like that too, instead of
>> >>> using tons of if-statements for each GPU model that has it...
>> >>>
>> >>> While we're at it, do ALL (and I truly do mean ALL, including the
>> >>> low-end ones; this will be important later on) A6xx GPUs make use of
>> >>> that feature?
>> >>>
>> >>
>> >>I do not have a list of all A6xx GPUs with me currently, but from what
>> >>I know, the A618, A630, A640, and A650 have support for it.
>> >>
>> >
>> >From the PoV of bringing up a new a6xx, we should probably consider
>> >that some of them may not *yet* have LLCC enabled. I have an 8cx
>> >laptop, and once I find time to get the display working, the next step
>> >would be bringing up the a680... and I'd probably like to start without
>> >LLCC.
>> >
>>
>> Right, once I move the LLCC code into the a6xx-specific address space
>> creation, it will not be used unless LLCC slices for the GPU are
>> specified in the qcom llcc driver.
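
Concretely, the move I'm planning looks roughly like this (sketch only;
a6xx_create_address_space() would be wired up as the
.create_address_space hook in a6xx_gpu_funcs, and a6xx_llc_slices_init()
is the helper from the earlier sketch -- names may shift in the posted
patch):

/* a6xx_gpu.c (sketch): keep all LLCC handling in the a6xx backend */
static struct msm_gem_address_space *
a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
{
        struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
        struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);

        /*
         * This callback is only installed for a6xx, so to_a6xx_gpu()
         * is never applied to an a3xx/a4xx/a5xx container from the
         * generic adreno_gpu.c path. If the qcom llcc driver describes
         * no GPU slices, the slice handles stay as ERR_PTRs and LLCC
         * is simply never used (which also covers a680 bring-up).
         */
        a6xx_llc_slices_init(pdev, a6xx_gpu);

        return adreno_iommu_create_address_space(gpu, pdev);
}
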
>
> Right. The problem here was that we were assuming an a6xx container in
> generic code. Testing for the existence of LLCC is a different problem,
> but it is my understanding that if we set the attribute without LLCC
> enabled, it just gets ignored. Is that correct, Sai?
>
Yes, that is correct; I just confirmed it now with the LLCC team.
Thanks,
Sai
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation