[PATCH v5 02/16] drm/msm/dpu: fix error condition in dpu_encoder_virt_atomic_mode_set

Abhinav Kumar quic_abhinavk at quicinc.com
Tue Jul 16 22:24:58 UTC 2024



On 7/13/2024 2:49 AM, Dmitry Baryshkov wrote:
> On Sat, 13 Jul 2024 at 03:25, Abhinav Kumar <quic_abhinavk at quicinc.com> wrote:
>>
>>
>>
>> On 7/12/2024 4:11 PM, Dmitry Baryshkov wrote:
>>> On Fri, 12 Jul 2024 at 22:41, Abhinav Kumar <quic_abhinavk at quicinc.com> wrote:
>>>> On 6/24/2024 2:13 PM, Dmitry Baryshkov wrote:
>>>>> The commit b954fa6baaca ("drm/msm/dpu: Refactor rm iterator") removed
>>>>> the zero-init of the hw_ctl array, but didn't change the error
>>>>> condition, which checked for hw_ctl[i] being NULL. Use an index check
>>>>> instead.
>>>>>
>>>>> Fixes: b954fa6baaca ("drm/msm/dpu: Refactor rm iterator")
>>>>> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov at linaro.org>
>>>>> ---
>>>>>     drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 2 +-
>>>>>     1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>>>>> index 5d205e09cf45..7613005fbfea 100644
>>>>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>>>>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>>>>> @@ -1186,7 +1186,7 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
>>>>>                         return;
>>>>>                 }
>>>>>
>>>>> -             if (!hw_ctl[i]) {
>>>>> +             if (i >= num_ctl) {
>>>>
>>>> This is not very clear to me.
>>>>
>>>> How will we hit this condition? I don't see i going beyond 1 in this
>>>> loop, and num_ctl shouldn't go beyond that either.
>>>
>>> Why? The driver doesn't support flushing through a single CTL, so
>>> num_ctl = num_intf.
>>>
>>
>> num_ctl will be equal to num_intf, but what I was trying to understand
>> here is that, previously, this condition made sure we have a CTL
>> assigned for each physical encoder, which is actually a requirement
>> for the display pipeline. If we assigned a hw_ctl for one phys encoder
>> and not the other, it's an error.
>>
>> But on closer look, I think even your check will catch that.
>>
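
For readers following along, here is a trimmed paraphrase of the loop
the hunk above patches (unrelated checks omitted); it shows the index
check firing for any phys encoder left without a CTL, including the
"one assigned, one missing" case:

	num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
			drm_enc->base.id, DPU_HW_BLK_CTL,
			hw_ctl, ARRAY_SIZE(hw_ctl));

	for (i = 0; i < dpu_enc->num_phys_encs; i++) {
		struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i];

		/* The RM handed back num_ctl CTLs; any phys encoder
		 * past that count has no CTL assigned, so bail out.
		 */
		if (i >= num_ctl) {
			DPU_ERROR_ENC(dpu_enc,
				"no ctl block assigned at idx: %d\n", i);
			return;
		}

		phys->hw_ctl = to_dpu_hw_ctl(hw_ctl[i]);
	}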
>>
>>>>
>>>> Would it just be easier to bring back the NULL assignment at the top?
>>>>
>>>> struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC] = { NULL };
>>>>
>>>> I also see the same issue for other blocks such as hw_dsc, hw_lm
>>>
>>> The other blocks loop properly up to num_resource. I'd prefer to drop
>>> the NULL init from the DSPP init and use num_dspp instead.
>>>
>>
>> Overall, I think the purpose of NULL init was to make sure that before
>> we call to_dpu_hw_***() macros, we have a valid hw_*.
>>
>> We could use either num_* or hw_*, as both are returned by the RM.
>>
>> One side-note here: with a properly NULL'd hw_ctl, the consumers of
>> hw_ctl should also be able to check for NULL correctly.
> 
> The problem with the NULL checks is that it's too tempting to perform
> a NULL check after the to_dpu_hw_ctl() conversion. However, it's not
> safe to pass a NULL pointer to such conversion macros: there is no
> guarantee that the conversion will return NULL if it gets passed a
> NULL pointer.
> 

Yes, that's why these checks are there before calling to_dpu_hw_ctl(),
to make sure we don't pass NULL there.
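
(For context: to_dpu_hw_ctl() is a container_of() wrapper, so the
conversion is pure pointer arithmetic. A short sketch of why feeding
it NULL is unsafe:)

	/* the macro, as defined in dpu_hw_ctl.h */
	#define to_dpu_hw_ctl(x) container_of(x, struct dpu_hw_ctl, base)

	/*
	 * container_of() subtracts offsetof(struct dpu_hw_ctl, base)
	 * from its argument. If 'base' ever sits at a non-zero offset,
	 * a NULL input produces a bogus non-NULL pointer, and a later
	 * if (!phys->hw_ctl) check passes silently. So the NULL check
	 * has to happen on hw_ctl[i] *before* the conversion.
	 * ('maybe_null_blk' below is just a stand-in name.)
	 */
	struct dpu_hw_ctl *ctl = to_dpu_hw_ctl(maybe_null_blk);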

>> So, for example, the dpu_encoder_phys layers use if (!phys->hw_ctl)
>> checks, but today we do not set phys->hw_ctl to NULL correctly.
>>
>> Do you think that instead of the return statements, we should do
>> something like
>>
>> dpu_enc->hw_ctl = i < num_ctl ?
>>          to_dpu_hw_ctl(hw_ctl[i]) : NULL;
> 
> Yeah, why not.
> 
> Generally, I think we should stop storing state-related data in
> non-state structures. Hopefully I'll have time for that at some point
> later on.
> 
>>
>>
>> But this will need the NULL init back.
> 
> It doesn't, as you have the comparison.
> 

Ack, yes, that's true. Let's do it this way then. I am fine with that.
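
A sketch of the shape we converged on (shown with phys->hw_ctl, as in
the existing consumers; exactly where the assignment lands is a
detail):

	/* No zero-init of hw_ctl[] needed: the index decides validity. */
	phys->hw_ctl = i < num_ctl ? to_dpu_hw_ctl(hw_ctl[i]) : NULL;

	/* ...and consumers keep their existing guards, e.g.: */
	if (!phys->hw_ctl) {
		DPU_ERROR("invalid ctl\n");
		return;
	}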

>>
>>>>
>>>>>                         DPU_ERROR_ENC(dpu_enc,
>>>>>                                 "no ctl block assigned at idx: %d\n", i);
>>>>>                         return;
>>>>>
>>>
>>>
>>>
> 
> 
> 

