[PATCH 2/4] drm/msm/dsi: Fix compressed word count calculation

Abhinav Kumar quic_abhinavk at quicinc.com
Tue May 9 04:36:25 UTC 2023



On 5/8/2023 5:47 PM, Dmitry Baryshkov wrote:
> On 09/05/2023 03:45, Abhinav Kumar wrote:
>>
>>
>> On 5/8/2023 4:27 PM, Dmitry Baryshkov wrote:
>>> On 08/05/2023 23:09, Abhinav Kumar wrote:
>>>>
>>>>
>>>> On 5/3/2023 1:26 AM, Dmitry Baryshkov wrote:
>>>>> On 03/05/2023 04:19, Jessica Zhang wrote:
>>>>>> Currently, word count is calculated using slice_count. This is
>>>>>> incorrect as downstream uses slice per packet, which is different
>>>>>> from slice_count.
>>>>>>
>>>>>> Slice count represents the number of soft slices per interface,
>>>>>> and its value will not always match that of slice per packet. For
>>>>>> example, it is possible to have cases where there are multiple
>>>>>> soft slices per interface but the panel specifies only one slice
>>>>>> per packet.
>>>>>>
>>>>>> Thus, use the default value of one slice per packet and remove
>>>>>> slice_count from the word count calculation.
>>>>>>
>>>>>> Fixes: bc6b6ff8135c ("drm/msm/dsi: Use DSC slice(s) packet size to compute word count")
>>>>>> Signed-off-by: Jessica Zhang <quic_jesszhan at quicinc.com>
>>>>>> ---
>>>>>>   drivers/gpu/drm/msm/dsi/dsi_host.c | 9 ++++++++-
>>>>>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
>>>>>> index 35c69dbe5f6f..b0d448ffb078 100644
>>>>>> --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
>>>>>> +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
>>>>>> @@ -996,7 +996,14 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
>>>>>>           if (!msm_host->dsc)
>>>>>>               wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
>>>>>>           else
>>>>>> -            wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count + 1;
>>>>>> +            /*
>>>>>> +             * When DSC is enabled, WC = slice_chunk_size * slice_per_packet + 1.
>>>>>> +             * Currently, the driver only supports default value of slice_per_packet = 1
>>>>>> +             *
>>>>>> +             * TODO: Expand drm_panel struct to hold slice_per_packet info
>>>>>> +             *       and adjust DSC math to account for slice_per_packet.
>>>>>
>>>>> slice_per_packet is not a part of the standard DSC, so I'm not sure 
>>>>> how that can be implemented. And definitely we should not care 
>>>>> about the drm_panel here. It should be either a part of 
>>>>> drm_dsc_config, or mipi_dsi_device.
>>>>>
>>>>
>>>> This is not correct.
>>>>
>>>> It is part of the DSI standard (not DSC standard). Please refer to 
>>>> Figure 40 "One Line Containing One Packet with Data from One or More 
>>>> Compressed Slices" and Figure 41 "One Line Containing More than One 
>>>> Compressed Pixel Stream Packet".
>>>
>>> I have reviewed section 8.8.24 and Annex D of the DSI standard.
>>>
>>> It is not clear to me if we can get away with always using 
>>> slice_per_packet = 1. From the DSI sink's point of view, what is the 
>>> difference between Fig. 40(b) and Fig. 41?
>>>
>>
>> The difference is that in fig 40(b) there is only one packet of data 
>> (check closely, there is only one header).
>>
>> In fig 41, there are multiple headers, so it's showing multiple packets.
> 
> Yes, this is the description of the pictures. I mean what is the 
> functional difference?
> 

From the same section of the spec:

"This is one method that can segregate slice-widths when the receiver 
has multiple instantiations of decoders and this packet structure is 
used to identify slice-width boundaries"

From this it seems that when there are multiple decoders, having 
individual packets per slice helps to identify the slice boundaries 
correctly via the header/checksum boundaries. How exactly this works on 
the panel side needs more investigation, but that is outside the scope 
of this discussion. Overall though, unless explicitly requested by the 
panel, one slice_per_pkt is more optimal as it reduces packet overhead.
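To make the WC difference between the two figures concrete, here is a 
rough standalone sketch (plain C, not driver code; the slice_chunk_size 
and slice_count values are made-up examples) of what the host ends up 
programming in each case, using the formula from this patch:

#include <stdio.h>

/*
 * Rough illustration only, not driver code.  It evaluates the WC
 * formula discussed in this patch for the two packing schemes above:
 * Fig. 40(b) (all slices of a line in one compressed pixel stream
 * packet) vs. Fig. 41 with one slice per packet.
 */
int main(void)
{
	unsigned int slice_chunk_size = 540;	/* example: bytes per slice chunk */
	unsigned int slice_count = 2;		/* example: soft slices per interface */

	/* Fig. 40(b): every slice of the line carried in one packet */
	unsigned int wc_one_pkt_per_line = slice_chunk_size * slice_count + 1;

	/* Fig. 41 with slice_per_pkt = 1: one packet per slice, so the
	 * host sends slice_count packets per line, each with this WC */
	unsigned int wc_one_slice_per_pkt = slice_chunk_size + 1;

	printf("one packet per line:  WC = %u, packets/line = 1\n",
	       wc_one_pkt_per_line);
	printf("one slice per packet: WC = %u, packets/line = %u\n",
	       wc_one_slice_per_pkt, slice_count);
	return 0;
}

In both cases the total compressed payload per line is the same; what 
differs is how many long-packet headers/checksums frame it, which is 
the overhead trade-off mentioned above.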

>>
>>> Are there any known panels that require slice_per_packet != 1? If so, 
>>> we will have to implement support for such configurations.
>>>
>>
>> Unless explicitly requested by the panel, we can use 1. From the 
>> device tree files of the panels we support downstream, I do see 
>> qcom,mdss-dsc-slice-per-pkt set to 2 for some panels. I don't know 
>> whether those panels would also work with 1. I really don't think any 
>> of the DSC panels working with MSM were upstreamed.
>>
>> I think the one Jessica will be posting (and merging) will be the 
>> first, and that one works with 1.
>>
>> If there are other panels in the works which require 2 slice_per_pkt, 
>> I would wait to first see them on the list and if they cannot work 
>> with 1 slice_per_pkt, add support for that.
> 
> If the slice_per_pkt change is localized - touching only a few lines in 
> DSI or in msm_helpers - then it is fine to add it later. Otherwise we 
> should consider having that from the beginning (but hardcoded to 1 for 
> now).
> 

Yes, from what I see downstream, it touches only a few lines in DSI and 
the MSM helper. So it should not be hard from the MSM standpoint even in 
the future, but the core changes would need agreement from the core 
maintainers.

>>
>>>> That section has the details about this. So I still stand by my 
>>>> point that this should be in the drm_panel.
>>>
>>> Note, the driver doesn't use drm_panel directly. So slices_per_packet 
>>> should go to mipi_dsi_device instead (which in turn can be filled 
>>> from e.g. drm_panel or from any other source).
>>>
>>>>
>>>>>> +             */
>>>>>> +            wc = msm_host->dsc->slice_chunk_size + 1;
>>>>>>           dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
>>>>>>               DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |
>>>>>>
>>>>>
>>>
> 

