[PATCH v4] drm/vkms: Add support to 1D gamma LUT
Arthur Grillo Queiroz Cabral
arthurgrillo at riseup.net
Mon Jul 3 19:11:34 UTC 2023
On 02/07/23 18:37, Maira Canal wrote:
> On 6/27/23 05:12, Pekka Paalanen wrote:
>> On Mon, 26 Jun 2023 14:35:25 -0300
>> Maira Canal <mairacanal at riseup.net> wrote:
>>
>>> Hi Pekka,
>>>
>>> On 6/26/23 05:17, Pekka Paalanen wrote:
>>>> On Sat, 24 Jun 2023 18:48:08 -0300
>>>> Maira Canal <mairacanal at riseup.net> wrote:
>>>>
>>>>> Hi Arthur,
>>>>>
>>>>> Thanks for working on this feature for the VKMS!
>>>>>
>>>>> On 6/21/23 16:41, Arthur Grillo wrote:
>>>>>> Support a 1D gamma LUT with interpolation for each color channel on the
>>>>>> VKMS driver. Add a check for the LUT length by creating
>>>>>> vkms_atomic_check().
>>>>>>
>>>>>> Tested with:
>>>>>> igt@kms_color@gamma
>>>>>> igt@kms_color@legacy-gamma
>>>>>> igt@kms_color@invalid-gamma-lut-sizes
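
For context, the per-channel lookup with linear interpolation that the commit
message describes boils down to something like the self-contained C sketch
below. The struct and helper names are illustrative only (the struct mirrors
the red/green/blue fields of the DRM uapi struct drm_color_lut); this is not
the actual VKMS code.

#include <stddef.h>
#include <stdint.h>

/* Mirrors the red/green/blue fields of the DRM uapi struct drm_color_lut. */
struct lut_entry {
	uint16_t red, green, blue;
};

/* Linear interpolation between two 16-bit samples; frac is 0..65535. */
static uint16_t lerp_u16(uint16_t a, uint16_t b, uint32_t frac)
{
	int64_t delta = (int64_t)b - (int64_t)a;

	return (uint16_t)(a + ((delta * frac) >> 16));
}

/*
 * Map a 16-bit red value through the LUT (lut_length >= 1). The same
 * logic is applied independently to the green and blue channels.
 */
static uint16_t apply_lut_red(const struct lut_entry *lut, size_t lut_length,
			      uint16_t value)
{
	uint64_t pos = (uint64_t)value * (lut_length - 1);
	size_t idx = pos >> 16;                  /* lower LUT entry */
	size_t next = idx + 1 < lut_length ? idx + 1 : idx;
	uint32_t frac = pos & 0xffff;            /* distance to the next entry */

	return lerp_u16(lut[idx].red, lut[next].red, frac);
}

The LUT-length check mentioned in the commit message would then be the part
living in the new vkms_atomic_check(), presumably rejecting gamma ramps whose
size does not match what the driver advertised via
drm_mode_crtc_set_gamma_size().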
>>>>>
>>>>> Could you also mention that this will make it possible to run the test
>>>>> igt@kms_plane@pixel-format?
>>>>>
>>>>> Also, you mentioned to me that the performance degraded with this new
>>>>> feature, but I wasn't able to see it while running the VKMS CI. I
>>>>> performed a couple of tests and I didn't see any significant performance
>>>>> issue.
>>>>>
>>>>> Could you please run a benchmark and share the results with us? This way
>>>>> we can attest that this new feature will not significantly affect the
>>>>> VKMS performance. It would be nice to have a brief summary of this
>>>>> benchmark in the commit message as well.
>>>>>
>>>>> Once you've attested that there isn't a performance issue and added those
>>>>> nits to the commit message, you can add my
>>>>>
>>>>> Reviewed-by: Maíra Canal <mairacanal at riseup.net>
>>>>>
>>>>> on the next version.
>>>>
>>>> Hi,
>>>>
>>>> performance testing is good indeed. As future work, could there be a
>>>> document describing how to test VKMS performance?
>>>
>>> I'll try to select a couple of more meaningful IGT tests to describe how
>>> to test the VKMS performance and also add a document describing how to
>>> run these tests.
>>>
>>> Recently, I added a VKMS must-pass testlist to IGT. This testlist
>>> tries to ensure that regressions will not be introduced into VKMS. But
>>> I failed to add documentation on the kernel side pointing to
>>> this new testlist... I'll also work on that.
>>>
>>>>
>>>> "I ran IGT at blah 100 times and it took xx seconds before and yy seconds
>>>> after" does not really give someone like me an idea of what was
>>>> actually measured. For example blending overhead increase could be
>>>> completely lost in opaque pixel copying noise if the test case has only
>>>> few pixels to blend, e.g. a cursor plane, not to mention the overhead
>>>> of launching an IGT test in the first place.
>>>
>>> About the IGT overhead, I don't know exactly how we could avoid it.
>>> Maybe by writing KUnit tests for VKMS's composition functions, such
>>> as blend(). Anyway, we would still have the overhead of the KUnit
>>> framework. I mean, whatever framework we choose, there'll be some overhead...
>>>
>>> Do you have any other ideas on how to test VKMS with less overhead?
>>
>> Maybe put the repeat loop and time measurement inside the code of a few
>> chosen IGT tests?
>>
>> So that it loops only the KMS programming and somehow ensures VKMS has
>> finished processing each update before doing the next cycle. I presume
>> VKMS does not have a timer-based refresh cycle that might add CPU idle
>> time? Writeback should be included in the measurement too, but inspecting
>> writeback results should not.
>>
>> Once all that is in place, then each performance test needs to use
>> appropriate operations. E.g. if testing blending performance, use
>> almost full-screen planes.
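
As a rough illustration of that idea (the commit helper below is a
hypothetical stand-in, not an existing IGT API), the timed region would
contain only the blocking commits:

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/*
 * Hypothetical stand-in for the real test code: one blocking atomic
 * commit of almost-full-screen planes that only returns once VKMS has
 * finished processing the update (including writeback). Inspecting the
 * writeback result stays outside the timed loop.
 */
static void commit_full_screen_planes_blocking(void)
{
	/* e.g. a blocking drmModeAtomicCommit() plus a writeback fence wait */
}

static double elapsed_s(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	const int iterations = 1000;
	struct timespec start, end;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (int i = 0; i < iterations; i++)
		commit_full_screen_planes_blocking();
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("%d commits in %.3f s\n", iterations, elapsed_s(start, end));
	return 0;
}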
>
> ^ Grillo, any chance you could work on something like this for the
> performance measurements?
>
Yeah, I can do something like this. Sorry for the delay; I think I will have
those measurements this week.
~Grillo
>>
>> What's the overhead of the KUnit framework? Can you not do the same there,
>> i.e. put the repeat loop and time measurement inside the test to cover only
>> the interesting code?
>>
>> Unit-testing the composition function performance might be ideal.
>>
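
A KUnit variant of the same idea would wrap only the composition call in a
ktime measurement. A minimal sketch, where run_one_composition() is just a
placeholder for a call into VKMS's real composition path (e.g. blend()):

#include <kunit/test.h>
#include <linux/ktime.h>
#include <linux/module.h>

/* Placeholder for the code under test, e.g. blending two full-size planes. */
static void run_one_composition(void)
{
}

static void vkms_composition_perf_test(struct kunit *test)
{
	const int iterations = 1000;
	ktime_t start, end;

	start = ktime_get();
	for (int i = 0; i < iterations; i++)
		run_one_composition();
	end = ktime_get();

	kunit_info(test, "%d compositions took %lld ns\n",
		   iterations, ktime_to_ns(ktime_sub(end, start)));
}

static struct kunit_case vkms_perf_test_cases[] = {
	KUNIT_CASE(vkms_composition_perf_test),
	{}
};

static struct kunit_suite vkms_perf_test_suite = {
	.name = "vkms_composition_perf",
	.test_cases = vkms_perf_test_cases,
};

kunit_test_suite(vkms_perf_test_suite);

MODULE_LICENSE("GPL");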
>
> I'll try to work on some unit tests for, at least, the composition
> code of VKMS. I believe they will be very valuable for
> maintenance and performance evaluation.
>
> Thanks for your valuable inputs on the VKMS!
>
> Best Regards,
> - Maíra
>
>> Depending on the type of test, if the CRTC mode and planes are big
>> enough, maybe there is no need to repeat at all. But testing presumably
>> fast things like moving a cursor plane will likely need repeating in
>> order to produce stable numbers.
>>
>>
>> Thanks,
>> pq
>>
>>>
>>> Best Regards,
>>> - Maíra
>>>
>>>>
>>>> Something that would guide new developers in running meaningful
>>>> benchmarks would be nice.
>>>>
>>>> Should e.g. IGT have explicit (VKMS) performance tests that need to be
>>>> run manually, since evaluation of the result is not feasible
>>>> automatically? Or a benchmark mode in correctness tests that would run
>>>> the identical operation N times and measure the time before checking
>>>> for correctness?
>>>>
>>>> The correctness verification in IGT tests, if done by image comparison
>>>> (which it undoubtedly will need to be in the future), may dominate the
>>>> CPU run-time measurements if included.
>>>>
>>>>
>>>> Thanks,
>>>> pq
>>