[Intel-xe] [PATCH 1/7] drm/xe: Add basic unit tests for rtp

Lucas De Marchi lucas.demarchi at intel.com
Mon Apr 3 13:02:30 UTC 2023


On Mon, Apr 03, 2023 at 09:38:01AM +0200, Michal Wajdeczko wrote:
>
>
>On 02.04.2023 23:54, Lucas De Marchi wrote:
>> On Sun, Apr 02, 2023 at 01:17:50PM +0200, Michal Wajdeczko wrote:
>>>
>>>
>>> On 02.04.2023 03:34, Lucas De Marchi wrote:
>>>> On Sat, Apr 01, 2023 at 08:26:24PM +0200, Michał Winiarski wrote:
>>>>> On Sat, Apr 01, 2023 at 01:54:59AM -0700, Lucas De Marchi wrote:
>>>>>> On Fri, Mar 31, 2023 at 08:34:21AM -0600, Lucas De Marchi wrote:
>>>>>> > On Mon, Mar 27, 2023 at 07:56:01PM +0200, Michał Winiarski wrote:
>>>>>> > > On Tue, Mar 21, 2023 at 03:05:21PM -0700, Lucas De Marchi wrote:
>>>>>> > > > +EXPORT_SYMBOL(xe_rtp_process_incompat_types);
>>>>>> > > > diff --git a/drivers/gpu/drm/xe/tests/xe_rtp_test.c
>>>>>> > > > b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
>>>>>> > > > new file mode 100644
>>>>>> > > > index 000000000000..4a12aad3f759
>>>>>> > > > --- /dev/null
>>>>>> > > > +++ b/drivers/gpu/drm/xe/tests/xe_rtp_test.c
>>>>>> > >
>>>>>> > > There's no need to have this as a separate file.
>>>>>> > > We can move the contents of tests/xe_rtp.c into tests/xe_rtp_test.c
>>>>>> > > and include tests/xe_rtp_test.c directly in xe_rtp.c
>>>>>> >
>>>>>> > this is not how the tests are currently structured.
>>>>>> >
>>>>>> > xe_rtp.c includes tests/xe_rtp.c. The latter can be seen as "an
>>>>>> > extension of that file, exporting whatever entrypoint is needed
>>>>>> > for the real test". They are then part of the xe.ko module.
>>>>>> >
>>>>>> > tests/xe_rtp_test.c is the one in the separate test module, and contains
>>>>>> > the integration with kunit.
>>>>>>
>>>>>> but it's possible to keep everything in the separate module and not
>>>>>> need the include of tests/xe_rtp.c. I did that in v2.
>>>>>>
>>>>>> Lucas De Marchi
>>>>>
>>>>> From my perspective - the way that the test is implemented carries
>>>>> additional information.
>>>>> 1) Test is a separate module:
>>>>> Means that we're testing the API exported by the module (that's used by
>>>>> other modules).
>>>>> 2) Test is a file that's a part of a module:
>>>>> Means that we're testing non-static API used internally by the module.
>>>>> 3) Test is a file that's included at the end of a file:
>>>>> Means that we're testing static functions.
>>>>>
>>>>> Forcing everything to be a module doesn't really serve any purpose.
>>>>> It just makes things harder to follow, as it removes this useful
>>>>> additional bit of information.
>>>>
>>>> I strongly disagree here.  Basing the test location on how much a
>>>> function is exposed is a recipe for having to move the tests from one
>>>> place to another. There are always refactors that make a function change
>>>> from non-static -> static, static -> non-static, and, to a smaller degree,
>>>> start or stop being exported.
>>>>
>>>> In the end the code will just be inconsistent with the rules you
>>>> outlined.
>>>>
>>>> Separating everything to a separate module means the test code doesn't
>>>> bloat the xe.ko module. I do think we should group all the tests into a
>>>> single xe_test.ko, but that is a different topic.
>>>>
>>>> See v2 of this patch series where I move *all* of the rtp tests to the
>>>> separate module and simply use EXPORT_SYMBOL_IF_KUNIT() to export the 2
>>>
>>> but having to export static functions simply doesn't scale, it's fine if
>>> all your test needs is 2 functions, but to have full coverage we might
>>> end up exporting all static functions (and to some extent bloating
>>> xe.ko anyway, both in code and binary)
>>
>> ... the fallacy of full code coverage.
>
>so having full coverage is not our goal anymore?
>then other work like [2] seems unnecessary
>
>[2] https://patchwork.freedesktop.org/series/115513/

no idea about that test - I'm talking about the concept of chasing full
*code* coverage above all else and thinking you are covered once you
reach that goal... there are multiple articles around about that.

>
>>
>>>
>>> there is already BKM for testing static functions [1]
>>> so why are we trying to force a different (and IMO worse) model?
>>
>> it's fine as long as the test is small and you really want/need to test
>> that single function rather than a higher order interface to it.
>>
>>>
>>> [1]
>>> https://docs.kernel.org/dev-tools/kunit/usage.html#testing-static-functions
>>
>> "If we do not want to expose functions or variables for testing ...".
>> There are several sections before that one.
>>
>> It will depend on the test. Some may be inevitable and simpler to
>> include in the file, with the downside of bloating the module.
>
>but what exactly bothers you about "bloating the module"?
>
>- additional single #include at end of .c file
>- multiple xxx_IF_KUNIT macros across all over .c file
>- bigger footprint of the xe.ko due to export tables
>- bigger footprint of the xe.ko due to inclusion of test functions
>- bigger footprint of the xe.ko due to inclusion of test suites
>- additional headers to expose test functions
>- ...
>
>>
>>>
>>>> functions I need (In this v1 I was using
>>>> EXPORT_SYMBOL_NS_GPL(..., XE_KUNIT) to make sure the tests were in a
>>>> separate namespace, but then I noticed kunit itself has a recommended
>>>> way by using its macro).
>>>>
>>>> This makes the rule very simple to follow:
>>>>
>>>> 1) Place everything you can in the separate test module
>>>
>>> but why? as Michal pointed out, this works best for testing already
>>> exported API functions; to test internal static functions, this would be
>>> an unnecessarily complicated solution
>>
>> then provide a better test on the interface rather than on the static
>> function. If really needed/wanted, then including it at the bottom of
>> the file can be done.
>
>the majority (if not all) of our functions are not module-exported, thus
>testing them will look almost exactly like testing static ones

I'm saying you could have good tests by using the interface of the
module/file instead of trying to test every single static function.
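For rtp that would mean, e.g., feeding a small table of rules and actions
through the normal processing entrypoint and checking the end result,
rather than exporting every match helper just to assert on it in isolation.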

>
>>
>>>
>>>> 2) Place it at the end of the file with the #include otherwise
>>>
>>> this should be our primary BKM for mock/fake/sw/self/kunit/unit(*) tests.
>>>
>>> (*) pick one
>>>
>>> note that in addition to easy maintenance (it doesn't require writing,
>>> building, distributing and loading separate modules), this single test
>>> file already has access to all required compilation unit static
>>> functions that we want to test.
>>>
>>> also from an IGT point of view, this would mean that in case of
>>> adding/removing some subtests all we need is to just update the test
>>> filter rather than updating the list of modules to load/unload.
>>
>> you are making this up as "it's harder to maintain" to justify your bias.
>> There is a single call to make, using the already provided kunit.py.
>> Developers can run it on their machines and so can CI.
>
>it's not about how you run such a test, as in both cases this will be
>exactly the same kunit.py line
>
>the problem is during creation of such a test, where you need to:
>- provide new .h with prototypes for the otherwise static functions
>- tag (bloat) static functions with VISIBLE_IF_KUNIT
>- export (bloat) static functions with EXPORT_SYMBOL_IF_KUNIT
>- create new .c with test module

I don't think it was hard to write the v2 of this series and the
boilerplate was very minimal. So I'm ok with the current state.
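
To put "very minimal" in perspective, the whole pattern boils down to
something like the sketch below (foo_calc() is just a made-up placeholder
here, not actual rtp code):

/* in xe_rtp.c: only non-static and exported when CONFIG_KUNIT is enabled */
#include <kunit/visibility.h>

VISIBLE_IF_KUNIT int foo_calc(int x)
{
        return x * 2;
}
EXPORT_SYMBOL_IF_KUNIT(foo_calc);

/* in tests/xe_rtp_test.c: the separate test module */
#include <kunit/test.h>
#include <linux/module.h>

int foo_calc(int x); /* normally declared in a small shared header */

MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);

static void foo_calc_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, foo_calc(2), 4);
}

static struct kunit_case foo_test_cases[] = {
        KUNIT_CASE(foo_calc_test),
        {}
};

static struct kunit_suite foo_test_suite = {
        .name = "xe_rtp",
        .test_cases = foo_test_cases,
};
kunit_test_suite(foo_test_suite);

MODULE_LICENSE("GPL");

That's the whole "bloat" in xe.ko when CONFIG_KUNIT is enabled, and with
CONFIG_KUNIT disabled the function goes back to being static and nothing
is exported.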

>- update Makefile with new module definition

as I said before, this would be a good thing to consolidate, putting all
tests under a single .ko rather than 1 per area... it's on my todo list
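(concretely, probably something like a single obj-$(CONFIG_DRM_XE_KUNIT_TEST)
+= xe_test.o rule collecting all the tests/*_test.o objects, instead of one
module rule per area - exact kconfig symbol aside)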

>- update IGT to consider new test module

see above about the 1 test module... but also it seems IGT would reinvent
kunit.py with the integration you have in mind for it.

IGT could simply have 1 test: kunit. That runs kunit.py with the right
args, gets the JSON output and interprets it.
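i.e. roughly ./tools/testing/kunit/kunit.py run --json=results.json
(exact flags aside), mapping each reported test case to a subtest result.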

>- update CI to run new module
>- update any scripts to load/unload new module
>- ..

no, see above

>
>
>>
>>> and I wouldn't count this final #include as a .ko bloating factor as it
>>> would be primarily used in the UML build, where we don't care, and for
>>> the target builds this could be easily compiled out.
>>
>> then you can't get the prebuilt kernel to be executed "in production"
>> and sanity check it still passes the sw tests.
>
>but really, why do you need to rerun those tests?
>if you don't trust the results from the kunit.py UML run, then maybe

CI / distro / whatever built an x86-64 / ARM64 / whatever-!UML kernel.
User downloads *that* kernel. How can they sanity check the sw tests if
we make the tests run strictly under UML?
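With the tests built as modules, that sanity check is just loading the
test module on the running kernel and reading the TAP output from dmesg
(or debugfs, when CONFIG_KUNIT_DEBUGFS is enabled).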

>
>a) we should drop it from our build step
>b) change kunit config from UML to x64 or something
>c) notify kunit maintainers about problems/gaps with UML runs
>
>as I don't expect any differences between the UML and 'production' builds,
>I can see no point in running them twice or more

answered above.

Also, this doesn't seem to be productive, as you are convinced the
current state of the test integration is completely broken and can't be
used. I disagree. And I see that the changes you are proposing would
bring other issues or are simply not worth it. I'd rather spend time
adding more tests with the current state of the integration and improve
the integration as needed. So... I already sent v2 of this patch series,
which also covers some of the things discussed, and I'd rather not reply
to this v1 series anymore.

thanks
Lucas De Marchi

>
>thanks,
>Michal
>
>>
>> Lucas De Marchi
>>
>>>
>>> thanks,
>>> Michal
>>>
>>>>
>>>> Lucas De Marchi

