[PATCH v3 01/10] clk: Add Kunit tests for rate

Daniel Latypov dlatypov at google.com
Fri Jan 21 05:25:03 UTC 2022


On Thu, Jan 20, 2022 at 8:34 PM Stephen Boyd <sboyd at kernel.org> wrote:
>
> Quoting Daniel Latypov (2022-01-20 13:56:39)
> > On Thu, Jan 20, 2022 at 1:31 PM Stephen Boyd <sboyd at kernel.org> wrote:
> > KUnit doesn't have hard technical limitations in this regard.
> >
> > You could have something like this
> >
> > static void my_optional_kunit_test(struct kunit *test)
> > {
> > #ifdef CONFIG_OPTIONAL_FEATURE
> >
> > #else
> >   kunit_skip(test, "CONFIG_OPTIONAL_FEATURE is not enabled");
> > #endif
> > }
> >
> > I think it's just a matter of what's least confusing to users.
>
> Ok, I see. Is there some way to have multiple configs checked into the
> tree so we can test different kernel configuration code paths? This

Multiple kunitconfigs?
There are no restrictions on those:

$ ./tools/testing/kunit/kunit.py run --kunitconfig=drivers/clk
$ ./tools/testing/kunit/kunit.py run --kunitconfig=drivers/clk/kunitconfig.foo
$ ./tools/testing/kunit/kunit.py run --kunitconfig=drivers/clk/kunitconfig.bar

The first one will assume drivers/clk/.kunitconfig.
But there's no reason the file has to be called that.
You could just have multiple standalone kunitconfigs, named however you like.
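
For reference, a kunitconfig is just a Kconfig fragment. A hypothetical
drivers/clk/kunitconfig.foo could look like this (CONFIG_CLK_KUNIT_TEST is
a made-up stand-in for whatever symbol this series ends up using, and
CONFIG_OPTIONAL_FEATURE is the placeholder from above):

# drivers/clk/kunitconfig.foo (hypothetical)
CONFIG_KUNIT=y
CONFIG_COMMON_CLK=y
CONFIG_CLK_KUNIT_TEST=y
CONFIG_OPTIONAL_FEATURE=y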

--kunitconfig is new enough (5.12+) that there are no real conventions yet.

Another option is
$ ./tools/testing/kunit/kunit.py run --kunitconfig=drivers/clk \
   --kconfig_add=CONFIG_RARELY_USED=y

This is another case where we can do whatever is least confusing.
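
If I remember correctly, --kconfig_add can also be repeated, so a one-off
combination of options doesn't need its own kunitconfig file, e.g.
$ ./tools/testing/kunit/kunit.py run --kunitconfig=drivers/clk \
   --kconfig_add=CONFIG_RARELY_USED=y \
   --kconfig_add=CONFIG_ANOTHER_OPTION=y
(CONFIG_ANOTHER_OPTION is just a placeholder here.)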

> discussion isn't really relevant to this patch so we can take this up in
> another thread if you like.
>
> >
> > >
> > > Maybe kunit should check that there was an EXPECT on return from the
> > > test. Daniel?
> >
> > Sorry, I'm not sure I understand the question.
> >
> > Are you saying you want kunit to flag cases like
> >   static void empty_test(struct kunit *) {}
> > ?
>
> Yes. I'd like kunit to enforce that all tests have at least one
> EXPECT_*() in them.

I totally understand the rationale.
It's a bit misleading to say PASSED if no expectation/assertion passed.
One might want a NO_STATUS (or maybe SKIPPED) result instead.

But other unit test frameworks act the way KUnit does here, so there's
an argument for consistency: users don't have to build a whole new
mental model.

Some examples below for reference.
All of these report the empty test as passing; Rust, for example, prints
  test result: ok. 1 passed; ...

E.g. in Python
import unittest

class ExampleTest(unittest.TestCase):
  def test_foo(self):
    pass

if __name__ == '__main__':
  unittest.main()

In Golang:
package example

import "testing"

func TestFoo(t *testing.T) {}

In C++ using Googletest:
#include "gtest/gtest.h"

TEST(Suite, Case) { }

In Rust:
#[cfg(test)]
mod tests {
    #[test]
    fn test_empty() {}
}
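
And for comparison, an empty KUnit test currently reports as passed too.
A minimal sketch (the suite and test names here are made up):

#include <kunit/test.h>

/* An empty test body: no expectations, no assertions. */
static void empty_test(struct kunit *test) {}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(empty_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);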

