[PATCH 0/3] Resubmit - Unit test framework for Wayland

Andreas Ericsson ae at op5.se
Fri Mar 2 13:53:33 PST 2012


On 03/02/2012 07:07 PM, Eoff, Ullysses A wrote:
>> -----Original Message-----
>> From: hoegsberg at gmail.com [mailto:hoegsberg at gmail.com] On Behalf Of
>> Kristian Høgsberg
>> Sent: Friday, March 02, 2012 7:33 AM
>> To: Michael Hasselmann
>> Cc: Eoff, Ullysses A; wayland-devel at lists.freedesktop.org
>> Subject: Re: [PATCH 0/3] Resubmit - Unit test framework for Wayland
>>
>> On Fri, Mar 2, 2012 at 4:36 AM, Michael Hasselmann
>> <michaelh at openismus.com>  wrote:
>>> On Thu, 2012-03-01 at 22:34 -0500, Kristian Høgsberg wrote:
>>>> Hi Artie,
>>>>
>>>> Thanks for starting this.  Looks good and certainly when we start
>>>> adding tests for some of the more complex objects and data structures
>>>> in the library (wl_map would be a good next step), it will be a good
>>>> way to avoid regressing functionality.  I'm not convinced that we
>>>> really need an external unit testing framework, though. All each TESTS
>>>> binary needs to do is test something and fail or succeed.  We can
>>>> add a little test helper to provide the fail_if() etc functions.
>>>
>>> I found that a good testing framework can lower the barrier of writing
>>> useful tests. Nice logging and status reports are important I feel. And
>>> if for example you can easily write data driven tests, then testing all
>>> possible code paths in a critical area becomes straightforward and the
>>> tests will remain readable (and with that, maintainable; people tend to
>>> forget that every test also adds to the overall maintenance cost, which
>>> can often enough outweigh the benefits of the test). It's actually the
>>> one thing I really like about Qt's testlib.
>>
>> I'm just not convinced that check does any of that.  It's an extra
>> dependency on a little-known library for a project that's already too
>> hard to build for many people.
> 
> Check is used successfully in quite a few projects (gstreamer, gnupdf, ...).
> It's also available through several major distro package managers (Fedora,
> Ubuntu, ...).  So I don't think it's a "little-known" library or hard to get.
> 
>> All I see in the patch is that the
>> test case code flow is obfuscated by setting up test-suites and other
>> boilerplate.  Keeping the barrier to writing test cases low, to me,
>> means that it should be transparent and obvious what's going on and
>> that the required boilerplate code is kept to a minimum.  So no, not
>> convinced.
>>
> 
> It wasn't obvious or transparent to write tests?  How so?
> 
>> Kristian
>>
>>> If a testing framework even goes so far to allow fuzz testing out of the
>>> box, then you have a nice sanity check for your APIs.
>>>
>>> I have no experience in writing a testlib, but I would assume it's a
>>> non-trivial task.
>>
>> If you're writing a do-it-all, general purpose test framework, perhaps
>> (check is only 2000 lines of code though).  If we just organically
>> grow our test helpers as we need new functionality, not so much.
>>
> 
> I think growing our own framework will be more maintenance in the
> long run.  Check is small yet provides a lot of useful functionality that
> is abstracted away from developers so they can focus on writing tests
> instead of all the gory details of setting them up.
> 
> If anyone knows of a better existing unit test framework, then please
> share.
> 

I've written a few in my days. Normally, I keep them ridiculously
simple, so the testing code looks something like this (sorry for the
sucky indentation; coding in a MUA is always crap):

static void some_test_func(void)
{
	test_suite t = { 0 };
	int x, y;

	x = 5;
	y = x;
	test(&t, x == y, "x(%d) and y(%d) should be equal", x, y);
	stest(&t, x == y); /* on failure, prints "fail: x == y evaluated as FALSE" */
	end_tests(&t); /* prints "OK: %d/%d tests passed" */
}

The tests naturally return -1 (or TEST_FAIL) on failure and 0 (or
TEST_OK) on success, which makes it relatively easy to bail out of
a test suite when setup goes wrong, or when an expected return value
isn't produced and further testing would be more or less pointless.
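For the curious, a minimal sketch of what such helpers might look
like. This is an illustration, not my actual code: the names
test_suite, test(), stest(), end_tests(), TEST_OK and TEST_FAIL match
the snippet above; everything else is assumed.

```c
#include <stdarg.h>
#include <stdio.h>

#define TEST_OK    0
#define TEST_FAIL -1

typedef struct {
	int passed;
	int total;
} test_suite;

/* Record one check; print a message only when it fails. */
static int test(test_suite *t, int cond, const char *fmt, ...)
{
	va_list ap;

	t->total++;
	if (cond) {
		t->passed++;
		return TEST_OK;
	}
	printf("fail: ");
	va_start(ap, fmt);
	vprintf(fmt, ap);
	va_end(ap);
	printf("\n");
	return TEST_FAIL;
}

/* stest() as a macro, so the expression itself can be stringified
 * and printed when the check fails. */
#define stest(t, expr) \
	test((t), (expr), "%s evaluated as FALSE", #expr)

/* Print the summary; fail if any check failed. */
static int end_tests(test_suite *t)
{
	printf("OK: %d/%d tests passed\n", t->passed, t->total);
	return t->passed == t->total ? TEST_OK : TEST_FAIL;
}
```

The whole thing fits in a single header, which is part of the appeal:
no extra build dependency, and the entire "framework" can be read in
a minute.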

Personally, I tend to prefer getting coredumps when tests go wrong,
since it means I can inspect the exact state of the API at the time
the test failed whenever I like, and there are already awesome
programs for assisting with corefile debugging.
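The coredump approach boils down to calling abort() on failure
instead of merely logging it. A sketch of how that might look
(fail_if is the name Kristian mentioned earlier in the thread; this
implementation is my guess, not Wayland code), assuming core files
are enabled with "ulimit -c unlimited":

```c
#include <stdio.h>
#include <stdlib.h>

/* On failure, print where we are and abort(), so the kernel writes a
 * core file capturing the full program state at the point of failure.
 * The core can then be inspected at leisure with gdb. */
#define fail_if(cond, msg)                                      \
	do {                                                    \
		if (cond) {                                     \
			fprintf(stderr, "%s:%d: fail: %s\n",    \
				__FILE__, __LINE__, (msg));     \
			abort();                                \
		}                                               \
	} while (0)
```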

I also have a small and ridiculously simple profiler thing that just
takes the delta of two timeval structs; it can be used to make sure
tests run in a timely manner.
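That delta helper is trivial to write; a sketch (tv_delta_usec is a
made-up name), working on gettimeofday()-style timestamps:

```c
#include <sys/time.h>

/* Microseconds elapsed between two gettimeofday() samples. */
static long tv_delta_usec(const struct timeval *start,
			  const struct timeval *end)
{
	return (end->tv_sec - start->tv_sec) * 1000000L +
	       (end->tv_usec - start->tv_usec);
}
```

Sample a struct timeval before and after the test with
gettimeofday(), and flag the test as failed if the delta exceeds
whatever budget it has been given.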

The reason I find basic helpers more helpful than full frameworks is
that anything even remotely complex will require setup (which should
itself be tested) as well as the actual unit tests, or one will have
to write mockups of pretty much everything, which is time-consuming
and leads to faked tests that hardly test the things they're supposed
to be testing at all.

I could polish it up a bit and submit it for inclusion if you like. The
longest such version I have (yes, I modify it to match each project
I work on) has support for subsuites and skipping tests, and checks in
at around 800 lines, iirc. It prints in colors too, so yay ;)

-- 
Andreas Ericsson                   andreas.ericsson at op5.se
OP5 AB                             www.op5.se
Tel: +46 8-230225                  Fax: +46 8-230231

Considering the successes of the wars on alcohol, poverty, drugs and
terror, I think we should give some serious thought to declaring war
on peace.
