[Xcb] xcb/test: I propose to base it on cppunit

Christian Linhart evidence-elim at clinhart.com
Sat Apr 1 05:34:05 UTC 2017


Hi Eric and all,

Thank you for describing piglit.
I also think that piglit is overkill for xcb-test.
And its lack of API stability is suboptimal.[1]

I think an assert-based approach is a good direction, but cppunit improves on it in a lightweight, easy-to-use way:
CPPUNIT_ASSERT does not terminate the program; it just outputs an error message and continues running the subsequent tests.
You can run the whole test suite as a single executable, select individual test cases, etc.
There are GUI-based and text-based user interfaces for the same test suite.
Also, there's no mix of programming languages, which makes things easier in general.  (Everything is C/C++.)

Good point about the need for data comparisons.
I guess an easy way would be to just call a comparison function inside a cppunit assert,
like: CPPUNIT_ASSERT( IsPixmapEqualToPNG(pixmap, "test123-ref1.png") );
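To make the continue-on-failure idea concrete, here is a rough, self-contained sketch. The CHECK macro below is a stand-in I wrote to illustrate how CPPUNIT_ASSERT behaves (it is not the real cppunit implementation), and IsEqual is a trivial stand-in for a helper like IsPixmapEqualToPNG so the sketch compiles without any X or PNG code:

```cpp
#include <cstdio>

// Stand-in for CPPUNIT_ASSERT (illustration only, not the real cppunit):
// a failed check prints a message and counts the failure, but the
// program keeps running, so subsequent checks still execute.
static int g_failures = 0;

#define CHECK(expr)                                          \
    do {                                                     \
        if (!(expr)) {                                       \
            std::printf("FAIL %s:%d: %s\n",                  \
                        __FILE__, __LINE__, #expr);          \
            ++g_failures;                                    \
        }                                                    \
    } while (0)

// Trivial stand-in for a comparison helper like IsPixmapEqualToPNG;
// a real one would render into a pixmap and compare the result
// against a reference image.
static bool IsEqual(int actual, int expected) {
    return actual == expected;
}

// Runs three checks; the middle one fails without stopping the rest.
int run_suite(void) {
    CHECK(IsEqual(1, 1));  // passes
    CHECK(IsEqual(1, 2));  // fails, but execution continues
    CHECK(IsEqual(3, 3));  // still runs and passes
    return g_failures;     // one failure, yet all three checks ran
}
```

With the real framework these checks would live in a CppUnit::TestFixture and be run by a text or GUI runner; the point here is only that one failed assertion doesn't abort the whole run.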

Chris

[1] We need to think economically here, too.
Everybody's time is valuable, and when we are more efficient we can write more testcases or improve xcb.

On 2017-03-31 22:25, Eric Anholt wrote:
> Christian Linhart <chris at DemoRecorder.com> writes:
>
>> Hi Eric and all,
>>
>> I have thought a bit about xcb/test.
>> I suggest the following:
>>
>>   * do xcb/test as a fresh project ( i.e. not clone xcb/demo)
>>   * And use a regression test framework for it.
>>     I propose to use cppunit. ( https://cgit.freedesktop.org/libreoffice/cppunit/ )
>>   * As far as possible, the tests should run and check themselves automatically without manual intervention needed.
>>     Ideally, we'll succeed to have everything running automatically, of course.
>>
>> What do you think?
>>
>> Chris
>>
>> P.S.: Thank you for the info for how to start the process of creating a new project.
>> I'll do that soon.
> For running the X Test Suite and rendercheck, I'm using piglit as a
> testing framework.  What you get from that is the ability to write some
> python to generate a list of test binaries to be run, a framework that
> executes them, collects their results, and lets you compare results
> between runs (including image captures, for rendering tests).
>
> Comparison is of more relevance to GL implementations where you may not
> ever pass all your tests, than to unit tests for things like XCB or
> the X Server where we intend to actually be entirely correct according
> to our tests.  The downside is that the piglit framework moves pretty
> fast and our X stuff frequently gets broken.
>
> For smaller tests, I've been basically satisfied with just bare binaries
> with assert()s, like the X Server's test/misc.c.
>
>
> _______________________________________________
> Xcb mailing list
> Xcb at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/xcb

