[PATCH] XKB: Remove component listing support

Peter Hutterer peter.hutterer at who-t.net
Sun Nov 4 21:46:01 PST 2012

On Mon, Nov 05, 2012 at 03:56:32PM +1100, Daniel Stone wrote:
> Hi,
> On 2 November 2012 14:37, Peter Hutterer <peter.hutterer at who-t.net> wrote:
> > can you covert your test to a XIT test and add it to (a newly created)
> > tests/server/xkb?
> > http://cgit.freedesktop.org/~whot/xorg-integration-tests/
> Are these tests ever supposed to work with anything other than master? I
> wrote a test which I'll send to the list, but if I push that, the tests are
> going to fail on literally every released version of the server ever, which
> doesn't sound particularly useful.

No, there is no server version that passes all tests. I have a bunch of
tests in there for known bugs that are not fixed yet.

The tests _cannot_ be written to pass on every version; you'd end up with an
indefinite number of ifdefs in there (the RHEL6 ones in there already show
that there's no future to that). What will be needed on any actual
distributed server version is to know _which_ tests work on that server
version and which ones are supposed to fail.

What is needed here is a registry that at least notes which tests are
expected to fail. I have some ideas for that, but no code yet. Some tests
already link to the bug, or patchwork, or commit number when they fail.
That's a start, but not sufficient yet.

> Is there much of a compelling reason for not having the tests in-tree with
> the server? 

Yes. We talked about that when Chase proposed them (1.12 cycle), and unlike
the current tests the XIT require a full install. Which means we can't
run them as part of the server's make check. And the churn the XIT go through
atm makes them unsuitable for the review process.

> What's the actual split between xorg-gtest and
> xorg-integration-tests? 

Historical. xorg-gtest was what Chase started with; XIT was something I
started based on xorg-gtest that was originally internal for RHEL6 testing.
Apparently xorg-gtest is used for some unity tests, so to avoid too much of
a XIT-focused API the split still exists. It appears painful at first
(and second, and third), but I have to say that had it not been for the
split, xorg-gtest's API would be much worse than it is.

If someone can actually spec out a proper API on the first go that doesn't
need to break often, then this may be a nonissue. The way XIT tests were
added, this wasn't an option.

> Why does the former have a couple of random tests
> and a device declaration file if it seems to mainly just be a framework? 

Did you look at the "random" tests? They're there to test xorg-gtest itself. 

> Is
> there much use in having the split, given how codependent they seem to be?

I think so, even if it were more convenient to merge them.

> Also, it relies on evemu, which isn't packaged in Debian, which is kind of
> irritating.  (Also, evemu's autogen.sh isn't actually the standard X.Org
> autogen.sh, and doesn't run ./configure for you.  And it's on Launchpad,
> using Bazaar ... I die.  For bonus points, it's even licensed under GPLv3,
> which means a hell of a lot of companies will not go near it no matter
> what, and probably just skip the integration tests.)

evemu was written by Henrik Rydberg originally. He worked for Canonical,
which is why the whole thing is in launchpad and bzr. I was close to
forking it, but once I added the few things needed I could largely ignore it,
since it works fine now.

However, whichever way you go, it is _not_ an X.Org project. Henrik
originally wrote it for kernel driver testing, iirc. If you want to rewrite
it so it complies with your preferred license, or you want to maintain a
fork somewhere, be my guest. Happily for me, I don't have a problem with
GPLv3, and neither does my employer.

It's a tool, it's needed for input device emulation. Unless something better
comes along, we'll use it, just like we use gcc, autotools, etc.

> With all the linking of massive chunks of C++, it's also crazy slow to
> build: I can autoreconf, configure, make, and make check Weston and its
> sample clients from scratch at least four times over in the time it takes
> to just build the tests, let alone run them.  Probably closer to five or
> six.  

We don't have a lot of choice on the linking; we need to compile from
source. We can probably go for a static library in XIT at some point, but
even then we need to be careful about compiler flags.

googletest has its downsides and quirks, but so far pretty much every
feature I needed from a test suite ended up being there. I'm not fond of
implementing a test framework given how much time xorg-gtest and XIT's base
classes already took.

And unless my view of the Weston tests is really really outdated, you're
joking if you compare make check in Weston with the tests run here. They're
not the same, they don't even try to be the same.

> When I do get to run them, the Synaptics tests (massively valuable,
> don't get me wrong) all interfere with my running server, presumably
> because it doesn't know to ignore a test device. 

Devices created by the tests will look like physical devices to the server,
so the running server will add them. You can probably write a udev rule that
applies a tag to all such devices so that your real server ignores them.
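One possible shape for this (the file name, the name pattern, and the tag
are all assumptions here; adjust them to whatever names the virtual test
devices actually report):

```
# /etc/udev/rules.d/99-xit-test-devices.rules  (hypothetical file name)
# Tag the virtual test devices as they appear:
ACTION=="add", SUBSYSTEM=="input", ATTRS{name}=="*Test device*", TAG+="xit-test"
```

Paired with an xorg.conf.d snippet so the running server drops anything
carrying that tag:

```
Section "InputClass"
    Identifier "ignore XIT test devices"
    MatchTag "xit-test"
    Option "Ignore" "on"
EndSection
```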

From the README:
"Most tests start up some X server, so it is advisable to shut down any X
server on the test box. Some tests only start Xdummy, but do add
virtual input devices that may mess with the current session. Again, it is
advisable to shut down any X server on the test box."

> That's something that'd
> be really nice to have a flag for.  The Wacom tests are all failing
> (presumably due to not having the driver installed), but between the
> two-second delay for server starting and five-second device wait, are
> taking seven seconds a pop.  There's forty-eight of those tests, which
> makes it just over five and a half minutes.

A 2 second delay for server starting? That doesn't sound right and suggests
a bug. Even on the VMs I run most tests on, I see only slightly more than a
second.

As mentioned on IRC, googletest has no facility for disabling tests at
runtime, or for dependencies between tests. However, you can add code to the
base class that detects the lack of the driver you're testing and then fails
all other tests.
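A sketch of what such a base-class check could look like (the module path,
class names, and driver name are assumptions, not XIT's actual code;
googletest of this era has no runtime skip, so failing in SetUp() is the
pragmatic route):

```cpp
#include <string>
#include <unistd.h>  // access()

// Hypothetical helper: report whether an Xorg input driver module is
// installed.  The module directory is an assumption; adjust it for the
// distribution's actual prefix.
static bool DriverInstalled(const std::string &driver)
{
    const std::string path =
        "/usr/lib/xorg/modules/input/" + driver + "_drv.so";
    return access(path.c_str(), F_OK) == 0;
}

// In a googletest base class, SetUp() can then fail every test of the
// fixture up front when the driver is missing, e.g.:
//
//   class WacomDriverTest : public XITServerTest {  // names assumed
//   protected:
//       virtual void SetUp() {
//           if (!DriverInstalled("wacom"))
//               FAIL() << "wacom driver not installed, failing all tests";
//           XITServerTest::SetUp();
//       }
//   };
```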

> I also now have a bajillion files directly under /tmp.

XIT's base class is written to leave the files around if a test fails so you
can inspect them to figure out why it fails. This is a feature.

You can adjust /tmp to your preferred directory at configure time.

> The keyboard tests all failed because the dummy display driver
> configuration didn't get written out, so it tried to spawn a new Intel
> server, only to die when it realised it couldn't become a DRM master.

The keyboard tests don't use the dummy driver; the input driver doesn't get
events if the server is run with the dummy driver.

> Sorry if this comes across as kind of harsh - I really, really like the
> idea, but it comes across as pretty irritating to actually write the tests.
>  Take this with a grain of salt since I barely touch X these days, but it
> just seems way too clunky become part of my regular workflow.  It's all
> fixable, mind (with the possible exception of the massive chunks o' C++),
> but the out-of-the-box experience isn't much fun.

It's rough, I know. It's only a couple of months old, and most of that time
was spent writing tests and getting xorg-gtest and the base classes up to
speed. It does have a few goodies that you may not have noticed yet, which
become apparent once you use it longer: the ability to feed test results
into jenkins, for example, which at least internally has proven very useful.

Note also that this isn't a test suite where you just copy/paste code and
you're done; some knowledge of how it all fits together is needed. Did you
read the README and HACKING documents? (The latter may have been added after
you cloned.) Can you add to them to make the pain less for the next person?

> > > -    list.pattern[_XkbListKeycodes] = GetComponentSpec(&str, FALSE,
> > &status);
> > > -    list.pattern[_XkbListTypes] = GetComponentSpec(&str, FALSE,
> > &status);
> > > -    list.pattern[_XkbListCompat] = GetComponentSpec(&str, FALSE,
> > &status);
> > > -    list.pattern[_XkbListSymbols] = GetComponentSpec(&str, FALSE,
> > &status);
> > > -    list.pattern[_XkbListGeometry] = GetComponentSpec(&str, FALSE,
> > &status);
> > > -    if (status != Success)
> > > -        return status;
> > > +    for (i = 0; i < _XkbListNumComponents; i++) {
> > > +        size = *((uint8_t *)str);
> > > +        str += (size + 1);
> > > +    }
> >
> > please add a short comment that str is a set of pascal-style strings so
> > we're really just skipping over the content of the string here. it's not
> > really obvious for the unsuspecting reviewer.
> >
> Sure, I've done this now.  I've also changed the 5 to 6: turns out
> that 1ad80678 (committed 2009) broke this function anyway.  It shifted all
> the component masks up one (so your keycode spec was actually the keymap
> spec, types were keycodes, etc), and it'd also give you BadLength unless
> you specified * for everything, or otherwise managed to have your length
> such that the length of keymap string spec + 1 fit into the four-character
> padding.
> > I suspect if anyone was really using this instead of the xml, we won't hear
> > of it until this feeds into RHEL or a similar distro...
> >
> Given that's the second problem I've found with this code, I doubt it'll be
> too much of an issue.
> > anyway, with the comment:
> > Reviewed-by: Peter Hutterer <peter.hutterer at who-t.net>
> >
> Thanks, resent if you want to pull into your tree.

Thanks, will pull once they arrive here.
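For the unsuspecting reviewer, the counted-string skipping in the quoted
hunk boils down to the following self-contained sketch (the function name is
illustrative, not the server's):

```c
#include <stdint.h>

/* The request payload is a sequence of pascal-style (length-prefixed)
 * strings, one per XKB component.  Skipping a component means reading
 * its one-byte length and stepping over the length byte plus payload. */
static const char *skip_counted_strings(const char *str, int count)
{
    int i;
    for (i = 0; i < count; i++) {
        uint8_t size = *(const uint8_t *)str;
        str += size + 1; /* length byte + payload */
    }
    return str;
}
```

With a buffer such as "\x03" "abc" "\x00" "\x02" "xy", skipping three
strings advances 4 + 1 + 3 = 8 bytes.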


More information about the xorg-devel mailing list