[Libreoffice-qa] Test Structure in Litmus

Yifan Jiang yfjiang at suse.com
Thu Nov 17 21:17:40 PST 2011

Hi Rimas,

On Thu, Nov 17, 2011 at 11:06:43AM +0200, Rimas Kudelis wrote:
> > 	+ many QA people do not know English; they might be discouraged
> >           to do the biggest group of functional tests
> > 	+ non-intuitive solution; people need to follow an ugly rule
> >           defined somewhere
> I think it may be possible to extend Litmus to allow restricting
> testruns by locale, and leave English as the only possible locale to run
> functional tests on. That would still be a workaround though...

Yes. In the context of our testing method, the English mentioned here could
actually be "Any" language. But I suspect this will be easier to design and
implement in the Litmus code than Option 2. :)

So with this method, we will have:

    - L10n Test Run for all locales

        in which each test case can be considered fully tested only if it
        has been run in all locales.

    - Functional Test Run for any of the locales

        in which each test case can be considered fully tested if it has
        been run in any of the locales.
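The two completion rules above can be sketched as follows (a hypothetical
Python illustration with made-up names; Litmus itself is written in Perl, so
this is only to make the rules concrete):

```python
# Sketch of the two completion rules (hypothetical data model,
# not actual Litmus code).

ALL_LOCALES = {"en", "de", "lt", "zh"}  # example set of tracked locales

def l10n_fully_tested(results):
    """An l10n test case is fully tested only if it was run in ALL locales."""
    return ALL_LOCALES <= {r["locale"] for r in results}

def functional_fully_tested(results):
    """A functional test case is fully tested if it was run in ANY locale."""
    return len(results) > 0
```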
But as Petr mentioned, option 1 may discourage, at least to some extent,
people who do not know English. Besides, option 1 brings us back to the old
style of dividing test runs. So as a workaround, I prefer option 2.

> > 2. Remove the locale setting in the "run tests" dialog and ignore it as
> >    we suggest to ignore the "build id". Note that the l10n tests are
> >    duplicated in the subgroups:
> >
> >    Advantages:
> >
> > 	+ easy to implement
> > 	+ close to what we have now
> > 	+ the l10n tests are localized without hacking Litmus server
> >           code
> > 	+ allows to create extra l10n test case for a particular
> >           language (is it an advantage? creates a mess?)
> >
> >    Disadvantages:
> >
> > 	+ it might be hard to maintain the l10n tests because you need
> >           to monitor changes in the "en" group
> > 	+ people will see l10n test cases also for other languages

> > 	+ it will be hard to see how many l10n test cases were finished
> >           in the various localizations; you would need to enter the
> >           "run tests" dialog with different setting

I just found an alternative: "Reporting -> test runs" can show this more
easily. Instead of a completion percentage, the detailed execution record of
each test case in the groups/subgroups is summarized there.
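For illustration, summarizing such execution records per locale (hypothetical
record layout, not Litmus's actual schema or report format) could look like:

```python
from collections import defaultdict

# Hypothetical execution records: (test case, locale).
records = [
    ("tc1", "de"),
    ("tc2", "de"),
    ("tc1", "lt"),
    ("tc1", "de"),  # a re-run; each case is counted once per locale
]

# Collect the distinct test cases executed in each locale.
finished = defaultdict(set)
for testcase, locale in records:
    finished[locale].add(testcase)

# Number of finished l10n test cases per locale.
summary = {locale: len(cases) for locale, cases in finished.items()}
```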

> This is indeed very close to what we have now and could be considered as
> a possible temporary workaround. It doesn't sound nice in the long-term
> though.
> > 3. Do some more changes in Litmus (suggested by Rimas):
> >
> >    a) add extra checkbox into the test case edit dialog (or somewhere);
> >       it will mark the test case as language specific or language
> >       independent
> >    b) count the statistic of finished test cases according to the
> >       check box; "locale" will be ignored for language-independent
> >       tests;
> >    c) allow to transparently localize test cases => you will see
> >       different text in different locales (can be done later)
> >    d) show statistic of finished l10n tests per locale on a single page
> >       (can be done later)
> >
> >    Advantages:
> >
> > 	+ clear solution
> > 	+ it is on the way where we want to go, see
> >           http://wiki.documentfoundation.org/Litmus_TODO
> > 	+ will help to keep l10n tests in sync
> >
> >    Disadvantages:
> >
> > 	+ needs hacking in litmus (developer and time)
> >
> >
> >
> > My opinion:
> > -----------
> >
> > I really like the 3rd proposal (created by Rimas). I think that it is
> > worth spending some time hacking Litmus. We will profit from this a lot
> > in the future.
> >
> > Rimas, what do you think about it?
> > Would you have time and appetite to look into it?
> I of course agree that it's the cleanest solution. But I'm not sure how
> much time and skill I'll have to implement this. In any case, I think
> I'll try to at least add that checkbox rather sooner than later, that
> would be a good start  already. :)

Rimas, thanks a lot for working on this! :)

> I have one small question lingering on my mind though: is it better to
> add that checkbox to testcases or some higher hierarchical component
> (subgroup, group?) I'm quite sure a testcase makes most sense, but just
> want to check with you guys.

IMHO, attaching it to the test case is the safest way from a design point of
view, as it gives us the most flexibility for expanding the system.

But at the UI level, the subgroup may be a better place, because test cases
will be created by different people, some of whom may not notice the purpose
of a per-test-case checkbox. Once the checkbox is set inappropriately, it
will be hard to manage, and the test statistics could be calculated in a
wrong (or misleading) way. Keeping the checkbox on a subgroup could be more
controllable, because the contents and structure of subgroups are stable.
Test case authors would only need to put their new test cases into the
correct subgroup, as we do now, and nothing else.

A potential problem: will we ever have a mix of language-dependent and
language-independent test cases in the same subgroup? That seems unlikely,
at least under our current testing strategy.
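To make the trade-off concrete, here is a small sketch (hypothetical classes
and names, not Litmus's actual Perl schema) of how a subgroup-level flag
would propagate to its test cases, so authors never touch the flag directly:

```python
# Hypothetical model: the language-independence flag lives on the
# subgroup, and every test case inside it inherits the flag.

class Subgroup:
    def __init__(self, name, language_independent):
        self.name = name
        self.language_independent = language_independent
        self.testcases = []

    def add(self, testcase_name):
        self.testcases.append(testcase_name)

def is_language_independent(subgroup, testcase_name):
    # Authors only choose the right subgroup; the flag follows from it.
    return subgroup.language_independent

functional = Subgroup("Writer basic functions", language_independent=True)
functional.add("Open an .odt document")

l10n = Subgroup("UI translation checks", language_independent=False)
l10n.add("Menu labels are translated")
```

With this shape, the statistics code only ever consults the subgroup flag,
so a mis-set per-case checkbox simply cannot occur.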

What do you think?

Best wishes,
