[Libreoffice-qa] Test Structure in Litmus

Rimas Kudelis rq at akl.lt
Thu Nov 17 01:06:43 PST 2011


Hi all, and note I'm on the list now. :)

On 2011.11.16 20:12, Petr Mladek wrote:
> Petr Mladek wrote on Wed 16. 11. 2011 at 15:31 +0100:
>> Rimas pointed out that testers define their platform when entering a
>> "test run". It actually affects the statistics: the number of finished
>> test cases is counted separately for each platform.
> Rimas found that "build id#" and "locale" also affect the number of
> finished test cases at https://tcm.documentfoundation.org/run_tests.cgi
>
>
> Why is it a problem?
>
>
> 1. problem with "build id#"
> ===========================
>
> Imagine the following scenario:
>
> 	1. create test run for 3.5.0
> 	2. people enter the build id "3.5.0-beta1"
> 	3. they do some tests and the result is:
> 		+ 100% of P1 tests finished
> 		+ 100% of P2 tests finished
> 		+ 20% of P3 tests finished
> 	4. beta2 is available => people enter build id "3.5.0-beta2"
>
> 	Result: They will see:
>
> 	 	+ 0% of P1 tests finished
> 	 	+ 0% of P2 tests finished
> 	 	+ 0% of P3 tests finished
>
> 	Expected Result:
>
> 		+ 100% of P1 tests finished
> 		+ 100% of P2 tests finished
> 		+ 20% of P3 tests finished
>
> In other words, people will start the testing from the beginning with
> beta2. They will never test the more complicated scenarios (P3, P4
> stuff).
>
> Is this what we want?
>
> I prefer to do deep testing during the beta phase => we should not
> restart it with every beta => we should continue where we left off
> with the previous beta => the "build id#" must not affect the number
> of finished test cases.
>
> Solution (by Rimas):
> --------------------
>
> Remove "build id#" from the UI, use the value 'UNUSED' in the database.
> The real version is defined in the test run name.

The fact that this is a problem actually came as a bit of a surprise to
me. I thought there would be at least a few people who'd go through ALL
tests for each build that needs testing. Since I hope that the number of
testers might grow in the future, or some other circumstance might
change, I suggested hiding the Build ID field instead of dropping it
entirely, leaving the possibility to reverse this at some later point.
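
To make the reset mechanics concrete, here is roughly how I picture the
counting, as a toy model in Python. The names below are made up and do
not reflect the actual Litmus schema (which is Perl + MySQL).

    # Toy model of the completion statistics; all names are hypothetical.
    total_cases = 10
    results = [
        # (test_case_id, platform, build_id, locale)
        (1, "Linux", "3.5.0-beta1", "en"),
        (2, "Linux", "3.5.0-beta1", "en"),
    ]

    def percent_finished(platform, build_id, locale):
        # The count is keyed on the full tuple, so entering a new
        # build id starts again from an empty bucket.
        done = {case for (case, p, b, l) in results
                if (p, b, l) == (platform, build_id, locale)}
        return 100.0 * len(done) / total_cases

    print(percent_finished("Linux", "3.5.0-beta1", "en"))  # 20.0
    print(percent_finished("Linux", "3.5.0-beta2", "en"))  # 0.0, the reset

    # With the workaround, every result is stored with the constant
    # build id "UNUSED", so beta2 results land in the same bucket as
    # beta1 results and the percentage keeps growing across betas.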

> 2. problem with locale:
> =======================
>
> Imagine the following scenario:
>
> 	1. one person does a test run in the "de" locale; the result is:
> 		+ 25% P1 functional (lang-independent) tests finished
> 		+ 25% P1 l10n (lang-dependent) tests finished
> 	2. another person starts a test run in the "fr" locale
>
> Result:
>
> 	+ the "fr" person sees:
> 		+ 0% P1 functional tests finished
> 		+ 0% P1 l10n tests finished
> 	   => does all tests again
>
> Expected Result:
>
> 	+ the "fr" person sees:
> 		+ 25% P1 functional tests finished
> 		+ 0% P1 l10n tests finished
> 	   => continues with the other functional tests and repeats the
>              l10n tests
>
> In other words, the functional tests are duplicated inside one test
> run for each locale; the l10n tests are currently even duplicated
> twice (once by the groups, once by the locale in the test run).

Sorry, I don't quite get the paragraph above. I think I understand the
problem from our chat yesterday, however.
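
If I understood it correctly from our chat, it boils down to something
like this toy example (again, the names are my own invention, not the
real Litmus schema):

    # Toy model of the locale problem; all names are hypothetical.
    # Percentages recorded by the "de" tester:
    finished = {
        ("functional-P1", "de"): 25,  # lang-independent tests, run in "de"
        ("l10n-P1", "de"): 25,        # lang-dependent tests, run in "de"
    }

    # The "fr" tester's view is looked up with locale == "fr", so even
    # the language-independent functional results disappear:
    print(finished.get(("functional-P1", "fr"), 0))  # 0, though 25% is done
    print(finished.get(("l10n-P1", "fr"), 0))        # 0, correctly so

The functional 25% should carry over to the "fr" view; only the l10n
figure is genuinely per-locale.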

> Possible solutions by Rimas and me:
> ---------------------------------
>
> 1. Have two separate test runs (branches) for functional tests and
>    l10n tests. Ask people to always use the "en" locale for the
>    functional tests group.
>
>    Advantages:
>
> 	+ easy to implement
> 	+ close to what we have now
>
>    Disadvantages:
>
> 	+ the "locale" setting might be used to select the localization
>           of the test case text => people would be forced to see the
>           functional tests in the English locale

That's not true ATM: the language the tests are shown in has no
relation to the selected locale as of yet.

> 	+ many QA people do not know English; they might be discouraged
>           from doing the biggest group of tests, the functional ones
> 	+ non-intuitive solution; people need to follow an ugly rule
>           defined somewhere

I think it may be possible to extend Litmus to allow restricting test
runs by locale, and to leave English as the only possible locale to run
functional tests on. That would still be a workaround though...
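
To make that idea concrete, a hypothetical sketch of such a restriction
check at test-run entry. This is pure pseudo-logic in Python with made-up
run names, not actual Litmus code:

    # Hypothetical sketch of a per-test-run locale restriction.
    allowed_locales = {
        "LibO 3.5 functional": {"en"},  # functional run: English only
        "LibO 3.5 l10n": None,          # None = any locale allowed
    }

    def may_enter(test_run, locale):
        allowed = allowed_locales.get(test_run)
        return allowed is None or locale in allowed

    assert may_enter("LibO 3.5 functional", "en")
    assert not may_enter("LibO 3.5 functional", "fr")  # rejected at entry
    assert may_enter("LibO 3.5 l10n", "fr")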

> 2. Remove the locale setting in the "run tests" dialog and ignore it,
>    as we suggest doing with the "build id". Note that the l10n tests
>    are duplicated in the subgroups.
>
>    Advantages:
>
> 	+ easy to implement
> 	+ close to what we have now
> 	+ the l10n tests are localized without hacking Litmus server
>           code
> 	+ allows creating an extra l10n test case for a particular
>           language (is it an advantage? does it create a mess?)
>
>    Disadvantages:
>
> 	+ it might be hard to maintain the l10n tests because you need
>           to monitor changes in the "en" group
> 	+ people will see l10n test cases also for other languages
> 	+ it will be hard to see how many l10n test cases were finished
>           in the various localizations; you would need to enter the
>           "run tests" dialog with different settings
> 	+ still not a fully intuitive solution; people are confused when
>           they see test cases for other localizations


This is indeed very close to what we have now and could be considered a
possible temporary workaround. It doesn't sound nice in the long term
though.

> 3. Do some more changes in Litmus (suggested by Rimas):
>
>    a) add an extra checkbox into the test case edit dialog (or
>       somewhere); it will mark the test case as language-specific or
>       language-independent
>    b) count the statistics of finished test cases according to the
>       checkbox; "locale" will be ignored for language-independent
>       tests
>    c) allow transparently localizing test cases => you will see
>       different text in different locales (can be done later)
>    d) show the statistics of finished l10n tests per locale on a
>       single page (can be done later)
>
>    Advantages:
>
> 	+ clear solution
> 	+ it is in line with where we want to go, see
>           http://wiki.documentfoundation.org/Litmus_TODO
> 	+ will help to keep l10n tests in sync
>
>    Disadvantages:
>
> 	+ needs hacking in Litmus (a developer and time)
>
>
>
> My opinion:
> -----------
>
> I like the 3rd proposal (created by Rimas) very much. I think that it
> is worth spending some time hacking Litmus. We will profit a lot from
> this in the future.
>
> Rimas, what do you think about it?
> Would you have time and appetite to look into it?

I of course agree that it's the cleanest solution. But I'm not sure how
much time and skill I'll have to implement this. In any case, I think
I'll try to at least add that checkbox sooner rather than later; that
would be a good start already. :)
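
The counting change for 3a/3b itself looks small to me. Roughly, under
my own assumed field names (and in Python rather than Litmus's Perl):

    # Sketch of 3b: drop the locale from the statistics key whenever a
    # test case is marked language-independent. All names hypothetical.
    def stats_key(case, platform, locale):
        if case["language_independent"]:
            return (case["id"], platform)       # locale ignored
        return (case["id"], platform, locale)   # l10n case: per-locale

    functional = {"id": 42, "language_independent": True}
    l10n = {"id": 43, "language_independent": False}

    # A functional result counts once per platform, for every locale...
    assert stats_key(functional, "Linux", "de") == \
           stats_key(functional, "Linux", "fr")
    # ...while an l10n result stays separate per locale:
    assert stats_key(l10n, "Linux", "de") != stats_key(l10n, "Linux", "fr")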

I have one small question lingering on my mind though: is it better to
add that checkbox to testcases or to some higher hierarchical component
(subgroup, group)? I'm quite sure a testcase makes the most sense, but
I just want to check with you guys.

Rimas

