[Libreoffice-qa] Test Structure in Litmus

Petr Mladek pmladek at suse.cz
Wed Nov 16 10:12:23 PST 2011


Petr Mladek wrote on Wed 16. 11. 2011 at 15:31 +0100:
> Rimas pointed out that testers define their platform when entering a "
> test run". It actually affects the statistics. The number of finished
> test cases is counted separately for each platform

Rimas found that the "build id#" and "locale" fields also affect the
number of finished test cases at https://tcm.documentfoundation.org/run_tests.cgi


Why is it a problem?


1. problem with "build id#"
===========================

Imagine the following scenario:

	1. create a test run for 3.5.0
	2. people enter the build id "3.5.0-beta1"
	3. they do some tests and the result is:
		+ 100% of P1 tests finished
		+ 100% of P2 tests finished
		+ 20% of P3 tests finished
	4. beta2 is available => people enter the build id "3.5.0-beta2"

	Result: They will see:

	 	+ 0% of P1 tests finished
	 	+ 0% of P2 tests finished
	 	+ 0% of P3 tests finished

	Expected Result:

		+ 100% of P1 tests finished
		+ 100% of P2 tests finished
		+ 20% of P3 tests finished

In other words, people will start testing from the beginning with
beta2. They will never test the more complicated scenarios (P3, P4 stuff).

Is this what we want?

I prefer to do deep testing during the beta phase => we should not
restart it with every beta => we should continue where we ended with
the previous beta => the "build id#" must not affect the number of
finished test cases.

Solution (by Rimas):
--------------------

Remove "build id#" from the UI, use the value 'UNUSED' in the database.
The real version is defined in the test run name.
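
To make the counting concrete, here is a minimal sketch of what I have
in mind (the record and field names are invented for illustration, not
the real Litmus schema): the build id is simply left out of the
grouping key, so results reported against beta1 still count once beta2
is out.

# sketch only; "results", "platform", "build_id", ... are made-up
# names used for illustration, not the actual Litmus database layout

def finished_per_platform(results, total_cases):
    """Percentage of finished test cases per platform, ignoring build id.

    results: iterable of dicts like
        {"test_case": 17, "platform": "Linux", "build_id": "3.5.0-beta1"}
    total_cases: number of test cases in the test run
    """
    finished = {}  # platform -> set of finished test case ids
    for r in results:
        # r["build_id"] is deliberately NOT part of the key, so the
        # beta1 results are still counted after beta2 is released
        finished.setdefault(r["platform"], set()).add(r["test_case"])
    return {plat: 100.0 * len(cases) / total_cases
            for plat, cases in finished.items()}

example = [
    {"test_case": 1, "platform": "Linux", "build_id": "3.5.0-beta1"},
    {"test_case": 2, "platform": "Linux", "build_id": "3.5.0-beta2"},
]
print(finished_per_platform(example, total_cases=4))  # {'Linux': 50.0}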



2. problem with locale:
=======================

Imagine the following scenario:

	1. one person does a test run in the "de" locale; the result is:
		+ 25% P1 functional (lang-independent) tests finished
		+ 25% P1 l10n (lang-dependent) tests finished
	2. another person starts a test run in the "fr" locale

Result:

	+ "fr" person see:
		+ 0% P1 functional tests finsihed
		+ 0% P1 l10n tests finished
           => does all tests again

Expected Result:

	+ the "fr" person sees:
		+ 25% P1 functional tests finished
		+ 0% P1 l10n tests finished
	   => continues with the remaining functional tests and repeats
	      the l10n tests

In other words, the functional tests are duplicated inside one test run
for each locale, and the l10n tests are currently even duplicated twice
(once by the groups, once by the locale in the test run).


Possible solutions by Rimas and me:
-----------------------------------

1. Have two separate test runs (branches) for functional tests and l10n
   tests. Ask people to always use the "en" locale for the functional
   tests group.

   Advantages:

	+ easy to implement
	+ close to what we have now

   Disadvantages:

	+ "locale" setting might be used to select localization of the
           test case text => people would be forced to see functional
           tests in English locale
	+ many QA people do not know English; they might be discouraged
          to do the biggest group of functional tests
	+ non-intuitive solution; people need to follow an ugly rule
          defined somewhere


2. Remove the locale setting in the "run tests" dialog and ignore it,
   as we suggest doing for the "build id#". Note that the l10n tests
   are duplicated in the subgroups.

   Advantages:

	+ easy to implement
	+ close to what we have now
	+ the l10n tests are localized without hacking the Litmus
	  server code
	+ allows creating extra l10n test cases for a particular
	  language (is this an advantage, or does it create a mess?)

   Disadvantages:

	+ it might be hard to maintain the l10n tests because you need
	  to monitor changes in the "en" group
	+ people will also see l10n test cases for other languages
	+ it will be hard to see how many l10n test cases were finished
	  in the various localizations; you would need to enter the
	  "run tests" dialog with different settings
	+ still not a fully intuitive solution; people get confused when
	  they see test cases for other localizations


3. Do some more changes in Litmus (suggested by Rimas):

   a) add an extra checkbox to the test case edit dialog (or
      somewhere); it will mark the test case as language-specific or
      language-independent
   b) count the statistics of finished test cases according to the
      checkbox; "locale" will be ignored for language-independent
      tests (a small sketch of this counting follows below)
   c) allow transparently localizing test cases => you will see
      different text in different locales (can be done later)
   d) show statistics of finished l10n tests per locale on a single
      page (can be done later)

   Advantages:

	+ clear solution
	+ it is in line with where we want to go, see
	  http://wiki.documentfoundation.org/Litmus_TODO
	+ it will help to keep the l10n tests in sync

   Disadvantages:

	+ needs hacking in Litmus (a developer and time)
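
To make 3b a bit more concrete, here is a minimal sketch of the
counting I have in mind (again with invented names; the real
implementation would of course live in the Litmus code and database):
language-independent results are shared across all locales, while l10n
results are counted per locale.

def completion_per_locale(results, test_cases, locales):
    """For each locale, the percentage of test cases counted as finished.

    test_cases: dict id -> {"language_independent": bool}
    results:    list of dicts like {"test_case": 3, "locale": "de"}
    locales:    locales we want statistics for, e.g. ["de", "fr"]
    """
    shared = set()                            # finished lang-independent cases
    per_locale = {l: set() for l in locales}  # finished l10n cases per locale
    for r in results:
        if test_cases[r["test_case"]]["language_independent"]:
            shared.add(r["test_case"])        # "locale" ignored, as in 3b
        else:
            per_locale[r["locale"]].add(r["test_case"])
    total = len(test_cases)
    return {l: 100.0 * len(shared | per_locale[l]) / total
            for l in locales}

cases = {1: {"language_independent": True},
         2: {"language_independent": True},
         3: {"language_independent": False},
         4: {"language_independent": False}}
done = [{"test_case": 1, "locale": "de"},
        {"test_case": 3, "locale": "de"}]
print(completion_per_locale(done, cases, ["de", "fr"]))
# {'de': 50.0, 'fr': 25.0} -- the "fr" tester inherits the functional
# result from "de" but still has to repeat the l10n test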



My opinion:
-----------

I like the 3rd proposal (created by Rimas) very much. I think that it
is worth spending some time on hacking Litmus. We will profit from it
a lot in the future.

Rimas, what do you think about it?
Would you have time and appetite to look into it?

Best Regards,
Petr


