[Libreoffice-qa] test cases quality; was: Ubuntu/Canonical doing more manual testing for LibreOffice?

Petr Mladek pmladek at suse.cz
Fri Mar 2 10:21:25 PST 2012


On Fri, 02 Mar 2012 at 18:02 +0100, Sophie Gautier wrote:
> Petr,
> 
> First, I don't take anything personal in your mail. I disagree with you 
> but it's nothing personal :)

I hope that we can learn from each other :-)

> On 02/03/2012 17:26, Petr Mladek wrote:
> > I appreciate that you want to teach people using Litmus. Though, I am
> > afraid that you did not get my point.
> 
> I don't want to teach them how to use Litmus; I want them to get 
> interested, have fun, and not feel harassed by the task.

Sure, this is my goal as well. The translation checks just looked
boring at first glance.


> We are testing functionality and at the same time checking for 
> basic i18n handling (numbers, accented characters, dates, string 
> lengths...)

Ok, here is one example of the current tests:

--- cut ---
Name: Translation check of creating a new database

Steps to Perform:

   * Open a new database file (New → Database) and check [Create a
     database], then click on the Next button.

      * Check [Yes, I want the wizard to register the database] and
        [Open the database for editing], then click on Finish.
      * Enter a name for the database (using special characters in your
        language) in the dialog box and click OK.

Expected Results:

      * the database wizard opens; all strings in the dialog box and
        window are correctly localized in your language.
--- cut ---

Ok, it checks translation and functionality.

Do we really need to check the functionality in all 100 localizations?
IMHO, if the database opens in English, it opens in all localizations.
We do not need to force 100 people to spend time on this functional
test.

Do we need to check translation even when the strings were not changed
between the releases?

=> I strongly suggest separating translation and functional checks. It
is very inefficient to test them together.

Thanks to Rimas, we can mark test cases as language-dependent or
language-independent, so we have great support for this separation.
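
To illustrate the point about unchanged strings, here is a rough sketch
of how the new or changed strings between two releases could be listed,
assuming the UI strings are available as gettext POT/PO files and using
the third-party polib Python library (the script and its names are only
an example, not an existing tool):

--- cut ---
#!/usr/bin/env python3
# Rough sketch: list source strings that are new or changed between
# two releases, so translation checks can focus on them alone.
# Assumes gettext POT/PO files and the third-party polib library;
# illustrative example only, not an existing LibreOffice tool.
import sys
import polib

def msgids(path):
    # Collect (msgctxt, msgid) pairs; identical strings in different
    # contexts count as distinct.
    return {(e.msgctxt, e.msgid) for e in polib.pofile(path)
            if not e.obsolete}

def main(old_pot, new_pot):
    changed = msgids(new_pot) - msgids(old_pot)
    for ctxt, msgid in sorted(changed, key=str):
        print("%s: %s" % (ctxt or "-", msgid))
    print("%d new/changed strings" % len(changed), file=sys.stderr)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
--- cut ---

Running it against the POT files of two releases would tell each team
which strings actually need a fresh look.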


> Litmus should be an entry point into QA for the community at large, 
> i.e. no language barrier, no technical barrier, and a team behind it 
> to guide you further into more complex testing. Unfortunately, it's 
> not a tool adapted to our needs.

I agree with you. I am just saying that many of the current test cases
sound odd as they are and might point people in the wrong direction.


> As said, I'm not speaking about translation. The contents of the test 
> may confuse you when it speaks about localization, but it's only a 
> secondary purpose of the test, a "*while you are here*, please check 
> that the dialog shows the correct special characters in your language"

Yes, it is confusing because they mix the translation and functional
tests. All I want to say is that this is not efficient and we should
not go this way.


> No, it's not enough, because most of the time the team doing the 
> translation is only one person, so you can't remember where and when 
> the translation is longer than the original, and for some languages 
> it's always true.

We could use some scripting here. Andras is interested in the
translation stuff. I wonder if he has time and could help here.
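
For example, a simple check for translations that are much longer than
the original could look like the rough sketch below (again assuming
gettext .po files and the polib library; the 1.5 ratio is an arbitrary
threshold, and some languages legitimately expand more than that):

--- cut ---
#!/usr/bin/env python3
# Rough sketch: flag translations that are much longer than the
# English original and therefore might not fit into the dialog.
# Assumes a gettext .po file and the third-party polib library;
# the 1.5 ratio is an arbitrary, illustrative threshold.
import sys
import polib

RATIO = 1.5

def main(po_path):
    for entry in polib.pofile(po_path):
        if entry.obsolete or not entry.msgstr:
            continue
        if len(entry.msgstr) > RATIO * len(entry.msgid):
            print("line %d: %r -> %r"
                  % (entry.linenum, entry.msgid, entry.msgstr))

if __name__ == "__main__":
    main(sys.argv[1])
--- cut ---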


> > You might say that we should check the quality of the translation,
> > i.e. whether the translation makes sense in the context of the given
> > dialog. Well, this is not mentioned in the current test case. Also,
> > I am not sure if it is worth the effort. We do not change all strings
> > in every release, so we do not need to check all translations.
> 
> When you see the amount of strings for the number of people doing 
> translation, having a proofreading of the dialogs during QA is not a 
> luxury ;) But I agree, as said, it's not the first aim of the tests

Sure. On the other hand, checking 1000 dialogs when only 20 of them
have changed is a waste of that scarce time as well.


> >> We had that in the past with the VCLTestTool.
> >
> > Hmm, how did VCLTestTool help here? Did it check that a string was
> > localized? Did it check whether a translation was truncated or confusing?
> 
> It took a snapshot of each dialog, menu, submenu, etc. When you wanted 
> to reach a certain level of quality for your version, it was very useful 
> because you were sure that everything was checked. I don't say that you 
> should run it on each version, but I did it on each major OOo version.

I am not sure if we are speaking about the same tool. I am speaking
about the testtool that used the .bas scripts from the testautomation
module in the sources.

What do you mean by snapshot?

IMHO, it went through many dialogs and did many actions. AFAIK, it did
not check translations. It did not check that a dialog was shown
correctly. It only checked that it worked as expected.

Unfortunately, it was a pain to maintain and a pain to analyze the
results. Have you found any real bug with this tool?

IMHO, it gave you a false feeling that it checked something. It ran for
several days and printed many errors. I always spent a few days
analyzing them. Most of them were random errors caused by asynchronous
operations or by bugs in the test tool itself (the testtool was not
updated for new functionality). I found only very few real bugs and
spent many days/weeks on it.


> Because each team has to adapt the test to its language, the English 
> base doesn't mention every language-specific detail.

Yes, but only a small part of the functionality is language-dependent.


> Yes, you're right. But keep in mind that to teach people in their spare 
> time, they need to enjoy it. It needs to be step-by-step learning, 
> growing interest as well as knowledge at the same time. And don't forget 
> the fun too. There should be very simple test cases and more complex 
> ones, simple sample documents and much more complex ones, defined 
> test periods to create a dynamic in the group, with visible results 
> and visible recognition, etc...

Yup. I like the test case
"Translation check when creating a table in a database". It makes perfect
sense as a functional test. I just do not understand why it focuses so
much on translation.

> > IMHO, it would be easier to start with functional tests rather than
> > entering hundreds of the "same" translation tests.
> 
> yes, of course :) but I think you get what I meant. You may see 
> localization of a test as repeating the test, when it's really offering 
> somebody a way to come in with no other burden than the joy of 
> participating in their language and with their basic skills. Accepting 
> some wasted time on less effective tests, but allowing more people to 
> participate, is what is behind that tool.

I think that we are on the same page. After all, the new test cases are
not that bad. My only problem is that they mix functional and
translation checks.

Have a nice weekend,
Petr


