[Libreoffice-qa] Regressions in Open Source projects ...

Bjoern Michaelsen bjoern.michaelsen at canonical.com
Thu Mar 22 08:29:15 PDT 2012


Hi Petr, all,

On Thu, Mar 22, 2012 at 02:44:38PM +0100, Petr Mladek wrote:
> I agree with Marcus that often it is not easy to say what functionality
> is affected. Various changes might have many side effects.

Still, developers are the ones with the best guess there.

> I am a bit scared by adding another channel and layer. Developers
> already have to provide commit message, update bugzilla, ask for review
> on the mailing list. 

You will find from the rest of the mail that I don't want to force devs to
write a nicely formatted, complete testcase, but simply a one-line hint at what
to test along with the review request to the mailing list. The example was
(again inspired by Rainer's picks) "please test copy-paste in calc". I
explicitly do _not_ want another channel, but to have the QA-readable stuff in
the review requests (an existing channel).

> It might be enough to write good commit messages and do not forget to
> mention the related bug numbers. I think that we are quite good with the
> bug numbers but we could do more user friendly commit messages. They are
> sometimes too technical, so normal user or QA does not have any idea
> what functionality is affected.

I don't think commit messages are the best place to start putting this. Devs
are usually reluctant to make guesses -- esp. if they are archived for eternity,
as commit messages are. So even if we had a commit message template
containing a "might also affect:" line, the results would be thin, I guess. It
might be worth a try though.

> Developers are already pretty overloaded. I doubt that they have time to
> write detailed testcases in Litmus.

I did explicitly state that they are not required to provide "detailed
testcases", but to draft oneliners for QA to pick up.

> It does not make sense to write one line in Litmus when it is already
> mentioned in the commit log.

The oneliner is not intended to be written in Litmus, but in the review request
(or as a reply to a QA guy asking the dev "you seem to have committed 100
changes this major release, could you kindly give me an overview of what those
are about?"). A commit message might do so too, but see my reservations above.
(That being said: nobody would mind developers doing stuff in Litmus themselves.)

> I suggest that QA volunteers follow commit messages (would be my week
> summary still useful here?) and check affected areas. They might ask the
> developer when in doubts. It will teach developers to write better
> commit messages, ...

Weekly summary: I guess less so. Now that we have core, the web interface is
better than those summaries. Also: QA guys browsing the git log would allow
having someone watching sw, someone watching sc, etc., enabling divide-and-conquer
approaches (*).
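As a rough sketch of that divide-and-conquer idea: "git log" can be limited to
a path, so one volunteer watches sw/ (Writer) and another sc/ (Calc). To stay
self-contained this example builds a toy repo with those directory names; in
practice you would run the same log commands inside a checkout of the real
core repository.

```shell
set -e
# Build a throwaway repo mimicking the core layout (sw/ and sc/ modules).
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=qa -c user.email=qa@example.org \
    commit -q --allow-empty -m "initial"
mkdir -p "$repo/sw" "$repo/sc"
echo fix > "$repo/sw/file"
git -C "$repo" add sw/file
git -C "$repo" -c user.name=qa -c user.email=qa@example.org \
    commit -q -m "sw: fix copy-paste regression"
echo fix > "$repo/sc/file"
git -C "$repo" add sc/file
git -C "$repo" -c user.name=qa -c user.email=qa@example.org \
    commit -q -m "sc: fix cell formatting"
# Each watcher sees only the commits touching "their" module:
git -C "$repo" log --oneline -- sw/
git -C "$repo" log --oneline -- sc/
```

In a real checkout one would add something like --since="1 week ago" to keep
the list short between review sessions.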
As for better commit messages: once a commit is done, it can't be changed. On a
review request, one can ask back and append (reply) the missing info. If that
is done consistently, we might end up with the good stuff in the commit message
already (devs are a lazy, but clever bunch).

> The same is valid for features. We already have release notes in the
> wiki page. QA guys could watch it and play with new features. They might
> just document in Litmus what they did.

I would think this is doing it the wrong way around: Devs->Release Notes->QA is
just tricky and error-prone. The release notes are out of sync with the commits,
so non-devs will always be unsure about when something is "in" a build and
from which point on the new feature is considered ready for testing. Thus a lot
of frustrating and wasteful exchanges coming our way -- QA: "That is buggy."
Dev: "Yeah, I know, it's not done yet."
IMHO, what really should happen is Devs->QA->Release Notes: devs explain
to QA what they are doing (not in release notes, but along with the commits) and
ideally QA (maybe together with the dev) then writes the release note entries.
This ensures:
- that QA understands what is going on
- that QA learns what is ready when
- that the release notes are actually readable by mere mortals early

> Also we must not set wrong expectations. If we force developers to
> provide a lot of information for QA, they would think that their changes
> are heavily tested and become less careful about their changes.

I would think the effect is the opposite. It forces devs to think about (and
maybe try out) how and whether their change can be tested -- this alone might
raise the test coverage. ;)

> It would be great to ask for testing if we have many volunteers and
> people asking what need testing but I do not see such people.

I think this is a chicken-and-egg problem: we have so few people there because
there is no clear proposed offering of what can be done. I firmly believe this
is a "build it and they will come" field-of-dreams situation.

> > > IMHO the way to go is automated testing and adding a test case for
> > > fixed bugs if possible. I fear that most of the time not even devs
> > > know all affected features so how should a normal user or qa person
> > > know what to test.
> 
> I agree here.

There is strength in numbers. If the affected area is declared even roughly, the
more people we have testing, the more stuff we will find. I think the logic
trap on the developer side here is the feeling that there is an expectation to
be "exact", or that developers hate everything which is imprecise. The thought:

  "If I had thought of that, I wouldn't have introduced a regression
   there."

is both proud and wrong. Broad coverage is exactly what we need here --
we can be rather sure that devs cover the exact (obvious) cases themselves. We
need to get to the non-obvious cases -- something devs tend to ignore because it
disrupts the awesome feeling of total control. ;)

> Yes, some things can't be tested easily automatically but automatic
> testing is really important. Once you have a test, you could run it
> quickly for each release and you are sure about the state.

Just have a look at regressions and how many would be covered by generic
testcases. I bet at least half of them have some twist to them that dodges
automated tests (at least the nifty cppunit tests covering well-defined areas).
But that is not a problem at all if we have enough people doing manual test
coverage.

> Manual testing is time consuming, need a lot of volunteers and nobody is
> sure that they will test everything.

So let's get volunteers. As for the test coverage: that is, by a few orders of
magnitude, also true for automated testing.

> I am not sure how were the problematic changes communicated in the
> release notes. I agree that it is the right place to explain such
> things in advance.

The move of the user profile is one example. While sane, if it had not been in
the release notes, I bet there would have been forks and torches over that
one.

> I think that many people are still focusing on 3.5. I know that it is
> not ideal but given the resources, number of 3.5 bugs, ...

I think that is natural and okay between .0 and .1, but we should move on now. I
wonder if it makes sense to pick one snapshot build of master now (halfway
towards the 3.6 branch-off) and subject it to some in-depth testing (rather than
shallow testing of every daily build).

Best,

Bjoern

(*) Actually, even though that is possible with the web interface, my secret
plan is to get QA people to have a local git repo too -- that has many
advantages besides the obvious one that stuff works a lot faster locally.
