[Libreoffice-qa] 3.5.0 QA ... from BHS 1 to BHS 2
nn.libo at kflog.org
Thu Jan 12 11:41:23 PST 2012
please keep in mind that I'm by no means a QA expert, but sometimes I'm good
at expressing the thoughts and fears of ordinary people ;-)
On Tuesday, January 10, 2012 09:58:48 AM Michael Meeks wrote:
> On Mon, 2012-01-09 at 16:20 +0100, Nino Novak wrote:
> So - I'd love to understand this desire for less frequent releases
> better :-) After all, we have tinderboxes churning out at least daily
> releases (in theory), perhaps several a day if we are lucky.
I think people simply need enough time, as the daily spare-time window can be
small: imagine 1-2 hours, two or three times a week, and often much less. So in
good times they can install one release per week and test it for one or two of
those 1-2 hour periods in the same week. That's it.
As for the frequency: I for my part prefer to have a most-recent build for
testing, so no - the release frequency should IMHO *not* decrease. But somehow
I'd also like to have the feeling of "having enough time to test in depth".
Here a clearer prioritization might be helpful.
I don't know if it's important, but I just wanted to mention that I very
rarely take the time to test a release according to a fixed testing plan
(Litmus etc.), but most often just try to do my usual office work on copies of
my original documents in a sandbox (and if nothing suspicious happens, after
3-4 weeks those sandbox copies become masters again and replace the
original documents). My impression is that many people do this "en
passant" testing and thereby discover problems or bugs.
> What is the concern about having new RC's ? is it that you think
> developers will not care about and/or test any bugs that appear in
> something one release-candidate old ? [ that seems unlikely if it is a
> serious bug ], or ? ...
For /serious/ bugs, well, ok, but what if they are not-so-serious? Where's the
And, to raise a different issue: people might well feel overwhelmed by the
release frequency - lost in release fusillades, so to speak. I personally have
decided to concentrate on testing the most recent code line whenever possible.
But many people still do not understand the release plan, and in addition do
not know how they can make sure that their test install will not
interfere with their production version. The QA FAQ does not address this
issue; you have to search for information in the wiki...
So in summary, it may be a little bit the Mohammed-and-the-mountain problem. Cor's
activities are a good starting point and much appreciated :-)
In the end, we have the common goal of making the software work as smoothly as possible.
> > Fourth, which is more an open question, how the success of Release QA
> > could be monitored intelligently. My (naive) wish would be to have
> > usage numbers, let's say
> > - how often a Release has been launched on which OS platform without
> > failure
> We have some download statistics of those that can be extracted (I
> suspect), and we have the on-line update statistics too which may give
> some yard-stick for successful launch ;-) usually the app has to stay
> alive for a little while to do that request.
(I'd appreciate if something like that could be implemented, but the effort
should be kept low)
> > - how often which module has been started
> > - how many documents have been created/edited/viewed successfully
> > - which particular functions have been called how often successfully
> These other phone-home things are more tricky, needing coding support,
> but it's of course a good idea to ensure good code coverage. Ideally -
> I'd like to reduce the burden on human QA though, so we're investing and
> encouraging (where we can) fast automated tests that run during the
> compile: so you should never get a build that has pathological failures
> [ assuming our tests are complete enough ;-]. Hopefully that makes the
> process of QA more difficult & rewarding ;-) but of course there is
> always room for lots of improvement, and some things are hard to test.
Everything written above relates only to manual tests, not to machine tests.
We should keep these two approaches well separated in the discussion, as
each has different needs. For automated tests, you need skilled
people. Manual testing can be done by Joe Average, at least in theory.
> One thing that is really nasty to test is the new
> header/footer/page-break stuff. I get intermittent leakage of
> page-breaks in documents (with several rendered on the screen); -but-
> while (after editing a document) I can reproduce them nicely, if I save
> & re-load in another instance - I cannot ;-) so - there is a real need
> for some "from a clean document" reproduction steps for those issues -
> some of which may be races too ;-) help there much appreciated.
(others have to step in here, as I haven't tested header/footer much yet - except
that I was surprised that deleting a header/footer cannot be undone :-( )