Regressions in Open Source projects ...
michael.meeks at suse.com
Thu Mar 15 04:51:55 PDT 2012
On Wed, 2012-03-14 at 17:32 +0000, Pedro Lino wrote:
> TBH I am quite enthusiastic that the long standing regression Bug
> #36982 (which caused data loss and was reported 10 months ago) was
> finally squashed.
Me too :-) and I'm looking at another few Windows specific bugs that
are of interest. Particularly with the new drmemory tool and Jesus'
windows / debug builds - we should be able to progress here quickly.
It'd be wonderful if we could get these traces for Windows specific bugs.
> But I do agree that killing branches (without solving all regressions)
> means that users are being left behind as the project moves forward...
Clearly that is true of some hypothetical user, for whom some serious
regression blocks them from updating. There is however an easy solution
for them - pay to have their (apparently un-interesting to the
community) bugs fixed: then they can have their regression-free release,
supported indefinitely, and everyone is happy :-)
The goal of having zero regressions is a really important one - no
question about it; but like all good goals it has to be balanced vs. the
other things we want to achieve: change of any kind introduces
regressions - i.e. fixing bugs, moving forward, creating new features.
Unfortunately, in many cases regressions are reported long after that
work is done - so trying to accelerate and widen participation in the QA
cycle is key: thanks, e.g., to Cor for bugging us about Linux builds of master
- where a bug is easiest, quickest, and cheapest to fix when found.
As a straw-man (and I don't think anyone actually suggests this) - a
rule that we never ship until there are zero regressions would not meet
this goal: we typically find regressions only after we ship.
Then of course, there is the comparison with Linux - perhaps the
pre-eminent Free Software project we're trying to emulate. Luckily,
Rafael J. Wysocki (who seems to produce nice graphs like Rainer) has
built extensive and interesting data on this - from the University of
Warsaw in conjunction with SUSE labs, it seems:
This has some really nice graphs and tables in it - I attach one of
them. Of course, while the graphs are normalized to zero - don't assume
that Linux kernel releases ship with zero known regressions - they
don't; often the number is rather high, and we read things like:
"The first observation the data in Table 1 leads to is that the
total number of listed regressions from every major kernel
release is between 116 and 180. Moreover, from about 10 to
about 15 percent of listed regressions remain unresolved for
a long time."
And people in general are not worried about the Linux Kernel being an
unusable failure for having a few regressions in its up-stream
incarnation :-) Though of course, one bug is always one too many and we
try hard to fix them.
Please notice in the graph, too, that not a single kernel actually hit
zero open regressions after release, even long-term maintained ones
[and there is some uncertainty as to whether a fix actually landed in
the kernel the regression was opened against].
So - I'm not overly dejected about our state: regressions are the
unwelcome side-effect of progress, and we need to keep working away at
them, and I don't want anyone to be complacent -but- we're not
spectacularly unusual or in a terrible state IMHO - at least compared to
other large free software projects.
And of course, we all spend a lot of time fixing bugs - the regression
I was chasing all of yesterday was itself caused by a bug fix for
another bug so ... ;-) more unit testing, more fixes, and a tighter QA
cycle should help a lot.
All the best,
michael.meeks at suse.com <><, Pseudo Engineer, itinerant idiot
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 78569 bytes
Desc: not available