Testing/Working on PyUNO?

Kohei Yoshida kohei.yoshida at collabora.com
Wed Feb 19 15:29:36 CET 2014


On Wed, 2014-02-19 at 12:32 +0100, Bjoern Michaelsen wrote:
> Hi,
> 
> On Wed, Feb 19, 2014 at 12:02:35PM +0100, Stephan Bergmann wrote:
> > The idea would be that Kevin (and others) would fill this with PyUNO
> > coding scenarios that cross their mind, discover errors in the PyUNO
> > infrastructure, ideally distill tests for those specific errors out
> > of the more general tests (that could then even go into more
> > specific code modules like pyuno or testtools), and eventually prune
> > test snippets again that are no longer useful.
> 
> That sounds very good to me. I think if the tests:
> 
> - serve as example code for Python users and thus attract more people around it 
> - test PyUNO and the underlying product
> - tests are reasonably self-contained and do not introduce a huge framework on
>   their own (hint: like unoapi did)
> 
> that would be a huge winner. Telling Python users to write C++ tests instead
> is of course nonsense -- and if our options are either getting Python tests or
> none at all, I much prefer the first.

This goes both ways. Telling core developers who diligently maintain C++
tests to maintain Python tests as well, just because someone likes to write
them (but not maintain them), is equally silly.  And you are asking the
wrong question.  It's not about C++ tests vs Python tests; it's about which
tests are appropriate for Python, and which are better written in C++.  This
point is unfortunately not well understood (though Stephan clearly
understands it), and the discussion keeps going in circles as a result.

> The tricky question will be to decide when to run these tests and what to do
> with flaky tests -- aka tests that are flaky because of PyUNO and not the
> underlying code. It would be unhelpful if a huge load of PyUNO failures turn up
> at the application developers -- but that remains to be seen.

I agree, except for the "that remains to be seen" part.  It's been seen,
and it's not helpful. ;-)

> Ultimately, most tests should pass almost all the time. _If_ a test fails, it
> should always be worth investigating.

And when the investigation is hard and time-consuming, it is very
discouraging for the core developers who are unfortunate enough to deal with
the failures.  Note that those who volunteer to write tests will not be the
ones who deal with their failures and maintain them.  The burden will likely
fall on the maintainers.  I for one have put a tremendous amount of thought
into ensuring that the tests live in an appropriate place and are written in
such a way that they are easy to debug when they fail (and they are designed
to fail).
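
To make that concrete, here is a rough sketch (not code from the tree; the
connection URL and the document setup are illustrative assumptions) of the
kind of self-contained, plain-unittest PyUNO test I have in mind: no extra
framework beyond uno and unittest, and an assertion message written so that
a failure explains itself in the log.  It assumes a soffice instance is
already listening on a UNO socket.

    # Illustrative sketch only: a self-contained PyUNO smoke test using plain
    # unittest.  Connection string and document setup are assumptions, not
    # taken from the actual test suite.
    import unittest
    import uno
    from com.sun.star.beans import PropertyValue


    def connect(url="uno:socket,host=localhost,port=2002;urp;"
                    "StarOffice.ComponentContext"):
        """Resolve the component context of an already-running soffice."""
        local_ctx = uno.getComponentContext()
        resolver = local_ctx.ServiceManager.createInstanceWithContext(
            "com.sun.star.bridge.UnoUrlResolver", local_ctx)
        return resolver.resolve(url)


    class CalcSmokeTest(unittest.TestCase):
        def setUp(self):
            ctx = self.ctx = connect()
            desktop = ctx.ServiceManager.createInstanceWithContext(
                "com.sun.star.frame.Desktop", ctx)
            hidden = PropertyValue()
            hidden.Name = "Hidden"
            hidden.Value = True
            # Open a fresh, hidden Calc document so the test owns its state.
            self.doc = desktop.loadComponentFromURL(
                "private:factory/scalc", "_blank", 0, (hidden,))

        def tearDown(self):
            self.doc.close(False)

        def test_cell_value_roundtrip(self):
            sheet = self.doc.Sheets.getByIndex(0)
            cell = sheet.getCellByPosition(0, 0)  # A1
            cell.setValue(42.0)
            # A descriptive message makes the failure self-explanatory.
            self.assertEqual(cell.getValue(), 42.0,
                             "A1 did not keep the value written via setValue()")


    if __name__ == "__main__":
        unittest.main()

When a test like this fails, the failure message plus the Python traceback
is usually enough to start debugging; that is the property worth preserving.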

>  Part of that might be that the failing
> tests gets rewritten in C++. So we would have a big set of Python tests (as
> there are more people available to write them) and whenever one of those fails,
> it migrates to C++. As you never know beforehand which one that will be, this
> will help selecting the important ones.

And this is nonsense.  Rewriting a test is a laborious and unexciting
process: nobody wants to write tests that will be completely rewritten when
they fail, and not many people want to rewrite existing tests.  And who
would be rewriting these tests?  Those who wrote the original Python tests,
or those who maintain the code that triggers the failure?  Since you said it
is nonsense to ask Python test writers to write tests in C++, I assume it
would be the latter.

> Recent developments made more than obvious that no matter how much tests you
> think you have, it will always be too few.

This is true, but it's not really related to this discussion.  Also, the
quality of the tests matters as well, not just the quantity.  Aimlessly
increasing the test count when most of the new tests only cover what the
core tests already cover is not very useful, and only serves to increase the
build time.

Kohei
