<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2013/4/4 David Ostrovsky <span dir="ltr"><<a href="mailto:d.ostrovsky@gmx.de" target="_blank">d.ostrovsky@gmx.de</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
[Moving this discussion to ML to have better visibility]<br>
<br>
with <a href="https://gerrit.libreoffice.org/#/c/3128/" target="_blank">https://gerrit.libreoffice.org/#/c/3128/</a> we have support for unit<br>
tests written in python. (We have even found two bugs with it<br>
already ... and fixed them.)<br>
<br>
I am not going to lay out the huge advantages of dynamically typed languages<br>
in general here, but while python is very impressive, it *is* a truly<br>
read-write language compared to a number of the write-only languages used<br>
in the LO ecosystem.<br></blockquote><div><br></div><div>Do we really want to start a discussion on this level? <br> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Yes, it is probably true that you cannot easily debug these unit tests.<br>
But is debuggability the only argument here? I doubt it. We have a<br>
logging framework, and in the end one can still migrate a python unit test<br>
to C++ (if needed) to debug it.<br>
<br></blockquote></div><br></div><div class="gmail_extra">Yes, debugging is the main argument for the maintainability of our tests. While C++ does not have the nicest syntax, it is at least easy to handle for the person debugging a failing test. And often (simply as a matter of statistics) the person debugging a failing unit test is not the one who wrote it, so the argument "rewrite it in C++ when you need to debug it" is a bit lame.<br>
</div><div class="gmail_extra">IMO even debugging the c++ should be easier as we can see with random people running into test failures that the common advice is to disable the test instead of debugging it. I fear that we see this effect much more in the python tests as more people will follow that path when a test randomly fails (and yes every test will fail randomly at some point on a strange platform). To some degree we have the same problem in c++ but until now we were able to limit this behavior mainly to disabled test cases for BSD.<br>
<br></div><div class="gmail_extra">Also I'm not the biggest fan of the argumentation that it allows more people to write unit tests. I still believe that tests are mainly written after a bug has been fixed which means that the developer knows at least a bit of C++ and with the existing testing infrastructure adding a test case to one of the existing tests is hopefully easy enough. If it is not we should work on making it easier to write the C++ based test. Additionally an example out of Calc/Impress that the argumentation "make writing test easier and magically people will show up writing them" is not true: We have for Calc and Impress existing test frameworks that require no coding from the person writing the test and I had exactly two persons after long advocating at conferences who contributed one test case to Calc.<br>
<br></div><div class="gmail_extra">I'll stop here because I think I made my point but I have a few other arguments. Personally I'm against python based tests especially if they are out-of-process but as long as someone agrees to maintain them (that also means that this person must feel responsible when a test fails in some rare circumstances and nobody else cares) I can live with them.<br>
<br></div><div class="gmail_extra">Regards,<br></div><div class="gmail_extra">Markus<br></div></div>