<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Feb 19, 2014 at 4:51 PM, Bjoern Michaelsen <span dir="ltr"><<a href="mailto:bjoern.michaelsen@canonical.com" target="_blank">bjoern.michaelsen@canonical.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<div class=""><br>
On Wed, Feb 19, 2014 at 09:29:36AM -0500, Kohei Yoshida wrote:<br>
> Telling core developers who diligently maintain C++ tests to maintain Python<br>
> tests just because someone likes to write them (but not maintain them) is<br>
> equally silly.<br>
<br>
</div>Nobody told core developers to do so.<br>
<div class=""><br>
> And you are asking the wrong question. It's not about C++ tests vs Python<br>
> tests; it's about what tests are appropriate for Python, and what tests are<br>
> better to be written in C++.<br>
<br>
</div>No. The question is: If a volunteer shows up and says "I will write Python<br>
tests, but (for whatever reason) no C++ tests.", we will not tell them not to do<br>
that. That core application maintainers would prefer C++ tests is understood --<br>
but it's entirely academic in this scenario.<br></blockquote><div><br></div><div>Well, I won't accept Python tests for some bugs, and that is a valid decision in my opinion. Sure, if you want to accept and maintain Python tests for parts that you maintain -- which means debugging the test when it fails -- you are free to accept them. But I'm not in favor of accepting whatever someone wants to do just for the sake of it; there must be a benefit to it, and I don't see one in, for example, Python tests that exercise Calc core data structures.<br>
<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=""><br>
> I agree, except for the "that remains to be seen" part. It's been seen,<br>
> and it's not helpful. ;-)<br>
<br>
</div>Well, how so? Reports on failures are always helpful. What is needed is that<br>
the bug is reported generally in the direction of those that are interested in<br>
fixing the root cause (root cause in the bridge -> UNO guys, where the bugs<br>
should go first, otherwise app guys). But that is a communication issue and has<br>
little to do with the tests themselves.<br></blockquote><div><br></div><div>No. A failure is only helpful if it is an issue. Look at the java tests that randomly fail because a sleep is too short on your machine or rely on implementation details. A test and a failure is only helpful if it helps to fix and prevent issues. A test that just adds noise is only encouraging people to just disable tests and ignore the results.<br>
<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=""><br>
> And the investigation is hard and time-consuming, it's very discouraging<br>
> for the core developers who are unfortunate enough to deal with<br>
> failures.<br>
<br>
</div>It's still better than a bug without a reproduction scenario. So consider a<br>
failed Python test a mere "bug with a good reproduction scenario" for now.<br>
<div class=""><br>
> And this is nonsense. Re-writing a test is a very laborious and<br>
> unexciting process, and nobody wants to either 1) write tests that will<br>
> be re-written completely when they fail, and 2) not many people want to<br>
> re-write existing tests. And who would be re-writing tests? Those who<br>
> have written the original Python tests, or those who maintain the code<br>
> that triggers failure?<br>
<br>
</div>I would say the UNO bridge guys will have a look at that. It's a good way to<br>
find out if it's really a bridge or a core issue. If we have a few bugs<br>
investigated like that, we will see how much of that is core and how much is a<br>
bridge issue. If 90% of the bugs originate in the UNO bridge, the rewrites<br>
should mainly come from there. If it's the other way around, well, then other<br>
devs should contribute too.<br></blockquote><div><br></div><div>I doubt that. Look at the situation with the Java tests, where I'm the only one who rewrites failing tests in C++. Most people just disable the failing test and move on. Tests are write-once, debug-often, so my preferred way is to spend a bit more time writing a good test and save much more time later when it fails. I won't tell other people what they should do or prefer in their code, but I think in the end the decision is made by the people doing the work in that code.<br>
<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=""><br>
> Since you said that it's non-sense to ask Python<br>
> test writers to write tests in C++, I would assume it's the latter.<br>
<br>
</div>You have to look at the reality of the market: These days, there are far fewer<br>
reasons to be interested in becoming or even starting as a C++ hacker than 10<br>
years ago for a student or newcomer. It's a skill available in much less<br>
abundance. OTOH, if people see how easy it is to 'translate' Python to C++ --<br>
they might get the hang of it. Don't assume everyone has the same interests and<br>
perks as you -- we have people translating source code comments, we have people<br>
doing Coverity fixes, we have people tweaking the build system; there is a lot<br>
of variance in interests. If someone has something to offer, we should take<br>
advantage of that.<br></blockquote><div><br></div><div>I don't agree here. We should not take something just for the sake of taking it. It should make sense and move the project forward. Whether Python tests move the project forward depends on many details. Testing the Python bridge of course requires Python tests, but that does not mean that every test makes sense in Python. <br>
<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=""><br>
> Aimlessly increasing the test count while most of them are already<br>
> covered in the core tests is not very useful, and only serves to<br>
> increase the build time.<br>
<br>
</div>This is what I meant with "question is when to run them". FWIW, I think<br>
they should belong to the subsequentcheck tests (and thus not be run on every<br>
build) -- and they should work out of tree too like the current subsequentchecks:<br>
<br>
<a href="http://skyfromme.wordpress.com/2013/03/19/autopkgtests-for-adults/" target="_blank">http://skyfromme.wordpress.com/2013/03/19/autopkgtests-for-adults/</a><br>
<br>
That is, you should be able to run them against a LibreOffice installation<br>
_without_ having to do a complete build. This is something we can easily do<br>
with Python (and which is much harder in C++) and it will allow:</blockquote><div><br></div><div>I think we agreed when the Python tests were introduced that out-of-process tests are not worth the pain. They are more difficult to debug and produce much higher maintenance costs.</div>
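<div><br></div><div>For context, the kind of out-of-process setup being discussed looks roughly like this -- a minimal PyUNO sketch, assuming a soffice instance is already listening on a socket; the port and connection string are only illustrative:<br>
<pre>
import uno  # PyUNO, shipped with LibreOffice

# Assumes soffice was started with something like:
#   soffice --accept="socket,host=localhost,port=2002;urp;" --norestore
local_ctx = uno.getComponentContext()
resolver = local_ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.bridge.UnoUrlResolver", local_ctx)
remote_ctx = resolver.resolve(
    "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
desktop = remote_ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.frame.Desktop", remote_ctx)
</pre>
</div>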
<br></div><div class="gmail_quote">Basically I think it might make some sense to allow them for API tests when the people who will maintain these tests in the future are willing to work with them but I don't like the idea of forcing everyone to maintain python tests. For example the original patch discussed here tried to test a calc core bug with a python test. That one adds at least two if not three additional layers of complexity to the test compared to a direct implementation in ucalc. If you think python tests are necessary you can of course voluntueer and maintain them. That includes debugging test failures and adapting to core changes.<br>
<br></div><div class="gmail_quote">Regards,<br></div><div class="gmail_quote">Markus<br></div><br></div></div>