

On Wed, Feb 19, 2014 at 4:51 PM, Bjoern Michaelsen <bjoern.michaelsen@canonical.com> wrote:

Hi,

On Wed, Feb 19, 2014 at 09:29:36AM -0500, Kohei Yoshida wrote:
Telling core developers who diligently maintain C++ tests to maintain Python tests just because someone likes to write them (but not maintain them) is equally silly.

Nobody told core developers to do so.

And you are asking the wrong question. It's not about C++ tests vs Python tests; it's about what tests are appropriate for Python, and what tests are better written in C++.

No. The question is: if a volunteer shows up and says "I will write Python tests, but (for whatever reason) no C++ tests.", we will not tell them not to do that. That core application maintainers would prefer C++ tests is understood -- but it's entirely academic in this scenario.


Well, I won't accept tests for some bugs in Python, and that is a valid decision in my opinion. Sure, if you want to accept and maintain Python tests for parts that you maintain, which means debugging the test when it fails, you are free to accept them. I'm not in favor of just accepting whatever someone wants to do; there must be a benefit to it, and I don't see one, for example, in Python tests testing Calc core data structures.



I agree, except for the "that remains to be seen" part.  It's been seen,
and it's not helpful. ;-)

Well, how so? Reports on failures are always helpful. What is needed is that the bug is reported generally in the direction of those who are interested in fixing the root cause (root cause in the bridge -> the UNO guys are where the bugs should go first; otherwise the app guys). But that is a communication issue and has little to do with the tests themselves.


No. A failure is only helpful if it is a real issue. Look at the Java tests that randomly fail because a sleep is too short on your machine, or that rely on implementation details. A test and its failures are only helpful if they help to fix and prevent issues. A test that just adds noise only encourages people to disable tests and ignore the results.
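
To make the flaky-sleep failure mode concrete: below is a minimal, self-contained Python sketch (FakeDocument is hypothetical, a stand-in for an asynchronously loading document) contrasting a fixed sleep with a bounded polling wait; only the second style survives a slow machine:

    import threading
    import time
    import unittest

    class FakeDocument:
        """Hypothetical stand-in for an asynchronously loading document."""
        def __init__(self, load_seconds):
            self._ready = False
            threading.Timer(load_seconds, self._finish).start()

        def _finish(self):
            self._ready = True

        def is_ready(self):
            return self._ready

    class WaitStyleTest(unittest.TestCase):
        def test_fixed_sleep_is_fragile(self):
            doc = FakeDocument(load_seconds=1)
            time.sleep(2)  # passes here, fails whenever the machine is slower
            self.assertTrue(doc.is_ready())

        def test_bounded_poll_is_robust(self):
            doc = FakeDocument(load_seconds=1)
            deadline = time.monotonic() + 30  # generous bound, not a guess
            while not doc.is_ready() and time.monotonic() < deadline:
                time.sleep(0.1)  # poll for the condition instead of guessing
            self.assertTrue(doc.is_ready(), "document not ready within 30s")

    if __name__ == "__main__":
        unittest.main()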



And the investigation is hard and time-consuming; it's very discouraging for the core developers who are unfortunate enough to deal with the failures.

It's still better than a bug without a reproduction scenario. So consider a failed Python test a mere "bug with a good reproduction scenario" for now.

And this is nonsense. Re-writing a test is a very laborious and unexciting process: 1) nobody wants to write tests that will be completely re-written when they fail, and 2) not many people want to re-write existing tests. And who would be re-writing tests? Those who have written the original Python tests, or those who maintain the code that triggers the failure?

I would say the UNO bridge guys will have a look at that. It's a good way to find out if it's really a bridge or a core issue. If we have a few bugs investigated like that, we will see how much of that is core and how much is a bridge issue. If 90% of the bugs originate in the UNO bridge, the rewrites should mainly come from there. If it's the other way around, well, then other devs should contribute too.


I doubt that. Look at the situation with the Java tests, where I'm the only one who rewrites failing tests in C++. Most people just disable the failing test and move on. Tests are written once and debugged often, so spending some more time writing good tests, and in the end saving much more time when a test fails, is my preferred way. I won't tell other people what they should do or prefer in their code, but I think in the end the decision is made by the people doing the work in the code.


Since you said that it's nonsense to ask Python test writers to write tests in C++, I would assume it's the latter.

You have to look at the reality of the market: these days, there are far fewer reasons for a student or newcomer to be interested in becoming, or even starting as, a C++ hacker than 10 years ago. It's a skill available in much less abundance. OTOH, if people see how easy it is to 'translate' Python to C++, they might get the hang of it. Don't assume everyone has the same interests and perks as you -- we have people translating source code comments, we have people doing Coverity fixes, we have people tweaking the build system; there is a lot of variance in the interests. If someone has something to offer, we should take advantage of that.


I don't agree here. We should not take something just to take it; it should make sense and move the project forward. Whether Python tests move the project forward depends on many details. Testing the Python bridge of course requires Python tests, but that does not mean that every test makes sense in Python.


Aimlessly increasing the test count while most of them are already covered by the core tests is not very useful, and only serves to increase the build time.

This is what I meant with "question is when to run them". FWIW, I think they should belong to the subsequentcheck tests (and thus not be run on every build) -- and they should work out of tree too, like the current subsequentchecks:

 http://skyfromme.wordpress.com/2013/03/19/autopkgtests-for-adults/

That is, you should be able to run them against a LibreOffice installation _without_ having to do a complete build. This is something we can easily do with Python (and which is much harder in C++), and it will allow:
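
For illustration, a minimal sketch of such an out-of-tree run, using the standard PyUNO UnoUrlResolver connection (the soffice command line and the port number below are assumptions for the example, not prescribed anywhere in this thread):

    import uno

    # Assumes an installed office was started out of tree, e.g. with:
    #   soffice --headless --accept="socket,host=localhost,port=2002;urp;"
    local_ctx = uno.getComponentContext()
    resolver = local_ctx.ServiceManager.createInstanceWithContext(
        "com.sun.star.bridge.UnoUrlResolver", local_ctx)
    ctx = resolver.resolve(
        "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
    desktop = ctx.ServiceManager.createInstanceWithContext(
        "com.sun.star.frame.Desktop", ctx)

    # Drive the running instance through the UNO API: create a Calc
    # document, set two values, and check a formula result.
    doc = desktop.loadComponentFromURL("private:factory/scalc", "_blank", 0, ())
    sheet = doc.Sheets.getByIndex(0)
    sheet.getCellByPosition(0, 0).setValue(2)
    sheet.getCellByPosition(0, 1).setValue(3)
    sheet.getCellByPosition(0, 2).setFormula("=A1+A2")
    assert sheet.getCellByPosition(0, 2).getValue() == 5.0
    doc.close(False)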


I think we agreed when the Python tests were introduced that out-of-process tests are not worth the pain. They are more difficult to debug and produce much higher maintenance costs.

Basically I think it might make some sense to allow them for API tests, when the people who will maintain these tests in the future are willing to work with them, but I don't like the idea of forcing everyone to maintain Python tests. For example, the original patch discussed here tried to test a Calc core bug with a Python test. That one adds at least two, if not three, additional layers of complexity to the test compared to a direct implementation in ucalc, as the sketch below shows. If you think Python tests are necessary, you can of course volunteer and maintain them. That includes debugging test failures and adapting to core changes.
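
To make the "additional layers" point concrete, here is a hypothetical fragment (reusing a doc proxy obtained as in the PyUNO sketch above): every statement crosses the Python interpreter, the PyUNO bridge, and the UNO API before it reaches the Calc core, whereas the ucalc equivalent is a direct in-process call:

    def check_cell_roundtrip(doc):
        # Each call crosses Python -> PyUNO bridge -> UNO API -> Calc core;
        # in ucalc this is roughly an in-process
        # ScDocument::SetValue()/GetValue() pair, one layer instead of three.
        sheet = doc.Sheets.getByIndex(0)      # UNO container lookup
        cell = sheet.getCellByPosition(0, 0)  # XCellRange access over bridge
        cell.setValue(42.0)                   # value marshalled into the core
        assert cell.getValue() == 42.0        # round trip through all layers

When such a test fails, a developer first has to determine which of those layers misbehaved before the actual Calc bug can even be looked at; in ucalc the debugger lands directly in the core code.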

Regards,
Markus
