On Wed, 2014-02-19 at 16:51 +0100, Bjoern Michaelsen wrote:
> Hi,
>
> On Wed, Feb 19, 2014 at 09:29:36AM -0500, Kohei Yoshida wrote:
> > Telling core developers who diligently maintain C++ tests to maintain
> > Python tests just because someone likes to write them (but not maintain
> > them) is equally silly.
>
> Nobody told core developers to do so.

Yet, it's the core developers who will likely be called in to answer
"hey, my build failed with this Python test, tell me what's wrong!!!"

> > And you are asking the wrong question. It's not about C++ tests vs
> > Python tests; it's about what tests are appropriate for Python, and
> > which tests are better written in C++.
>
> No. The question is: if a volunteer shows up and says "I will write Python
> tests, but (for whatever reason) no C++ tests.", we will not tell them not
> to do that. That core application maintainers would prefer C++ tests is
> understood -- but it's entirely academic in this scenario.

Not academic at all. This is from a practical point of view, in case you
fail to see that.

I keep repeating myself, and will repeat it again.

1) Tests are designed to show the maintainers where to look in case of
failure. The easier that process is, the better. C++ tests provide that;
Python tests don't, *for core functionality bugs*.

2) If a volunteer shows up, we have a duty to tell him/her whether
something is of use, and in case it's not, steer him or her in the right
direction.

But you are saying that if a volunteer shows up and wants to do something
that would not be of much use, or even something we don't want, we have a
duty to be clear on that.

I agree, except for the "that remains to be seen" part. It's been seen,
and it's not helpful. ;-)

> Well, how so? Reports on failures are always helpful. What is needed is
> that the bug is reported generally in the direction of those that are
> interested in fixing the root cause (if the root cause is in the bridge ->
> the UNO guys are where the bugs should go first, otherwise the app guys).

In reality it's not clear-cut, and we really don't have this layer of
expertise anymore. What typically happens is that if the test indicates a
failure in Calc, Calc devs get called in no matter where the problem may
be. And this is not academic; I'm speaking from experience.

> But that is a communication issue and has little to do with the tests
> themselves.

It has a lot to do with how the tests should be written, and with what
those tests should test.

> > And the investigation is hard and time-consuming; it's very discouraging
> > for the core developers who are unfortunate enough to deal with the
> > failures.
>
> It's still better than a bug without a reproduction scenario. So consider
> a failed Python test a mere "bug with a good reproduction scenario" for
> now.

But that's not what we are talking about here. Kevin insists on writing
Python tests to test core functionality that is already covered by the
core tests.

And this is nonsense. Re-writing a test is a laborious and unexciting
process: nobody wants to write tests that will be completely re-written
when they fail, and not many people want to re-write existing tests. And
who would be re-writing the tests? Those who wrote the original Python
tests, or those who maintain the code that triggers the failure?

> I would say the UNO bridge guys will have a look at that.

And who would the UNO bridge guys be? I need to keep those names in mind.

> It's a good way to find out if it's really a bridge or a core issue.

And that would require at least some investigation in the core.

> If we have a few bugs investigated like that, we will see how much of that
> is core and how much is a bridge issue. If 90% of the bugs originate in
> the UNO bridge, the rewrites should mainly come from there. If it's the
> other way around, well, then other devs should contribute too.

Since you said that it's nonsense to ask Python test writers to write
tests in C++, I would assume it's the latter.

> You have to look at the reality of the market: these days, there are far
> fewer reasons for a student or newcomer to be interested in becoming, or
> even starting out as, a C++ hacker than 10 years ago.

And you need to realize the reality that this is a C++ project, not a
Python project. The core is written in C++, not Python. If all you are
interested in is attracting newcomers, start a new office project written
in Python all the way.

> It's a skill available in much less abundance.

Fine. Let's re-write LibreOffice in Python! Or maybe we should try PHP!

> OTOH, if people see how easy it is to 'translate' Python to C++ -- they
> might get the hang of it. Don't assume everyone to have the same interests
> and quirks as you

I somehow doubt that. If that were the case, we wouldn't be having this
back-and-forth discussion. I'm interested in writing tests that are
efficient and to the point, that reproduce the bugs they are designed to
test against, and that are easy to improve when needed.

I can tell you, as someone who has written countless core unit tests, that
it's increasingly difficult to write tests that can reproduce their
respective bugs. I've seen countless cases where the bug happens only in
the UI but not in the test, and I've spent countless hours investigating
why that is and modifying the tests to ensure that they really reproduce
their bugs. Those are good tests.

> -- we have people translating source code comments, we have people doing
> Coverity fixes, we have people tweaking the build system; there is a lot
> of variance in the interests. If someone has something to offer, we should
> take advantage of that.

Sure, except that those whom the tests affect most are the people who work
on the core. So IMO the maintainers' opinions should be taken into account.
And please don't generalize this discussion to muddy the core issue here.
You tend to do that, and I don't appreciate it.

The issue is whether Python tests should be used to test the core code
even though the bug in question is already covered by a core test.
Aimlessly increasing the test count when most of the tests are already
covered by the core tests is not very useful, and only serves to increase
the build time.

> This is what I meant with "the question is when to run them". FWIW, I
> think they should belong to the subsequentcheck tests (and thus not be run
> on every build) -- and they should work out of tree too, like the current
> subsequentchecks:

Ok.

> http://skyfromme.wordpress.com/2013/03/19/autopkgtests-for-adults/
>
> That is, you should be able to run them against a LibreOffice installation
> _without_ having to do a complete build. This is something we can easily
> do with Python (and which is much harder in C++) [a sketch of such a test
> follows the list below] and it will allow:
> - devs that ignore subsequentchecks to continue to do so

FWIW, I don't ignore subsequentchecks. I just don't run them every time I
rebuild sc.

> - CI like the one mentioned in the link
> - people to run the tests on daily builds without building themselves
>   (this hopefully also gets more people to test master in general, esp.
>   on Windows)
> - people to go back in time and bibisect a failing test to its origin
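
As a minimal sketch of what such an out-of-tree test could look like,
connecting to an installed LibreOffice over the UNO bridge (the pipe name
"lotest" and the trivial Calc assertion here are illustrative choices for
this sketch, not anything prescribed in the thread):

    # Assumes an installed LibreOffice started separately as, e.g.:
    #   soffice --headless --norestore \
    #           --accept="pipe,name=lotest;urp;StarOffice.ComponentContext"
    # The pipe name "lotest" is an arbitrary choice for this sketch.
    import uno

    def connect(pipe_name="lotest"):
        # Resolve the remote component context over the UNO bridge.
        local_ctx = uno.getComponentContext()
        resolver = local_ctx.ServiceManager.createInstanceWithContext(
            "com.sun.star.bridge.UnoUrlResolver", local_ctx)
        return resolver.resolve(
            "uno:pipe,name=%s;urp;StarOffice.ComponentContext" % pipe_name)

    def test_new_calc_doc_has_a_sheet():
        ctx = connect()
        desktop = ctx.ServiceManager.createInstanceWithContext(
            "com.sun.star.frame.Desktop", ctx)
        # Load an empty Calc document through the same UNO API a script
        # author would use -- no build tree required.
        doc = desktop.loadComponentFromURL(
            "private:factory/scalc", "_blank", 0, ())
        try:
            assert doc.Sheets.getCount() >= 1
        finally:
            doc.close(False)

    if __name__ == "__main__":
        test_new_calc_doc_has_a_sheet()

Note that such a script typically needs to be run with the Python
interpreter bundled with the installation (or with the UNO paths set up)
so that the uno module is importable.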

> FWIW, I do NOT want these tests to "stop the press" if one fails(*). Like
> the other subsequentchecks, we should have a look at them once in a while
> and run them on the tagged versions (e.g. starting at alpha1 of a major),
> and then try to have the failures triaged around the time of the release
> as false positives or real issues. The hope is that this will give us some
> more early visibility of critical areas -- that's all. The unoapi tests in
> all their ugliness did so too; see for example:
>
> https://bugs.launchpad.net/ubuntu/+source/gcc-4.7/+bug/1017125

> It was tricky enough, but how long do you think it would take to triage a
> bug where a buggy boost implementation corrupts its internal data
> structure, causing a Heisenbug in LibreOffice when it reads back from that
> data structure much later, if there wasn't a reproduction scenario (no
> matter how bad)?

Now you are the one being academic here. Stick to the topic please, which
is whether or not Python tests should be used to test core functionality.
Nobody is talking about boost here.

But to answer that question: if we discover a bug in boost, that should be
fixed and tested in the boost project, not by us.

And let me also add that the bug Kevin wanted to write a test for was in
mdds, and I wrote a test for it in mdds.

Kohei