Hi,
On Wed, Feb 19, 2014 at 09:29:36AM -0500, Kohei Yoshida wrote:
> Telling core developers who diligently maintain C++ tests to maintain Python
> tests just because someone likes to write them (but not maintain them) is
> equally silly.
Nobody told core developers to do so.
> And you are asking the wrong question. It's not about C++ tests vs. Python
> tests; it's about what tests are appropriate for Python, and what tests are
> better written in C++.
No. The question is: if a volunteer shows up and says "I will write Python
tests, but (for whatever reason) no C++ tests.", we will not tell them not to
do that. That core application maintainers would prefer C++ tests is understood --
but it's entirely academic in this scenario.
> I agree, except for the "that remains to be seen" part. It's been seen,
> and it's not helpful. ;-)
Well, how so? Reports on failures are always helpful. What is needed is that
the bug is reported generally in the direction of those who are interested in
fixing the root cause (if the root cause is in the bridge, the bugs should go
to the UNO guys first; otherwise to the app guys). But that is a communication
issue and has little to do with the tests themselves.
> And the investigation is hard and time-consuming; it's very discouraging
> for the core developers who are unfortunate enough to deal with the
> failures.
It's still better than a bug without a reproduction scenario. So consider a
failed Python test a mere "bug with a good reproduction scenario" for now.
> And this is nonsense. Re-writing a test is a laborious and
> unexciting process: 1) nobody wants to write tests that will
> be re-written completely when they fail, and 2) not many people want to
> re-write existing tests. And who would be re-writing the tests? Those who
> have written the original Python tests, or those who maintain the code
> that triggers the failure?
I would say the UNO bridge guys will have a look at that. It's a good way to
find out if it's really a bridge or a core issue. Once we have a few bugs
investigated like that, we will see how much of it is core and how much is a
bridge issue. If 90% of the bugs originate in the UNO bridge, the rewrites
should mainly come from there. If it's the other way around, well, then other
devs should contribute too.
> Since you said that it's nonsense to ask Python
> test writers to write tests in C++, I would assume it's the latter.
You have to look at the reality of the market: these days, there are far fewer
reasons for a student or newcomer to be interested in becoming, or even
starting out as, a C++ hacker than 10 years ago. It's a skill available in
much less abundance. OTOH, if people see how easy it is to 'translate' Python
to C++, they might get the hang of it. Don't assume everyone has the same
interests and strengths as you -- we have people translating source code
comments, people doing Coverity fixes, people tweaking the build system; there
is a lot of variance in interests. If someone has something to offer, we
should take advantage of that.
> Aimlessly increasing the test count when most of them are already
> covered by the core tests is not very useful, and only serves to
> increase the build time.
This is what I meant by "the question is when to run them". FWIW, I think
they should belong to the subsequentcheck tests (and thus not be run on every
build) -- and they should work out of tree too, like the current subsequentchecks:
http://skyfromme.wordpress.com/2013/03/19/autopkgtests-for-adults/
That is, you should be able to run them against a LibreOffice installation
_without_ having to do a complete build. This is something we can easily do
with Python (and which is much harder in C++), and it will allow:
- devs that ignore subsequentchecks to continue to do so
- CI like the one mentioned in the link
- people to run the tests on daily builds without building themselves
  (this hopefully also gets more people to test master in general, esp. on Windows)
- people to go back in time and bibisect a failing test to its origin
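To make that concrete, here is a minimal sketch of what such an out-of-tree test could look like. The SOFFICE_PORT environment variable and the helper function are hypothetical conventions I made up for illustration, not existing infrastructure; only the "uno:socket,..." resolver URL format and the PyUNO UnoUrlResolver bootstrap are standard UNO:

```python
# Sketch of an out-of-tree Python test against an *installed* LibreOffice.
# Assumption (not existing infrastructure): SOFFICE_PORT advertises a
# soffice instance started with e.g.
#   soffice --accept="socket,host=localhost,port=2002;urp;"
import os
import unittest

def uno_connect_url(host="localhost", port=2002):
    """Build the standard UNO resolver URL for a listening soffice."""
    return ("uno:socket,host=%s,port=%d;urp;"
            "StarOffice.ComponentContext" % (host, port))

class OutOfTreeSmokeTest(unittest.TestCase):
    @unittest.skipUnless(os.environ.get("SOFFICE_PORT"),
                         "no running soffice advertised via SOFFICE_PORT")
    def test_against_installation(self):
        # PyUNO ships with the installation, so no build tree is needed.
        import uno
        local_ctx = uno.getComponentContext()
        resolver = local_ctx.ServiceManager.createInstanceWithContext(
            "com.sun.star.bridge.UnoUrlResolver", local_ctx)
        remote_ctx = resolver.resolve(
            uno_connect_url(port=int(os.environ["SOFFICE_PORT"])))
        self.assertIsNotNone(remote_ctx)
```

Run with `python -m unittest` against any daily build; when no soffice is listening the test skips cleanly, which fits the "don't stop the press" role of subsequentchecks.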
FWIW, I do NOT want these tests to "stop the press" if one fails(*). Like the
other subsequentchecks, we should have a look at them once in a while, run
them on the tagged versions (e.g. starting at alpha1 of a major) and then try
to have the failures triaged as false positives or real issues around the time
of the release. The hope is that this will give us some more early visibility
of critical areas -- that's all. The unoapi tests, in all their ugliness, did
so too; see for example:
https://bugs.launchpad.net/ubuntu/+source/gcc-4.7/+bug/1017125
It was tricky enough as it was, but how long do you think it would take to
triage a bug where a buggy boost implementation corrupts its internal data
structure, causing a Heisenbug in LibreOffice when it reads back from that
data structure much later, if there wasn't a reproduction scenario (no matter
how bad)?
Best,
Bjoern
(*) or a tinderbox spamming the world