On 17.05.2016 02:37, Markus Mohrhard wrote:
> The first solution that comes to my mind is to move a few of the tests
> into a test target that is not executed as part of make. We can still
> make sure that gerrit/CI are executing these tests on all platforms but
> would save quite some time executing them on developer machines.
> Developers who have touched code related to a feature that is part of
> such a test are of course encouraged to execute the tests locally. We
> already did something similar at some point with make vs make slowcheck.
> A good set of tests would be all our export tests, as they are by far
> the tests that take the most time. The big disadvantage with this
> approach is that our tests are not executed on that many different
> configurations any more. At least we used to find a few problems in the
> past when tests failed on some strange platforms. However, this might
> have become less of an issue with all the improvements around crash
> testing, fuzzing and static analyzers.
yes, i think that is definitely the way to go. running unit tests
during "make" only makes sense for *actual unit tests*, which is a tiny
fraction of our CppunitTests (by runtime; it's a somewhat larger
fraction of the number of CppunitTest_*.mks).
most of our CppunitTests are actually system-level or integration tests,
with a long list of linked libraries and (worse) needed UNO component
files, or even the whole services.rdb. a real unit test tests one unit,
so should be happy with one component file :)
e.g. take a look at sw/ooxmlexport_setup.mk: it needs chart2, sc,
starmath, dbaccess, ... why do we even waste time pointlessly listing
all these components there? just use services.rdb.
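to illustrate the difference, a rough sketch of the two gbuild styles (the test name and component paths are illustrative, and the exact macro arguments may differ from what a given module uses):

```make
# status quo: a hand-maintained component list for an integration test
# (excerpt; the real lists run to dozens of entries)
$(eval $(call gb_CppunitTest_use_components,sw_ooxmlexport,\
    chart2/source/controller/chartcontroller \
    sc/util/sc \
    starmath/util/sm \
    dbaccess/util/dba \
))

# simpler for a system-level test: just register everything via services.rdb
$(eval $(call gb_CppunitTest_use_rdb,sw_ooxmlexport,services))
```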
so the goal should be that the not-actually-unit-tests should be run by
"make check", not by "make".
the only problem with that is that currently most tinderboxes and
jenkins builders don't run "make check", so we need to change that
before we move the tests, to avoid losing important coverage.
> Another solution, well at least a way to treat the symptoms a bit,
> would be to look at the existing slow tests and figure out why they are
> so slow. I did that for the chart2export test already, which took about
> 2 minutes of CPU time, and discovered that it needlessly imported all
> the files again (fixing that saved 30 seconds) and that we have some
> really inefficient xls/xlsx export code (responsible for another 30
> seconds). I believe that just running VALGRIND=callgrind make and then
> analysing the results would help quite a bit. Again, the import and
> export tests are good targets for these attempts, and this will most
> likely help with the general import and export performance.
that would be great too, although i doubt there's more than a handful of
low-hanging fruit there.
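the callgrind workflow mentioned above boils down to something like this (the test target name is illustrative; callgrind writes its profile data to callgrind.out.<pid> in the working directory):

```sh
# run one slow test under callgrind instead of the whole suite
VALGRIND=callgrind make CppunitTest_chart2_export

# summarise the hottest call paths from the recorded profile
callgrind_annotate callgrind.out.<pid> | head -30
```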
> The last idea is to use more of the XPath assert stuff instead of the
> import->export->import cycle, thereby making the export tests less
> expensive.
well, but then you don't actually test the import side any more (that it
can round-trip what was exported), especially for those cases where the
original file is in a different format than the exported file.
Thread: Re: Some thoughts about our tests and the build time