Hi,
On Mon, Apr 22, 2013 at 09:56:17PM +0200, Michael Stahl wrote:
> the only thing the abstractions do here is get in the way.
I don't quite see how the abstractions play a role in the impact on complexity
here. You could even play this out without the abstractions in gbuild:
instead of using the gb_LinkTarget__use_foo voodoo in RepositoryExternal.mk
(your creation, btw ;) ), you _could_ write a long spaghetti-code list of
ifeq ($(OS),foo)s with hardcoded gb_Library_set_ldflags calls directly in the
Library_bar.mk that links against the external, essentially ridding
yourself of all the abstraction.
I doubt it would change the complexity -- it would just move from
RepositoryExternal.mk to the module.
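If it helps to picture it, here is a sketch of that spaghetti variant (the library name, the flags and the exact gb_Library_set_ldflags invocation are illustrative assumptions, not real gbuild code):

```make
# Hypothetical Library_bar.mk with the per-OS link logic inlined instead
# of going through gb_LinkTarget__use_foo in RepositoryExternal.mk --
# the same complexity, just relocated into the module:
$(eval $(call gb_Library_Library,bar))

ifeq ($(OS),LINUX)
$(eval $(call gb_Library_set_ldflags,bar,-lfoo))
else ifeq ($(OS),WNT)
$(eval $(call gb_Library_set_ldflags,bar,foo.lib))
else ifeq ($(OS),MACOSX)
$(eval $(call gb_Library_set_ldflags,bar,-framework Foo))
endif
```

Multiply that by every module that links the external, and the copypasta scenario below writes itself.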
Here is what happens next:
- someone needs to link against the external lib from a second lib
- since he doesn't want to rework all the boring plumbing, he copies the
spaghetti -- now we have some tasty copypasta
- someone tweaks one of the two link pieces, leading to some hard-to-corner
heisenbug on one platform only
- after some months the root cause is discovered, and the link logic for this
external lib is abstracted ad hoc, after the fact, just far enough to serve
both libs
Now you have all the duplication and all the abstraction, plus the additional fun
that it will be done slightly differently for every such case.
But what we might discuss is the root of all this complexity -- the
number of build scenarios. That is where the source of the pain is. With
two external libs A and B, with A linking against B, and both possibly system
or internal libs, that's already 4 build scenarios. On one platform. And then
there is:
./configure --help|grep \\-system-|wc -l
64
On Windows and OS X, I guess most of those are ignored (read: hardcoded to
without-system-foo), but not on Linux. As the ultimate product on Linux is a
distro build, we should take a hard look at external libs that are widely
available, stable and in use on the platform, and make them a hard requirement
-- essentially hardcoding ourselves to the distro/system package. An obvious
candidate would be zlib. _That_ would actually be a step that might bring the
complexity down measurably. As a side-effect it kills TDF 'universal' builds
(universal for universally problematic, probably).(*)
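Back of the envelope, the scenario count multiplies out like this (just a sketch; 64 is the switch count from the grep above, and the upper bound of course ignores that many switches are coupled or platform-fixed):

```python
# Each external lib can independently be "system" or "internal",
# so n independent toggles give 2**n build scenarios -- per platform.
def scenarios(n_toggles: int) -> int:
    return 2 ** n_toggles

print(scenarios(2))   # libs A and B: 4 scenarios
print(scenarios(64))  # the 64 --with-system-* switches: an absurd upper bound
```

Every system-lib switch we hardcode away halves that bound.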
> that is not true, we need to register libraries so gbuild can detect
> attempted usages of non-existent libraries in partial builds. (or
> actually even non-partial builds as well currently given that many
> externals are implemented using gb_*_use_libraries with no corresponding
> Library).
Ok, while most other build systems do not care about that, it's indeed a nice
feature.
> clearly that problem doesn't happen often in practice. i can't remember
> a recent example.
Let's mark it as grumpy old man grumbling then.
> it appears that this could be combined with PCH to be even faster.
> though it's not immediately obvious to me how to implement it.
> presumably there is a requirement that all files must have the same
> command line parameters.
hah, I should stop giving you ideas!
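But since we are at it: the "same command line" constraint could presumably be handled by bucketing translation units by their exact flags, with one shared PCH per bucket. A sketch -- all file names and flags below are made up, this is not gbuild code:

```python
from collections import defaultdict

# Hypothetical map from translation unit to the exact compiler flags it
# is built with. Only units sharing one command line can share a PCH.
units = {
    "sw/source/core/doc/docnew.cxx": ("-O2", "-DSW_DLLIMPLEMENTATION"),
    "sw/source/core/doc/docfmt.cxx": ("-O2", "-DSW_DLLIMPLEMENTATION"),
    "sc/source/core/data/document.cxx": ("-O2", "-DSC_DLLIMPLEMENTATION"),
}

def pch_groups(units):
    """Group translation units by identical flag tuples."""
    groups = defaultdict(list)
    for tu, flags in units.items():
        groups[flags].append(tu)
    return {flags: sorted(tus) for flags, tus in groups.items()}

for flags, tus in pch_groups(units).items():
    print(flags, tus)  # one shared-PCH candidate per group
```

The two sw files end up in one bucket, the sc file in another.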
Best,
Bjoern
(*) IIRC, e.g. Chromium goes exactly the other way and builds in an
I-don't-care-about-your-system-libs, Windows-y way on Linux too.
also note related: "Separating platform from apps would enhance agility"
from http://www.markshuttleworth.com/archives/1246