On Mon, Apr 4, 2011 at 3:02 PM, Bjoern Michaelsen
<bjoern.michaelsen@canonical.com> wrote:
Hi Norbert,
On Mon, 4 Apr 2011 11:33:40 -0500
Norbert Thiebaud <nthiebaud@gmail.com>
wrote:
In other words, a likely poorly performing sort.
Still: sorting and uniquing fewer than 1000 strings pales compared to
parsing and compiling a huge amount of C++ source code (one file per
string above). It even pales compared to merely preprocessing the files
(which is what ccache does) - even when using gawk hashtables for the
uniquing.
Sure, but I was not comparing the extra cost to the cost of a compile,
but to the intended savings.
On the one hand, every time I run make I pay the cost of these redundant
stat() calls; on the other hand, every time make has to compile something
I have to pay the cost of that extra sort (including the cost of the
fork, loading the program, and parsing it, if awk or another interpreter
is used - and those are just fixed costs, on top of the actual sort).
Note that the root cause of this evil has a lot to do with our
somewhat anarchic include strategy...
maybe we should re-introduce include guards ;->
Norbert