

On Wed, Sep 7, 2011 at 4:39 AM, Bjoern Michaelsen
<bjoern.michaelsen@gmail.com> wrote:


On Wed, 7 Sep 2011 04:02:36 -0500
Norbert Thiebaud <nthiebaud-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
wrote:

really? ok so on my Mac I build with max-job=8 num-cpu=6... what
value should I use to get the same result under the new scheme?

Counter question: Why would you want to get that result?

because countless runs with many different values show that to be the
sweet spot on that box :-)

max-job=8
num-cpu=6 gets you something unpredictable between 6 and 48 jobs. The
only reason it works at all is that modern unix handles
overcommitment quite decently (RAM might be more of an issue, but not
with your machine). I think you would be just as fast with the new
defaults, if not faster.

(today that means I get up to 8 jobs with up to 6 tasks each, except for
tail_build where I got 8 -- note that tail_build tends to run alone)
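
(spelling out the arithmetic behind the "between 6 and 48" above -- the
numbers are just the ones from this thread, nothing the build system
computes itself:)

  # old two-level scheme: max-jobs=8 dmake dirs in parallel,
  # num-cpu=6 tasks inside each dmake
  echo $((1 * 6))   # 6  -- only one dmake dir building at a time
  echo $((8 * 6))   # 48 -- all 8 dirs compiling at once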

I doubt you will ever run the 8 dmake dirs unless you compile binfilter.

all I know is that if I lower that, my elapsed time suffers, and I can
see that the cpu is not fully loaded... (and it's not I/O bound either)


Now, yes, if it works (I never tested it), the load average sounds
more promising than a pure -j n... but then if it works, why even
bother with -j n at all...
It works, but:
a) on distributed builders (icecream, distcc) the load of the host is
  not a good measure
b) having no limit on -j is risky on start as make will spawn jobs like
  mad, because most jobs are io-bound in the beginning

ok.

c) Windows has no sensible load measure

if --with-sens-load is present then use the value specified, or default
to nb_cpu + 1 like you said, but then use -j without a value...
see b)
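
(spelled out, the proposal would look roughly like this -- placeholder
shell, not actual configure.in code, and the variable names are made up;
the bare -j at the end is exactly what point b) objects to:)

  # hypothetical wiring for the suggested --with-sens-load option
  if test -n "$with_sens_load"; then
    LOADLIMIT=$with_sens_load      # value given on the configure line
  else
    LOADLIMIT=`expr $nb_cpu + 1`   # default: nb_cpu + 1, as suggested
  fi
  make -j -l$LOADLIMIT             # load-limited, but no fixed job cap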

Also -l will lessen the pain of overcommitment: with some old dmake dirs
still running, gnu make will 'fill up' the remaining free load to get
optimal cpu usage.
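
(for reference, how the GNU make flags under discussion combine -- the
job and load numbers here are made-up examples, not values the build
would pick:)

  make -j8        # fixed cap of 8 jobs, blind to other load on the box
  make -j -l6     # no job cap; just don't start new jobs while the
                  # load average is at 6 or above
  make -j8 -l6    # cap at 8 jobs AND back off when the load is at 6+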

don't get me wrong, I'm all for -l :-)
my beef is with the temporary complication of the max-jobs/nb-cpu rules...
add -l support, but let's leave nb-cpu/max-jobs alone for now... a few
months and we should have almost everything, if not everything, under
gbuild... then things will get much simpler and saner...


Best,

Bjoern

PS.:
Well, I took a look at the count of objects in our build -- ~60% are in
gbuild already, and of the remaining ones openssl, berkeleydb, icu,
autodoc (which has to die), connectivity and sal are big ones. openssl,
berkeleydb and icu are external ones which we should build with gnu
make parallelization too.

ICU might be a problem... their build system is a home-brew nightmare,
with plenty of the dreaded 'bootstrapping' involved. (mmeek: hints
hints :-) )

And autodoc has to die (Did I say that
already?).


Norbert
