[H-GEN] gnu make command-line target ordering problems?
Anthony Towns
aj at azure.humbug.org.au
Mon Oct 18 02:22:59 EDT 1999
On Mon, Oct 18, 1999 at 02:53:35PM +1000, Ben Carlyle wrote:
> Anthony Towns wrote:
> > > I think the simplest way to state my case is this:
> > > The parallel execution of command-line targets violates the rule
> > > that make commands are processed in order.
> > Similary, the parallel execution of dependent targets violates the
> > rule that dependents are processed in order. The real question is
> > whether this is particularly useful.
> Is your in-order processing rule documented? <curious>
I'm not sure. I couldn't find it in the info pages, but I didn't look
very hard.
Ah. The relevant bit is from _Recursive Make Considered Harmful_,
``The second part of what /make/ does is to perform a /postorder/
traversal of the DAG. That is, the dependencies are visited first. The
actual order of traversal is undefined, but most /make/ implementations
work down the graph from left to right for edges below the same vertex,
and most projects implicitly rely on this behaviour.''
The PMake tutorial also discusses this a little, basically in order to
say ``PMake doesn't do this (unless invoked as "make")''.
FWIW, PMake has the same behaviour as GNU make when multiple targets
are specified on the command line.
> The principle behind parallel builds is that in a Makefile with fully
> expressed dependencies, the behavioural characteristics of sequential
> and parallel make are identical.
Indeed, but this doesn't mean the parallel make should behave as if it
were sequential.
FWIW, I'd be much more inclined towards having `make -j foo bar baz'
make in parallel, and something like `make -j --and-then foo bar baz'
make foo in parallel, then bar in parallel, and so on.
Basically, I'm arguing two things:
* the current behaviour of GNU make is a perfectly reasonable
interpretation of `make these goals in parallel'. The
alternative is too, but at least it can be simulated by `make
foo && make bar', whereas simulating the other requires changes
to the Makefiles.
* your Makefiles would make more sense if they didn't depend on
either behaviour. Whether this is possible/realistic in your
environment's another matter.
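FWIW, the hypothetical `--and-then' above can be approximated today
with an ordinary wrapper goal, since the lines of a single recipe
always run in order even under -j (goal names are illustrative):

```make
# Sketch: approximate `make -j --and-then clean all' with a wrapper
# goal. The second sub-make only starts once the first has finished,
# but each sub-make still builds its own goal in parallel.
rebuild:
	$(MAKE) clean
	$(MAKE) -j all
.PHONY: rebuild
```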
> > Having a goal `build' that only works when you've already done a `clean',
> > is broken.
> In my own defense I ask you to note what happens when you modify a
> makefile. Suddenly your targets are not identical to those that
> correspond to that makefile. Do you have every single target in
> your system with a dependency listed on the makefile?
> Personally I do not. In some cases I prefer clarity in makefiles over
> fully-defined dependencies.
No, I don't either.
The Proper(tm) way of dealing with this would be something more akin to
having a `target.d' file for each target, specifying how it is to be
built, and having each target depend on its own rules.
But then, for a full dependency graph you should also include the system
headers and such, to make sure you rebuild the appropriate things when
you upgrade your system. I don't have an issue with doing 'make clean;
make all' when that happens.
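(The usual sketch of the `.d' trick, assuming gcc's -MMD/-MP flags and
illustrative file names, looks something like:

```make
# Sketch: have the compiler emit a .d dependency file alongside each
# object (assumes gcc/clang -MMD -MP); a stale header then forces
# exactly the right recompiles.
SRCS := foo.c bar.c        # illustrative file names
OBJS := $(SRCS:.c=.o)
CFLAGS += -MMD -MP

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# pull in the generated dependency files if they exist
-include $(OBJS:.o=.d)
```

though it still says nothing about system headers changing underneath
you.)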
(Hmmm. Perhaps a better way of dealing with this would be something
like: ``.SEQUENTIAL : clean'' to ensure that clean is *never* executed
in parallel)
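(Later GNU make releases did grow something in this spirit, though
blunter: with no prerequisites, the special target below serializes
every recipe in the makefile, not just one goal.

```make
# Closest existing knob in GNU make: disable parallelism for this
# entire makefile. Coarser than a per-target ``.SEQUENTIAL : clean''.
.NOTPARALLEL:
```
)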
> > > Parallelism should be left to the defined make dependencies, not
> > > assumed from the commandline. make clean should run parallel
> > > clean targets. make build should run parallel build targets.
> > > make clean build should run parallel clean targets, then parallel
> > > build targets.
> > Nonsense. You should always tell make about *every* dependency in your
> > build environment. You shouldn't leave it up to chance, or implicit
> > ordering based on which one you happen to specify first. If your template
> > libraries depend on every single .c and .h file, you should tell make
> > that, and have it rebuild them itself when it thinks it needs to.
> Nonsense yourself.
:)
> This case is perhaps more complicated than you realise.
>
> Firstly, I am controlling a centralised rule set that is
> included by each makefile. The rule set has to be strict
> enough to keep the makefiles simple, but also versatile
> enough to make everything from C and C++ code to gperf
*blink*. gperf? The perfect hash generator? Neat.
> and yacc and shared libraries and static libraries to
> relocatable objects that are linked at runtime. I'm currently
> looking at around 3000 lines of included makefile. The makefiles
> themselves add up to an additional 30klocs. It's a complicated
> system with complicated dependency structures that can either
> be expressed in every rule at a large software engineering
> and documentation cost, or the make system can be encapsulated
> into "super-targets" with simple interdependencies.
This is your project not mine, so take this with a grain of salt, but
still. Simplifying the dependencies is all very well, but not at the
expense
of accuracy. Changing, say:
	a : x y
	b : y z

to:

	a b : x y z

(or `a b : X; X : x y z', or similar) seems perfectly legitimate
me. But I don't see the relationship between simplifying dependencies
and depending on 'make clean build' to finish cleaning before starting
building.
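Spelled out with illustrative names, the collapsed form trades a few
spurious rebuilds for a much simpler rule set:

```make
# Before:  a : x y   and   b : y z
# After: one conservative prerequisite list covering both targets.
# a or b may rebuild a little more often than strictly needed, but
# never too seldom -- which is the side to err on.
a b : x y z
```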
> Now you'll argue that I should be running these as shell scripts,
> and that could work at the top-most level... in fact that is
> truly what I am doing. [...] At every level life is
> simple, despite the size.
...in which case I don't see what the problem is.
> I call into question your expertise with systems of this size
> and complexity. `Always' is a very strong word in software
> engineering.
You should always adhere to the assumptions your software makes. Make
assumes the Makefile specifies all the dependencies accurately, and that
there's nothing more to building a project than satisfying dependencies.
(And ad hominem attacks are pretty weak. Tsk.)
> > Personally, I think `make clean' isn't an ideal thing anyway --- make's
> > good at building stuff, having it destroy stuff too isn't Right. [...]
> That's not a bad idea. In my CM tool the only simple way to do that
> is to completely delete the work areas, then sync them back out of
> the database. This takes an hour or two for a system of this size,
> so until I have a better solution I'll continue to use make for this
> purpose.
Using CM tools that can't do basic things properly isn't Right either, of
course. Determining the relevance of this to anything remotely real is
left as an exercise to the interested reader.
OTOH, going from "aec; aeb" to "make clean; make build" makes more sense
than "make clean build" --- the former keeps destruction and construction
separate, the latter doesn't.
Cheers,
aj
--
Anthony Towns <aj at humbug.org.au> <http://azure.humbug.org.au/~aj/>
I don't speak for anyone save myself. PGP encrypted mail preferred.
``The thing is: trying to be too generic is EVIL. It's stupid, it
results in slower code, and it results in more bugs.''
-- Linus Torvalds