
27. Frequently Asked Questions about Automake

This chapter covers some questions that often come up on the mailing lists.


27.1 CVS and generated files

Background: Distributed Generated Files

Packages made with Autoconf and Automake ship with some generated files like `configure' or `Makefile.in'. These files were generated on the developer's host and are distributed so that end-users do not have to install the maintainer tools required to rebuild them. Other generated files, such as Lex scanners, Yacc parsers, or Info documentation, are usually distributed on similar grounds.

Automake outputs rules in `Makefile's to rebuild these files. For instance, make will run autoconf to rebuild `configure' whenever `configure.ac' is changed. This makes development safer by ensuring a `configure' is never out-of-date with respect to `configure.ac'.

As generated files shipped in packages are up-to-date, and because tar preserves timestamps, these rebuild rules are not triggered when a user unpacks and builds a package.

Background: CVS and Timestamps

Unless you use CVS keywords (in which case files must be updated at commit time), CVS preserves timestamps during `cvs commit' and `cvs import -d' operations.

When you check out a file using `cvs checkout' its timestamp is set to that of the revision that is being checked out.

However, during cvs update, files will have the date of the update, not the original timestamp of this revision. This is meant to make sure that make notices source files have been updated.

This timestamp shift is troublesome when both sources and generated files are kept under CVS. Because CVS processes files in lexical order, `configure.ac' will appear newer than `configure' after a cvs update that updates both files, even if `configure' was newer than `configure.ac' when it was checked in. Calling make will then trigger a spurious rebuild of `configure'.
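The effect is easy to reproduce with a toy rule. In this sketch the rule merely echoes instead of running autoconf, and the file names just mirror the example above:

```shell
# Reproduce the spurious rebuild caused by timestamp reordering.
# The rule stands in for Automake's real configure-rebuild rule.
dir=$(mktemp -d); cd "$dir"
printf 'configure: configure.ac\n\t@echo rebuilding configure\n\t@touch configure\n' > Makefile
touch -t 202001010000 configure.ac   # source older than its output:
touch -t 202001010100 configure      # nothing to do
make -s
touch -t 202001010200 configure.ac   # what a lexical-order cvs update does
make -s                              # the rule now fires spuriously
```

The second make run prints `rebuilding configure' even though the file's content never changed; only the timestamps moved.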

Living with CVS in Autoconfiscated Projects

There are basically two clans amongst maintainers: those who keep all distributed files under CVS, including generated files, and those who keep generated files out of CVS.

All Files in CVS

Generated Files out of CVS

One way to get CVS and make working peacefully is to never store generated files in CVS, i.e., do not CVS-control files that are `Makefile' targets (also called derived files).

This way developers are not annoyed by changes to generated files. It does not matter if developers all use different versions of their tools (assuming these are compatible, of course). And finally, because timestamps are not lost, changes to source files cannot be missed as in the `Makefile.am'/`Makefile.in' example discussed earlier.

The drawback is that the CVS repository is not an exact copy of what is distributed and that users now need to install various development tools (maybe even specific versions) before they can build a checkout. But, after all, CVS's job is versioning, not distribution.

Allowing developers to use different versions of their tools can also hide bugs during distributed development. Indeed, developers will be using (hence testing) their own generated files, instead of the generated files that will actually be released. The developer who prepares the tarball might be using a version of the tool that produces bogus output (for instance, a non-portable C file), something the other developers could have noticed had they not been using their own versions of this tool.

Third-party Files

Another class of files not discussed here (because they do not cause timestamp issues) are files that are shipped with a package, but maintained elsewhere. For instance, tools like gettextize and autopoint (from Gettext) or libtoolize (from Libtool), will install or update files in your package.

These files, whether they are kept under CVS or not, raise similar concerns about version mismatches between developers' tools. The Gettext manual has a section about this; see section `Integrating with CVS' in the GNU gettext tools manual.


27.2 missing and AM_MAINTAINER_MODE


The missing script is a wrapper around several maintainer tools, designed to warn users if a maintainer tool is required but missing. Typical maintainer tools are autoconf, automake, bison, etc. Because files generated by these tools are shipped with the other sources of a package, these tools shouldn't be required during a user build and they are not checked for in `configure'.

However, if for some reason a rebuild rule is triggered and involves a missing tool, missing will notice it and warn the user. Besides the warning, when a tool is missing, missing will attempt to fix timestamps in a way that allows the build to continue. For instance, missing will touch `configure' if autoconf is not installed. When all distributed files are kept under CVS, this feature of missing allows a user with no maintainer tools to build a package off CVS, bypassing any timestamp inconsistency implied by `cvs update'.

If the required tool is installed, missing will run it and won't attempt to continue after failures. This is correct during development: developers love fixing failures. However, users with wrong versions of maintainer tools may get an error when the rebuild rule is spuriously triggered, halting the build. This failure to let the build continue is one of the arguments of the AM_MAINTAINER_MODE advocates.


AM_MAINTAINER_MODE allows you to choose whether the so-called "rebuild rules" should be enabled or disabled. With AM_MAINTAINER_MODE([enable]), they are enabled by default; otherwise they are disabled by default. In the latter case, if you have AM_MAINTAINER_MODE in `configure.ac' and run `./configure && make', then make will *never* attempt to rebuild `configure', `Makefile.in's, Lex or Yacc outputs, etc. In other words, this disables rebuild rules for files that are usually distributed and that users should normally not have to update.

The user can override the default setting by passing either `--enable-maintainer-mode' or `--disable-maintainer-mode' to configure.

People use AM_MAINTAINER_MODE either because they do not want their users (or themselves) annoyed by timestamp lossage (see section CVS and generated files), or because they simply can't stand the rebuild rules and prefer running maintainer tools explicitly.

AM_MAINTAINER_MODE also allows you to disable some custom build rules conditionally. Some developers use this feature to disable rules that need exotic tools that users may not have available.
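For instance, such a conditional rule could be sketched as follows; the diagram file names are hypothetical, and the rule assumes Graphviz's dot as the "exotic tool". AM_MAINTAINER_MODE defines the MAINTAINER_MODE Automake conditional used in the guard:

```makefile
## configure.ac: rebuild rules disabled by default.
AM_MAINTAINER_MODE

## Makefile.am: regenerate the diagram only in maintainer mode, so
## ordinary users need not have Graphviz installed.  (Hypothetical
## file names; a sketch, not a recommendation.)
if MAINTAINER_MODE
architecture.png: architecture.dot
	dot -Tpng -o $@ architecture.dot
endif
```

Users building with `--disable-maintainer-mode' (the default here) will simply use the distributed `architecture.png'.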

Several years ago François Pinard pointed out several arguments against the AM_MAINTAINER_MODE macro. Most of them relate to insecurity: by removing dependencies you get non-dependable builds, where changes to source files can have no effect on generated files, which can be very confusing when it goes unnoticed. He adds that dependability should not be reserved to maintainers (as `--enable-maintainer-mode' suggests); on the contrary. If a user has to modify a `Makefile.am', then either `Makefile.in' should be updated or a warning should be output (this is what Automake uses missing for); the last thing you want is for nothing to happen and the user not to notice (this is what happens when rebuild rules are disabled by AM_MAINTAINER_MODE).

Jim Meyering, the inventor of the AM_MAINTAINER_MODE macro, was swayed by François's arguments and got rid of AM_MAINTAINER_MODE in all of his packages.

Still, many people continue to use AM_MAINTAINER_MODE, because it helps them work on projects where all files are kept under CVS, and because missing isn't enough if you have the wrong versions of the tools.


27.3 Why doesn't Automake support wildcards?

Developers are lazy. They would often like to use wildcards in `Makefile.am's, so that they would not need to remember to update `Makefile.am's every time they add, delete, or rename a file.

There are several objections to this:

Still, these are philosophical objections, and as such you may disagree, or find enough value in wildcards to dismiss all of them. Before you start writing a patch against Automake to teach it about wildcards, let's see the main technical issue: portability.

Although `$(wildcard ...)' works with GNU make, it is not portable to other make implementations.

The only way Automake could support `$(wildcard ...)' is by expanding it when automake is run. The resulting `Makefile.in's would be portable, since they would list all files and not use `$(wildcard ...)'. However, that means developers would need to remember to run automake each time they add, delete, or rename files.

Compared to editing `Makefile.am', this is a very small gain. Sure, it's easier and faster to type `automake; make' than to type `emacs Makefile.am; make'. But nobody has bothered enough to write a patch to add support for this syntax. Some people use scripts to generate file lists in `Makefile.am' or in separate `Makefile' fragments.
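Such a helper script can be quite small. This sketch (the directory layout and variable name are hypothetical) regenerates a `Makefile' fragment that a `Makefile.am' could then include; the maintainer reruns it, and automake, after adding or removing files:

```shell
# Generate a sources list instead of using $(wildcard ...).
dir=$(mktemp -d); cd "$dir"
mkdir src
touch src/a.c src/b.c src/z.c
{
  printf 'foo_SOURCES ='
  for f in src/*.c; do printf ' %s' "$f"; done
  printf '\n'
} > sources.mk
cat sources.mk    # foo_SOURCES = src/a.c src/b.c src/z.c
```

Because the shell glob is expanded when the script runs, the resulting fragment lists the files explicitly and stays portable to any make.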

Even if you don't care about portability, and are tempted to use `$(wildcard ...)' anyway because you target only GNU Make, you should know there are many places where Automake needs to know exactly which files should be processed. As Automake doesn't know how to expand `$(wildcard ...)', you cannot use it in these places. `$(wildcard ...)' is a black box comparable to AC_SUBSTed variables as far as Automake is concerned.

You can get warnings about `$(wildcard ...)' constructs using the `-Wportability' flag.


27.4 Limitations on File Names

Automake attempts to support all kinds of file names, even those that contain unusual characters or are unusually long. However, some limitations are imposed by the underlying operating system and tools.

Most operating systems prohibit the use of the null byte in file names, and reserve `/' as a directory separator. Also, they require that file names are properly encoded for the user's locale. Automake is subject to these limits.

Portable packages should limit themselves to POSIX file names. These can contain ASCII letters and digits, `_', `.', and `-'. File names consist of components separated by `/'. File name components cannot begin with `-'.

Portable POSIX file names cannot contain components that exceed a 14-byte limit, but nowadays it's normally safe to assume the more-generous XOPEN limit of 255 bytes. POSIX limits file names to 255 bytes (XOPEN allows 1023 bytes), but you may want to limit a source tarball to file names of 99 bytes to avoid interoperability problems with old versions of tar.
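A quick pre-release check for over-long names might look like this sketch; the 101-byte threshold accounts for the `./' prefix that find prepends to each 99-byte name:

```shell
# List files whose names would break old tar's 99-byte limit.
dir=$(mktemp -d); cd "$dir"
long=$(printf 'x%.0s' $(seq 1 120))   # one 120-byte component
mkdir -p src; touch src/ok.c "$long"
find . | awk 'length > 101'           # prints only the over-long name
```

Running a check like this over an unpacked distribution tarball catches the problem before your installers do.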

If you depart from these rules (e.g., by using non-ASCII characters in file names, or by using lengthy file names), your installers may have problems for reasons unrelated to Automake. However, if this does not concern you, you should know about the limitations imposed by Automake itself. These limitations are undesirable, but some of them seem to be inherent to underlying tools like Autoconf, Make, M4, and the shell. They fall into three categories: install directories, build directories, and file names.

The following characters:

newline " # $ ' `

should not appear in the names of install directories. For example, the operand of configure's `--prefix' option should not contain these characters.

Build directories suffer the same limitations as install directories, and in addition should not contain the following characters:

& @ \

For example, the full name of the directory containing the source files should not contain these characters.

Source and installation file names like `main.c' are limited even further: they should conform to the POSIX/XOPEN rules described above. In addition, if you plan to port to non-POSIX environments, you should avoid file names that differ only in case (e.g., `makefile' and `Makefile'). Nowadays it is no longer worth worrying about the 8.3 limits of DOS file systems.
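On a case-sensitive system you can spot such case-only clashes before your installers on case-insensitive file systems do. A minimal sketch:

```shell
# Report file names that collide when case is ignored.
dir=$(mktemp -d); cd "$dir"
touch makefile Makefile README
find . -type f | tr 'A-Z' 'a-z' | sort | uniq -d   # -> ./makefile
```

Any name printed exists in more than one casing and would map to a single file on, say, a default Windows or macOS file system.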


27.5 Files left in build directory after distclean

This is a diagnostic you might encounter while running `make distcheck'.

As explained in Checking the Distribution, `make distcheck' attempts to build and check your package for errors like this one.

`make distcheck' will perform a VPATH build of your package (see section Parallel Build Trees (a.k.a. VPATH Builds)), and then call `make distclean'. Files left in the build directory after `make distclean' has run are listed after this error.

This diagnostic really covers two kinds of errors:

  • files that are forgotten by the cleaning rules, and
  • distributed files that are erroneously rebuilt at make time.

The former left-over files are not distributed, so the fix is to mark them for cleaning (see section What Gets Cleaned); this is obvious and needs no further explanation.

The latter bug is not always easy to understand and fix, so let's proceed with an example. Suppose our package contains a program for which we want to build a man page using help2man. GNU help2man produces simple manual pages from the `--help' and `--version' output of other commands (see (help2man)Top section `Overview' in The Help2man Manual). Because we don't want to force our users to install help2man, we decide to distribute the generated man page using the following setup.

# This Makefile.am is bogus.
bin_PROGRAMS = foo
foo_SOURCES = foo.c
dist_man_MANS = foo.1

foo.1: foo$(EXEEXT)
        help2man --output=foo.1 ./foo$(EXEEXT)

This will effectively distribute the man page. However, `make distcheck' will fail with:

ERROR: files left in build directory after distclean:

Why was `foo.1' rebuilt? Because although distributed, `foo.1' depends on a non-distributed built file: `foo$(EXEEXT)'. `foo$(EXEEXT)' is built by the user, so it will always appear to be newer than the distributed `foo.1'.

`make distcheck' caught an inconsistency in our package. Our intent was to distribute `foo.1' so users do not need to install help2man, however since this rule causes this file to be always rebuilt, users do need help2man. Either we should ensure that `foo.1' is not rebuilt by users, or there is no point in distributing `foo.1'.

More generally, the rule is that distributed files should never depend on non-distributed built files. If you distribute something generated, distribute its sources.

One way to fix the above example, while still distributing `foo.1', is to avoid the dependency on `foo$(EXEEXT)'. For instance, assuming foo --version and foo --help do not change unless `foo.c' or `configure.ac' change, we could write the following `Makefile.am':

bin_PROGRAMS = foo
foo_SOURCES = foo.c
dist_man_MANS = foo.1

foo.1: foo.c $(top_srcdir)/configure.ac
        $(MAKE) $(AM_MAKEFLAGS) foo$(EXEEXT)
        help2man --output=foo.1 ./foo$(EXEEXT)

This way, `foo.1' will not get rebuilt every time `foo$(EXEEXT)' changes. The make call ensures `foo$(EXEEXT)' is up-to-date before help2man runs. Another way to ensure this would be to use separate directories for binaries and man pages, and set SUBDIRS so that binaries are built before man pages.

We could also decide not to distribute `foo.1'. In this case it's fine to have `foo.1' depend upon `foo$(EXEEXT)', since both will have to be rebuilt. However, it would then be impossible to build the package when cross-compiling, because building `foo.1' involves an execution of `foo$(EXEEXT)'.

Another context where such errors are common is when distributed files are built by tools that are built by the package. The pattern is similar:

distributed-file: built-tools distributed-sources

should be changed to

distributed-file: distributed-sources
        $(MAKE) $(AM_MAKEFLAGS) built-tools

or you could choose not to distribute `distributed-file', if cross-compilation does not matter.

The points made through these examples are worth a summary:

  • Distributed files should never depend upon non-distributed built files.
  • Distributed files should be distributed with all their dependencies.
  • If a file is intended to be rebuilt by users, then there is no point in distributing it.

For desperate cases, it's always possible to disable this check by setting distcleancheck_listfiles as documented in Checking the Distribution. Make sure you do understand the reason why `make distcheck' complains before you do this. distcleancheck_listfiles is a way to hide errors, not to fix them. You can always do better.
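As a hedged illustration of that escape hatch, an override in `Makefile.am' might look like the following sketch, which lists only files that do not also exist in the source tree (whether this filtering is appropriate depends entirely on your package):

```makefile
## Sketch: loosen the distcleancheck listing so that files also
## present in $(srcdir) are not reported.  Use with care: this can
## hide genuine cleaning bugs.
distcleancheck_listfiles = \
  find . -type f -exec sh -c 'test -f $(srcdir)/$$1 || echo $$1' \
    sh '{}' ';'
```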


27.6 Flag Variables Ordering

What is the difference between AM_CFLAGS, CFLAGS, and mumble_CFLAGS?

Why does automake output CPPFLAGS after AM_CPPFLAGS on compile lines? Shouldn't it be the converse?

My `configure' adds some warning flags into CXXFLAGS. In one `Makefile.am' I would like to append a new flag; however, if I put the flag into AM_CXXFLAGS it is prepended to the other flags, not appended.

Compile Flag Variables

This section attempts to answer all the above questions. We will mostly discuss CPPFLAGS in our examples, but actually the answer holds for all the compile flags used in Automake: CCASFLAGS, CFLAGS, CPPFLAGS, CXXFLAGS, FCFLAGS, FFLAGS, GCJFLAGS, LDFLAGS, LFLAGS, LIBTOOLFLAGS, OBJCFLAGS, RFLAGS, UPCFLAGS, and YFLAGS.

CPPFLAGS, AM_CPPFLAGS, and mumble_CPPFLAGS are three variables that can be used to pass flags to the C preprocessor (actually these variables are also used for other languages like C++ or preprocessed Fortran). CPPFLAGS is the user variable (see section Variables reserved for the user), AM_CPPFLAGS is the Automake variable, and mumble_CPPFLAGS is the variable specific to the mumble target (we call this a per-target variable, see section Program and Library Variables).

Automake always uses two of these variables when compiling C source files. When compiling an object file for the mumble target, the first variable will be mumble_CPPFLAGS if it is defined, or AM_CPPFLAGS otherwise. The second variable is always CPPFLAGS.

In the following example,

bin_PROGRAMS = foo bar
foo_SOURCES = xyz.c
bar_SOURCES = main.c

`xyz.o' will be compiled with `$(foo_CPPFLAGS) $(CPPFLAGS)', (because `xyz.o' is part of the foo target), while `main.o' will be compiled with `$(AM_CPPFLAGS) $(CPPFLAGS)' (because there is no per-target variable for target bar).

The difference between mumble_CPPFLAGS and AM_CPPFLAGS being clear enough, let's focus on CPPFLAGS. CPPFLAGS is a user variable, i.e., a variable that users are entitled to modify in order to compile the package. This variable, like many others, is documented at the end of the output of `configure --help'.

For instance, someone who needs to add `/home/my/usr/include' to the C compiler's search path would configure a package with

./configure CPPFLAGS='-I /home/my/usr/include'

and this flag would be propagated to the compile rules of all `Makefile's.

It is also not uncommon to override a user variable at make-time. Many installers do this with prefix, but this can be useful with compiler flags too. For instance, if, while debugging a C++ project, you need to disable optimization in one specific object file, you can run something like

rm file.o
make CXXFLAGS=-O0 file.o

The reason `$(CPPFLAGS)' appears after `$(AM_CPPFLAGS)' or `$(mumble_CPPFLAGS)' in the compile command is that users should always have the last say. It probably makes more sense if you think about it while looking at the `CXXFLAGS=-O0' above, which should supersede any other switch from AM_CXXFLAGS or mumble_CXXFLAGS (and this of course replaces the previous value of CXXFLAGS).

You should never redefine a user variable such as CPPFLAGS in `Makefile.am'. Use `automake -Woverride' to diagnose such mistakes. Even something like

CPPFLAGS = -DDATADIR=\"$(datadir)\" @CPPFLAGS@

is erroneous. Although this preserves `configure''s value of CPPFLAGS, the definition of DATADIR will disappear if a user attempts to override CPPFLAGS from the make command line.

AM_CPPFLAGS = -DDATADIR=\"$(datadir)\"

is all that is needed here if no per-target flags are used.

You should not add options to these user variables within `configure' either, for the same reason. Occasionally you need to modify these variables to perform a test, but you should reset their values afterwards. In contrast, it is OK to modify the `AM_' variables within `configure' if you AC_SUBST them, but it is rather rare that you need to do this, unless you really want to change the default definitions of the `AM_' variables in all `Makefile's.

What we recommend is that you define extra flags in separate variables. For instance, you may write an Autoconf macro that computes a set of warning options for the C compiler, and AC_SUBST them in WARNINGCFLAGS; you may also have an Autoconf macro that determines which compiler and which linker flags should be used to link with library `libfoo', and AC_SUBST these in LIBFOOCFLAGS and LIBFOOLDFLAGS. Then, a `Makefile.am' could use these variables as follows:

AM_CFLAGS = $(WARNINGCFLAGS)
bin_PROGRAMS = prog1 prog2
prog1_SOURCES = …
prog2_SOURCES = …
prog2_CFLAGS = $(LIBFOOCFLAGS) $(AM_CFLAGS)
prog2_LDFLAGS = $(LIBFOOLDFLAGS)

In this example both programs will be compiled with the flags substituted into `$(WARNINGCFLAGS)', and prog2 will additionally be compiled with the flags required to link with `libfoo'.

Note that listing AM_CFLAGS in a per-target CFLAGS variable is a common idiom to ensure that AM_CFLAGS applies to every target in a `Makefile.in'.

Using variables like this gives you full control over the ordering of the flags. For instance, if there is a flag in $(WARNINGCFLAGS) that you want to negate for a particular target, you can use something like `prog1_CFLAGS = $(AM_CFLAGS) -no-flag'. If all these flags had been forcefully appended to CFLAGS, there would be no way to disable one flag. Yet another reason to leave user variables to users.

Finally, we have avoided naming the variable of the example LIBFOO_LDFLAGS (with an underscore) because that would cause Automake to think that this is actually a per-target variable (like mumble_LDFLAGS) for some non-declared LIBFOO target.

Other Variables

There are other variables in Automake that follow similar principles to allow user options. For instance, Texinfo rules (see section Texinfo) use MAKEINFOFLAGS and AM_MAKEINFOFLAGS. Similarly, DejaGnu tests (see section DejaGnu Tests) use RUNTESTDEFAULTFLAGS and AM_RUNTESTDEFAULTFLAGS. The tags and ctags rules (see section Interfacing to etags) use ETAGSFLAGS, AM_ETAGSFLAGS, CTAGSFLAGS, and AM_CTAGSFLAGS. Java rules (see section Java) use JAVACFLAGS and AM_JAVACFLAGS. None of these rules support per-target flags (yet).

To some extent, even AM_MAKEFLAGS (see section Recursing subdirectories) obeys this naming scheme. The slight difference is that MAKEFLAGS is passed to sub-makes implicitly by make itself.

However you should not think that all variables ending with FLAGS follow this convention. For instance, DISTCHECK_CONFIGURE_FLAGS (see section Checking the Distribution) and ACLOCAL_AMFLAGS (see Rebuilding Makefiles and Handling Local Macros), are two variables that are only useful to the maintainer and have no user counterpart.

ARFLAGS (see section Building a library) is usually defined by Automake and has neither AM_ nor per-target cousin.

Finally, you should not think that the existence of a per-target variable implies the existence of an AM_ variable or of a user variable. For instance, the mumble_LDADD per-target variable overrides the makefile-wide LDADD variable (which is not a user variable), and mumble_LIBADD exists only as a per-target variable. See section Program and Library Variables.


27.7 Why are object files sometimes renamed?

This happens when per-target compilation flags are used. Object files need to be renamed to avoid clashes with object files compiled from the same sources but with different flags. Consider the following example.

bin_PROGRAMS = true false
true_SOURCES = generic.c
true_CPPFLAGS = -DEXIT_CODE=0
false_SOURCES = generic.c
false_CPPFLAGS = -DEXIT_CODE=1

Obviously the two programs are built from the same source, but it would be bad if they shared the same object, because `generic.o' cannot be built with both `-DEXIT_CODE=0' and `-DEXIT_CODE=1'. Therefore automake outputs rules to build two different objects: `true-generic.o' and `false-generic.o'.

automake doesn't actually check whether source files are shared when deciding whether to rename objects. It renames all the objects of a target as soon as it sees per-target compilation flags in use.

It's OK to share object files when per-target compilation flags are not used. For instance, `true' and `false' will both use `version.o' in the following example.

bin_PROGRAMS = true false
true_SOURCES = true.c version.c
false_SOURCES = false.c version.c

Note that the renaming of objects is also affected by the _SHORTNAME variable (see section Program and Library Variables).
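As a hedged illustration (the program name is invented), _SHORTNAME substitutes a shorter prefix in the renamed objects:

```makefile
## Sketch: shorten the object-name prefix that per-target flags
## would otherwise produce.
bin_PROGRAMS = mumble
mumble_SOURCES = generic.c
mumble_CPPFLAGS = -DMUMBLE
## Objects are named m-generic.o instead of mumble-generic.o.
mumble_SHORTNAME = m
```

This is mainly useful on systems with tight limits on file-name length.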


27.8 Per-Object Flags Emulation

One of my source files needs to be compiled with different flags.  How
do I do that?

Automake supports per-program and per-library compilation flags (see Program and Library Variables and Flag Variables Ordering). With this you can define compilation flags that apply to all files compiled for a target. For instance, in

bin_PROGRAMS = foo
foo_SOURCES = foo.c foo.h bar.c bar.h main.c
foo_CFLAGS = -some -flags

`foo-foo.o', `foo-bar.o', and `foo-main.o' will all be compiled with `-some -flags'. (If you wonder about the names of these object files, see Why are object files sometimes renamed?.) Note that foo_CFLAGS gives the flags to use when compiling all the C sources of the program foo; it has nothing to do with `foo.c' or `foo-foo.o' specifically.

What if `foo.c' needs to be compiled into `foo.o' using some specific flags that none of the other files requires? Obviously, per-program flags are not directly applicable here. What is called for is something like per-object flags, i.e., flags that would be used only when creating `foo-foo.o'. Automake does not support that; however, it is easy to simulate using a library that contains only that object, and compiling this library with per-library flags.

bin_PROGRAMS = foo
foo_SOURCES = bar.c bar.h main.c
foo_CFLAGS = -some -flags
foo_LDADD = libfoo.a
noinst_LIBRARIES = libfoo.a
libfoo_a_SOURCES = foo.c foo.h
libfoo_a_CFLAGS = -some -other -flags

Here `foo-bar.o' and `foo-main.o' will both be compiled with `-some -flags', while `libfoo_a-foo.o' will be compiled using `-some -other -flags'. Eventually, all three objects will be linked to form `foo'.

This trick can also be achieved using Libtool convenience libraries, for instance `noinst_LTLIBRARIES = libfoo.la' (see section Libtool Convenience Libraries).
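A sketch of the Libtool variant of the same layout, mirroring the example above:

```makefile
## Sketch: the per-object trick using a Libtool convenience library.
bin_PROGRAMS = foo
foo_SOURCES = bar.c bar.h main.c
foo_CFLAGS = -some -flags
foo_LDADD = libfoo.la
noinst_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c foo.h
libfoo_la_CFLAGS = -some -other -flags
```

The convenience library is never installed; it exists only so that `foo.c' can be compiled with its own flags before being linked into foo.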

Another tempting way to implement per-object flags is to override the compile rules automake would output for these files. Automake will not define a rule for a target you have defined, so you could think about defining the `foo-foo.o: foo.c' rule yourself. We recommend against this, because it is error-prone. For instance, if you add such a rule to the first example, it will break the day you decide to remove foo_CFLAGS (because `foo.c' will then be compiled as `foo.o' instead of `foo-foo.o'; see section Why are object files sometimes renamed?). Also, in order to support dependency tracking, the two `.o'/`.obj' extensions, and all the other flag variables involved in a compilation, you will end up modifying a copy of the rule previously output by automake for this file. If a new release of Automake generates a different rule, your copy will need to be updated by hand.


27.9 Handling Tools that Produce Many Outputs

This section describes a make idiom that can be used when a tool produces multiple output files. It is not specific to Automake and can be used in ordinary `Makefile's.

Suppose we have a program called foo that will read one file called `data.foo' and produce two files named `data.c' and `data.h'. We want to write a `Makefile' rule that captures this one-to-two dependency.

The naive rule is incorrect:

# This is incorrect.
data.c data.h: data.foo
        foo data.foo

What the above rule really says is that `data.c' and `data.h' each depend on `data.foo', and can each be built by running `foo data.foo'. In other words it is equivalent to:

# We do not want this.
data.c: data.foo
        foo data.foo
data.h: data.foo
        foo data.foo

which means that foo can be run twice. Usually it will not be run twice, because make implementations are smart enough to check for the existence of the second file after the first one has been built; they will therefore detect that it already exists. However, there are a few situations where it can run twice anyway:

  • a parallel make can start both rules concurrently, before either output exists, so foo is run once per rule;
  • if `data.foo' is (or depends upon) a phony target, both rules are always considered out-of-date, so each of them runs foo.

A solution that works with parallel make but not with phony dependencies is the following:

data.c data.h: data.foo
        foo data.foo
data.h: data.c

The above rules are equivalent to

data.c: data.foo
        foo data.foo
data.h: data.foo data.c
        foo data.foo

therefore a parallel make will have to serialize the builds of `data.c' and `data.h', and will detect that the second is no longer needed once the first is over.

Using this pattern is probably enough for most cases. However it does not scale easily to more output files (in this scheme all output files must be totally ordered by the dependency relation), so we will explore a more complicated solution.

Another idea is to write the following:

# There is still a problem with this one.
data.c: data.foo
        foo data.foo
data.h: data.c

The idea is that `foo data.foo' is run only when `data.c' needs to be updated, but we further state that `data.h' depends upon `data.c'. That way, if `data.h' is required and `data.foo' is out of date, the dependency on `data.c' will trigger the build.

This is almost perfect, but suppose we have built `data.h' and `data.c', and then we erase `data.h'. Then, running `make data.h' will not rebuild `data.h'. The above rules just state that `data.c' must be up-to-date with respect to `data.foo', and this is already the case.

What we need is a rule that forces a rebuild when `data.h' is missing. Here it is:

data.c: data.foo
        foo data.foo
data.h: data.c
## Recover from the removal of $@
        @if test -f $@; then :; else \
          rm -f data.c; \
          $(MAKE) $(AM_MAKEFLAGS) data.c; \
        fi

The above scheme can be extended to handle more outputs and more inputs. One of the outputs is selected to serve as a witness to the successful completion of the command, it depends upon all inputs, and all other outputs depend upon it. For instance, if foo should additionally read `data.bar' and also produce `data.w' and `data.x', we would write:

data.c: data.foo data.bar
        foo data.foo data.bar
data.h data.w data.x: data.c
## Recover from the removal of $@
        @if test -f $@; then :; else \
          rm -f data.c; \
          $(MAKE) $(AM_MAKEFLAGS) data.c; \
        fi

However there are now two minor problems in this setup. One is related to the timestamp ordering of `data.h', `data.w', `data.x', and `data.c'. The other one is a race condition if a parallel make attempts to run multiple instances of the recover block at once.

Let us deal with the first problem. foo outputs four files, but we do not know in which order these files are created. Suppose that `data.h' is created before `data.c'. Then we have a weird situation. The next time make is run, `data.h' will appear older than `data.c', the second rule will be triggered, a shell will be started to execute the `if…fi' command, but actually it will just execute the then branch, that is: nothing. In other words, because the witness we selected is not the first file created by foo, make will start a shell to do nothing each time it is run.

A simple riposte is to fix the timestamps when this happens.

data.c: data.foo data.bar
        foo data.foo data.bar
data.h data.w data.x: data.c
        @if test -f $@; then \
          touch $@; \
        else \
## Recover from the removal of $@
          rm -f data.c; \
          $(MAKE) $(AM_MAKEFLAGS) data.c; \
        fi

Another solution is to use a different and dedicated file as witness, rather than using any of foo's outputs.

data.stamp: data.foo data.bar
        @rm -f data.tmp
        @touch data.tmp
        foo data.foo data.bar
        @mv -f data.tmp $@
data.c data.h data.w data.x: data.stamp
## Recover from the removal of $@
        @if test -f $@; then :; else \
          rm -f data.stamp; \
          $(MAKE) $(AM_MAKEFLAGS) data.stamp; \
        fi

`data.tmp' is created before foo is run, so it has a timestamp older than the output files created by foo. It is then renamed to `data.stamp' only after foo has run, because we do not want to update `data.stamp' if foo fails.
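
This commit-on-success idiom can be replayed in plain shell (a toy sketch with scratch files standing in for foo's outputs):

```shell
# The stamp idiom: create the witness before running the tool, and
# commit it (mv) only if the tool succeeds.
rm -f data.tmp data.stamp data.c
touch data.tmp              # temp witness: older than any output
echo 'generated' > data.c   # stand-in for foo writing its outputs
mv -f data.tmp data.stamp   # rename only on success; a failing foo
                            # would leave data.stamp untouched
```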

This solution still suffers from the second problem: the race condition in the recover rule. If, after a successful build, a user erases `data.c' and `data.h', and runs `make -j', then make may start both recover rules in parallel. If the two instances of the rule execute `$(MAKE) $(AM_MAKEFLAGS) data.stamp' concurrently the build is likely to fail (for instance, the two rules will create `data.tmp', but only one can rename it).

Admittedly, such a weird situation does not arise during ordinary builds. It occurs only when the build tree is mutilated. Here `data.c' and `data.h' have been explicitly removed without also removing `data.stamp' and the other output files. make clean; make will always recover from these situations even with parallel makes, so you may decide that the recover rule is solely to help non-parallel make users and leave things as-is. Fixing this requires some locking mechanism to ensure only one instance of the recover rule rebuilds `data.stamp'. One could imagine something along the following lines.

data.c data.h data.w data.x: data.stamp
## Recover from the removal of $@
        @if test -f $@; then :; else \
          trap 'rm -rf data.lock data.stamp' 1 2 13 15; \
## mkdir is a portable test-and-set
          if mkdir data.lock 2>/dev/null; then \
## This code is being executed by the first process.
            rm -f data.stamp; \
            $(MAKE) $(AM_MAKEFLAGS) data.stamp; \
            result=$$?; rm -rf data.lock; exit $$result; \
          else \
## This code is being executed by the follower processes.
## Wait until the first process is done.
            while test -d data.lock; do sleep 1; done; \
## Succeed if and only if the first process succeeded.
            test -f data.stamp; \
          fi; \
        fi
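
The mkdir test-and-set at the heart of this lock can be demonstrated on its own (a sketch using a scratch `data.lock' directory; the waiting loop is only described, not run):

```shell
rm -rf data.lock
# First claimant: mkdir succeeds atomically, so this process rebuilds.
if mkdir data.lock 2>/dev/null; then
  echo 'first: rebuilding'
fi
# A second claimant while the lock is held: mkdir fails, so in the real
# rule this process would sleep until data.lock disappears.
if mkdir data.lock 2>/dev/null; then
  echo 'unexpected'
else
  echo 'follower: waiting'
fi
rmdir data.lock
```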

Using a dedicated witness, like `data.stamp', is very handy when the list of output files is not known beforehand. As an illustration, consider the following rules to compile many `*.el' files into `*.elc' files in a single command. It does not matter how ELFILES is defined (as long as it is not empty: empty targets are not accepted by POSIX).

ELFILES = one.el two.el three.el …
ELCFILES = $(ELFILES:=c)

elc-stamp: $(ELFILES)
        @rm -f elc-temp
        @touch elc-temp
        $(elisp_comp) $(ELFILES)
        @mv -f elc-temp $@

$(ELCFILES): elc-stamp
## Recover from the removal of $@
        @if test -f $@; then :; else \
          trap 'rm -rf elc-lock elc-stamp' 1 2 13 15; \
          if mkdir elc-lock 2>/dev/null; then \
## This code is being executed by the first process.
            rm -f elc-stamp; \
            $(MAKE) $(AM_MAKEFLAGS) elc-stamp; \
            rmdir elc-lock; \
          else \
## This code is being executed by the follower processes.
## Wait until the first process is done.
            while test -d elc-lock; do sleep 1; done; \
## Succeed if and only if the first process succeeded.
            test -f elc-stamp; exit $$?; \
          fi; \
        fi

For completeness it should be noted that GNU make is able to express rules with multiple output files using pattern rules (see (make)Pattern Examples section `Pattern Rule Examples' in The GNU Make Manual). We do not discuss pattern rules here because they are not portable, but they can be convenient in packages that assume GNU make.
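
For instance, the GNU Make Manual's classic example of such a grouped pattern rule looks like this (GNU make only; a pattern rule with several targets declares that one invocation produces them all):

```makefile
# GNU make only: one bison run produces both the parser and its header.
%.tab.c %.tab.h: %.y
	bison -d $<
```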


27.10 Installing to Hard-Coded Locations

My package needs to install some configuration file.  I tried to use
the following rule, but `make distcheck' fails.  Why?

# Do not do this.
install-data-local:
        $(INSTALL_DATA) $(srcdir)/afile $(DESTDIR)/etc/afile

My package needs to populate the installation directory of another
package at install-time.  I can easily compute that installation
directory in `configure', but if I install files therein,
`make distcheck' fails.  How else should I do?

These two setups share their symptoms: `make distcheck' fails because they install files to hard-coded paths. In the latter case the path is not really hard-coded in the package, but we can consider it to be hard-coded in the system (or in whatever tool supplies the path). As long as the path does not use any of the standard directory variables (`$(prefix)', `$(bindir)', `$(datadir)', etc.), the effect will be the same: user installations are impossible.

When a (non-root) user wants to install a package, he usually has no right to install anything in `/usr' or `/usr/local'. So he does something like `./configure --prefix ~/usr' to install the package in his own `~/usr' tree.

If a package attempts to install something to some hard-coded path (e.g., `/etc/afile'), regardless of this `--prefix' setting, then the installation will fail. `make distcheck' performs such a `--prefix' installation, hence it will fail too.

Now, there are some easy solutions.

The above install-data-local example for installing `/etc/afile' would be better replaced by

sysconf_DATA = afile

By default sysconfdir will be `$(prefix)/etc', because this is what the GNU Standards require. When such a package is installed on an FHS-compliant system, the installer will have to set `--sysconfdir=/etc'. As the maintainer of the package you should not be concerned by such site policies: use the appropriate standard directory variables to install your files so that installers can easily redefine these variables to match their site conventions.

Installing files that should be used by another package is slightly more involved. Let's take an example and assume you want to install a shared library that is a Python extension module. If you ask Python where to install the library, it will answer something like this:

% python -c 'from distutils import sysconfig;
             print sysconfig.get_python_lib(1,0)'

If you indeed use this absolute path to install your shared library, non-root users will not be able to install the package, hence distcheck fails.

Let's do better. The `sysconfig.get_python_lib()' function actually accepts a third argument that will replace Python's installation prefix.

% python -c 'from distutils import sysconfig;
             print sysconfig.get_python_lib(1,0,"${exec_prefix}")'

This new path is relative to `${exec_prefix}', so it no longer hard-codes Python's installation prefix: when the package is configured with a different `--prefix' or `--exec-prefix', the library will be installed under that tree instead.

The AM_PATH_PYTHON macro uses similar commands to define `$(pythondir)' and `$(pyexecdir)' (see section Python).

Of course not all tools are as advanced as Python regarding that substitution of prefix. So another strategy is to figure out the part of the installation directory that must be preserved. For instance, here is how AM_PATH_LISPDIR (see section Emacs Lisp) computes `$(lispdir)':

$EMACS -batch -q -eval '(while load-path
  (princ (concat (car load-path) "\n"))
  (setq load-path (cdr load-path)))' >conftest.out
lispdir=`sed -n
  -e 's,/$,,'
  -e '/.*\/lib\/x*emacs\/site-lisp$/{
        s,.*/lib/\(x*emacs/site-lisp\)$,${libdir}/\1,;p;q;
      }'
  -e '/.*\/share\/x*emacs\/site-lisp$/{
        s,.*/share/\(x*emacs/site-lisp\),${datadir}/\1,;p;q;
      }'
  conftest.out`

I.e., it just picks the first directory that looks like `*/lib/*emacs/site-lisp' or `*/share/*emacs/site-lisp' in the search path of emacs, and then substitutes `${libdir}' or `${datadir}' appropriately.
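
The substitution itself is a one-liner; here it is applied to a single hypothetical directory instead of emacs's whole load-path:

```shell
# Rewrite a concrete prefix into ${datadir} so the install directory
# follows whatever prefix the user configured.
dir=/usr/share/emacs/site-lisp
lispdir=`echo "$dir" | sed 's,.*/share/\(x*emacs/site-lisp\)$,${datadir}/\1,'`
echo "$lispdir"
```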

The Emacs case looks complicated because it processes a list and expects two possible layouts; otherwise the idea is simple, and the benefit for non-root users is well worth the extra sed invocation.


27.11 Debugging Make Rules

The rules and dependency trees generated by automake can get rather complex, and leave the developer head-scratching when things don't work as expected. Besides the debug options provided by the make command (see (make)Options Summary section `Options Summary' in The GNU Make Manual), here are a couple of further hints for effectively debugging makefiles generated by automake:


This document was generated on July 20, 2009 using texi2html 1.76.