
15. Support for test suites

Automake supports three forms of test suites, the first two of which are very similar.



15.1 Simple Tests

If the variable TESTS is defined, its value is taken to be a list of programs or scripts to run in order to do the testing. Programs needing data files should look for them in srcdir (which is both an environment variable and a make variable) so they work when building in a separate directory (see (autoconf)Build Directories section `Build Directories' in The Autoconf Manual), and in particular for the distcheck rule (see section Checking the Distribution).
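For example, a minimal test script along these lines (all names hypothetical) could compare the output of a freshly built program against reference data shipped in the source tree:

#!/bin/sh
# frob.test (hypothetical): run the just-built program and compare its
# output against the expected data kept in the source tree.  srcdir is
# set by the generated check rule, so this also works for builds in a
# separate directory.
./frob > frob.out || exit 1
diff "$srcdir/frob.expected" frob.out

The script's exit status determines the result: zero means PASS, and any nonzero status means FAIL (except for the special statuses described below).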

For each test listed in TESTS, the result of its execution is printed along with the test name: PASS denotes a successful test, FAIL a failed test, XFAIL an expected failure, XPASS an unexpected pass of a test that was expected to fail, and SKIP a skipped test.

The number of failures will be printed at the end of the run. If a given test program exits with a status of 77, then its result is ignored in the final count. This feature allows non-portable tests to be ignored in environments where they don't make sense.
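For instance, a script might probe for a prerequisite and skip itself when that prerequisite is missing (a sketch with hypothetical names):

#!/bin/sh
# net.test (hypothetical): only meaningful where curl is installed.
if ! command -v curl >/dev/null 2>&1; then
  echo "curl not found, skipping" >&2
  exit 77   # reported as SKIP and ignored in the failure count
fi
curl --silent --head http://localhost/ >/dev/null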

If the Automake option color-tests is used (see section Changing Automake's Behavior) and standard output is connected to a capable terminal, then the test results and the summary are colored appropriately. The user can disable colored output by setting the make variable `AM_COLOR_TESTS=no', or force colored output even when standard output is not connected to a terminal with `AM_COLOR_TESTS=always'.
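For example, both settings can be passed on the make command line:

make check AM_COLOR_TESTS=no        # plain output even on a capable terminal
make check AM_COLOR_TESTS=always    # colored output even when piped to a file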

The variable TESTS_ENVIRONMENT can be used to set environment variables for the test run; the environment variable srcdir is set in the rule. If all your test programs are scripts, you can also set TESTS_ENVIRONMENT to an invocation of the shell (e.g. `$(SHELL) -x' can be useful for debugging the tests), or any other interpreter. For instance, the following setup is used by the Automake package to run four Perl tests.

 
TESTS_ENVIRONMENT = $(PERL) -Mstrict -I $(top_srcdir)/lib -w
TESTS = Condition.pl DisjConditions.pl Version.pl Wrap.pl

You may define the variable XFAIL_TESTS to a list of tests (usually a subset of TESTS) that are expected to fail. This reverses the interpretation of their results: a failure is reported as XFAIL rather than FAIL, and a pass as XPASS rather than PASS.
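For example (with a hypothetical test name):

TESTS = pass.test known-regression.test
XFAIL_TESTS = known-regression.test

A failure of `known-regression.test' is then reported as XFAIL and does not count against the suite, while a pass is flagged as XPASS.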

Automake ensures that each file listed in TESTS is built before any tests are run; you can list both source and derived programs (or scripts) in TESTS; the generated rule will look both in srcdir and `.'. For instance, you might want to run a C program as a test. To do this you would list its name in TESTS and also in check_PROGRAMS, and then specify it as you would any other program.

Programs listed in check_PROGRAMS (and check_LIBRARIES, check_LTLIBRARIES...) are only built during make check, not during make all. You should list there any program needed by your tests that does not need to be built by make all. Note that check_PROGRAMS are not automatically added to TESTS because check_PROGRAMS usually lists programs used by the tests, not the tests themselves. Of course you can set TESTS = $(check_PROGRAMS) if all your programs are test cases.
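A typical setup for a compiled C test could therefore look like this (hypothetical names):

check_PROGRAMS = test-frob
test_frob_SOURCES = test-frob.c
TESTS = $(check_PROGRAMS)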



15.2 Simple Tests using `parallel-tests'

The option `parallel-tests' (see section Changing Automake's Behavior) enables a test suite driver that is mostly compatible with the simple test driver described in the previous section, but provides a few more features and slightly different semantics. It features concurrent execution of tests with make -j, inter-test dependencies, lazy reruns of tests that have not completed in a prior run, summary and verbose output in `RST' (reStructuredText) and `HTML' format, and hard errors for exceptional failures. As with the simple test driver, TESTS_ENVIRONMENT, AM_COLOR_TESTS, XFAIL_TESTS, and the check_* variables are honored, and the environment variable srcdir is set during test execution.

This test driver is still experimental and may undergo changes in order to satisfy additional portability requirements.

The driver operates by defining a set of make rules to create a summary log file, TEST_SUITE_LOG, which defaults to `test-suite.log' and must have a `.log' suffix. This file depends upon the log files created for each test program listed in TESTS; these per-test log files in turn contain all output produced by the corresponding tests.

Each log file is created when the corresponding test has completed. The set of log files is listed in the read-only variable TEST_LOGS; it defaults to TESTS with the executable extension, if any (see section Support for executable extensions), and any suffix listed in TEST_EXTENSIONS stripped, and `.log' appended. TEST_EXTENSIONS defaults to `.test'. Results are undefined if a test file name ends in several concatenated suffixes.

For tests that match an extension `.ext' listed in TEST_EXTENSIONS, you can provide a test driver using the variable ext_LOG_COMPILER (note the upper-case extension); all tests with that extension are then run through this driver. You can pass options in AM_ext_LOG_FLAGS, and the user can pass options in ext_LOG_FLAGS. For tests without a registered extension, the variables LOG_COMPILER, AM_LOG_FLAGS, and LOG_FLAGS may be used instead. For example,

 
TESTS = foo.pl bar.py baz
TEST_EXTENSIONS = .pl .py
PL_LOG_COMPILER = $(PERL)
AM_PL_LOG_FLAGS = -w
PY_LOG_COMPILER = $(PYTHON)
AM_PY_LOG_FLAGS = -v
LOG_COMPILER = ./wrapper-script
AM_LOG_FLAGS = -d

will invoke `$(PERL) -w foo.pl', `$(PYTHON) -v bar.py', and `./wrapper-script -d baz' to produce `foo.log', `bar.log', and `baz.log', respectively. The `TESTS_ENVIRONMENT' variable is still expanded before the driver, but should be reserved for the user.

As with the simple driver above, by default one status line is printed per completed test, and a short summary is printed after the suite has completed. However, the standard output and standard error of each test are redirected to a per-test log file, so that parallel execution does not produce intermingled output. The output from failed tests is collected in the `test-suite.log' file. If the variable `VERBOSE' is set, this file is output after the summary. Since test output no longer appears on the terminal, tests can, and for best results should, be verbose by default.
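For example, to run the tests concurrently and have the collected output of the failed ones printed after the summary:

make -j4 check VERBOSE=yes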

With make check-html, the log files may be converted from RST (reStructuredText, see http://docutils.sourceforge.net/rst.html) to HTML using the program named by `RST2HTML', which defaults to rst2html or rst2html.py. The variable `TEST_SUITE_HTML' contains the set of converted log files. The log and HTML files are removed by make mostlyclean.

Even in the presence of expected failures (see XFAIL_TESTS), there may be conditions under which a test outcome needs attention. For example, with test-driven development, you may write tests for features that you have not implemented yet, and thus mark these tests as expected to fail. However, you may still be interested in exceptional conditions, for example, tests that fail due to a segmentation violation or another error that is independent of the feature awaiting implementation. Tests can exit with a status of 99 to signal such a hard error. Unless the variable DISABLE_HARD_ERRORS is set to a nonempty value, such tests will be counted as failed.
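A sketch of such a test (hypothetical names), expected to fail in the feature itself but not in its setup:

#!/bin/sh
# new-feature.test (hypothetical, listed in XFAIL_TESTS).
mkdir scratch.dir || exit 99   # broken setup is a hard error, not an XFAIL
./frob --new-feature > scratch.dir/out
status=$?
rm -rf scratch.dir
exit $status                   # an ordinary failure here is reported as XFAIL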

By default, the test suite driver will run all tests, but there are several ways to limit the set of tests that are run: you can override the TESTS variable at make run time; you can likewise override TEST_LOGS, the computed list of log files; or you can redefine RECHECK_LOGS to be empty, so that log files of earlier successful runs are kept and only tests with failed or missing log files are rerun (the lazy reruns mentioned above).
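A sketch of each approach, assuming the TESTS setting from the example above:

env TESTS='foo.pl baz' make -e check    # run only the named tests
env TEST_LOGS='bar.log' make -e check   # equivalently, name the log files
make check RECHECK_LOGS=                # rerun only outstanding tests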

In order to guarantee an ordering between tests even with make -jN, dependencies between the corresponding log files may be specified through the usual make dependencies. For example, the following snippet makes the test named `foo-execute.test' depend upon completion of the test `foo-compile.test':

 
TESTS = foo-compile.test foo-execute.test
foo-execute.log: foo-compile.log

Please note that this ordering ignores the results of required tests; thus `foo-execute.test' is run even if `foo-compile.test' failed or was skipped beforehand. Further, note that specifying such dependencies currently works only for tests that end in one of the suffixes listed in TEST_EXTENSIONS.

Tests without such specified dependencies may be run concurrently with parallel make -jN, so be sure they are prepared for concurrent execution.

The combination of lazy test execution and correct dependencies between tests and their sources may be exploited for efficient unit testing during development. To further speed up the edit-compile-test cycle, it may even be useful to specify compiled programs in EXTRA_PROGRAMS instead of with check_PROGRAMS, as the former allows intertwined compilation and test execution (but note that EXTRA_PROGRAMS are not cleaned automatically, see section The Uniform Naming Scheme).
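A sketch of such a setup (hypothetical names; note that the program then has to be cleaned explicitly):

EXTRA_PROGRAMS = unit-tests
unit_tests_SOURCES = unit-tests.c
TESTS = unit-tests
CLEANFILES = unit-tests$(EXEEXT)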

The variables TESTS and XFAIL_TESTS may contain conditional parts as well as configure substitutions. In the latter case, however, certain restrictions apply: substituted test names must end with a nonempty test suffix like `.test', so that one of the inference rules generated by automake can apply. For literal test names, automake can generate per-target rules to avoid this limitation.
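For example, assuming a conditional HAVE_PYTHON and an output variable extra_tests defined by configure (both hypothetical):

TESTS = basic.test @extra_tests@
if HAVE_PYTHON
TESTS += python-glue.test
endif

Here `@extra_tests@' must expand to names ending in `.test' (or another suffix listed in TEST_EXTENSIONS), while the literal names are not restricted.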

Please note that it is currently not possible to use $(srcdir)/ or $(top_srcdir)/ in the TESTS variable. This technical limitation is necessary to avoid generating test logs in the source tree, and it has the unfortunate consequence that it is not possible to specify distributed tests that are themselves generated by means of explicit rules, in a way that is portable to all make implementations (see (autoconf)Make Target Lookup section `Make Target Lookup' in The Autoconf Manual; the semantics of FreeBSD and OpenBSD make conflict with this). In case of doubt you may want to require GNU make, or work around the issue with inference rules to generate the tests.



15.3 DejaGnu Tests

If dejagnu appears in AUTOMAKE_OPTIONS, then a dejagnu-based test suite is assumed. The variable DEJATOOL is a list of names that are passed, one at a time, as the `--tool' argument to runtest invocations; it defaults to the name of the package.

The variable RUNTESTDEFAULTFLAGS holds the `--tool' and `--srcdir' flags that are passed to dejagnu by default; this can be overridden if necessary.

The variables EXPECT and RUNTEST can also be overridden to provide project-specific values. For instance, you will need to do this if you are testing a compiler toolchain, because the default values do not take into account host and target names.

The contents of the variable RUNTESTFLAGS are passed to the runtest invocation. This is considered a "user variable" (see section Variables reserved for the user). If you need to set runtest flags in `Makefile.am', you can use AM_RUNTESTFLAGS instead.
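A minimal DejaGnu setup in `Makefile.am' might thus look like this (hypothetical tool name):

AUTOMAKE_OPTIONS = dejagnu
DEJATOOL = frob
AM_RUNTESTFLAGS = --all    # also report passing tests

The user can then narrow a run with, for example, `make check RUNTESTFLAGS="frob.exp"'.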

Automake will generate rules to create a local `site.exp' file, defining various variables detected by configure. This file is automatically read by DejaGnu. It is OK for the user of a package to edit this file in order to tune the test suite. However, this is not the place where the test suite author should define new variables: that should be done elsewhere in the real test suite code. In particular, `site.exp' should not be distributed.

For more information regarding DejaGnu test suites, see (dejagnu)Top section `Top' in The DejaGnu Manual.

In all of the above cases, the testing is done via `make check'.



15.4 Install Tests

The installcheck target is available to the user as a way to run any tests after the package has been installed. You can add tests to this by writing an installcheck-local rule.
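For example, a hypothetical rule checking that an installed program is runnable:

installcheck-local:
	$(bindir)/frob --version >/dev/null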


