The deal.II Testsuite

deal.II has a testsuite with thousands of small programs that we run every time we make a change to make sure that no existing functionality is broken. It is located in the tests/ directory of the development repository. The expected output for every test is stored in an *.output file next to the .cc file, and when running a test, you are notified if the output differs from the expected output.

These days, every time we add a significant piece of functionality, we add at least one new test to the testsuite, and we also do so whenever we fix a bug, in both cases to make sure that future changes do not break this functionality (again). Machines running the testsuite submit the results to our CDash instance, a webpage showing the status of our regression tests.

  1. Setting up the testsuite
    1. For a build directory
    2. For an already installed library
  2. Running the testsuite
    1. How to interpret the output
    2. Generating coverage information
  3. Testsuite development
    1. General layout
    2. Restricting tests to build configurations
    3. Restricting tests to feature configurations
    4. Running tests with MPI or a specific thread pool size
    5. Tests with multiple comparison files
    6. Changing condition for success
    7. Adding new tests
    8. Adding new categories
  4. Submitting test results
  5. Build tests
    1. Dedicated build tests

Setting up the testsuite

The testsuite is part of the development sources of deal.II and located under the tests subdirectory. The easiest way to obtain both of them is to check out the current development sources via git:

$ git clone https://github.com/dealii/dealii

For a build directory

To enable the testsuite for a given build directory, ensure that deal.II is successfully configured and built (installation is not necessary). After that you can set up the testsuite via the "setup_tests" target:

$ make setup_tests
This will set up all tests supported by the current configuration. The testsuite can now be run in the current build directory as described below.

The setup can be fine-tuned using the following targets:

$ make setup_tests    - set up all tests supported by the current configuration
$ make prune_tests    - removes all testsuite subprojects

In addition, the following environment variables can be used to override the default behavior when calling make setup_tests:

TEST_TIME_LIMIT
  - The time limit (in seconds) a single test is allowed to take. Defaults
    to 180 seconds.

TEST_MPI_RANK_LIMIT
  - Specifies the maximal number of MPI ranks that can be used. If a
    test variant configures a larger number of MPI ranks (via
    .mpirun=N. in the output file) than this limit the test will be
    dropped. The special value 0 enforces no limit. Defaults to 0.

TEST_THREAD_LIMIT
  - Specifies the maximal number of worker threads that should be
    used by the threading backend. If a test variant configures a larger
    number of threads (via .threads=N. in the output file) than this limit
    the test will be dropped. Note that individual tests might exceed this
    limit by calling MultithreadInfo::set_thread_limit(), or by manually
    creating additional threads. The special value 0 enforces no limit.
    Defaults to 0.

TEST_PICKUP_REGEX
  - A regular expression to select only a subset of tests during setup.
    An empty string is interpreted as a catchall (this is the default).
For example,
TEST_PICKUP_REGEX="umfpack" make setup_tests
will only enable tests which match the string "umfpack" in category or name.
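
These variables can also be combined in a single invocation; for instance (the values are purely illustrative),
TEST_TIME_LIMIT=600 TEST_THREAD_LIMIT=4 TEST_PICKUP_REGEX="mpi" make setup_tests
sets up only the tests matching "mpi", allows each test up to 600 seconds, and drops test variants that request more than 4 worker threads.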

For an already installed library

The testsuite can also be set up for an already installed library (starting with version 8.3). For this, create a build directory for the testsuite and run cmake pointing to the tests subdirectory, e.g.,

$ mkdir tests_for_installed_dealii
$ cd tests_for_installed_dealii
$ cmake -DDEAL_II_DIR=/path/to/installed/dealii /path/to/dealii_source/tests
After that the same configuration targets as described above are available.
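For instance, from within that build directory you can then set up the tests and run them as described in the next section (a minimal sketch):

$ make setup_tests
$ ctest -j4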

Running the testsuite

The testsuite can now be run in the build directory via

$ ctest [-j N]
Here, N is the number of concurrent tests that should be run, in the same way as you can say make -jN. The testsuite is huge and will need around 12h on current computers when running single threaded.

If you only want to run a subset of tests matching a regular expression, or if you want to exclude tests matching a regular expression, you can use

$ ctest [-j N] -R '<positive regular expression>'
$ ctest [-j N] -E '<negative regular expression>'
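
For example, to run only the debug variants of the base tests while skipping everything whose name matches "thread" (category and patterns chosen purely for illustration):

$ ctest -j4 -R 'base/.*\.debug' -E 'thread'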

Note: Not all tests succeed on every machine even if all computations are correct, because your machine may generate slightly different floating point output. To increase the number of tests that work correctly, install the numdiff tool, which compares stored and newly created output files based on floating point tolerances. To use it, simply make sure that the directory containing the numdiff executable is in your PATH environment variable so that it can be found during make setup_tests.
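
For example, if numdiff is installed in a non-standard location (the path below is only a placeholder), extend PATH before setting up the tests:

$ export PATH=/path/to/numdiff/bin:$PATH
$ make setup_tests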

How to interpret the output

A typical output of a ctest invocation looks like:

$ ctest -j4 -R "base/thread_validity"
Test project /tmp/trunk/build
      Start 747: base/thread_validity_01.debug
      Start 748: base/thread_validity_01.release
      Start 775: base/thread_validity_05.debug
      Start 776: base/thread_validity_05.release
 1/24 Test #776: base/thread_validity_05.release ...   Passed    1.89 sec
 2/24 Test #748: base/thread_validity_01.release ...   Passed    1.89 sec
      Start 839: base/thread_validity_03.debug
      Start 840: base/thread_validity_03.release
 3/24 Test #747: base/thread_validity_01.debug .....   Passed    2.68 sec
[...]
      Start 1077: base/thread_validity_08.debug
      Start 1078: base/thread_validity_08.release
16/24 Test #1078: base/thread_validity_08.release ...***Failed    2.86 sec
18/24 Test #1077: base/thread_validity_08.debug .....***Failed    3.97 sec
[...]

92% tests passed, 2 tests failed out of 24

Total Test time (real) =  20.43 sec

The following tests FAILED:
        1077 - base/thread_validity_08.debug (Failed)
        1078 - base/thread_validity_08.release (Failed)
Errors while running CTest
If a test failed (like base/thread_validity_08.debug in the above example output), you might want to find out what exactly went wrong. To this end, you can search through Testing/Temporary/LastTest.log for the exact output of the test, or you can rerun this one test, specifying -V to select verbose output of tests:
$ ctest -V -R "base/thread_validity_08.debug"
[...]
test 1077
    Start 1077: base/thread_validity_08.debug

1077: Test command: [...]
1077: Test timeout computed to be: 600
1077: Test base/thread_validity_08.debug: RUN
1077: ===============================   OUTPUT BEGIN  ===============================
1077: Built target thread_validity_08.debug
1077: Generating thread_validity_08.debug/output
1077: terminate called without an active exception
1077: /bin/sh: line 1: 18030 Aborted [...]/thread_validity_08.debug
1077: base/thread_validity_08.debug: BUILD successful.
1077: base/thread_validity_08.debug: RUN failed. Output:
1077: DEAL::OK.
1077: gmake[3]: *** [thread_validity_08.debug/output] Error 1
1077: gmake[2]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/all] Error 2
1077: gmake[1]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/rule] Error 2
1077: gmake: *** [thread_validity_08.debug.diff] Error 2
1077:
1077:
1077: base/thread_validity_08.debug: ******    RUN failed    *******
1077:
1077: ===============================    OUTPUT END   ===============================
So this specific test aborted in the RUN stage.

The general output for a successful test <test> in category <category> for build type <build> is

xx: Test <category>/<test>.<build>: PASSED
xx: ===============================   OUTPUT BEGIN  ===============================
xx: [...]
xx: <category>/<test>.<build>: PASSED.
xx: ===============================    OUTPUT END   ===============================
And for a test that fails in stage <stage>:
xx: Test <category>/<test>.<build>: <stage>
xx: ===============================   OUTPUT BEGIN  ===============================
xx: [...]
xx: <category>/<test>.<build>: <stage> failed. [...]
xx:
xx: <category>/<test>.<build>: ******    <stage> failed    *******
xx: ===============================    OUTPUT END   ===============================
Here, <stage> indicates the stage in which the test failed. Typically, tests fail because the output has changed, and you will see this in the DIFF stage of the test.

Generating coverage information

The testsuite can also be used to provide coverage information, i.e., data that shows which lines of the library are executed how many times by running through all of the tests in the testsuite. This is of interest in finding places in the library that are not covered by the testsuite and, consequently, are prone to the inadvertent introduction of bugs since existing functionality is not subject to existing tests.

To run the testsuite in this mode, essentially, you have to do three things:

  1. build the library with appropriate profiling flags
  2. run all or some tests (built with the same profiling flags)
  3. gather all information and convert it to a viewable format.
In order to achieve the first two, configure the library with
cmake -DCMAKE_BUILD_TYPE=Debug -DDEAL_II_SETUP_COVERAGE=ON <...>
You can then build the library and run the tests as usual.
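
Concretely, "as usual" here means something along the lines of (job counts purely illustrative):

$ make -j4
$ make setup_tests
$ ctest -j4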

For the last point, one can in principle use whatever tool one wants. That said, the deal.II ctest driver already has built-in functionality to gather all profiling files and submit them to CDash, where we already gather testsuite results (see below). You can do so by invoking

ctest -DCOVERAGE=ON <...> -S ../tests/run_testsuite.cmake
when running the testsuite, or directly by
ctest <...> -S ../tests/run_coverage.cmake

At the end of all of this, results will be shown in a separate section "Coverage" on the deal.II CDash site. If you download the coverage report uploader for Codecov via contrib/utilities/download_codecov, the coverage report will also be uploaded to the Codecov dashboard.
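
If you would rather inspect the coverage data locally, any gcov-based tool chain works; as a sketch, assuming the lcov and genhtml tools are installed, one could run

$ lcov --capture --directory . --output-file coverage.info
$ genhtml coverage.info --output-directory coverage_html

from the top level of the build directory after the tests have finished, and then open coverage_html/index.html in a browser.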

Testsuite development

The following outlines what you need to know if you want to understand how the testsuite actually works, for example because you may want to add tests along with the functionality you are currently developing.

General layout

A test usually consists of a source file and an output file for comparison (under the testsuite directory tests):

category/test.cc
category/test.output
category will be one of the existing subdirectories under tests/, e.g., lac/, base/, or mpi/. Historically, we have grouped tests into the directories base/, lac/, deal.II/ depending on their functionality, and bits/ if they were small unit tests, but in practice we have not always followed this rigidly. There are also more specialized directories trilinos/, petsc/, serialization/, mpi/, etc., whose meaning is more obvious.

test.cc must be a regular executable (i.e., it has an int main() routine). It will be compiled, linked, and run. The executable should not output anything to cout (at least under normal circumstances, i.e., in the absence of error conditions); instead, it should write to a file called output in the current working directory. In practice, we rarely write source files completely from scratch; rather, we find an existing test that already does something similar and copy/modify it to fit our needs.

For a normal test, ctest will typically run the following 3 stages:

  1. BUILD: the test source file is compiled and linked against the deal.II library.
  2. RUN: the resulting executable is run and writes its output to a file called output in its working directory.
  3. DIFF: the generated output file is compared against the stored comparison file (using numdiff, if available, to allow for floating point tolerances).

Restricting tests to build configurations

Comparison files can actually be named in a more complex way than just category/test.output. In pseudo code:

category/test.[with_<string>(<=|>=|=|<|>)<on|off|version>.]*
              [mpirun=<N|all>.][threads=<N|all>.][expect=<y>.][exclusive.][<debug|release>.](output|run_only)
Normally, a test will be set up so that it runs twice, once in debug and once in release configuration. If a specific test can only be run in the debug or the release configuration but not in both, it is possible to restrict the setup by inserting .debug or .release directly before .output, e.g.:
category/test.debug.output
This way, the test will only be set up to build and run against the debug library. If a test should run in both configurations but, for some reason, produces different output (e.g., because it triggers an assertion in debug mode), then you can just provide two different output files:
category/test.debug.output
category/test.release.output

Restricting tests to feature configurations

In a similar vein as for build configurations, it is possible to restrict tests to specific feature configurations, e.g.,

category/test.with_umfpack=on.output, or
category/test.with_zlib=off.output
These tests will only be set up if the specified feature was configured.

Note: For CMake variables indicating whether sub-packages of third party libraries were detected, like DEAL_II_TRILINOS_WITH_BELOS or DEAL_II_PETSC_WITH_HYPRE, the correct syntax is to add an additional with_, i.e., with_trilinos_with_belos=on|off.

It is possible to provide different output files for disabled/enabled features, e.g.,
category/test.with_64bit_indices=on.output
category/test.with_64bit_indices=off.output
Furthermore, a test can be restricted to be run only if specific versions of a feature are available. For example
category/test.with_trilinos.geq.11.14.1.output
will only be run if (a) Trilinos is available, i.e., DEAL_II_WITH_TRILINOS=TRUE, and (b) Trilinos is at least of version 11.14.1. Supported operators are =, .le., .ge., .leq., and .geq.

It is also possible to declare multiple constraints in sequence, e.g.,

category/test.with_umfpack=on.with_zlib=on.output

Note: The tests in some subdirectories of tests/ are automatically run only if the corresponding feature is enabled. In this case a feature constraint encoded in the output file name is redundant and should be avoided. In particular, this holds for the subdirectories distributed_grids, lapack, metis, petsc, slepc, trilinos, umfpack, gla, and mpi.

Running tests with MPI or a specific thread pool size

If a test should be run with MPI in parallel, the number of MPI processes N with which a program needs to be run for comparison with a given output file is specified as follows:

category/test.mpirun=N.output
It is quite typical for an MPI-enabled test to have multiple output files for different numbers of MPI processes. Similarly, the thread pool size of a test can be specified by using threads=N, where N is the number of concurrent worker threads that should be initialized:
category/test.threads=N.output
This declaration is equivalent to setting the environment variable DEAL_II_NUM_THREADS, or calling MultithreadInfo::set_thread_limit() by hand.
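
For example, a (hypothetical) test that ships comparison output for runs with 2 and 4 MPI ranks, each using 2 worker threads, would provide the files

category/test.mpirun=2.threads=2.output
category/test.mpirun=4.threads=2.output

and would be set up as two separate test variants.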

In order to account for the increased computational workload of MPI-parallel code, the testsuite assigns each MPI test a processing weight equal to half the number of MPI ranks; this ensures that the machine running the testsuite is overcommitted by at most a factor of 2. Thread concurrency is not accounted for. This behavior is modified for performance tests, where the processing weight is taken to be the product of the number of MPI ranks and the number of threads. Particularly sensitive timing tests that have to be run exclusively, without any other test running concurrently, can be annotated with the .exclusive keyword:

category/test.exclusive.output
This ensures that the test in question always runs "in serial" without another test scheduled concurrently.

Tests with multiple comparison files

Sometimes it is necessary to provide multiple comparison files for a single test, for example because you want to test code on multiple platforms that produce different output files that, nonetheless, all should be considered correct. An example would be tests that use the rand() function that is implemented differently on different platforms. Additional comparison files have the same path as the main comparison file (in this case test.output) followed by a dot and a variant description:

category/test.output
category/test.output.2
category/test.output.3
category/test.output.4
The testsuite will try to match the output against all variants in alphabetical order starting with the main output file.

Warning: This mechanism is only meant as a last resort for tests where no alternative approach is viable. In particular, first consider whether you can

  1. make the test more robust so that differences can be expressed as round-off errors detectable by numdiff, or
  2. restrict comparison files to specific versions of an external feature.

Note: The main comparison file (i.e., the one ending in output) is mandatory. Otherwise, no test will be configured.

Changing condition for success

Normally a test is considered to be successful if all test stages could be run and the test reached the PASSED stage (see the output description section above for details). If (for some reason) the test should succeed ending at a specific test stage other than PASSED you can specify it via expect=<stage>, e.g.:

category/test.expect=run.output

The testsuite also supports the special file ending .run_only that indicates that the diff stage should be skipped in order to reach the PASSED stage. You can specify the keyword by changing the file ending from .output to .run_only:

category/test.run_only
Note that this is semantically different from specifying expect=diff.output: The expect keyword requires that a test reaches a specified stage but fails in it. In this case the test has to reach the DIFF stage but fail it.

Adding new tests

We typically add one or more new tests every time we add new functionality to the library or fix a bug. If you want to contribute code to the library, you should do this as well. Here's how: you need a testcase and a file with the expected output.

The testcase

For the testcase, we usually start from one of the existing tests, copy and modify it to where it does what we'd like to test. Alternatively, you can also start from a template like this:

// ------------------------------------------------------------------------
//
// SPDX-License-Identifier: LGPL-2.1-or-later
// Copyright (C) 2024 by the deal.II authors
//
// This file is part of the deal.II library.
//
// Part of the source code is dual licensed under Apache-2.0 WITH
// LLVM-exception OR LGPL-2.1-or-later. Detailed license information
// governing the source code and code contributions can be found in
// LICENSE.md and CONTRIBUTING.md at the top level directory of deal.II.
//
// ------------------------------------------------------------------------

// a short (a few lines) description of what the program does

#include "../tests.h"

// all include files you need here

int main ()
{
  // Initialize deallog for test output.
  // This also reroutes deallog output to a file "output".
  initlog();

  // your testcode here:
  int i = 0;
  deallog << i << std::endl;

  return 0;
}

This code opens an output file called output in the current working directory and then writes all output you generate to it, through the deallog stream. The deallog stream works like any other std::ostream except that it does a few more things behind the scenes that are helpful in this context. In the above case, we only write a zero to the output file. Most tests, of course, write computed data to the output file to make sure that whatever we compute is what we got when the test was first written.

There are a number of directories where you can put a new test. Extensive tests of individual classes or groups of classes have traditionally been placed into the base/, lac/, deal.II/, fe/, hp/, or multigrid/ directories, depending on where the tested classes are located. More atomic tests often go into bits/. There are also directories for PETSc and Trilinos wrapper functionality.

An expected output

In order to run your new test, copy it to an appropriate category and create an empty comparison file for it:

category/my_new_test.cc
category/my_new_test.output
Now, rerun
$ make setup_tests
so that your new test is picked up. After that it is possible to invoke it with
$ ctest -V -R "category/my_new_test"

If you run your new test executable this way, the test should compile and run successfully but fail in the diff stage (because of the empty comparison file). You will get an output file BUILD_DIR/category/my_new_test/output. Take a look at it to make sure that the output is what you had expected. (For complex tests, it may sometimes be impossible to say whether the output is correct, and in this case we sometimes just take it to make sure that future invocations of the test yield the same results.)

The next step is to copy and rename this output file to the source directory and replace the original comparison file with it:

category/my_new_test.output
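In practice this amounts to something like the following, where the path of the generated output file follows the layout mentioned above and may differ on your system:

$ cp BUILD_DIR/category/my_new_test/output category/my_new_test.output
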
At this point running the test again should be successful:
$ ctest -V -R "category/my_new_test"

Adding new categories

If you want to create a new category in the testsuite, create a new folder under ./tests that is named accordingly and put a CMakeLists.txt file into it containing

cmake_minimum_required(VERSION 3.13.4)
include(../setup_testsubproject.cmake)
project(testsuite CXX)
include(${DEAL_II_TARGET_CONFIG})
deal_ii_pickup_tests()
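
After creating the directory and its CMakeLists.txt, rerun

$ make setup_tests

so that tests placed in the new category are picked up.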

Submitting test results

To submit test results to our CDash instance, just invoke ctest within a build directory (or a designated build directory) with the -S option pointing to the run_testsuite.cmake script (assuming for the moment that your build directory is a subdirectory of the source directory):

$ ctest [...] -V -S ../tests/run_testsuite.cmake
The script will run the configure, build and ctest stages and submit the results to the CDash server. It does not matter whether any of these stages were run before. In script mode, you can also specify the same options for ctest as explained above.

Note: To also build the tests themselves in parallel you have to pass suitable flags via MAKEOPTS as well, i.e., you would typically use

$ ctest -DMAKEOPTS="-j N" -j N [...] -V -S ../tests/run_testsuite.cmake
for compiling the library and running the tests in parallel, where N is the number of jobs you want to execute simultaneously on your machine; you would typically choose N equal to the number of processor cores in your machine, or maybe somewhere between one half and three quarters of that if you do not want to overload it or are short on memory.

It is possible to run tests and submit results for an already installed library by

mkdir build && cd build
cp $DEAL_II_SOURCE_DIR/CTestConfig.cmake .
ctest \
  -DCTEST_SOURCE_DIRECTORY=$DEAL_II_SOURCE_DIR/tests \
  -DDEAL_II_DIR=$DEAL_II_DIR \
  [...] -S $DEAL_II_SOURCE_DIR/tests/run_testsuite.cmake -V

Note: The default output in script mode is very minimal. Therefore, it is recommended to specify -V which will give the same level of verbosity as the non-script mode.

Note: The following variables can be set via

ctest -D<variable>=<value> [...]
to control the behaviour of the run_testsuite.cmake script:
CTEST_SOURCE_DIRECTORY
  - The source directory of deal.II
  - If unspecified, "../" relative to the location of this script is
    used. If this is not a source directory, an error is thrown.

CTEST_BINARY_DIRECTORY
  - The designated build directory (already configured, empty, or non-
    existent - see the information about TRACK below for what will happen)
  - If unspecified the current directory is used. If the current
    directory is equal to CTEST_SOURCE_DIRECTORY or the "tests"
    directory, an error is thrown.

CTEST_CMAKE_GENERATOR
  - The CMake Generator to use (e.g. "Unix Makefiles", or "Ninja", see
    $ man cmake)
  - If unspecified the generator of a configured build directory will
    be used, otherwise "Unix Makefiles".

TRACK
  - The track the test should be submitted to. Defaults to
    "Experimental". Possible values are:

    "Experimental"     - all tests that are not specifically "build" or
                         "regression" tests should go into this track

    "Build Tests"      - Build tests that configure and build in a clean
                         directory (without actually running the
                         testsuite)

    "Regression Tests" - Reserved for the "official" regression tester

    "Continuous"       - Reserved for the "official" regression tester

CONFIG_FILE
  - A configuration file (see doc/users/config.sample)
    that will be used during the configuration stage (invokes
    $ cmake -C ${CONFIG_FILE}). This only has an effect if
    CTEST_BINARY_DIRECTORY is empty.

DESCRIPTION
  - A string that is appended to CTEST_BUILD_NAME

COVERAGE
  - If set to ON deal.II will be configured with
    DEAL_II_SETUP_COVERAGE=ON, CMAKE_BUILD_TYPE=Debug and the
    CTEST_COVERAGE() stage will be run. Test results must go into the
    "Experimental" section.

MAKEOPTS
  - Additional options that will be passed directly to make (or ninja).
Furthermore, the variables TEST_TIME_LIMIT, TEST_MPI_RANK_LIMIT, TEST_THREAD_LIMIT and TEST_PICKUP_REGEX (as described above), as well as DIFF_DIR and NUMDIFF_DIR, can also be set and will be handed down automatically to cmake. For more details on the different tracks, see the Testing Infrastructure Wiki page.
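
As an illustration, a (purely hypothetical) submission that builds and runs the tests in parallel and attaches a description to the build name could look like

$ ctest -DDESCRIPTION="gcc-nightly" -DMAKEOPTS="-j8" -j8 -V -S ../tests/run_testsuite.cmake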

Build tests

Build tests are used to check that deal.II can be compiled on different systems and with different compilers as well as different configuration options. Results are collected in the "Build Tests" track in CDash.

Running the build test suite is simple and we encourage deal.II users with configurations not found on the CDash page to participate. Assuming you checked out deal.II into the directory dealii, running it is as simple as:

mkdir dealii/build
cd dealii/build
ctest -j4 -S ../tests/run_buildtest.cmake

What this does is to configure and build deal.II in the directory build (which includes building all configurable tutorial programs as well), but it does not run the full testsuite. The results are sent to the CDash instance.

Note: Build tests require the designated build directory to be completely empty. If you want to specify a build configuration for cmake use a configuration file to preseed the cache as explained above:

$ ctest -DCONFIG_FILE="[...]/config.sample" [...]

Dedicated build tests

Build tests work best if they run automatically and periodically. There is a detailed example for such dedicated build tests on the wiki.

