The 'Parallel in Time Heat Equation' code gallery program

This program was contributed by Joshua Christopher <chrisjoshtopher@gmail.com>.
It comes without any warranty or support by its authors or the authors of deal.II.

This program is part of the deal.II code gallery.


Annotated version of Readme.md

Overview

Over the last few years, clock speeds per processor core have stagnated. This stagnation has led to the design of high performance computing machines with ever larger core counts. As a result, numerical algorithms must expose more concurrency in order to take advantage of this architecture style. Perhaps the largest bottleneck in concurrency for time-dependent simulations is the traditional time integration method. Traditional time integration methods solve the time domain sequentially, and as the spatial grid is refined a proportionally larger number of time steps must be taken to maintain accuracy and stability constraints. While solving the time domain sequentially with a traditional time integration method is an optimal algorithm of order \(\mathcal{O}(n)\), the \(n\) time steps are not solved concurrently.

The goal of this project is to make use of the XBraid library from Lawrence Livermore National Laboratory to solve the time domain in parallel using multigrid reduction in time techniques. The XBraid library is implemented in C and aims to be a non-intrusive method to implement parallel time marching methods into existing codes.

Implementation

XBraid introduction

In order to use the XBraid library, several data structures and functions must be implemented and provided to the XBraid solver struct. The two required data structures are the app and vector structures. In general, the app struct contains the time-independent data and the vector struct contains the time-dependent data. For this initial example, the time-independent data includes the mesh, which is fixed for all time steps, and the time-dependent data is the solution state vector. The functions tell XBraid how to perform operations on the data type used by your solver, in this case deal.II's Vector data type. These operations include how to initialize the data at a given time, how to sum the data, and how to pack and unpack linear buffers for transmission to other processors via MPI. The XBraid documentation should be consulted for the full list of functions that must be implemented and the details of what each function should do.

Each function is typically called with the app struct, one or more vector structs, and a status struct that contains information on the current status of the XBraid simulation (the current multigrid iteration, the level the function is being called from, the time and timestep number, etc.).

Perhaps the most important function is the step function. It tells XBraid how to advance the solution forward in time from the start time to the stop time given in the status struct, using a traditional time integration method such as the fourth-order explicit Runge-Kutta method.
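For illustration only, a single step with the classical fourth-order explicit Runge-Kutta method for a scalar ODE \(du/dt = f(t,u)\) could look like the following minimal sketch. Note that the step function in this program instead delegates to HeatEquation::step, which advances the heat equation implicitly with a theta-scheme.

  #include <functional>
 
  // Classical explicit RK4: advance u from t to t + dt for du/dt = f(t, u).
  // Illustrative sketch only; this program's my_Step callback delegates to the
  // implicit theta-scheme in HeatEquation::step() rather than to RK4.
  double rk4_step(const std::function<double(double, double)> &f,
                  const double t, const double u, const double dt)
  {
    const double k1 = f(t, u);
    const double k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1);
    const double k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2);
    const double k4 = f(t + dt, u + dt * k3);
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
  }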

deal.II details

The solver used in this example is based on the heat equation solver from the step-26 tutorial of deal.II. The HeatEquation class becomes member data of XBraid's app struct, and XBraid's vector struct becomes a wrapper for deal.II's Vector data type. The HeatEquation class cannot simply be used as is, though, because it contains both time-dependent and time-independent member data. To simplify the problem, all adaptive mesh refinement functionality is removed from the solver. XBraid is in principle capable of working with adaptive mesh refinement and in fact contains support for time refinement, but neither is used here for simplicity. The time-dependent solution state vectors are also removed from the HeatEquation member data; that time-dependent data is provided at each timestep by XBraid via the vector struct.

Governing Equations

In the default mode, this code solves the heat equation,

\begin{align} \frac{\partial u}{\partial t} - \Delta u = f(\boldsymbol{x},t), \qquad \forall\boldsymbol{x}\in\Omega,t\in\left( 0,T \right), \end{align}

with initial conditions,

\begin{align} u(\boldsymbol{x},0) = u_0(\boldsymbol{x}) = 0, \qquad \forall \boldsymbol{x}\in\Omega, \end{align}

and Dirichlet boundary conditions,

\begin{align} u(\boldsymbol{x},t) = g(\boldsymbol{x},t) = 0, \qquad \forall \boldsymbol{x}\in\partial\Omega,t\in\left( 0,T \right) \end{align}

and forcing function,

\begin{align} f(\boldsymbol{x}, t) = \left\{ \begin{array}{ll} \chi_1(t) & \text{if } x>0.5 \text{ and } y>-0.5, \\ \chi_2(t) & \text{if } x>-0.5 \text{ and } y>0.5, \\ 0 & \text{otherwise}, \end{array} \right. \end{align}

with,

\begin{align} \chi_1(t) = \exp\left(-0.5\frac{(t-0.125)^2}{0.005}\right) \end{align}

and,

\begin{align} \chi_2(t) = \exp\left(-0.5\frac{(t-0.375)^2}{0.005}\right) \end{align}

for some time \(t\). The forcing function is a Gaussian pulse in time that is centered around 0.125 time units for \(\chi_1\) and 0.375 time units for \(\chi_2\). A Gaussian function was chosen because it is a continuous function in time and so the solution state can be compared bit-wise with the serial-in-time solution.

Method of Manufactured Solutions

The method of manufactured solutions is used to test the correctness of the implementation. In the method of manufactured solutions, we choose a solution \(u_h\) to the heat equation, then compute the boundary conditions, initial conditions, and forcing function required to generate that solution. This method is explained further in the step-7 tutorial of deal.II. The manufactured solution used here is,

\begin{align} u_h = \exp\left( -4\pi^2t \right) \cos(2 \pi x) \cos(2 \pi y), \qquad \forall \boldsymbol{x} \in \Omega \cup \partial\Omega \end{align}

with derivatives,

\begin{align} \frac{\partial u_h}{\partial t} &= -4 \pi^2 \exp\left(-4\pi^2t\right) \cos(2 \pi x)\cos(2 \pi y), \\ -\Delta u_h &= 8 \pi^2 \exp\left(-4\pi^2t\right)\cos(2 \pi x)\cos(2 \pi y), \\ \frac{\partial u_h}{\partial x} &= -2\pi \exp\left(-4\pi^2t\right)\sin(2\pi x)\cos(2\pi y), \\ \frac{\partial u_h}{\partial y} &= -2 \pi \exp\left(-4\pi^2t\right)\cos(2\pi x)\sin(2\pi y), \end{align}

and therefore we specify the forcing term, initial conditions, and boundary conditions of the governing equations as,

\begin{align} f(\boldsymbol{x},t) &= 4 \pi^2 \exp\left(-4\pi^2t\right)\cos(2 \pi x)\cos(2 \pi y), &&\forall\boldsymbol{x}\in\Omega,t\in\left( 0,T \right), \\ u_0(\boldsymbol{x}) &= \cos(2 \pi x)\cos(2\pi y), &&\forall \boldsymbol{x}\in\Omega, \\ g(\boldsymbol{x},t) &= \exp\left(-4\pi^2t\right)\cos(2 \pi x)\cos(2\pi y), &&\forall \boldsymbol{x} \in \partial\Omega. \end{align}
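Substituting the derivatives above into the heat equation confirms that this forcing term reproduces \(u_h\):

\begin{align} \frac{\partial u_h}{\partial t} - \Delta u_h = \left( -4\pi^2 + 8\pi^2 \right) \exp\left(-4\pi^2t\right)\cos(2 \pi x)\cos(2 \pi y) = f(\boldsymbol{x},t). \end{align}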

The manufactured solution is run on progressively more refined grids and the solution generated by the finite element method is compared to the exact solution \(u_h\). The convergence rate of the error is calculated with

\begin{align} \Delta \epsilon_n = \frac{\ln\left( \epsilon_{n-1}/\epsilon_{n} \right)}{\ln\left( \Delta x_{n-1}/\Delta x_{n} \right)} \end{align}

where \(\Delta \epsilon_n\) is the convergence rate of error \(\epsilon\) between a mesh \(n\) and the coarser mesh \(n-1\), and \(\Delta x_{n-1}/\Delta x_{n}\) is the refinement ratio between the two meshes. Shown in Table 1 are the errors in the \(\textrm{L}^2\), \(\textrm{H}^1\), and \(\textrm{L}^\infty\) norms as the mesh is refined, and shown in Table 2 are the convergence rates. The time step \(\Delta t\) is reduced by a factor of 2 for every global refinement of the mesh.

Table 1: Errors as the mesh is refined.

| cycles | # cells | # dofs | \(\textrm{L}^2\)-error | \(\textrm{H}^1\)-error | \(\textrm{L}^\infty\)-error |
|--------|---------|--------|------------------------|------------------------|-----------------------------|
| 125    | 48      | 65     | 6.036e-03              | 6.970e-02              | 7.557e-03                   |
| 250    | 192     | 225    | 1.735e-03              | 3.414e-02              | 2.721e-03                   |
| 500    | 768     | 833    | 4.513e-04              | 1.690e-02              | 7.410e-04                   |
| 1000   | 3072    | 3201   | 1.140e-04              | 8.426e-03              | 1.877e-04                   |
| 2000   | 12288   | 12545  | 2.859e-05              | 4.209e-03              | 4.715e-05                   |

Table 2: Convergence rates between successive refinements.

| cycles | # cells | # dofs | Slope \(\textrm{L}^2\) | Slope \(\textrm{H}^1\) | Slope \(\textrm{L}^\infty\) |
|--------|---------|--------|------------------------|------------------------|-----------------------------|
| 125    | 48      | 65     |                        |                        |                             |
| 250    | 192     | 225    | 1.798                  | 1.030                  | 1.474                       |
| 500    | 768     | 833    | 1.943                  | 1.014                  | 1.877                       |
| 1000   | 3072    | 3201   | 1.985                  | 1.004                  | 1.981                       |
| 2000   | 12288   | 12545  | 1.995                  | 1.001                  | 1.993                       |

The errors are plotted as a function of the grid spacing in the figure below. As can be seen, the error converges at a second-order rate in the \(\textrm{L}^2\) and \(\textrm{L}^\infty\) norms and at a first-order rate in the \(\textrm{H}^1\) norm. This is the expected behavior: the linear finite elements used here are second-order accurate in the \(\textrm{L}^2\) and \(\textrm{L}^\infty\) norms and first-order accurate in the \(\textrm{H}^1\) norm.

Grid convergence results
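As a small self-contained sketch, the slopes in Table 2 can be recomputed from the errors in Table 1 and the formula above, assuming the grid spacing halves between successive rows (so the denominator is \(\ln 2\)); the values below are the \(\textrm{L}^2\) errors from Table 1:

  #include <cmath>
  #include <cstdio>
  #include <vector>
 
  // Recompute the observed L2 convergence rates of Table 2 from the L2 errors
  // of Table 1, using slope = ln(e_{n-1}/e_n) / ln(dx_{n-1}/dx_n) with a
  // refinement ratio of 2 between successive rows.
  int main()
  {
    const std::vector<double> l2_error = {6.036e-03, 1.735e-03, 4.513e-04,
                                          1.140e-04, 2.859e-05};
    for (std::size_t n = 1; n < l2_error.size(); ++n)
      std::printf("slope %zu: %.3f\n", n,
                  std::log(l2_error[n - 1] / l2_error[n]) / std::log(2.0));
    return 0;
  }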

Code Organization

The src directory

The entry point of the code is parallel_in_time.cc, which sets up XBraid for a simulation. The XBraid setup involves initializing the app struct and configuring XBraid for the desired number of timesteps, number of iterations, and so forth. The functions implemented for XBraid's use are declared in BraidFuncs.hh and defined in BraidFuncs.cc. The HeatEquation class and all deal.II functionality are declared in HeatEquation.hh and defined in HeatEquationImplem.hh. Since HeatEquation is a class template, its definition file HeatEquationImplem.hh is included at the bottom of HeatEquation.hh. Lastly, various helper functions and variables, such as the current processor ID and the output stream, are declared in Utilities.hh and defined in Utilities.cc.

The test directory

This directory contains tests to be built and run with CMake. These tests verify the correct implementation of the various functions.

The doc directory

This directory is for storing further documentation of the code. Not much is in this directory right now as most of the documentation is in the Readme or in comments of the source code files. Documentation is generated from the Readme and code comments by deal.II’s documentation process.

Compiling

To compile, you need deal.II and XBraid to be installed with development headers somewhere on your system. Some implementation of MPI such as OpenMPI with development headers must also be installed. The source code for deal.II is available at deal.II’s website and the source code for XBraid is available at LLNL’s website. See the documentation of each package for compilation and installation instructions.

Depending on where they are installed, parallel_in_time may need help finding these libraries. To find deal.II, parallel_in_time first looks in typical deal.II install directories, followed by one directory up (../), two directories up (../../), and lastly in the environment variable DEAL_II_DIR. In contrast, XBraid currently does not have any default locations to look in, so the environment variable BRAID_DIR must be specified. For MPI, parallel_in_time looks in standard installation folders only; for that reason I recommend you install MPI with your package manager.

A compile process of the parallel in time code may look like,

mkdir build
cd build
BRAID_DIR=/path/to/braid/ cmake ../
make

There is currently no option to install parallel_in_time anywhere. The binaries are generated in the bin folder, and tests are placed into the test folder. Options that can be passed to CMake for parallel_in_time include:

CMAKE_BUILD_TYPE=Debug/Release
DO_MFG=ON/OFF
USE_MPI=ON/OFF

The build type specifies whether to compile with debugging symbols and assertions (Debug) or with optimizations (Release).

The option for manufactured solutions (DO_MFG) switches from solving the “standard” heat equation to solving a known solution heat equation so that the correctness of the code can be tested.

Lastly the MPI option is only used to specify where to write output information when using the pout() function from the Utilities.hh file. If USE_MPI is set to ON, then every processor writes to its own file called pout.<#> where <#> is the processor number. If USE_MPI is set to OFF, then every processor writes to stdout.

Running

Once parallel_in_time has been compiled, the program can be run by calling the binary generated in ./build/bin/. The tests can be run by calling ctest from inside the build/ directory. Unless the output path has been changed in the source code (it is currently hard coded), the output files will be placed into the folder the command was called from.

There are no argument parameters for any of the binaries or tests.

Results

To test the performance, a strong scaling study is run. The spatial grid is fixed at 3201 degrees of freedom, and the time domain consists of 25,000 uniform timesteps. No spatial parallelization is used and the grid is fixed for all timesteps. The parallel-in-time solution is computed using XBraid's multigrid reduction in time algorithm on 1, 2, 4, 16, 32, and 64 processors. The serial-in-time solution is run on a single processor using traditional sequential time stepping. The results are shown in the figure below. Running the multigrid algorithm on a single processor takes about an order of magnitude longer than running the serial algorithm on a single processor. At 16 processors the wall clock time of the multigrid algorithm is approximately the same as that of the serial algorithm, and at 32 and 64 processors in time the multigrid algorithm is faster, by about a factor of 2 at 64 processors.

Strong scaling results

Conclusions

Using 64 times as many processors results in a speedup factor of approximately 2. This is a fairly heavy computational cost for a slight speedup. For comparison, the reference paper achieved a speedup of approximately 10 when using 256 times as many processors as the serial case when solving the heat equation in two dimensions. A similar increase in processors may be needed to increase the speedup factor of this code from 2 to 10. The choice of whether to use serial-in-time or parallel-in-time largely rests on the size of the problem being solved, the amount of computing resources available, and the urgency of the problem being solved. Increasing the required number of timesteps will benefit the parallel-in-time algorithm, provided enough extra resources are available.

Future Work

There are many routes available for future work. First there are many optimizations that could be made to speed up the existing code base such as exploring the different multigrid cycles and finding the optimal load balancing. Ultimately those optimizations will probably only result in marginal gains per the reference paper. Allowing XBraid to prolong and restrict the spatial grid may be one of the more promising avenues of improvement.

Future work that is of interest to the authors of XBraid is the development of adaptive mesh refinement (AMR) in a parallel-in-time algorithm. A particular challenge with parallel-in-time AMR is the time subcycling that occurs in spatial subdomains with sequential time stepping. This code base does not use spatial subdomains with AMR and so could provide an easier setting for understanding the coarsening and refining of the time domain.

Annotated version of src/BraidFuncs.cc

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #include "BraidFuncs.hh"
 

This advances the solution forward by one time step. First some data is collected from the status struct, namely the start and stop time and the current timestep number. The timestep size \(\Delta t\) is calculated, and the step function from the HeatEquation is used to advance the solution.

  int my_Step(braid_App app,
  braid_Vector ustop,
  braid_Vector fstop,
  braid_Vector u,
  braid_StepStatus status)
  {
  UNUSED(ustop);
  UNUSED(fstop);
  double tstart; /* current time */
  double tstop; /* evolve to this time*/
  int level;
  double deltaT;
 
  int index;
  braid_StepStatusGetLevel(status, &level);
  braid_StepStatusGetTstartTstop(status, &tstart, &tstop);
  braid_StepStatusGetTIndex(status, &index);
 
  deltaT = tstop - tstart;
 
  dealii::Vector<double>& solution = u->data;
 
  HeatEquation<2>& heateq = app->eq;
 
  heateq.step(solution, deltaT, tstart, index);
 
  return 0;
  }
 
 

In this function we initialize a vector at an arbitrary time. At this point we don't know anything about what the solution looks like, and we can really initialize to anything, so in this case use reinit to initialize the memory and set the values to zero.

  int
  my_Init(braid_App app,
  double t,
  braid_Vector *u_ptr)
  {
  my_Vector *u = new(my_Vector);
  int size = app->eq.size();
  u->data.reinit(size);
 
  app->eq.initialize(t, u->data);
 
  *u_ptr = u;
 
  return 0;
  }
 

Here we need to copy the vector u into the vector v. We do this by allocating a new vector, then reinitializing the deal.II vector to the correct size. The deal.II reinitialization sets every value to zero, so next we iterate over the vector u and copy the values into the new vector v.

  int
  my_Clone(braid_App app,
  braid_Vector u,
  braid_Vector *v_ptr)
  {
  UNUSED(app);
  my_Vector *v = new(my_Vector);
  int size = u->data.size();
  v->data.reinit(size);
  for(size_t i=0, end=v->data.size(); i != end; ++i)
  {
  v->data[i] = u->data[i];
  }
  *v_ptr = v;
 
  return 0;
  }
 

Here we need to free the memory used by vector u. This is pretty simple since the deal.II vector is stored inside the XBraid vector, so we just delete the XBraid vector u; its destructor takes the deal.II vector out of scope and releases its memory.

  int
  my_Free(braid_App app,
  braid_Vector u)
  {
  UNUSED(app);
  delete u;
 
  return 0;
  }
 

This performs an axpy-type operation, that is, \(y = \alpha x + \beta y\). Fortunately deal.II already has this operation built into its vector class, so we get a reference to the vector y and call the sadd method.

  int my_Sum(braid_App app,
  double alpha,
  braid_Vector x,
  double beta,
  braid_Vector y)
  {
  UNUSED(app);
  Vector<double>& vec = y->data;
  vec.sadd(beta, alpha, x->data);
 
  return 0;
  }
 

This calculates the spatial norm using the l2 norm. According to XBraid, this could be just about any spatial norm, but we keep it simple and use the deal.II vector's built-in l2_norm method.

  int
  my_SpatialNorm(braid_App app,
  braid_Vector u,
  double *norm_ptr)
  {
  UNUSED(app);
  double dot = 0.0;
  dot = u->data.l2_norm();
  *norm_ptr = dot;
 
  return 0;
  }
 

This function is called at various points depending on the access level specified when configuring the XBraid struct. This function is used to print out data during the run time, such as plots of the data. The status struct contains a ton of information about the simulation run. Here we get the current time and timestep number. The output_results function is called to plot the solution data. If the method of manufactured solutions is being used, then the error of this time step is computed and processed.

  int
  my_Access(braid_App app,
  braid_Vector u,
  braid_AccessStatus astatus)
  {
  double t;
  int index;
 
  braid_AccessStatusGetT(astatus, &t);
  braid_AccessStatusGetTIndex(astatus, &index);
 
  app->eq.output_results(index, t, u->data);
 
  #if DO_MFG
  if(index == app->final_step)
  {
  app->eq.process_solution(t, index, u->data);
  }
  #endif
 
  return 0;
  }
 

This calculates the size of buffer needed to pack the solution data into a linear buffer for transfer to another processor via MPI. We query the size of the data from the HeatEquation class and return the buffer size.

  int
  my_BufSize(braid_App app,
  int *size_ptr,
  braid_BufferStatus bstatus)
  {
  UNUSED(bstatus);
  int size = app->eq.size();
  *size_ptr = (size+1)*sizeof(double);
 
  return 0;
  }
 

This function packs a linear buffer with data so that the buffer may be sent to another processor via MPI. The buffer is cast to a type we can work with. The first element of the buffer is the size of the buffer. Then we iterate over solution vector u and fill the buffer with our solution data. Finally we tell XBraid how much data we wrote.

  int
  my_BufPack(braid_App app,
  braid_Vector u,
  void *buffer,
  braid_BufferStatus bstatus)
  {
 
  UNUSED(app);
  double *dbuffer = (double*)buffer;
  int size = u->data.size();
  dbuffer[0] = size;
  for(int i=0; i != size; ++i)
  {
  dbuffer[i+1] = (u->data)[i];
  }
  braid_BufferStatusSetSize(bstatus, (size+1)*sizeof(double));
 
  return 0;
  }
 

This function unpacks a buffer that was received from a different processor via MPI. The size of the buffer is read from the first element, then we iterate over the size of the buffer and fill the values of solution vector u with the data in the buffer.

  int
  my_BufUnpack(braid_App app,
  void *buffer,
  braid_Vector *u_ptr,
  braid_BufferStatus bstatus)
  {
  UNUSED(app);
  UNUSED(bstatus);
 
  my_Vector *u = NULL;
  double *dbuffer = (double*)buffer;
  int size = static_cast<int>(dbuffer[0]);
  u = new(my_Vector);
  u->data.reinit(size);
 
  for(int i = 0; i != size; ++i)
  {
  (u->data)[i] = dbuffer[i+1];
  }
  *u_ptr = u;
 
  return 0;
  }

Annotated version of src/BraidFuncs.hh

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #ifndef _BRAIDFUNCS_H_
  #define _BRAIDFUNCS_H_
 
 
 
 
  /*-------- Third Party --------*/
  #include <deal.II/numerics/vector_tools.h>
 
  #include <braid.h>
  #include <braid_test.h>
 
  /*-------- Project --------*/
  #include "HeatEquation.hh"
 

This struct contains all data that changes with time. For now this is just the solution data. When doing AMR this should probably also include the triangulation, the sparsity pattern, constraints, etc.

 
  typedef struct _braid_Vector_struct
  {
  dealii::Vector<double> data;
  } my_Vector;
 

This struct contains all the data that is unchanging with time.

 
  typedef struct _braid_App_struct
  {
  HeatEquation<2> eq;
  int final_step;
  } my_App;
 
 
 
  int my_Step(braid_App app,
  braid_Vector ustop,
  braid_Vector fstop,
  braid_Vector u,
  braid_StepStatus status);
 
 
 
  int
  my_Init(braid_App app,
  double t,
  braid_Vector *u_ptr);
 
 
 
  int
  my_Clone(braid_App app,
  braid_Vector u,
  braid_Vector *v_ptr);
 
 
 
  int
  my_Free(braid_App app,
  braid_Vector u);
 
 
 
  int
  my_Sum(braid_App app,
  double alpha,
  braid_Vector x,
  double beta,
  braid_Vector y);
 
 
  int
  my_SpatialNorm(braid_App app,
  braid_Vector u,
  double *norm_ptr);
 
 
  int
  my_Access(braid_App app,
  braid_Vector u,
  braid_AccessStatus astatus);
 
 
  int
  my_BufSize(braid_App app,
  int *size_ptr,
  braid_BufferStatus bstatus);
 
 
  int
  my_BufPack(braid_App app,
  braid_Vector u,
  void *buffer,
  braid_BufferStatus bstatus);
 
 
  int
  my_BufUnpack(braid_App app,
  void *buffer,
  braid_Vector *u_ptr,
  braid_BufferStatus bstatus);
 
  #endif // _BRAIDFUNCS_H_

Annotated version of src/HeatEquation.hh

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #ifndef _HEATEQUATION_H_
  #define _HEATEQUATION_H_
 
  #include <deal.II/base/utilities.h>
  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/function.h>
  #include <deal.II/base/logstream.h>
  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/precondition.h>
  #include <deal.II/lac/affine_constraints.h>
  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/grid_refinement.h>
  #include <deal.II/grid/grid_out.h>
  #include <deal.II/grid/tria_accessor.h>
  #include <deal.II/grid/tria_iterator.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_accessor.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/numerics/data_out.h>
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/error_estimator.h>
  #include <deal.II/numerics/solution_transfer.h>
  #include <deal.II/numerics/matrix_tools.h>
  #include <deal.II/base/convergence_table.h>
 
  #include <fstream>
 
  using namespace dealii;
 

The HeatEquation class describes the finite element solver for the heat equation. It contains all the functions needed to define the problem domain and advance the solution in time.

  template <int dim>
  class HeatEquation
  {
  public:
  HeatEquation();
  void define();
  void step(Vector<double>& braid_data,
  double deltaT,
  double a_time,
  int a_time_idx);
 
  int size() const;
 
  void output_results(int a_time_idx,
  double a_time,
  Vector<double>& a_solution) const;
 
  void initialize(double a_time,
  Vector<double>& a_vector) const;
 
  void process_solution(double a_time,
  int a_index,
  const Vector<double>& a_vector);
 
  private:
  void setup_system();
  void solve_time_step(Vector<double>& a_solution);
 
  Triangulation<dim> triangulation;
  FE_Q<dim> fe;
  DoFHandler<dim> dof_handler;
 
  AffineConstraints<double> constraints;
 
  SparsityPattern sparsity_pattern;
  SparseMatrix<double> mass_matrix;
  SparseMatrix<double> laplace_matrix;
  SparseMatrix<double> system_matrix;
 
  Vector<double> system_rhs;
 
  std::ofstream myfile;
 
  const double theta;
 

These were originally in the run() function, but because run() is split up into define() and step(), they need to become member data.

  Vector<double> tmp;
  Vector<double> forcing_terms;
 
  ConvergenceTable convergence_table;
  };
 

The RightHandSide class describes the RHS of the governing equations. In this case, it is the forcing function.

  template <int dim>
  class RightHandSide : public Function<dim>
  {
  public:
  RightHandSide ()
  :
  Function<dim>(),
  period (0.2)
  {}
 
  virtual double value (const Point<dim> &p,
  const unsigned int component = 0) const override;
 
  private:
  const double period;
  };
 

The BoundaryValues class describes the boundary conditions of the governing equations.

  template <int dim>
  class BoundaryValues : public Function<dim>
  {
  public:
  virtual double value (const Point<dim> &p,
  const unsigned int component = 0) const override;
  };
 

The RightHandSideMFG class describes the right hand side function when doing the method of manufactured solutions.

  template <int dim>
  class RightHandSideMFG : public Function<dim>
  {
  public:
  virtual double value (const Point<dim> &p,
  const unsigned int component = 0) const override;
  };
 

The InitialValuesMFG class describes the initial values when doing the method of manufactured solutions.

  template <int dim>
  class InitialValuesMFG : public Function<dim>
  {
  public:
  virtual double value (const Point<dim> &p,
  const unsigned int component = 0) const override;
  };
 

Provides the exact value for the manufactured solution. This is used for the boundary conditions as well.

  template <int dim>
  class ExactValuesMFG : public Function<dim>
  {
  public:
 
  virtual double value (const Point<dim> &p,
  const unsigned int component = 0) const override;
 
 
  virtual Tensor<1,dim> gradient (const Point<dim> &p,
  const unsigned int component = 0) const override;
  };
 
 
  #include "HeatEquationImplem.hh"
 
  #endif // _HEATEQUATION_H_

Annotated version of src/HeatEquationImplem.hh

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #include "Utilities.hh"
 
  #include <iomanip>
  #include <math.h>
 

Calculates the forcing function for the RightHandSide. See the documentation for the math.

  template <int dim>
  double RightHandSide<dim>::value (const Point<dim> &p,
  const unsigned int component) const
  {
  (void) component;
  Assert (component == 0, ExcIndexRange(component, 0, 1));
  Assert (dim == 2, ExcNotImplemented());
 
  double time = this->get_time();
 
  if ((p[0] > 0.5) && (p[1] > -0.5))
  {
  return std::exp(-0.5*(time-0.125)*(time-0.125)/(0.005));
  }
  else if ((p[0] > -0.5) && (p[1] > 0.5))
  {
  return std::exp(-0.5*(time-0.375)*(time-0.375)/(0.005));
  }
  else
  {
  return 0;
  }
 
  return 0; // No forcing function
  }
 

Calculates the forcing function for the method of manufactured solutions. See the documentation for the math.

  template <int dim>
  double RightHandSideMFG<dim>::value (const Point<dim> &p,
  const unsigned int component) const
  {
  (void) component;
  Assert (component == 0, ExcIndexRange(component, 0, 1));
  Assert (dim == 2, ExcNotImplemented());
 
  double time = this->get_time();
 
  double pi = numbers::PI;
  return 4*pi*pi*std::exp(-4*pi*pi*time)*std::cos(2*pi*p[0])*std::cos(2*pi*p[1]);
  }
 

Calculates the boundary conditions, essentially zero everywhere.

  template <int dim>
  double BoundaryValues<dim>::value (const Point<dim> &p,
  const unsigned int component) const
  {
  UNUSED(p);
  (void) component;
  Assert (component == 0, ExcIndexRange(component, 0, 1));
  return 0;
  }
 

Calculates the exact solution (and thus also boundary conditions) for the method of manufactured solutions.

  template <int dim>
  double ExactValuesMFG<dim>::value (const Point<dim> &p,
  const unsigned int component) const
  {
  (void) component;
  Assert (component == 0, ExcIndexRange(component, 0, 1));
 
  double time = this->get_time();
  const double pi = numbers::PI;
 
  return std::exp(-4*pi*pi*time)*std::cos(2*pi*p[0])*std::cos(2*pi*p[1]);
  }
 

Calculates the gradient of the exact solution for the method of manufactured solutions. See the documentation for the math.

  template <int dim>
  Tensor<1,dim> ExactValuesMFG<dim>::gradient (const Point<dim> &p,
  const unsigned int) const
  {
  Assert (dim == 2, ExcNotImplemented());
 
  Tensor<1,dim> return_value;
  const double pi = numbers::PI;
  double time = this->get_time();
  return_value[0] = -2*pi*std::exp(-4*pi*pi*time)*std::cos(2*pi*p[1])*std::sin(2*pi*p[0]);
  return_value[1] = -2*pi*std::exp(-4*pi*pi*time)*std::cos(2*pi*p[0])*std::sin(2*pi*p[1]);
  return return_value;
  }
 

Calculates the initial values for the method of manufactured solutions. See the documentation for the math.

  template <int dim>
  double InitialValuesMFG<dim>::value (const Point<dim> &p,
  const unsigned int component) const
  {
  (void) component;
  Assert (component == 0, ExcIndexRange(component, 0, 1));
  const double pi = numbers::PI;
 
  return std::cos(2*pi*p[0])*std::cos(2*pi*p[1]);
  }
 
  template <int dim>
  HeatEquation<dim>::HeatEquation ()
  :
  fe(1),
  dof_handler(triangulation),
  theta(0.5)
  {
  }
 
  template <int dim>
  void HeatEquation<dim>::initialize(double a_time,
  Vector<double>& a_vector) const
  {
  #if DO_MFG

We only initialize values in the manufactured solution case

  InitialValuesMFG<dim> iv_function;
  iv_function.set_time(a_time);
  VectorTools::project (dof_handler, constraints,
  QGauss<dim>(fe.degree+1), iv_function,
  a_vector);
  #else
  UNUSED(a_time);
  UNUSED(a_vector);
  #endif // DO_MFG

If this is not the manufactured solution case, a_vector has already been zeroed by reinit, so do nothing.

  }
 
  template <int dim>
  void HeatEquation<dim>::setup_system()
  {
  dof_handler.distribute_dofs(fe);
 
  constraints.clear ();
  DoFTools::make_hanging_node_constraints(dof_handler,
  constraints);
  constraints.close();
 
  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler,
  dsp,
  constraints,
  /*keep_constrained_dofs = */ true);
  sparsity_pattern.copy_from(dsp);
 
  mass_matrix.reinit(sparsity_pattern);
  laplace_matrix.reinit(sparsity_pattern);
  system_matrix.reinit(sparsity_pattern);
 
  MatrixCreator::create_mass_matrix(dof_handler,
  QGauss<dim>(fe.degree+1),
  mass_matrix);
  MatrixCreator::create_laplace_matrix(dof_handler,
  QGauss<dim>(fe.degree+1),
  laplace_matrix);
 
  system_rhs.reinit(dof_handler.n_dofs());
  }
 
 
  template <int dim>
  void HeatEquation<dim>::solve_time_step(Vector<double>& a_solution)
  {
  SolverControl solver_control(1000, 1e-8 * system_rhs.l2_norm());
  SolverCG<> cg(solver_control);
 
  PreconditionSSOR<> preconditioner;
  preconditioner.initialize(system_matrix, 1.0);
 
  cg.solve(system_matrix, a_solution, system_rhs,
  preconditioner);
 
  constraints.distribute(a_solution);
  }
 
 
 
  template <int dim>
  void HeatEquation<dim>::output_results(int a_time_idx,
  double a_time,
  Vector<double>& a_solution) const
  {
 
  DataOutBase::VtkFlags vtk_flags;
  vtk_flags.time = a_time;
  vtk_flags.cycle = a_time_idx;
 
  DataOut<dim> data_out;
  data_out.set_flags(vtk_flags);
 
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(a_solution, "U");
 
  data_out.build_patches();
 
  const std::string filename = "solution-"
  + Utilities::int_to_string(a_time_idx, 3) +
  ".vtk";
  std::ofstream output(filename.c_str());
  data_out.write_vtk(output);
  }
 

We define the geometry here; this is called on each processor and does not change in time. Once doing AMR, this won't need to exist anymore.

  template <int dim>
  void HeatEquation<dim>::define()
  {
  const unsigned int initial_global_refinement = 6;
 
  GridGenerator::hyper_L (triangulation);
  triangulation.refine_global (initial_global_refinement);
 
  setup_system();
 
  tmp.reinit (dof_handler.n_dofs());
  forcing_terms.reinit (dof_handler.n_dofs());
  }
 

Here we advance the solution forward in time. This is done the same way as in the loop in step-26's run function.
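Concretely, with mass matrix \(M\), Laplace matrix \(A\), forcing vector \(F^n\) at time level \(n\), and \(\theta = 0.5\) (Crank-Nicolson, as set in the constructor), each call assembles and solves

\begin{align} \left(M + \theta \Delta t A\right) U^{n+1} = \left(M - (1-\theta)\Delta t A\right) U^{n} + \Delta t\left[ \theta F^{n+1} + (1-\theta)F^{n} \right], \end{align}

which is the same scheme as in step-26: the code below builds the right-hand side first, then the system matrix, and finally applies boundary conditions before solving.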

  template<int dim>
  void HeatEquation<dim>::step(Vector<double>& braid_data,
  double deltaT,
  double a_time,
  int a_time_idx)
  {
  a_time += deltaT;
  ++a_time_idx;
 
  mass_matrix.vmult(system_rhs, braid_data);
 
  laplace_matrix.vmult(tmp, braid_data);
 
  system_rhs.add(-(1 - theta) * deltaT, tmp);
 
  #if DO_MFG
  RightHandSideMFG<dim> rhs_function;
  #else
  RightHandSide<dim> rhs_function;
  #endif
  rhs_function.set_time(a_time);
  VectorTools::create_right_hand_side(dof_handler,
  QGauss<dim>(fe.degree+1),
  rhs_function,
  tmp);
 
  forcing_terms = tmp;
  forcing_terms *= deltaT * theta;
 
  rhs_function.set_time(a_time - deltaT);
  VectorTools::create_right_hand_side(dof_handler,
  QGauss<dim>(fe.degree+1),
  rhs_function,
  tmp);
 
  forcing_terms.add(deltaT * (1 - theta), tmp);
  system_rhs += forcing_terms;
 
  system_matrix.copy_from(mass_matrix);
  system_matrix.add(theta * deltaT, laplace_matrix);
 
  constraints.condense (system_matrix, system_rhs);
 
  {
  #if DO_MFG

If we are doing the method of manufactured solutions then we set the boundary conditions to the exact solution. Otherwise the boundary conditions are zero.

  ExactValuesMFG<dim> boundary_values_function;
  #else
  BoundaryValues<dim> boundary_values_function;
  #endif
  boundary_values_function.set_time(a_time);
 
  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
  0,
  boundary_values_function,
  boundary_values);
 
  MatrixTools::apply_boundary_values(boundary_values,
  system_matrix,
  braid_data,
  system_rhs);
  }
 
  solve_time_step(braid_data);
  }
 
  template<int dim>
  int HeatEquation<dim>::size() const
  {
  return dof_handler.n_dofs();
  }
 

This function computes the error for the time step when doing the method of manufactured solutions. First the exact solution is evaluated, then the difference per cell is computed for the various norms and the global errors are computed. These are written out to a pretty table.

  template<int dim> void
  HeatEquation<dim>::process_solution(double a_time,
  int a_index,
  const Vector<double>& a_vector)
  {

Compute the exact value for the manufactured solution case

  ExactValuesMFG<dim> exact_function;
  exact_function.set_time(a_time);
 
  Vector<double> difference_per_cell (triangulation.n_active_cells());
  VectorTools::integrate_difference(dof_handler,
  a_vector,
  exact_function,
  difference_per_cell,
  QGauss<dim>(fe.degree+1),
  VectorTools::L2_norm);
  const double L2_error = VectorTools::compute_global_error(triangulation,
  difference_per_cell,
  VectorTools::L2_norm);
 
  VectorTools::integrate_difference(dof_handler,
  a_vector,
  exact_function,
  difference_per_cell,
  QGauss<dim>(fe.degree+1),
  VectorTools::H1_seminorm);
  const double H1_error = VectorTools::compute_global_error(triangulation,
  difference_per_cell,
  VectorTools::H1_seminorm);
 
  const QTrapezoid<1> q_trapez;
  const QIterated<dim> q_iterated (q_trapez, 5);
  VectorTools::integrate_difference(dof_handler,
  a_vector,
  exact_function,
  difference_per_cell,
  q_iterated,
  VectorTools::Linfty_norm);
  const double Linfty_error = VectorTools::compute_global_error(triangulation,
  difference_per_cell,
  VectorTools::Linfty_norm);
 
  const unsigned int n_active_cells = triangulation.n_active_cells();
  const unsigned int n_dofs = dof_handler.n_dofs();
 
  pout() << "Cycle " << a_index << ':'
  << std::endl
  << " Number of active cells: "
  << n_active_cells
  << std::endl
  << " Number of degrees of freedom: "
  << n_dofs
  << std::endl;
 
  convergence_table.add_value("cycle", a_index);
  convergence_table.add_value("cells", n_active_cells);
  convergence_table.add_value("dofs", n_dofs);
  convergence_table.add_value("L2", L2_error);
  convergence_table.add_value("H1", H1_error);
  convergence_table.add_value("Linfty", Linfty_error);
 
  convergence_table.set_precision("L2", 3);
  convergence_table.set_precision("H1", 3);
  convergence_table.set_precision("Linfty", 3);
 
  convergence_table.set_scientific("L2", true);
  convergence_table.set_scientific("H1", true);
  convergence_table.set_scientific("Linfty", true);
 
  convergence_table.set_tex_caption("cells", "\\# cells");
  convergence_table.set_tex_caption("dofs", "\\# dofs");
  convergence_table.set_tex_caption("L2", "@fL^2@f-error");
  convergence_table.set_tex_caption("H1", "@fH^1@f-error");
  convergence_table.set_tex_caption("Linfty", "@fL^\\infty@f-error");
 
  convergence_table.set_tex_format("cells", "r");
  convergence_table.set_tex_format("dofs", "r");
 
  std::cout << std::endl;
  convergence_table.write_text(std::cout);
 
  std::ofstream error_table_file("tex-conv-table.tex");
  convergence_table.write_tex(error_table_file);
  }

Annotated version of src/Utilities.cc

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #include "Utilities.hh"
 
  #include <cstdio>
  #include <string>
  #include <fstream>
 
  #include <mpi.h>
 
  int procID = 0;
 
  #ifdef USE_MPI

The shared variables

  static std::string s_pout_filename ;
  static std::string s_pout_basename ;
  static std::ofstream s_pout ;
 
  static bool s_pout_init = false ;
  static bool s_pout_open = false ;
 

In parallel, compute the filename given the basename. [NOTE: don't call this before MPI is initialized.]

  static void setFileName()
  {
  static const size_t ProcnumSize = 1 + 10 + 1 ; //'.' + 10digits + '\0'
  char procnum[ProcnumSize] ;
  snprintf( procnum ,ProcnumSize ,".%d" ,procID);
  s_pout_filename = s_pout_basename + procnum ;
  }
 

In parallel, close the file if necessary, then open it and check for success.

  static void openFile()
  {
  if ( s_pout_open )
  {
  s_pout.close();
  }
  s_pout.open( s_pout_filename.c_str() );

if open() fails, we have problems, but it's better to try again later than to make believe it succeeded

  s_pout_open = (bool)s_pout ;
  }
  #endif
 
  std::ostream& pout()
  {
  #ifdef USE_MPI

the common case is _open == true, which just returns s_pout

  if ( ! s_pout_open )
  {

the uncommon case: the file isn't opened, MPI may not be initialized, and the basename may not have been set

  int flag_i, flag_f;
  MPI_Initialized(&flag_i);
  MPI_Finalized(&flag_f);

app hasn't set a basename yet, so set the default

  if ( ! s_pout_init )
  {
  s_pout_basename = "pout" ;
  s_pout_init = true ;
  }

if MPI not initialized, we can't open the file so return cout

  if ( ! flag_i || flag_f)
  {
  return std::cout; // MPI hasn't been started yet, or has ended....
  }

MPI is initialized, so file must not be, so open it

  setFileName() ;
  openFile() ;

finally, in case the open failed, return cout

  if ( ! s_pout_open )
  {
  return std::cout ;
  }
  }
  return s_pout ;
  #else
  return std::cout;
  #endif
  }

Annotated version of src/Utilities.hh

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #ifndef _UTILITIES_H_
  #define _UTILITIES_H_
 
  #include <iostream>
 

This preprocessor macro is used on function arguments that are not used in the function. It is used to suppress compiler warnings.

  #define UNUSED(x) (void)(x)
 

Contains the current MPI processor ID.

  extern int procID;
 

Function to return the ostream to write out to. In MPI mode it returns a stream to a file named pout.<#> where <#> is the procID. This allows the user to write output from each processor to a separate file. In serial mode (no MPI), it returns the standard output.

  std::ostream& pout();
  #endif

Annotated version of src/parallel_in_time.cc

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   *
*   * Author: Joshua Christopher, Colorado State University, 2018
*   */
 
  #include "BraidFuncs.hh"
  #include "HeatEquation.hh"
  #include "Utilities.hh"
 
  #include <fstream>
  #include <iostream>
 
  int main(int argc, char *argv[])
  {
  try
  {
  using namespace dealii;
 
  /* Initialize MPI */
  MPI_Comm comm; //, comm_x, comm_t;
  int rank;
  MPI_Init(&argc, &argv);
  comm = MPI_COMM_WORLD;
  MPI_Comm_rank(comm, &rank);
  procID = rank;
 

Set up X-Braid

  /* Initialize Braid */
  braid_Core core;
  double tstart = 0.0;
  double tstop = 0.002;
  int ntime = 10;
  my_App *app = new(my_App);
 
  braid_Init(MPI_COMM_WORLD, comm, tstart, tstop, ntime, app,
  my_Step, my_Init, my_Clone, my_Free, my_Sum, my_SpatialNorm,
  my_Access, my_BufSize, my_BufPack, my_BufUnpack, &core);
 
  /* Define XBraid parameters.
   * See the -help message for descriptions. */
  int max_levels = 3;

int nrelax = 1; int skip = 0;

  double tol = 1.e-7;

int cfactor = 2;

  int max_iter = 5;

int min_coarse = 10; int fmg = 0; int scoarsen = 0; int res = 0; int wrapper_tests = 0;

  int print_level = 1;
  int access_level = 1;
  int use_sequential= 0;
 
  braid_SetPrintLevel( core, print_level);
  braid_SetAccessLevel( core, access_level);
  braid_SetMaxLevels(core, max_levels);

braid_SetMinCoarse( core, min_coarse ); braid_SetSkip(core, skip); braid_SetNRelax(core, -1, nrelax);

  braid_SetAbsTol(core, tol);

braid_SetCFactor(core, -1, cfactor);

  braid_SetMaxIter(core, max_iter);
  braid_SetSeqSoln(core, use_sequential);
 
  app->eq.define();
  app->final_step = ntime;
 
  braid_Drive(core);
 

Free the memory now that we are done

  braid_Destroy(core);
 
  delete app;
 

Clean up MPI. (MPI_Comm_free(&comm) is left commented out in the source.)

  MPI_Finalize();
  }
  catch (std::exception &exc)
  {
  std::cerr << std::endl << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Exception on processing: " << std::endl << exc.what()
  << std::endl << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
 
  return 1;
  }
  catch (...)
  {
  std::cerr << std::endl << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Unknown exception!" << std::endl << "Aborting!"
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  return 1;
  }
 
  return 0;
  }
 

Annotated version of test/test_braid.cc

  /* -----------------------------------------------------------------------------
*   *
*   * SPDX-License-Identifier: LGPL-2.1-or-later
*   * Copyright (C) 2018 by Joshua Christopher
*   *
*   * This file is part of the deal.II code gallery.
*   *
*   * -----------------------------------------------------------------------------
*   */
 
  #include "BraidFuncs.hh"
 
  #include <braid.h>
  #include <braid_test.h>
 
  #include <iostream>
 
  int main(int argc, char** argv)
  {
  MPI_Comm comm;
  int rank;
  MPI_Init(&argc, &argv);
  comm = MPI_COMM_WORLD;
  MPI_Comm_rank(comm, &rank);
 
  my_App *app = new(my_App);
  app->eq.define();
 
  double time = 0.2;
 
  braid_Int init_access_result = braid_TestInitAccess(app,
  comm,
  stdout,
  time,
  my_Init,
  my_Access,
  my_Free);
  (void)init_access_result;
 
  braid_Int clone_result = braid_TestClone(app,
  comm,
  stdout,
  time,
  my_Init,
  my_Access,
  my_Free,
  my_Clone);
  (void)clone_result;
 
  braid_Int sum_result = braid_TestSum(app,
  comm,
  stdout,
  time,
  my_Init,
  my_Access,
  my_Free,
  my_Clone,
  my_Sum);
  (void)sum_result;
 
  braid_Int norm_result = braid_TestSpatialNorm(app,
  comm,
  stdout,
  time,
  my_Init,
  my_Free,
  my_Clone,
  my_Sum,
  my_SpatialNorm);
  (void)norm_result;
 
  braid_Int buf_result = braid_TestBuf(app,
  comm,
  stdout,
  time,
  my_Init,
  my_Free,
  my_Sum,
  my_SpatialNorm,
  my_BufSize,
  my_BufPack,
  my_BufUnpack);
  (void)buf_result;
 

/* Create spatial communicator for wrapper-tests