The step-86 tutorial program

This tutorial depends on step-26, step-40.

Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

This program was contributed by Wolfgang Bangerth (Colorado State University), Stefano Zampini (King Abdullah University of Science and Technology), and Luca Heltai (University of Pisa).

This material is based upon work partially supported by National Science Foundation grants OAC-1835673, DMS-1821210, and EAR-1925595.

Introduction

step-26 solved the simple heat equation, one of the prototypical examples of time dependent problems:

\begin{align*} \frac{\partial u(\mathbf x, t)}{\partial t} - \Delta u(\mathbf x, t) &= f(\mathbf x, t), \qquad\qquad && \forall \mathbf x \in \Omega, t\in (0,T), \\ u(\mathbf x, 0) &= u_0(\mathbf x) && \forall \mathbf x \in \Omega, \\ u(\mathbf x, t) &= g(\mathbf x,t) && \forall \mathbf x \in \partial\Omega, t \in (0,T). \end{align*}

While that program showed a number of advanced techniques such as using adaptive mesh refinement for time-dependent problems, it did not address one big issue: It hand-rolls its own time stepping scheme, which in that program is the simple Crank-Nicolson method with a fixed time step. This is neither accurate nor efficient: We should be using a higher-order time stepping algorithm, and we should use one of the many ways to efficiently and automatically choose the length of the time step in response to the accuracy obtained.

This would of course require quite a lot of development effort – unless, of course, you do what we always advise: You build on what others have already done and have likely done in a way far superior to what one can do by oneself. In the current case, deal.II has interfaces to two such libraries: SUNDIALS, the SUite of Nonlinear and DIfferential/ALgebraic equation Solvers (and here specifically the Runge-Kutta-type solvers wrapped in the SUNDIALS::ARKode class), and PETSc's TS sub-package (wrapped in the PETScWrappers::TimeStepper class). In this program, we will use the PETSc TS interfaces.

While we're at it with updating step-26, we will also make the program run in parallel – a minor change if you've read step-40, for example.

Mapping the heat equation onto an ordinary differential equation formulation

Both the PETSc TS and the SUNDIALS interfaces require that we first write the partial differential equation in the form of a system of ordinary differential equations. To this end, let us turn around the approach we used in step-26. There, we first discretized in time, obtaining a PDE to be solved at each time step that we could then discretize using the finite element method. This approach is called the "Rothe method". Instead, here, we use what's called the "Method of Lines" where we first discretize in space, obtaining a system of ordinary differential equations to which we can apply traditional time steppers. (There are some trade-offs between these two strategies, principally around using dynamically changing meshes; we will get back to this issue later on.)

To get this started, we take the equation above and multiply it by a test function \(\varphi(\mathbf x)\) and integrate by parts to get a weak form: We seek a function \(u(\mathbf x, t)\) that for all test functions \(\varphi \in H^1_0(\Omega)\) satisfies

\begin{align*} \left(\varphi(\mathbf x), \frac{\partial u(\mathbf x, t)}{\partial t} \right)_\Omega + \left(\nabla \varphi(\mathbf x), \nabla u(\mathbf x, t) \right)_\Omega &= \left(\varphi(\mathbf x), f(\mathbf x, t) \right)_\Omega, \\ \left(\varphi(\mathbf x), u(\mathbf x, 0)\right)_\Omega &= \left(\varphi(\mathbf x), u_0(\mathbf x)\right)_\Omega, && \\ u(\mathbf x, t) &= g(\mathbf x,t) && \forall \mathbf x \in \partial\Omega, t \in (0,T). \end{align*}

(Integration by parts would ordinarily result in boundary terms unless one has Dirichlet boundary conditions – possibly nonzero – all around the boundary. We will assume that this is indeed so herein.)

We then discretize by restricting ourselves to finite element functions of the form

\begin{align*} u_h(\mathbf x,t) = \sum_j U_j(t) \varphi_j(\mathbf x), \end{align*}

which leads to the problem of finding a function \(u_h(\mathbf x, t)\) that for all discrete test functions \(\varphi \in V_h(\Omega)\) satisfies

\begin{align*} \left(\varphi_i(\mathbf x), \frac{\partial u_h(\mathbf x, t)}{\partial t} \right)_\Omega + \left(\nabla \varphi_i(\mathbf x), \nabla u_h(\mathbf x, t) \right)_\Omega &= \left(\varphi_i(\mathbf x), f(\mathbf x, t) \right)_\Omega, \\ \left(\varphi_i(\mathbf x), u_h(\mathbf x, 0)\right)_\Omega &= \left(\varphi_i(\mathbf x), u_0(\mathbf x)\right)_\Omega, && \\ u_h(\mathbf x, t) &= g_h(\mathbf x,t) && \forall \mathbf x \in \partial\Omega, t \in (0,T), \end{align*}

where \(g_h\) is an interpolant of the function \(g\) on the boundary.

This equation can be rewritten in matrix form in the usual way, by expanding \(u_h\) into its coefficients times shape function form, pulling the sum over \(j\) out of the integrals, and then considering that choosing test function \(\varphi_i\) leads to the \(i\)th row of the linear system. This then gives us

\begin{align*} M \frac{\partial U(t)}{\partial t} + AU(t) &= F(t), \\ U(0) &= U_0, \end{align*}

plus appropriate boundary conditions.
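
Spelled out, and consistent with the weak form above, the entries of the matrices and vectors in this system are

\begin{align*} M_{ij} = \left(\varphi_i, \varphi_j\right)_\Omega, \qquad A_{ij} = \left(\nabla \varphi_i, \nabla \varphi_j\right)_\Omega, \qquad F_i(t) = \left(\varphi_i, f(\cdot, t)\right)_\Omega, \end{align*}

and the initial vector \(U_0\) is determined by the projected initial condition in the second equation above.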

There are now two perspectives on how one should proceed. If we were to use the SUNDIALS::ARKode wrappers to solve this linear system, we would bring the \(AU\) term to the right hand side and consider the ODE

\begin{align*} M \frac{\partial U(t)}{\partial t} &= - AU(t) + F(t), \end{align*}

which matches the form stated in the documentation of SUNDIALS::ARKode. In particular, ARKode is able to deal with the fact that the time derivative is multiplied by the mass matrix \(M\), which is always there when using finite elements.

On the other hand, when using the PETScWrappers::TimeStepper class, we can solve ODEs that are stated in a general "implicit" form, and in that case we simply bring everything to the left hand side and obtain

\begin{align*} \underbrace{ M \frac{\partial U(t)}{\partial t} + AU(t) - F(t) }_{=:R(t,U,\dot U)} = 0. \end{align*}

This matches the form \(R(t,U,\dot U) = 0\) that you can find in the documentation of PETScWrappers::TimeStepper if you identify the time dependent function \(y=y(t)\) used there with our solution vector \(U(t)\), and our notation \(R(t,U,\dot U)\) with the \(F(t,y,\dot y)\) used there. We rename that function because we want to reserve the symbol \(F\) for the right hand side vector of the ODE that represents the forcing terms.

This program uses the PETScWrappers::TimeStepper class, and so we will take the latter viewpoint. (It is worth noting that SUNDIALS also has a package that can solve ODEs in implicit form, wrapped by the SUNDIALS::IDA class.) In what follows, we will continue to use \(U(t)\) as the function we seek, even though the documentation of the class uses \(y(t)\).

Mapping the differential equation formulation to the time stepper

Having identified how we want to see the problem (namely, as an "implicit" ODE), the question is how we describe the problem to the time stepper. Conceptually, all of the wrappers for time stepping packages we support in deal.II only require us to provide them with a very limited set of operations. Specifically, for the implicit formulation used by PETScWrappers::TimeStepper, all we need to implement are functions that provide the following:

  • A way, for a given \(t,U,\dot U\), to evaluate the residual vector \(R(t,U,\dot U)\).
  • A way, for a given \(t,U,\dot U, \alpha\), to set up a matrix \(J := \dfrac{\partial R}{\partial y} + \alpha \dfrac{\partial R}{\partial \dot y}\). This is often called the "Jacobian" of the implicit function \(R\), perhaps with a small abuse of terminology. In the current case, this matrix is \(J=A + \alpha M\). If you have read through step-26, it is probably not lost on you that this matrix appears prominently there as well – with \(\alpha\) being a multiple of the inverse of the time step (which there we had denoted by \(k_n\)). Importantly, for the linear problem we consider here, \(J\) is a linear combination of matrices that do not depend on \(U\).
  • A way to solve a linear system with this matrix \(J\).

That's really it. If we can provide these three functions, PETSc will do the rest (as would, for example, SUNDIALS::ARKode or, if you prefer the implicit form, SUNDIALS::IDA). It will not be very difficult to set these things up. In practice, the way this will work is that inside the run() function, we will set up lambda functions that can access the information of the surrounding scopes and that return the requested information.
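
To make the connection to step-26 mentioned in the second bullet above concrete, consider as a short worked example what the backward Euler method (the beuler choice used in the input file shown below) does: It replaces \(\dot U\) by a difference quotient and requires that the residual vanish at the new time,

\begin{align*} R\left(t_n, U^n, \frac{U^n-U^{n-1}}{\Delta t}\right) = M \frac{U^n-U^{n-1}}{\Delta t} + A U^n - F(t_n) = 0. \end{align*}

The derivative of this expression with respect to \(U^n\) is exactly \(J = A + \alpha M\) with \(\alpha = \frac{1}{\Delta t}\); up to the overall factor \(\frac{1}{\Delta t}\), this is the matrix \(M + \Delta t\, A\) that step-26 assembles by hand (there with a factor \(\theta\) in front of \(A\) because that program uses the Crank-Nicolson scheme).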

In practice, we often want to provide a fourth function:

  • A callback that is called at the end of each time step and that is provided with the current solution and other information that can be used to "monitor" the progress of computations. One of the ways in which this can be used is to output visualization data every few time steps.
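
To give a flavor of how these callbacks will eventually be hooked up, the following sketch assigns lambda functions that forward to member functions of the main class (declared further down; time_stepper_data is the configuration object the class will store). The callback member names and signatures used here (implicit_function, setup_jacobian, solve_with_jacobian, monitor) reflect our reading of the PETScWrappers::TimeStepper documentation and should be treated as assumptions; the run() function shown later in this program is the authoritative version.

  // Sketch only: the callback member names and signatures below are taken
  // from the PETScWrappers::TimeStepper documentation as assumptions, and the
  // member functions called inside the lambdas are declared further down.
  PETScWrappers::TimeStepper<PETScWrappers::MPI::Vector,
                             PETScWrappers::MPI::SparseMatrix>
    petsc_ts(time_stepper_data);
 
  petsc_ts.implicit_function = [&](const double                      time,
                                   const PETScWrappers::MPI::Vector &y,
                                   const PETScWrappers::MPI::Vector &y_dot,
                                   PETScWrappers::MPI::Vector       &res) {
    this->implicit_function(time, y, y_dot, res); // compute R(t,U,dot U)
  };
 
  petsc_ts.setup_jacobian = [&](const double                      time,
                                const PETScWrappers::MPI::Vector &y,
                                const PETScWrappers::MPI::Vector &y_dot,
                                const double                      alpha) {
    this->assemble_implicit_jacobian(time, y, y_dot, alpha); // J = A + alpha M
  };
 
  petsc_ts.solve_with_jacobian = [&](const PETScWrappers::MPI::Vector &src,
                                     PETScWrappers::MPI::Vector       &dst) {
    this->solve_with_jacobian(src, dst); // solve J dst = src
  };
 
  petsc_ts.monitor = [&](const double                      time,
                         const PETScWrappers::MPI::Vector &y,
                         const unsigned int                step_number) {
    this->output_results(time, step_number, y); // e.g., graphical output
  };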

Complication 1: Dirichlet boundary values

While we like to say that all nodes in a finite element mesh are "degrees of freedom", this is not actually true if we have Dirichlet boundary conditions: Degrees of "freedom" located along Dirichlet boundaries have specific values that are prescribed by the boundary conditions and are, consequently, not "free". Moreover, while the form

\begin{align*} M \frac{\partial U(t)}{\partial t} &= - AU(t) + F(t) \end{align*}

suggests that all elements of the vector \(U(t)\) satisfy a differential equation, this is not actually true for those components of \(U\) that correspond to boundary nodes; rather, their values are prescribed and will in general not satisfy the equation. On second thought, you will realize that the same sort of issue also arises with hanging nodes: These, too, are not really "free" but constrained to values that are derived from the values of neighboring nodes.

For the same reason as we will also discuss in the next section, all of this is easier using the Rothe method we have used in all previous tutorials. There, we end up with a PDE at every time step for which we can independently prescribe boundary conditions and hanging node constraints, and we then deal with those by modifying the matrix and right hand side appropriately. Here, with the method of lines, things are slightly more complicated.

Not too complicated, however: Like with the mesh refinement issues of the next section, Dirichlet boundary conditions (and constrained degrees of freedom in general) are something every PDE solver has to deal with, and because the people who write ODE solvers also have PDEs in mind, they needed to address these cases too and so the interfaces we use are prepared for it. Specifically, what we need to do is mark which entries of the solution vector (i.e., which degrees of freedom) are "algebraic" – that is, satisfy an algebraic, rather than a differential, equation. The way we will do this is that the ODE integrator interface requires us to provide a "callback" function that it can call and that needs to return an IndexSet object in which all the algebraic degrees of freedom are listed. At the end of each solution stage, a second callback then needs to provide the ability to take a solution vector and set its constrained (algebraic) entries to their correct values; for us, that will mean setting boundary values and hanging node constraints correctly.

(We note that when dealing with Differential-Algebraic Equations (DAEs), the algebraic components identified by the first callback mentioned above contain hanging nodes, Dirichlet boundary nodes, and degrees of freedom for which the equation does not provide time derivatives. This is not of concern to us here, and so we will not dwell on the issue – or the fact that the second callback in the case of DAEs should set only a subset of the algebraic components – and instead refer to the examples shown in the SUNDIALS::IDA class documentation.)
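
As a small illustration of what the first of these two callbacks might compute, here is a sketch (not this program's actual code): it gathers the Dirichlet boundary nodes and the hanging nodes into an IndexSet, using the hanging node constraint object that the program will introduce below; whether the time stepper expects all such indices or only the locally owned subset is an assumption to be checked against the PETScWrappers::TimeStepper documentation.

  // Sketch: collect the "algebraic" degrees of freedom, i.e., those whose
  // values are prescribed by Dirichlet boundary conditions or by hanging
  // node constraints rather than by the differential equation.
  IndexSet algebraic_dofs(dof_handler.n_dofs());
  algebraic_dofs.add_indices(DoFTools::extract_boundary_dofs(dof_handler));
  for (const auto &line : hanging_nodes_constraints.get_lines())
    algebraic_dofs.add_index(line.index);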

Complication 2: Mesh refinement

When stating an ODE in the form

\begin{align*} M \frac{\partial U(t)}{\partial t} &= - AU(t) + F(t), \end{align*}

or one of the reformulations discussed above, there is an implicit assumption that the number of entries in the vector \(U\) stays constant and that each entry continues to correspond to the same quantity. But if you use mesh refinement, this is not the case: The number of unknowns goes up or down whenever you refine or coarsen the mesh, and the 42nd (or any other) degree of freedom may be located at an entirely different physical location after mesh refinement than where it was located before. In other words, the size of vectors and what individual vector entries mean change when we do mesh refinement. The ODE form we derived above after spatial discretization simply ceases to be meaningful at these times.

This was precisely why, in all previous time-dependent tutorial programs, we have adopted the Rothe approach. There, one first discretizes in time and obtains a PDE at each time step. This PDE can then be discretized anew – if one really wanted to, with an entirely different mesh in each time step, though we typically don't go that far. On the other hand, being able to use external ODE integrators is undoubtedly very useful, and so let us see if we can shoehorn the mesh refinement complication into what external ODE integrators do. This is, in practice, not as difficult as it may at first sound because, perhaps not surprisingly, ODE integrators are written by people who want to solve problems like ours, and so they have had to deal with the same kinds of complications we are discussing here.

The way we approach the situation from a conceptual perspective is that we break things into "time slabs". Let's say we want to solve on the time interval \([0,T]\); then we break things into slabs \([0=\tau_0,\tau_1], [\tau_1,\tau_2], \ldots, [\tau_{n-1},\tau_n=T]\) where the break points satisfy \(\tau_{k-1}<\tau_k\). On each time slab, we keep the mesh the same, and so we can call into our time integrator. At the end of a time slab, we then save the solution, refine the mesh, set up other data structures, and restore the solution on the new mesh; then we start the time integrator again at the start of the new time slab. This approach guarantees that, for the purposes of ODE solvers, we really only ever give them something that can rightfully be considered an ODE system. A disadvantage is that we typically want to refine or coarsen the mesh relatively frequently (in large-scale codes one often chooses to refine and coarsen the mesh every 10-20 time steps), and that limits the efficiency of time integrators: They gain much of their advantage from being able to choose the time step length automatically, but there is often a cost associated with starting up; if the slabs are too short, that start-up cost is never amortized and the benefit of potentially long time steps is never realized.
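
Purely as a conceptual sketch of this time slab idea (all helper names below are hypothetical, and as explained next, the actual program does not write this loop itself but lets PETSc TS drive everything through callbacks), the logic could look like this:

  // Conceptual sketch only; integrate_in_time(), refine_and_coarsen_mesh(),
  // and transfer_solution_to_new_mesh() are hypothetical helpers.
  double slab_begin = 0;
  while (slab_begin < end_time)
    {
      const double slab_end = std::min(slab_begin + slab_length, end_time);
 
      // On [slab_begin, slab_end] the mesh stays fixed, so the spatially
      // discretized problem really is an ODE system and a time integrator
      // can be run on it:
      integrate_in_time(solution, slab_begin, slab_end);
 
      // Between slabs, adapt the mesh and carry the solution over:
      refine_and_coarsen_mesh();
      transfer_solution_to_new_mesh(solution);
 
      slab_begin = slab_end;
    }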

In practice, good integrators such as those in PETSc TS can deal with this transparently. We just have to give them a way to call back into our code at the end of each time step to ask whether we want to refine the mesh and do some prep work; and a second function that the integrator can then call to do the actual refinement and interpolate solution vectors from old to new mesh. You will see in the run() function that none of this is difficult to explain to the ODE integrator.

Structure of the code

Compared to the code structure of step-26, the program the current tutorial is principally based on, there are essentially two sets of changes:

  • The program runs in parallel via MPI. This may, at first, seem like a major change, but the reality is that it does not show up in very many places. If you compare step-6 (a sequential Laplace solver) with step-40 (its parallel version), you will see that it takes maybe 20 or 30 extra lines of code to make a simple code run in parallel. These are principally related to keeping track of which cells and degrees of freedom are locally owned or are located on ghost cells. We will also have to use matrix and vector data types that are MPI-aware, and we will get those from the PETScWrappers namespace given that we are already using PETSc for the time stepping.
  • In step-26 (and most other tutorials), the logic that drives the program's execution lives in the run() function: You can see the loop over all time steps, and which functions are called where within the loop. Here, however, this is no longer the case. In essence, in run(), we create an object of type PETScWrappers::TimeStepper, and after some set-up, we turn over control to that object's PETScWrappers::TimeStepper::solve() function that contains the loop over time steps and the logic that decides how large the time step needs to be, what needs to happen when, etc. In other words, the details of the program's logic are no longer visible in run(). Instead, what you have to provide to the PETScWrappers::TimeStepper object is a series of "callbacks": Functions that the time stepper can call whenever appropriate. These callbacks are typically small lambda functions that either do the work directly if it only takes a few lines of code, or otherwise call larger member functions of the main class.

The test case

The program solves the heat equation which, together with its right hand side, initial values, and boundary values, reads as

\begin{align*} \frac{\partial u(\mathbf x, t)}{\partial t} - \Delta u(\mathbf x, t) &= f(\mathbf x, t), \qquad\qquad && \forall \mathbf x \in \Omega, t\in (0,T), \\ u(\mathbf x, 0) &= u_0(\mathbf x) && \forall \mathbf x \in \Omega, \\ u(\mathbf x, t) &= g(\mathbf x,t) && \forall \mathbf x \in \partial\Omega, t \in (0,T). \end{align*}

The right hand side \(f\), initial conditions \(u_0\), and Dirichlet boundary values \(g\) are all specified in an input file heat_equation.prm in which these functions are provided as expressions that are parsed and evaluated at run time using the Functions::ParsedFunction<dim> class. The version of this file that is distributed with the library uses

\begin{align*} f(\mathbf x,t) &= 0, \\ u_0(\mathbf x) &= 0, \\ g(\mathbf x,t) &= \begin{cases} \cos(4\pi t) & \text{if } x=-1, \\ -\cos(4\pi t) & \text{if } x=1, \\ 0 & \text{otherwise}, \end{cases} \end{align*}

but this is easily changed.
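
To illustrate how such parsed expressions can be evaluated at run time, here is a small standalone sketch. The tutorial itself instead wires Functions::ParsedFunction through ParameterAcceptorProxy and reads the expressions from heat_equation.prm; the expression below is only meant to mimic the boundary function \(g\) stated above and is an assumption rather than a verbatim copy of the distributed input file.

  #include <deal.II/base/parameter_handler.h>
  #include <deal.II/base/parsed_function.h>
  #include <deal.II/base/point.h>
 
  #include <iostream>
 
  int main()
  {
    using namespace dealii;
 
    // Declare the parameters ParsedFunction understands, then set the
    // expression by hand instead of reading it from a .prm file:
    ParameterHandler prm;
    Functions::ParsedFunction<2>::declare_parameters(prm);
    prm.set("Function expression",
            "if(x==-1, cos(4*pi*t), if(x==1, -cos(4*pi*t), 0))");
 
    Functions::ParsedFunction<2> g;
    g.parse_parameters(prm);
 
    // Evaluate g(x,t) at x=(-1,0), t=0.25, where cos(4 pi t) = cos(pi) = -1:
    g.set_time(0.25);
    std::cout << g.value(Point<2>(-1., 0.)) << std::endl;
  }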

The program's input file also contains two sections that control the time stepper:

subsection Time stepper
  subsection Running parameters
    set final time              = 5
    set initial step size       = 0.025
    set initial time            = 0
    set match final time        = false
    set maximum number of steps = -1
    set options prefix          =
    set solver type             = beuler
  end
  subsection Error control
    set absolute error tolerance = -1
    set relative error tolerance = -1
    set adaptor type             = none
    set ignore algebraic lte     = true
    set maximum step size        = -1
    set minimum step size        = -1
  end
end

The first of these two sections describes things such as the end time up to which we want to run the program, the initial time step size, and the type of the time stepper (where beuler indicates "backward Euler"; the other choices correspond to the time stepping methods provided by PETSc's TS package). We will play with some of these parameters in the results section. As usual when using PETSc solvers, these runtime configuration options can always be complemented (or overridden) via command line options.

The commented program

The program starts with the usual prerequisites, all of which you will know by now from either step-26 (the heat equation solver) or step-40 (the parallel Laplace solver):

  #include <deal.II/base/utilities.h>
  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/function.h>
  #include <deal.II/base/logstream.h>
  #include <deal.II/base/index_set.h>
  #include <deal.II/base/parsed_function.h>
  #include <deal.II/base/parameter_acceptor.h>
  #include <deal.II/base/conditional_ostream.h>
  #include <deal.II/base/timer.h>
  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/petsc_vector.h>
  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <deal.II/lac/petsc_solver.h>
  #include <deal.II/lac/petsc_precondition.h>
  #include <deal.II/lac/affine_constraints.h>
  #include <deal.II/lac/sparsity_tools.h>
  #include <deal.II/distributed/tria.h>
  #include <deal.II/distributed/grid_refinement.h>
  #include <deal.II/distributed/solution_transfer.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/grid_out.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/numerics/data_out.h>
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/error_estimator.h>
  #include <deal.II/numerics/solution_transfer.h>
  #include <deal.II/numerics/matrix_tools.h>
 

The only new include file relevant here is the one that provides us with the PETScWrappers::TimeStepper class:

  #include <deal.II/lac/petsc_ts.h>
 
  #include <fstream>
  #include <iostream>
 
 
  namespace Step86
  {
  using namespace dealii;
 

The HeatEquation class

At its core, this program's principal structure can be understood quite easily if you know step-26 (for the heat equation) and step-40 (for what a parallel solver looks like). It has many of the usual member functions and member variables that, for convenience of documentation, we list towards the top of the class so that we can document the remainder separately below.

We derive the main class from ParameterAcceptor to make dealing with run time parameters and reading them from a parameter file easier. step-60 and step-70 have already explained how this works.

  template <int dim>
  class HeatEquation : public ParameterAcceptor
  {
  public:
  HeatEquation(const MPI_Comm mpi_communicator);
  void run();
 
  private:
  const MPI_Comm mpi_communicator;
 
  ConditionalOStream pcout;
  TimerOutput computing_timer;
 
  parallel::distributed::Triangulation<dim> triangulation;
  FE_Q<dim> fe;
  DoFHandler<dim> dof_handler;
 
  IndexSet locally_owned_dofs;
  IndexSet locally_relevant_dofs;
 
 
  void setup_system(const double time);
 
  void output_results(const double time,
  const unsigned int timestep_number,
  const PETScWrappers::MPI::Vector &solution);
 

At this point, we start to deviate from the "manual" way of solving time dependent problems. High level packages for the solution of ODEs usually expect the user to provide a function that computes the residual of the equation, given the current solution and its time derivative.

This allows those packages to abstract away the details of how the time derivative is approximated (e.g., backward Euler, Crank-Nicolson, etc.) and to present the user with a uniform interface, irrespective of the time stepper used. PETSc TS and SUNDIALS are two examples of such packages, and they require the user to provide C-style callbacks to compute residuals, Jacobians, and the like. In deal.II, we wrap these C-style libraries with our own interfaces, which use a style closer to C++ and allow us to simply define the callbacks via lambda functions. Several of these lambda functions will simply call member functions that do the actual work.

To make it clear what we are doing here, we start by declaring member functions with the same names as the callbacks of the interface that we will use later on (and that are documented both in the introduction of this program and in the documentation of the PETScWrappers::TimeStepper class). These are the functions that compute the residual, the "Jacobian matrix" (see the introduction for how that is defined), and that can solve a linear system with the Jacobian matrix. At the bottom of the following block, we also declare the matrix object that will store this Jacobian.

Note that all of these functions receive the current solution (and, where necessary, its time derivative) as inputs. This is because the solution vector – i.e., the variable that stores the current state of the solution – is kept inside the time integrator object. This is useful: If we kept a copy of the solution vector as a member variable of the current class, one would continuously have to wonder whether it is still in sync with the version of the solution vector the time integrator object internally believes is the currently correct version of this vector. As a consequence, we do not store such a copy here: Whenever a function requires access to the current value of the solution, it receives it from the time integrator as a const argument. The same observation can be made about the variable that stores the current time, or the current length of the time step, or the number of time steps performed so far: They are all kept inside the time stepping object.

  void implicit_function(const double time,
  const PETScWrappers::MPI::Vector &solution,
  const PETScWrappers::MPI::Vector &solution_dot,
  PETScWrappers::MPI::Vector &residual);
 
  void
  assemble_implicit_jacobian(const double time,
  const PETScWrappers::MPI::Vector &solution,
  const PETScWrappers::MPI::Vector &solution_dot,
  const double shift);
 
  void solve_with_jacobian(const PETScWrappers::MPI::Vector &src,
  PETScWrappers::MPI::Vector &dst);
 
  PETScWrappers::MPI::SparseMatrix jacobian_matrix;
 
 
 

In this tutorial program, similar to what we did in step-26, we want to adapt the mesh at regular time intervals. However, if we are using an external time stepper, we need to make sure that the time stepper is aware of the mesh changes – see the discussion on this topic in the introduction. In particular, the time stepper must support the fact that the number of degrees of freedom may change, and it must support the fact that all internal vectors (e.g., the solution vector and all intermediate stages used to compute the current time derivative) will need to be transferred to the new mesh. For reasons that will be discussed in the implementation below, we split the mesh refinement operation into two functions: The first will mark which cells to refine, based on a given solution vector from which we can compute error indicators; the second one will do the actual mesh refinement and transfer a set of vectors from the old to the new mesh.

  void prepare_for_coarsening_and_refinement(
  const PETScWrappers::MPI::Vector &solution);
 
  void transfer_solution_vectors_to_new_mesh(
  const double time,
  const std::vector<PETScWrappers::MPI::Vector> &all_in,
  std::vector<PETScWrappers::MPI::Vector> &all_out);
 

As also discussed in the introduction, we also have to deal with "algebraic" solution components, i.e., degrees of freedom that are not really free but instead have values determined either by boundary conditions or by hanging node constraints. While the values of the boundary conditions can change from time step to time step (because the function \(g(\mathbf x,t)\) may indeed depend on time), hanging node constraints remain the same as long as the mesh remains the same. As a consequence, we will keep an AffineConstraints object that stores the hanging node constraints and that is only updated when the mesh changes, and then an AffineConstraints object current_constraints that we will initialize with the hanging node constraints and then add the constraints due to Dirichlet boundary values at a specified time. This evaluation of boundary values and combining of constraints happens in the update_current_constraints() function.

At one place in the program, we will also need an object that constrains the same degrees of freedom, but with zero values even if the boundary values for the solution are non-zero; we will keep this modified set of constraints in homogeneous_constraints.

  AffineConstraints<double> hanging_nodes_constraints;
  AffineConstraints<double> current_constraints;
  AffineConstraints<double> homogeneous_constraints;
 
  void update_current_constraints(const double time);
 

The remainder of the class is simply consumed with objects that describe either the functioning of the time stepper, when and how to do mesh refinement, and the objects that describe right hand side, initial conditions, and boundary conditions for the PDE. Since we already derive our class from ParameterAcceptor, we exploit its facilities to parse also the parameters of the functions that define the initial value, the right hand side, and the boundary values. We therefore create three ParameterAcceptorProxy objects, which wrap the actual ParsedFunction class into objects that ParameterAcceptor can handle.

  PETScWrappers::TimeStepperData time_stepper_data;
 
  unsigned int initial_global_refinement;
  unsigned int max_delta_refinement_level;
  unsigned int mesh_adaptation_frequency;
 
  ParameterAcceptorProxy<Functions::ParsedFunction<dim>> right_hand_side_function;
  ParameterAcceptorProxy<Functions::ParsedFunction<dim>> initial_value_function;
  ParameterAcceptorProxy<Functions::ParsedFunction<dim>> boundary_values_function;
  };
 
 

The HeatEquation constructor

The constructor is responsible for initializing all member variables of the class. This is relatively straightforward, and includes declaring the parameters that will be filled in when reading the input file via the ParameterAcceptor mechanism previously detailed in step-60 and step-70.

  template <int dim>
  HeatEquation<dim>::HeatEquation(const MPI_Comm mpi_communicator)
  : ParameterAcceptor("/Heat Equation/")
  , mpi_communicator(mpi_communicator)
  , pcout(std::cout,
  (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
  , computing_timer(mpi_communicator,
  pcout,
  TimerOutput::summary,
  TimerOutput::wall_times)
  , triangulation(mpi_communicator,
  typename Triangulation<dim>::MeshSmoothing(
  Triangulation<dim>::smoothing_on_refinement |
  Triangulation<dim>::smoothing_on_coarsening))
  , fe(1)
  , dof_handler(triangulation)
  , time_stepper_data("",
  "beuler",
  /* start time */ 0.0,
  /* end time */ 1.0,
  /* initial time step */ 0.025)
  , initial_global_refinement(5)
  , max_delta_refinement_level(2)
  , mesh_adaptation_frequency(0)
  , right_hand_side_function("/Heat Equation/Right hand side")
  , initial_value_function("/Heat Equation/Initial value")
  , boundary_values_function("/Heat Equation/Boundary values")
  {
  enter_subsection("Time stepper");
  {
  enter_my_subsection(this->prm);
  {
  time_stepper_data.add_parameters(this->prm);
  }
  leave_my_subsection(this->prm);
  }
  leave_subsection();
 
  add_parameter("Initial global refinement",
  initial_global_refinement,
  "Number of times the mesh is refined globally before "
  "starting the time stepping.");
  add_parameter("Maximum delta refinement level",
  max_delta_refinement_level,
  "Maximum number of local refinement levels.");
  add_parameter("Mesh adaptation frequency",
  mesh_adaptation_frequency,
  "When to adapt the mesh.");
  }
 
 
 

The HeatEquation::setup_system() function

This function is not very different from what we do in many other programs (including step-26). We enumerate degrees of freedom, output some information about them, build constraint objects (recalling that we put hanging node constraints into their own object), and then also build an AffineConstraints object that contains both the hanging node constraints and constraints corresponding to zero Dirichlet boundary conditions. This last object is needed since we impose the constraints through algebraic equations. While technically it would be possible to use the time derivative of the boundary function as a boundary condition for the time derivative of the solution, this is not done here. Instead, we impose the boundary conditions through algebraic equations, and therefore the time derivative of the boundary conditions is not part of the algebraic system; we consequently need zero boundary conditions on the time derivative of the solution when computing the residual. We use the homogeneous_constraints object for this purpose.

Finally, we create the actual non-homogeneous current_constraints by calling update_current_constraints(). These are also used during assembly and during the residual evaluation.

  template <int dim>
  void HeatEquation<dim>::setup_system(const double time)
  {
  TimerOutput::Scope t(computing_timer, "setup system");
 
  dof_handler.distribute_dofs(fe);
  pcout << std::endl
  << "Number of active cells: " << triangulation.n_active_cells()
  << std::endl
  << "Number of degrees of freedom: " << dof_handler.n_dofs()
  << std::endl
  << std::endl;
 
  locally_owned_dofs = dof_handler.locally_owned_dofs();
  locally_relevant_dofs =
  DoFTools::extract_locally_relevant_dofs(dof_handler);
 
 
  hanging_nodes_constraints.clear();
  hanging_nodes_constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints(dof_handler,
  hanging_nodes_constraints);
  hanging_nodes_constraints.make_consistent_in_parallel(locally_owned_dofs,
  locally_relevant_dofs,
  mpi_communicator);
  hanging_nodes_constraints.close();
 
 
  homogeneous_constraints.clear();
  homogeneous_constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
  homogeneous_constraints.merge(hanging_nodes_constraints);
  VectorTools::interpolate_boundary_values(dof_handler,
  0,
  Functions::ZeroFunction<dim>(),
  homogeneous_constraints);
  homogeneous_constraints.make_consistent_in_parallel(locally_owned_dofs,
  locally_relevant_dofs,
  mpi_communicator);
  homogeneous_constraints.close();
 
 
  update_current_constraints(time);
 
 

The final block of code resets and initializes the matrix object with the appropriate sparsity pattern. Recall that we do not store solution vectors in this class (the time integrator object does that internally) and so do not have to resize and initialize them either.

  DynamicSparsityPattern dsp(locally_relevant_dofs);
  DoFTools::make_sparsity_pattern(dof_handler,
  dsp,
  homogeneous_constraints,
  false);
  SparsityTools::distribute_sparsity_pattern(dsp,
  locally_owned_dofs,
  mpi_communicator,
  locally_relevant_dofs);
 
  jacobian_matrix.reinit(locally_owned_dofs,
  locally_owned_dofs,
  dsp,
  mpi_communicator);
  }
 
 

The HeatEquation::output_results() function

This function is called from the "monitor" callback that is in turn called by the time stepper at each time step. We use it to write the solution to a file, and so provide graphical output through ParaView or VisIt. We also write a .pvd file, which groups the metadata about all .vtu files into a single file that can be used to load the full time dependent solution in ParaView.

  template <int dim>
  void HeatEquation<dim>::output_results(const double time,
  const unsigned int timestep_number,
  const PETScWrappers::MPI::Vector &y)
  {
  TimerOutput::Scope t(computing_timer, "output results");
 
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(y, "U");
  data_out.build_patches();
 
  data_out.set_flags(DataOutBase::VtkFlags(time, timestep_number));
 
  const std::string filename =
  "solution-" + Utilities::int_to_string(timestep_number, 3) + ".vtu";
  data_out.write_vtu_in_parallel(filename, mpi_communicator);
 
  if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
  {
  static std::vector<std::pair<double, std::string>> times_and_names;
  times_and_names.emplace_back(time, filename);
 
  std::ofstream pvd_output("solution.pvd");
  DataOutBase::write_pvd_record(pvd_output, times_and_names);
  }
  }
 
 
 

The HeatEquation::implicit_function() function

As discussed in the introduction, we describe the ODE system to the time stepper via its residual,

\[ R(t,U,\dot U) = M \frac{\partial U(t)}{\partial t} + AU(t) - F(t). \]

The following function computes it, given vectors for \(U,\dot U\) that we will denote by y and y_dot because that's how they are called in the documentation of the PETScWrappers::TimeStepper class.

At the top of the function, we do the usual set up for computing integrals. We face two minor difficulties here: the first is that the y and y_dot vectors we get as input are read only, but we need to make sure they satisfy the correct boundary conditions, and so we have to set elements in these vectors. The second is that to compute the residual we in general need to evaluate solution values and gradients inside locally owned cells, and for this we need access to degrees of freedom that may be owned by neighboring processors. To address these issues, we create (non-ghosted) writable copies of the input vectors, apply the boundary conditions and hanging node constraints stored in current_constraints, and then copy these vectors into ghosted vectors before we can do anything sensible with them.

  template <int dim>
  void
  HeatEquation<dim>::implicit_function(const double time,
  const PETScWrappers::MPI::Vector &y,
  const PETScWrappers::MPI::Vector &y_dot,
  PETScWrappers::MPI::Vector &residual)
  {
  TimerOutput::Scope t(computing_timer, "implicit function");
 
  PETScWrappers::MPI::Vector tmp_solution(locally_owned_dofs,
  mpi_communicator);
  PETScWrappers::MPI::Vector tmp_solution_dot(locally_owned_dofs,
  mpi_communicator);
  tmp_solution = y;
  tmp_solution_dot = y_dot;
 
  update_current_constraints(time);
  current_constraints.distribute(tmp_solution);
  homogeneous_constraints.distribute(tmp_solution_dot);
 
  PETScWrappers::MPI::Vector locally_relevant_solution(locally_owned_dofs,
  locally_relevant_dofs,
  mpi_communicator);
  PETScWrappers::MPI::Vector locally_relevant_solution_dot(
  locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
  locally_relevant_solution = tmp_solution;
  locally_relevant_solution_dot = tmp_solution_dot;
 
 
  const QGauss<dim> quadrature_formula(fe.degree + 1);
  FEValues<dim> fe_values(fe,
  quadrature_formula,
  update_values | update_gradients |
  update_quadrature_points | update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
  const unsigned int n_q_points = quadrature_formula.size();
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 
  std::vector<Tensor<1, dim>> solution_gradients(n_q_points);
  std::vector<double> solution_dot_values(n_q_points);
 
  Vector<double> cell_residual(dofs_per_cell);
 
  right_hand_side_function.set_time(time);
 

Now for computing the actual residual. Recall that we want to compute the vector

\[ R(t,U,\dot U) = M \frac{\partial U(t)}{\partial t} + AU(t) - F(t). \]

We could do that by actually forming the matrices \(M\) and \(A\), but this is not efficient. Instead, recall (by writing out how the elements of \(M\) and \(A\) are defined, and exchanging integrals and sums) that the \(i\)th element of the residual vector is given by

\begin{align*} R(t,U,\dot U)_i &= \sum_j \int_\Omega \varphi_i(\mathbf x) \varphi_j(\mathbf x) \frac{\partial U_j(t)}{\partial t} + \sum_j \int_\Omega \nabla \varphi_i(\mathbf x) \cdot \nabla \varphi_j(\mathbf x)\, U_j(t) - \int_\Omega \varphi_i(\mathbf x) f(\mathbf x, t) \\ &= \int_\Omega \varphi_i(\mathbf x) \frac{\partial u_h(\mathbf x, t)}{\partial t} + \int_\Omega \nabla \varphi_i(\mathbf x) \cdot \nabla u_h(\mathbf x, t) - \int_\Omega \varphi_i(\mathbf x) f(\mathbf x, t). \end{align*}

We can compute these integrals efficiently by breaking them up into a sum over all cells and then applying quadrature. For the integrand, we need to evaluate the solution and its gradient at the quadrature points within each locally owned cell, and for this, we also need access to degrees of freedom that may be owned by neighboring processors. We therefore use the locally_relevant_solution and locally_relevant_solution_dot vectors.

  residual = 0;
  for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
  {
  fe_values.reinit(cell);
 
  fe_values.get_function_gradients(locally_relevant_solution,
  solution_gradients);
  fe_values.get_function_values(locally_relevant_solution_dot,
  solution_dot_values);
 
  cell->get_dof_indices(local_dof_indices);
 
  cell_residual = 0;
  for (const unsigned int q : fe_values.quadrature_point_indices())
  for (const unsigned int i : fe_values.dof_indices())
  {
  cell_residual(i) +=
  (fe_values.shape_value(i, q) * // [phi_i(x_q) *
  solution_dot_values[q] // du/dt(x_q)
  + // +
  fe_values.shape_grad(i, q) * // grad phi_i(x_q) *
  solution_gradients[q] // grad u(x_q)
  - // -
  fe_values.shape_value(i, q) * // phi_i(x_q) *
  right_hand_side_function.value(
  fe_values.quadrature_point(q))) // f(x_q)]
  * fe_values.JxW(q); // * dx
  }
  current_constraints.distribute_local_to_global(cell_residual,
  local_dof_indices,
  residual);
  }
  residual.compress(VectorOperation::add);
 

The end result of the operations above is a vector that contains the residual vector, having taken into account the constraints due to hanging nodes and Dirichlet boundary conditions (by virtue of having used current_constraints.distribute_local_to_global() to add the local contributions to the global vector). At the end of the day, the residual vector \(r\) will be used in the solution of linear systems of the form \(J z = r\) with the "Jacobian" matrix that we define below. We want the algebraic components of \(z\) to have predictable values that achieve the purposes discussed in the following. We do this by ensuring that the entries of the residual \(r\) corresponding to algebraic components have specific values, and then we will do the same in the next function for the matrix; for this, you need to know that the rows and columns of the matrix corresponding to constrained entries are zero with the exception of the diagonal entries. We will manually set that diagonal entry to one, and so \(z_i=r_i\) for algebraic components.

From the point of view of the residual vector, if the input y vector does not contain the correct values on constrained degrees of freedom (hanging nodes or boundary conditions), we need to communicate this to the time stepper, and we do so by setting the residual to the actual difference between the input y vector and our local copy of it, in which we have applied the constraints (see the top of the function where we called current_constraints.distribute(tmp_solution) and a similar operation on the time derivative). Since we have made a copy of the input vector for this purpose, we use it to compute the residual value. However, there is a difference between hanging node constraints and boundary conditions: we do not want the residual of a hanging node to depend on the values of the degrees of freedom it is constrained to, since this would imply that we are actually solving for the hanging node values. That is not what we do: hanging nodes are not solved for; they are eliminated from the system by the call to AffineConstraints::distribute_local_to_global() above. From the point of view of the Jacobian matrix, we are effectively setting hanging nodes to an artificial value (usually zero), and we simply make sure that we recover the correct values for those degrees of freedom a posteriori, by calling the function AffineConstraints::distribute().

Here we therefore check that the residual is equal to the input value on the constrained dofs corresponding to hanging nodes (i.e., those for which the lines of the current_constraints contain at least one other entry), and to the difference between the input vector and the actual solution on those constraints that correspond to boundary conditions.
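
To spell out why this choice does what we want (a short worked step, assuming the usual Newton-type update \(y \leftarrow y - z\) after solving \(Jz=r\)): for a Dirichlet-constrained entry \(i\), the next function sets \(J_{ii}=1\), and denoting by \(g_i(t)\) the boundary value stored in tmp_solution we get

\begin{align*} z_i = r_i = y_i - g_i(t) \qquad\Longrightarrow\qquad y_i - z_i = g_i(t), \end{align*}

i.e., the update restores exactly the prescribed boundary value. For a hanging node, \(r_i = y_i\), so the update drives that entry to zero; its actual value is then recovered a posteriori by AffineConstraints::distribute(), as described above.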

  for (const auto &c : current_constraints.get_lines())
  if (locally_owned_dofs.is_element(c.index))
  {
  if (c.entries.empty()) /* no dependencies -> a Dirichlet node */
  residual[c.index] = y[c.index] - tmp_solution[c.index];
  else /* has dependencies -> a hanging node */
  residual[c.index] = y[c.index];
  }
  residual.compress(VectorOperation::insert);
  }
 
 

The HeatEquation::assemble_implicit_jacobian() function

The next operation is to compute the "Jacobian", which PETSc TS defines as the matrix

\[ J_\alpha = \dfrac{\partial R}{\partial y} + \alpha \dfrac{\partial R}{\partial \dot y} \]

which, for the current linear problem, is simply

\[ J_\alpha = A + \alpha M \]

and which is in particular independent of time and the current solution vectors \(y\) and \(\dot y\).

Having seen the assembly of matrices before, there is little that should surprise you in the actual assembly here:

  template <int dim>
  void HeatEquation<dim>::assemble_implicit_jacobian(
  const double /* time */,
  const PETScWrappers::MPI::Vector & /* y */,
  const PETScWrappers::MPI::Vector & /* y_dot */,
  const double alpha)
  {
  TimerOutput::Scope t(computing_timer, "assemble implicit Jacobian");
 
  const QGauss<dim> quadrature_formula(fe.degree + 1);
  FEValues<dim> fe_values(fe,
  quadrature_formula,
  update_values | update_gradients |
  update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
 
  jacobian_matrix = 0;
  for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
  {
  fe_values.reinit(cell);
 
  cell->get_dof_indices(local_dof_indices);
 
  cell_matrix = 0;
  for (const unsigned int q : fe_values.quadrature_point_indices())
  for (const unsigned int i : fe_values.dof_indices())
  for (const unsigned int j : fe_values.dof_indices())
  {
  cell_matrix(i, j) +=
  (fe_values.shape_grad(i, q) * // grad phi_i(x_q) *
  fe_values.shape_grad(j, q) // grad phi_j(x_q)
  + alpha *
  fe_values.shape_value(i, q) * // phi_i(x_q) *
  fe_values.shape_value(j, q) // phi_j(x_q)
  ) *
  fe_values.JxW(q); // * dx
  }
  current_constraints.distribute_local_to_global(cell_matrix,
  local_dof_indices,
  jacobian_matrix);
  }
  jacobian_matrix.compress(VectorOperation::add);
 

The only interesting part is the following. Recall that we modified the residual vector's entries corresponding to the algebraic components of the solution in the previous function. The outcome of calling current_constraints.distribute_local_to_global() a few lines above is that the global matrix has zero rows and columns for the algebraic (constrained) components of the solution, except that the function puts a value on the diagonal that is nonzero and of about the same size as the remaining diagonal entries of the matrix. What this diagonal value is, however, is not known to us – in other cases where we call current_constraints.distribute_local_to_global() on both the matrix and the right hand side vector, as in most other tutorial programs, these diagonal entries and right hand side values are chosen in such a way that the result of solving the linear system is what we want it to be, but the scaling is done automatically.

This is not good enough for us here, because we are building the right hand side independently of the matrix, in different functions. Thus, for any constrained degree of freedom, we set the diagonal of the Jacobian to one. This keeps the Jacobian matrix invertible, consistent with what the time stepper expects, and it also makes sure that, if we did not make a mistake in the residual and/or the Jacobian matrix, then asking the time stepper to check the Jacobian with a finite difference method will produce the correct result. This can be activated at run time by passing the -snes_test_jacobian option on the command line.

  for (const auto &c : current_constraints.get_lines())
  jacobian_matrix.set(c.index, c.index, 1.0);
  jacobian_matrix.compress(VectorOperation::insert);
  }
 
 

The HeatEquation::solve_with_jacobian() function

This is the function that actually solves the linear system with the Jacobian matrix we have previously built (in a call to the previous function, either during the current time step or during an earlier one – time steppers are quite sophisticated in determining internally whether it is necessary to update the Jacobian matrix, or whether one can reuse it for another time step without rebuilding it; this is similar to how one can re-use the Newton matrix for several Newton steps, see for example the discussion in step-77). In principle, we could choose not to provide this function to the time stepper and instead select a specific solver on the command line by using the -ksp_* options of PETSc. However, by providing this function, we can use a specific solver and preconditioner for the linear system, and still have the possibility to change them on the command line.

Providing a specific solver is more in line with the way we usually do things in other deal.II examples, while letting PETSc choose a generic solver, and changing it on the command line via -ksp_type is more in line with the way PETSc is usually used, and it can be a convenient approach when we are experimenting to find an optimal solver for our problem. Both options are available here, since we can still change both the solver and the preconditioner on the command line via -user_ksp_type and -user_pc_type options.

In any case, recall that the Jacobian we built in the previous function is always of the form

\[ J_\alpha = \alpha M + A \]

where \(M\) is a mass matrix and \(A\) a Laplace matrix. \(M\) is symmetric and positive definite; \(A\) is symmetric and at least positive semidefinite; \(\alpha> 0\). As a consequence, the Jacobian matrix is a symmetric and positive definite matrix, which we can efficiently solve with the Conjugate Gradient method, along with either SSOR or (if available) the algebraic multigrid implementation provided by PETSc (via the Hypre package) as preconditioner. In practice, if you wanted to solve "real" problems, one would spend some time finding which preconditioner is optimal, perhaps using PETSc's ability to read solver and preconditioner choices from the command line. But this is not the focus of this tutorial program, and so we just go with the following:

  template <int dim>
  void
  HeatEquation<dim>::solve_with_jacobian(const PETScWrappers::MPI::Vector &src,
  PETScWrappers::MPI::Vector &dst)
  {
  TimerOutput::Scope t(computing_timer, "solve with Jacobian");
 
  #if defined(PETSC_HAVE_HYPRE)
  PETScWrappers::PreconditionBoomerAMG preconditioner;
  preconditioner.initialize(jacobian_matrix);
  #else
  PETScWrappers::PreconditionSSOR preconditioner;
  preconditioner.initialize(
  jacobian_matrix, PETScWrappers::PreconditionSSOR::AdditionalData(1.0));
  #endif
 
  SolverControl solver_control(1000, 1e-8 * src.l2_norm());
  PETScWrappers::SolverCG cg(solver_control);
  cg.set_prefix("user_");
 
  cg.solve(jacobian_matrix, dst, src, preconditioner);
 
  pcout << " " << solver_control.last_step() << " linear iterations."
  << std::endl;
  }
 
 

The HeatEquation::prepare_for_coarsening_and_refinement() function

The next block of functions deals with mesh refinement. We split this process up into a "decide whether and what you want to refine" and a "please transfer these vectors from old to new mesh" phase, where the first also deals with marking cells for refinement. (The decision whether or not to refine is done in the lambda function that calls the current function.)

Breaking things into a "mark cells" function and an "execute mesh adaptation and transfer solution vectors" function is awkward, though conceptually not difficult to understand. These two pieces of code should really be part of the same function, as they are in step-26. The issue is with what PETScWrappers::TimeStepper provides us with in these callbacks. Specifically, the "decide whether and what you want to refine" callback has access to the current solution, and so can evaluate (spatial) error estimators to decide which cells to refine. The second callback that transfers vectors from the old to the new mesh gets a bunch of vectors, but without the semantic information about which of these is the current solution vector. As a consequence, it cannot do the marking of cells for refinement or coarsening, and we have to do that from the first callback.

In practice, however, the problem is minor. The first of these two functions is essentially identical to the first half of the corresponding function in step-26, with the only difference that it uses the parallel::distributed::GridRefinement namespace function instead of the serial one. Once again, we make sure that we never fall below the minimum refinement level, nor exceed the maximum one, both of which we can select from the parameter file.

  template <int dim>
  void HeatEquation<dim>::prepare_for_coarsening_and_refinement(
  const PETScWrappers::MPI::Vector &y)
  {
  PETScWrappers::MPI::Vector locally_relevant_solution(locally_owned_dofs,
  locally_relevant_dofs,
  mpi_communicator);
  locally_relevant_solution = y;
 
  Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
  KellyErrorEstimator<dim>::estimate(dof_handler,
  QGauss<dim - 1>(fe.degree + 1),
  {},
  locally_relevant_solution,
  estimated_error_per_cell);
 
  parallel::distributed::GridRefinement::refine_and_coarsen_fixed_fraction(
  triangulation, estimated_error_per_cell, 0.6, 0.4);
 
  const unsigned int max_grid_level =
  initial_global_refinement + max_delta_refinement_level;
  const unsigned int min_grid_level = initial_global_refinement;
 
  if (triangulation.n_levels() > max_grid_level)
  for (const auto &cell :
  triangulation.active_cell_iterators_on_level(max_grid_level))
  cell->clear_refine_flag();
  for (const auto &cell :
  triangulation.active_cell_iterators_on_level(min_grid_level))
  cell->clear_coarsen_flag();
  }
 
 

The HeatEquation::transfer_solution_vectors_to_new_mesh() function

The following function then is the second half of the corresponding function in step-26. It is called by the time stepper whenever it needs to transfer the solution and any intermediate stage vectors to a new mesh. We must make sure that all input vectors are converted into ghosted vectors before the actual transfer is executed, and that we distribute the hanging node constraints on the output vectors as soon as we have interpolated the vectors to the new mesh – i.e., that all constraints are satisfied on the vectors we transfer.

We have no way to enforce boundary conditions at this stage, however. This is because the various vectors may correspond to solutions at previous time steps if the method used here is a multistep time integrator, and so may correspond to different time points that we are not privy to.

While this could be a problem if we used the values of the solution and of the intermediate stages on the constrained degrees of freedom to compute the errors, we do not do so. Instead, we compute the errors for the differential equation and ignore algebraic constraints; therefore we do not need to guarantee that the boundary conditions are also satisfied in the intermediate stages.

At this stage we only have the hanging node constraints at our disposal, though, and we therefore enforce those on the output vectors, even if this is not strictly necessary.

  template <int dim>
  void HeatEquation<dim>::transfer_solution_vectors_to_new_mesh(
  const double time,
  const std::vector<PETScWrappers::MPI::Vector> &all_in,
  std::vector<PETScWrappers::MPI::Vector> &all_out)
  {
  SolutionTransfer<dim, PETScWrappers::MPI::Vector> solution_trans(
  dof_handler);
 
  std::vector<PETScWrappers::MPI::Vector> all_in_ghosted(all_in.size());
  std::vector<const PETScWrappers::MPI::Vector *> all_in_ghosted_ptr(
  all_in.size());
  std::vector<PETScWrappers::MPI::Vector *> all_out_ptr(all_in.size());
  for (unsigned int i = 0; i < all_in.size(); ++i)
  {
  all_in_ghosted[i].reinit(locally_owned_dofs,
  locally_relevant_dofs,
  mpi_communicator);
  all_in_ghosted[i] = all_in[i];
  all_in_ghosted_ptr[i] = &all_in_ghosted[i];
  }
 
  triangulation.prepare_coarsening_and_refinement();
  solution_trans.prepare_for_coarsening_and_refinement(all_in_ghosted_ptr);
  triangulation.execute_coarsening_and_refinement();
 
  setup_system(time);
 
  all_out.resize(all_in.size());
  for (unsigned int i = 0; i < all_in.size(); ++i)
  {
  all_out[i].reinit(locally_owned_dofs, mpi_communicator);
  all_out_ptr[i] = &all_out[i];
  }
  solution_trans.interpolate(all_out_ptr);
 
  for (PETScWrappers::MPI::Vector &v : all_out)
  hanging_nodes_constraints.distribute(v);
  }
 
 

The HeatEquation::update_current_constraints() function

Since regenerating the constraints at each time step may be expensive, we make sure that we only do so when the time changes. We track changes in time by checking whether the time of the boundary_values_function differs from the time at the last call to this function. This works most of the time, but not on the very first call to this function, since the time may then be zero and the time of the boundary_values_function is also zero at construction time. We therefore also check the number of constraints in current_constraints; if it is zero, we regenerate the constraints regardless of the time variable.

  template <int dim>
  void HeatEquation<dim>::update_current_constraints(const double time)
  {
  if (current_constraints.n_constraints() == 0 ||
  time != boundary_values_function.get_time())
  {
  TimerOutput::Scope t(computing_timer, "update current constraints");
 
  boundary_values_function.set_time(time);
  current_constraints.clear();
  current_constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
  current_constraints.merge(hanging_nodes_constraints);
  VectorTools::interpolate_boundary_values(dof_handler,
  0,
  boundary_values_function,
  current_constraints);
  current_constraints.make_consistent_in_parallel(locally_owned_dofs,
  locally_relevant_dofs,
  mpi_communicator);
  current_constraints.close();
  }
  }
 
 

The HeatEquation::run() function

We have finally arrived at the main function of the class. At the top, it creates the mesh and sets up the variables that make up the linear system:

  template <int dim>
  void HeatEquation<dim>::run()
  {
  GridGenerator::hyper_L(triangulation);
  triangulation.refine_global(initial_global_refinement);
 
  setup_system(/* time */ 0);
 

We then set up the time stepping object and associate with it the matrix that we will build whenever requested, to be used both as the Jacobian matrix (see the definition above of what the "Jacobian" actually refers to) and as the matrix from which a preconditioner for the Jacobian is built.

  PETScWrappers::TimeStepper<PETScWrappers::MPI::Vector,
  PETScWrappers::MPI::SparseMatrix>
  petsc_ts(time_stepper_data);
 
  petsc_ts.set_matrices(jacobian_matrix, jacobian_matrix);
 
 

The real work setting up the time stepping object starts here. As discussed in the introduction, the way the PETScWrappers::TimeStepper class is used is by inverting control: At the end of this function, we will call PETScWrappers::TimeStepper::solve() which internally will run the loop over time steps, and at the appropriate places call back into user code for whatever functionality is required. What we need to do is to hook up these callback functions by assigning appropriate lambda functions to member variables of the petsc_ts object.

We start by creating lambda functions that provide information about the "implicit function" (i.e., that part of the right hand side of the ODE system that we want to treat implicitly – which in our case is the entire right hand side), a function that assembles the Jacobian matrix, and a function that solves a linear system with the Jacobian.

  petsc_ts.implicit_function = [&](const double time,
  const PETScWrappers::MPI::Vector &y,
  const PETScWrappers::MPI::Vector &y_dot,
  PETScWrappers::MPI::Vector &res) {
  this->implicit_function(time, y, y_dot, res);
  };
 
  petsc_ts.setup_jacobian = [&](const double time,
  const PETScWrappers::MPI::Vector &y,
  const PETScWrappers::MPI::Vector &y_dot,
  const double alpha) {
  this->assemble_implicit_jacobian(time, y, y_dot, alpha);
  };
 
  petsc_ts.solve_with_jacobian = [&](const PETScWrappers::MPI::Vector &src,
  PETScWrappers::MPI::Vector &dst) {
  this->solve_with_jacobian(src, dst);
  };
 

The next two callbacks deal with identifying and setting variables that are considered "algebraic" (rather than "differential"), i.e., for which we know what values they are supposed to have rather than letting their values be determined by the differential equation. We need to instruct the time stepper to ignore these components when computing the error in the residuals, and we do so by first providing a function that returns an IndexSet with the indices of these algebraic components of the solution vector (or rather, that subset of the locally-owned part of the vector that is algebraic, in case we are running in parallel). The first of the following two functions does that. Specifically, both the nodes at which Dirichlet boundary conditions are applied and the hanging nodes are algebraically constrained. The function creates an index set that is initially empty (but knows about the size of the index space) and then fills it with the union of boundary and hanging node indices.

Following this, we then also need a function that, given a solution vector y and the current time, sets the algebraic components of that vector to their correct values. This comes down to ensuring that we have up-to-date constraints in the current_constraints variable, and then applying these constraints to the solution vector via AffineConstraints::distribute(). (It is perhaps worth noting that we could have achieved the same in solve_with_jacobian(): Whenever the time stepper solves a linear system, it follows up the call to the solver by calling this callback to set the algebraic components to their correct values. We could therefore also have put the calls to update_current_constraints() and distribute() into the solve_with_jacobian function, but by not doing so, we can also replace the solve_with_jacobian function with a call to a PETSc solver and would still have the current_constraints correctly applied to the solution vector. A sketch of this alternative appears after the following code.)

  petsc_ts.algebraic_components = [&]() {
  IndexSet algebraic_set(dof_handler.n_dofs());
  algebraic_set.add_indices(DoFTools::extract_boundary_dofs(dof_handler));
  algebraic_set.add_indices(
  DoFTools::extract_hanging_node_dofs(dof_handler));
  return algebraic_set;
  };
 
  petsc_ts.update_constrained_components =
  [&](const double time, PETScWrappers::MPI::Vector &y) {
  TimerOutput::Scope t(computing_timer, "set algebraic components");
  update_current_constraints(time);
  current_constraints.distribute(y);
  };
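To make this design choice concrete, here is a minimal sketch – not what the program actually does – of how one could instead fold the constraint handling into the linear solve callback. It reuses only objects that already exist in this class, and it assumes that current_constraints has been brought up to date for the current time elsewhere, since this callback does not receive a time argument:

  // Hypothetical alternative, for illustration only: apply the constraints
  // to the result of each linear solve with the Jacobian, instead of
  // relying on the separate update_constrained_components callback.
  petsc_ts.solve_with_jacobian = [&](const PETScWrappers::MPI::Vector &src,
                                     PETScWrappers::MPI::Vector       &dst) {
    this->solve_with_jacobian(src, dst);
    current_constraints.distribute(dst);
  };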
 
 

The next two callbacks relate to mesh refinement. As discussed in the introduction, PETScWrappers::TimeStepper knows how to deal with the situation where we want to change the mesh. All we have to provide is a callback that returns true if we are at a point where we want to refine the mesh (and false otherwise), and that, if we do want to refine, does the necessary preparatory work by calling the prepare_for_coarsening_and_refinement function.

If the first callback below returns true, then PETSc TS will do some clean-up operations and call the second of the callback functions (petsc_ts.transfer_solution_vectors_to_new_mesh) with a collection of vectors that need to be interpolated from the old to the new mesh. This may include the current solution, perhaps the current time derivative of the solution, and in the case of multistep time integrators also the solutions of some previous time steps. We hand all of these over to the transfer_solution_vectors_to_new_mesh() member function of this class.

  petsc_ts.decide_and_prepare_for_remeshing =
  [&](const double /* time */,
  const unsigned int step_number,
  const PETScWrappers::MPI::Vector &y) -> bool {
  if (step_number > 0 && this->mesh_adaptation_frequency > 0 &&
  step_number % this->mesh_adaptation_frequency == 0)
  {
  pcout << std::endl << "Adapting the mesh..." << std::endl;
  this->prepare_for_coarsening_and_refinement(y);
  return true;
  }
  else
  return false;
  };
 
  petsc_ts.transfer_solution_vectors_to_new_mesh =
  [&](const double time,
  const std::vector<PETScWrappers::MPI::Vector> &all_in,
  std::vector<PETScWrappers::MPI::Vector> &all_out) {
  this->transfer_solution_vectors_to_new_mesh(time, all_in, all_out);
  };
 

The final callback is a "monitor" that is called in each time step. Here we use it to create graphical output. Perhaps a better scheme would output the solution only at fixed time intervals rather than in every time step, but this is not the main point of this program and so we go with the easy approach (a sketch of the interval-based alternative follows the code below):

  petsc_ts.monitor = [&](const double time,
  const PETScWrappers::MPI::Vector &y,
  const unsigned int step_number) {
  pcout << "Time step " << step_number << " at t=" << time << std::endl;
  this->output_results(time, step_number, y);
  };
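As a sketch of the interval-based alternative mentioned above – purely illustrative, with a hypothetical output interval of 0.25 time units and a hypothetical bookkeeping variable next_output_time – the monitor could instead look like this:

  // Illustrative sketch only: print the step information for every time
  // step, but write graphical output only about every 0.25 time units.
  double next_output_time = 0.;
  petsc_ts.monitor = [&](const double                      time,
                         const PETScWrappers::MPI::Vector &y,
                         const unsigned int                step_number) {
    pcout << "Time step " << step_number << " at t=" << time << std::endl;
    if (time >= next_output_time)
      {
        this->output_results(time, step_number, y);
        next_output_time += 0.25;
      }
  };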
 
 

With all of this out of the way, the rest of the function is anticlimactic: We just have to initialize the solution vector with the initial conditions and call the function that does the time stepping, and everything else will happen automatically:

  PETScWrappers::MPI::Vector solution(locally_owned_dofs, mpi_communicator);
  VectorTools::interpolate(dof_handler, initial_value_function, solution);
 
  petsc_ts.solve(solution);
  }
  } // namespace Step86
 
 

The main() function

The rest of the program is as it always looks. We read run-time parameters from an input file via the ParameterAcceptor class in the same way as we showed in step-60 and step-70.

  int main(int argc, char **argv)
  {
  try
  {
  using namespace Step86;
 
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  HeatEquation<2> heat_equation_solver(MPI_COMM_WORLD);
 
  const std::string input_filename =
  (argc > 1 ? argv[1] : "heat_equation.prm");
  ParameterAcceptor::initialize(input_filename, "heat_equation_used.prm");
  heat_equation_solver.run();
  }
  catch (std::exception &exc)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Exception on processing: " << std::endl
  << exc.what() << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
 
  return 1;
  }
  catch (...)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Unknown exception!" << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
  return 1;
  }
 
  return 0;
  }

Results

When you run this program with the input file as is, you get output as follows:

Number of active cells: 768
Number of degrees of freedom: 833
Time step 0 at t=0
5 linear iterations.
8 linear iterations.
Time step 1 at t=0.025
6 linear iterations.
Time step 2 at t=0.05
5 linear iterations.
8 linear iterations.
Time step 3 at t=0.075
6 linear iterations.
Time step 4 at t=0.1
6 linear iterations.
Time step 5 at t=0.125
6 linear iterations.
Time step 6 at t=0.15
6 linear iterations.
Time step 7 at t=0.175
6 linear iterations.
Time step 8 at t=0.2
6 linear iterations.
Time step 9 at t=0.225
6 linear iterations.
Adapting the mesh...
Number of active cells: 1050
Number of degrees of freedom: 1155
Time step 10 at t=0.25
5 linear iterations.
8 linear iterations.
Time step 11 at t=0.275
5 linear iterations.
7 linear iterations.
[...]
Time step 195 at t=4.875
6 linear iterations.
Time step 196 at t=4.9
6 linear iterations.
Time step 197 at t=4.925
6 linear iterations.
Time step 198 at t=4.95
6 linear iterations.
Time step 199 at t=4.975
5 linear iterations.
Adapting the mesh...
Number of active cells: 1380
Number of degrees of freedom: 1547
Time step 200 at t=5
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      43.2s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| assemble implicit Jacobian      |       226 |      9.93s |        23% |
| implicit function               |       426 |      16.2s |        37% |
| output results                  |       201 |      9.74s |        23% |
| set algebraic components        |       200 |    0.0496s |      0.11% |
| setup system                    |        21 |     0.799s |       1.8% |
| solve with Jacobian             |       226 |      0.56s |       1.3% |
| update current constraints      |       201 |      1.53s |       3.5% |
+---------------------------------+-----------+------------+------------+

We can generate output for this in the form of a video showing how the solution evolves over time.

The solution here is driven by boundary values (the initial conditions are zero, and so is the right hand side of the equation). It takes a little bit of time for the boundary values to diffuse into the domain, and so the temperature (the solution of the heat equation) in the interior of the domain has a slight lag compared to the temperature at the boundary.

The more interesting component of this program is how easy it is to play with the details of the time stepping algorithm. Recall that the solution above is controlled by the following parameters:

subsection Time stepper
subsection Running parameters
set final time = 5
set initial step size = 0.025
set initial time = 0
set match final time = false
set maximum number of steps = -1
set options prefix =
set solver type = beuler
end
subsection Error control
set absolute error tolerance = -1
set adaptor type = none
set ignore algebraic lte = true
set maximum step size = -1
set minimum step size = -1
set relative error tolerance = -1
end
end

Of particular interest to us here are the choice of time stepping algorithm and of the adaptive time step control method. The latter is set to "none" above, but there are several alternative choices for this parameter. For example, we can set the parameters as follows,

subsection Time stepper
subsection Running parameters
set final time = 5
set initial step size = 0.025
set initial time = 0
set match final time = false
set maximum number of steps = -1
set options prefix =
set solver type = bdf
end
subsection Error control
set absolute error tolerance = 1e-2
set relative error tolerance = 1e-2
set adaptor type = basic
set ignore algebraic lte = true
set maximum step size = 1
set minimum step size = 0.01
end
end

What we do here is set the initial time step size to 0.025, and choose relatively large absolute and relative error tolerances of 0.01 for the time step size adaptation algorithm (for which we choose "basic"). We ask PETSc TS to use a Backward Differentiation Formula (BDF) method, and we get the following as output:

===========================================
Number of active cells: 768
Number of degrees of freedom: 833
Time step 0 at t=0
5 linear iterations.
5 linear iterations.
5 linear iterations.
4 linear iterations.
6 linear iterations.
Time step 1 at t=0.01
5 linear iterations.
5 linear iterations.
Time step 2 at t=0.02
5 linear iterations.
5 linear iterations.
Time step 3 at t=0.03
5 linear iterations.
5 linear iterations.
Time step 4 at t=0.042574
5 linear iterations.
5 linear iterations.
Time step 5 at t=0.0588392
5 linear iterations.
5 linear iterations.
Time step 6 at t=0.0783573
5 linear iterations.
7 linear iterations.
5 linear iterations.
7 linear iterations.
Time step 7 at t=0.100456
5 linear iterations.
7 linear iterations.
5 linear iterations.
7 linear iterations.
Time step 8 at t=0.124982
5 linear iterations.
8 linear iterations.
5 linear iterations.
7 linear iterations.
Time step 9 at t=0.153156
6 linear iterations.
5 linear iterations.
[...]
Time step 206 at t=4.96911
5 linear iterations.
5 linear iterations.
Time step 207 at t=4.99398
5 linear iterations.
5 linear iterations.
Time step 208 at t=5.01723
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |       117s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| assemble implicit Jacobian      |       593 |      35.6s |        31% |
| implicit function               |      1101 |      56.3s |        48% |
| output results                  |       209 |      12.5s |        11% |
| set algebraic components        |       508 |     0.156s |      0.13% |
| setup system                    |        21 |      1.11s |      0.95% |
| solve with Jacobian             |       593 |      1.97s |       1.7% |
| update current constraints      |       509 |      4.53s |       3.9% |
+---------------------------------+-----------+------------+------------+

What is happening here is that PETSc TS is apparently not happy with our choice of initial time step size, and after several linear solves reduces it to the minimum we allow, 0.01. The following time steps then run at a time step size of 0.01 until it decides to enlarge the step again and (apparently) switches to a higher-order method that requires more linear solves per time step but allows for a time step closer to our initial choice of 0.025. It does not quite hit the final time of \(T=5\) with its time step choices, but we have only ourselves to blame for that by setting

set match final time = false

in the input file. If hitting the end time exactly is important to us, setting the flag to true resolves this issue.
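
In terms of the parameter file, this amounts to changing just the one flag in the section shown above:

subsection Time stepper
  subsection Running parameters
    set match final time = true
  end
end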

We can even reason about why PETSc eventually chooses a time step of around 0.025: The boundary values undergo a complete cosine cycle within 0.5 time units; we should expect that it takes around ten or twenty time steps to resolve each period of such a cycle to reasonable accuracy (0.5 divided by 20 is 0.025), and this leads to the time step choice PETSc finds.

Not all combinations of methods, time step adaptation algorithms, and other parameters are valid, but the main messages you should take away from the experiment above are:

  • It would undoubtedly be quite time consuming to implement many of the methods that PETSc TS offers for time stepping – but with a program such as the one here, we don't need to: We can just select from the many methods PETSc TS already has.
  • Adaptive time step control is difficult; adaptive choice of which method or order to choose is perhaps even more difficult. None of the time dependent programs that came before the current one (say, step-23, step-26, step-31, step-58, and a good number of others) have either. Moreover, while deal.II is good at spatial adaptation of meshes, it is not a library written by experts in time step adaptation, and so will likely not gain this ability either. But, again, it doesn't have to: We can rely on a library written by experts in that area.

Possibilities for extensions

The program actually runs in parallel, even though we have not used that above. Specifically, if you have configured deal.II to use MPI, then you can do mpirun -np 8 ./step-86 heat_equation.prm to run the program with 8 processes.

For the program as currently written (and in debug mode), this makes little difference: It will run about twice as fast, but take about 8 times as much CPU time. That is because the problem is just so small: Generally between 1000 and 2000 degrees of freedom. A good rule of thumb is that programs really only benefit from parallel computing if you have somewhere in the range of 50,000 to 100,000 unknowns per MPI process. But it is not difficult to adapt the program at hand here to run with a much finer mesh, or perhaps in 3d, so that one is beyond that limit and sees the benefits of parallel computing.
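
For example, one could increase the "Initial global refinement" parameter that the program declares in its constructor; the subsection and parameter names below are the ones used by the program, while the specific value of 7 is just an illustration:

subsection Heat Equation
  set Initial global refinement = 7
end

Running in 3d would additionally require changing HeatEquation<2> to HeatEquation<3> in main() and recompiling.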

The plain program

/* ------------------------------------------------------------------------
*
* SPDX-License-Identifier: LGPL-2.1-or-later
* Copyright (C) 2000 - 2024 by the deal.II authors
*
* This file is part of the deal.II library.
*
* Part of the source code is dual licensed under Apache-2.0 WITH
* LLVM-exception OR LGPL-2.1-or-later. Detailed license information
* governing the source code and code contributions can be found in
* LICENSE.md and CONTRIBUTING.md at the top level directory of deal.II.
*
* ------------------------------------------------------------------------
*
* Authors:
* Wolfgang Bangerth, Colorado State University, 2024
* Stefano Zampini, King Abdullah University of Science and Technology, 2024
*/
#include <fstream>
#include <iostream>
namespace Step86
{
using namespace dealii;
template <int dim>
class HeatEquation : public ParameterAcceptor
{
public:
HeatEquation(const MPI_Comm mpi_communicator);
void run();
private:
const MPI_Comm mpi_communicator;
TimerOutput computing_timer;
DoFHandler<dim> dof_handler;
IndexSet locally_owned_dofs;
IndexSet locally_relevant_dofs;
void setup_system(const double time);
void output_results(const double time,
const unsigned int timestep_number,
const PETScWrappers::MPI::Vector &solution);
void implicit_function(const double time,
const PETScWrappers::MPI::Vector &solution,
const PETScWrappers::MPI::Vector &solution_dot,
void
assemble_implicit_jacobian(const double time,
const PETScWrappers::MPI::Vector &solution,
const PETScWrappers::MPI::Vector &solution_dot,
const double shift);
void solve_with_jacobian(const PETScWrappers::MPI::Vector &src,
void prepare_for_coarsening_and_refinement(
const PETScWrappers::MPI::Vector &solution);
void transfer_solution_vectors_to_new_mesh(
const double time,
const std::vector<PETScWrappers::MPI::Vector> &all_in,
std::vector<PETScWrappers::MPI::Vector> &all_out);
AffineConstraints<double> hanging_nodes_constraints;
AffineConstraints<double> current_constraints;
AffineConstraints<double> homogeneous_constraints;
void update_current_constraints(const double time);
PETScWrappers::TimeStepperData time_stepper_data;
unsigned int initial_global_refinement;
unsigned int max_delta_refinement_level;
unsigned int mesh_adaptation_frequency;
right_hand_side_function;
initial_value_function;
boundary_values_function;
};
template <int dim>
HeatEquation<dim>::HeatEquation(const MPI_Comm mpi_communicator)
: ParameterAcceptor("/Heat Equation/")
, mpi_communicator(mpi_communicator)
, pcout(std::cout,
(Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
, computing_timer(mpi_communicator,
pcout,
TimerOutput::summary,
TimerOutput::wall_times)
, triangulation(mpi_communicator,
typename Triangulation<dim>::MeshSmoothing(
Triangulation<dim>::smoothing_on_refinement |
Triangulation<dim>::smoothing_on_coarsening))
, fe(1)
, dof_handler(triangulation)
, time_stepper_data("",
"beuler",
/* start time */ 0.0,
/* end time */ 1.0,
/* initial time step */ 0.025)
, initial_global_refinement(5)
, max_delta_refinement_level(2)
, mesh_adaptation_frequency(0)
, right_hand_side_function("/Heat Equation/Right hand side")
, initial_value_function("/Heat Equation/Initial value")
, boundary_values_function("/Heat Equation/Boundary values")
{
enter_subsection("Time stepper");
{
enter_my_subsection(this->prm);
{
time_stepper_data.add_parameters(this->prm);
}
leave_my_subsection(this->prm);
}
leave_subsection();
add_parameter("Initial global refinement",
initial_global_refinement,
"Number of times the mesh is refined globally before "
"starting the time stepping.");
add_parameter("Maximum delta refinement level",
max_delta_refinement_level,
"Maximum number of local refinement levels.");
add_parameter("Mesh adaptation frequency",
mesh_adaptation_frequency,
"When to adapt the mesh.");
}
template <int dim>
void HeatEquation<dim>::setup_system(const double time)
{
TimerOutput::Scope t(computing_timer, "setup system");
dof_handler.distribute_dofs(fe);
pcout << std::endl
<< "Number of active cells: " << triangulation.n_active_cells()
<< std::endl
<< "Number of degrees of freedom: " << dof_handler.n_dofs()
<< std::endl
<< std::endl;
locally_owned_dofs = dof_handler.locally_owned_dofs();
locally_relevant_dofs =
hanging_nodes_constraints.clear();
hanging_nodes_constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
hanging_nodes_constraints);
hanging_nodes_constraints.make_consistent_in_parallel(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
hanging_nodes_constraints.close();
homogeneous_constraints.clear();
homogeneous_constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
homogeneous_constraints.merge(hanging_nodes_constraints);
0,
homogeneous_constraints);
homogeneous_constraints.make_consistent_in_parallel(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
homogeneous_constraints.close();
update_current_constraints(time);
DynamicSparsityPattern dsp(locally_relevant_dofs);
dsp,
homogeneous_constraints,
false);
locally_owned_dofs,
mpi_communicator,
locally_relevant_dofs);
jacobian_matrix.reinit(locally_owned_dofs,
locally_owned_dofs,
dsp,
mpi_communicator);
}
template <int dim>
void HeatEquation<dim>::output_results(const double time,
const unsigned int timestep_number,
{
TimerOutput::Scope t(computing_timer, "output results");
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(y, "U");
data_out.build_patches();
data_out.set_flags(DataOutBase::VtkFlags(time, timestep_number));
const std::string filename =
"solution-" + Utilities::int_to_string(timestep_number, 3) + ".vtu";
data_out.write_vtu_in_parallel(filename, mpi_communicator);
if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
{
static std::vector<std::pair<double, std::string>> times_and_names;
times_and_names.emplace_back(time, filename);
std::ofstream pvd_output("solution.pvd");
DataOutBase::write_pvd_record(pvd_output, times_and_names);
}
}
template <int dim>
void
HeatEquation<dim>::implicit_function(const double time,
{
TimerOutput::Scope t(computing_timer, "implicit function");
PETScWrappers::MPI::Vector tmp_solution(locally_owned_dofs,
mpi_communicator);
PETScWrappers::MPI::Vector tmp_solution_dot(locally_owned_dofs,
mpi_communicator);
tmp_solution = y;
tmp_solution_dot = y_dot;
update_current_constraints(time);
current_constraints.distribute(tmp_solution);
homogeneous_constraints.distribute(tmp_solution_dot);
PETScWrappers::MPI::Vector locally_relevant_solution(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
PETScWrappers::MPI::Vector locally_relevant_solution_dot(
locally_owned_dofs, locally_relevant_dofs, mpi_communicator);
locally_relevant_solution = tmp_solution;
locally_relevant_solution_dot = tmp_solution_dot;
const QGauss<dim> quadrature_formula(fe.degree + 1);
FEValues<dim> fe_values(fe,
quadrature_formula,
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
std::vector<Tensor<1, dim>> solution_gradients(n_q_points);
std::vector<double> solution_dot_values(n_q_points);
Vector<double> cell_residual(dofs_per_cell);
right_hand_side_function.set_time(time);
residual = 0;
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
fe_values.get_function_gradients(locally_relevant_solution,
solution_gradients);
fe_values.get_function_values(locally_relevant_solution_dot,
solution_dot_values);
cell->get_dof_indices(local_dof_indices);
for (const unsigned int q : fe_values.quadrature_point_indices())
for (const unsigned int i : fe_values.dof_indices())
{
(fe_values.shape_value(i, q) * // [phi_i(x_q) *
solution_dot_values[q] // u(x_q)
+ // +
fe_values.shape_grad(i, q) * // grad phi_i(x_q) *
solution_gradients[q] // grad u(x_q)
- // -
fe_values.shape_value(i, q) * // phi_i(x_q) *
right_hand_side_function.value(
fe_values.quadrature_point(q))) // f(x_q)]
* fe_values.JxW(q); // * dx
}
current_constraints.distribute_local_to_global(cell_residual,
local_dof_indices,
residual);
}
for (const auto &c : current_constraints.get_lines())
if (locally_owned_dofs.is_element(c.index))
{
if (c.entries.empty()) /* no dependencies -> a Dirichlet node */
residual[c.index] = y[c.index] - tmp_solution[c.index];
else /* has dependencies -> a hanging node */
residual[c.index] = y[c.index];
}
}
template <int dim>
void HeatEquation<dim>::assemble_implicit_jacobian(
const double /* time */,
const PETScWrappers::MPI::Vector & /* y */,
const PETScWrappers::MPI::Vector & /* y_dot */,
const double alpha)
{
TimerOutput::Scope t(computing_timer, "assemble implicit Jacobian");
const QGauss<dim> quadrature_formula(fe.degree + 1);
FEValues<dim> fe_values(fe,
quadrature_formula,
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
jacobian_matrix = 0;
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
{
fe_values.reinit(cell);
cell->get_dof_indices(local_dof_indices);
for (const unsigned int q : fe_values.quadrature_point_indices())
for (const unsigned int i : fe_values.dof_indices())
for (const unsigned int j : fe_values.dof_indices())
{
cell_matrix(i, j) +=
(fe_values.shape_grad(i, q) * // grad phi_i(x_q) *
fe_values.shape_grad(j, q) // grad phi_j(x_q)
+ alpha *
fe_values.shape_value(i, q) * // phi_i(x_q) *
fe_values.shape_value(j, q) // phi_j(x_q)
) *
fe_values.JxW(q); // * dx
}
current_constraints.distribute_local_to_global(cell_matrix,
local_dof_indices,
jacobian_matrix);
}
jacobian_matrix.compress(VectorOperation::add);
for (const auto &c : current_constraints.get_lines())
jacobian_matrix.set(c.index, c.index, 1.0);
jacobian_matrix.compress(VectorOperation::insert);
}
template <int dim>
void
HeatEquation<dim>::solve_with_jacobian(const PETScWrappers::MPI::Vector &src,
{
TimerOutput::Scope t(computing_timer, "solve with Jacobian");
#if defined(PETSC_HAVE_HYPRE)
preconditioner.initialize(jacobian_matrix);
#else
preconditioner.initialize(
#endif
SolverControl solver_control(1000, 1e-8 * src.l2_norm());
PETScWrappers::SolverCG cg(solver_control);
cg.set_prefix("user_");
cg.solve(jacobian_matrix, dst, src, preconditioner);
pcout << " " << solver_control.last_step() << " linear iterations."
<< std::endl;
}
template <int dim>
void HeatEquation<dim>::prepare_for_coarsening_and_refinement(
{
PETScWrappers::MPI::Vector locally_relevant_solution(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
locally_relevant_solution = y;
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
QGauss<dim - 1>(fe.degree + 1),
{},
locally_relevant_solution,
estimated_error_per_cell);
triangulation, estimated_error_per_cell, 0.6, 0.4);
const unsigned int max_grid_level =
initial_global_refinement + max_delta_refinement_level;
const unsigned int min_grid_level = initial_global_refinement;
if (triangulation.n_levels() > max_grid_level)
for (const auto &cell :
triangulation.active_cell_iterators_on_level(max_grid_level))
cell->clear_refine_flag();
for (const auto &cell :
triangulation.active_cell_iterators_on_level(min_grid_level))
cell->clear_coarsen_flag();
}
template <int dim>
void HeatEquation<dim>::transfer_solution_vectors_to_new_mesh(
const double time,
const std::vector<PETScWrappers::MPI::Vector> &all_in,
std::vector<PETScWrappers::MPI::Vector> &all_out)
{
solution_trans(dof_handler);
std::vector<PETScWrappers::MPI::Vector> all_in_ghosted(all_in.size());
std::vector<const PETScWrappers::MPI::Vector *> all_in_ghosted_ptr(
all_in.size());
std::vector<PETScWrappers::MPI::Vector *> all_out_ptr(all_in.size());
for (unsigned int i = 0; i < all_in.size(); ++i)
{
all_in_ghosted[i].reinit(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
all_in_ghosted[i] = all_in[i];
all_in_ghosted_ptr[i] = &all_in_ghosted[i];
}
triangulation.prepare_coarsening_and_refinement();
solution_trans.prepare_for_coarsening_and_refinement(all_in_ghosted_ptr);
triangulation.execute_coarsening_and_refinement();
setup_system(time);
all_out.resize(all_in.size());
for (unsigned int i = 0; i < all_in.size(); ++i)
{
all_out[i].reinit(locally_owned_dofs, mpi_communicator);
all_out_ptr[i] = &all_out[i];
}
solution_trans.interpolate(all_out_ptr);
for (PETScWrappers::MPI::Vector &v : all_out)
hanging_nodes_constraints.distribute(v);
}
template <int dim>
void HeatEquation<dim>::update_current_constraints(const double time)
{
if (current_constraints.n_constraints() == 0 ||
time != boundary_values_function.get_time())
{
TimerOutput::Scope t(computing_timer, "update current constraints");
boundary_values_function.set_time(time);
current_constraints.clear();
current_constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
current_constraints.merge(hanging_nodes_constraints);
0,
boundary_values_function,
current_constraints);
current_constraints.make_consistent_in_parallel(locally_owned_dofs,
locally_relevant_dofs,
mpi_communicator);
current_constraints.close();
}
}
template <int dim>
void HeatEquation<dim>::run()
{
triangulation.refine_global(initial_global_refinement);
setup_system(/* time */ 0);
petsc_ts(time_stepper_data);
petsc_ts.set_matrices(jacobian_matrix, jacobian_matrix);
petsc_ts.implicit_function = [&](const double time,
this->implicit_function(time, y, y_dot, res);
};
petsc_ts.setup_jacobian = [&](const double time,
const double alpha) {
this->assemble_implicit_jacobian(time, y, y_dot, alpha);
};
petsc_ts.solve_with_jacobian = [&](const PETScWrappers::MPI::Vector &src,
this->solve_with_jacobian(src, dst);
};
petsc_ts.algebraic_components = [&]() {
IndexSet algebraic_set(dof_handler.n_dofs());
algebraic_set.add_indices(DoFTools::extract_boundary_dofs(dof_handler));
algebraic_set.add_indices(
return algebraic_set;
};
petsc_ts.update_constrained_components =
[&](const double time, PETScWrappers::MPI::Vector &y) {
TimerOutput::Scope t(computing_timer, "set algebraic components");
update_current_constraints(time);
current_constraints.distribute(y);
};
petsc_ts.decide_and_prepare_for_remeshing =
[&](const double /* time */,
const unsigned int step_number,
const PETScWrappers::MPI::Vector &y) -> bool {
if (step_number > 0 && this->mesh_adaptation_frequency > 0 &&
step_number % this->mesh_adaptation_frequency == 0)
{
pcout << std::endl << "Adapting the mesh..." << std::endl;
this->prepare_for_coarsening_and_refinement(y);
return true;
}
else
return false;
};
petsc_ts.transfer_solution_vectors_to_new_mesh =
[&](const double time,
const std::vector<PETScWrappers::MPI::Vector> &all_in,
std::vector<PETScWrappers::MPI::Vector> &all_out) {
this->transfer_solution_vectors_to_new_mesh(time, all_in, all_out);
};
petsc_ts.monitor = [&](const double time,
const unsigned int step_number) {
pcout << "Time step " << step_number << " at t=" << time << std::endl;
this->output_results(time, step_number, y);
};
PETScWrappers::MPI::Vector solution(locally_owned_dofs, mpi_communicator);
VectorTools::interpolate(dof_handler, initial_value_function, solution);
petsc_ts.solve(solution);
}
} // namespace Step86
int main(int argc, char **argv)
{
try
{
using namespace Step86;
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
HeatEquation<2> heat_equation_solver(MPI_COMM_WORLD);
const std::string input_filename =
(argc > 1 ? argv[1] : "heat_equation.prm");
ParameterAcceptor::initialize(input_filename, "heat_equation_used.prm");
heat_equation_solver.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}