The step-15 tutorial program

This tutorial depends on step-6.

Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

This program grew out of a student project by Sven Wetterauer at the University of Heidelberg, Germany. Most of the work for this program is by him.

Introduction

Foreword

This program deals with an example of a non-linear elliptic partial differential equation, the minimal surface equation. You can imagine the solution of this equation to describe the surface spanned by a soap film that is enclosed by a closed wire loop. We imagine the wire to not just be a planar loop, but in fact curved. The surface tension of the soap film will then pull the film into a shape of minimal surface area. The solution of the minimal surface equation describes this shape, with the wire's vertical displacement as a boundary condition. For simplicity, we will here assume that the surface can be written as a graph \(u=u(x,y)\), although it is clear that it is not very hard to construct cases where the wire is bent in such a way that the surface can only locally, but not globally, be constructed as a graph.

Because the equation is non-linear, we can't solve it directly. Rather, we have to use Newton's method to compute the solution iteratively.

Note
The material presented here is also discussed in video lectures 31.5, 31.55, 31.6, 31.65, and 31.7.

Classical formulation

In a classical sense, the problem is given in the following form:

\begin{align*} -\nabla \cdot \left( \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right) &= 0 \qquad \qquad &&\textrm{in} ~ \Omega \\ u&=g \qquad\qquad &&\textrm{on} ~ \partial \Omega. \end{align*}

\(\Omega\) is the domain we get by projecting the wire's positions into \(x-y\) space. In this example, we choose \(\Omega\) as the unit disk.

As described above, we solve this equation using Newton's method in which we compute the \(n\)th approximate solution from the \((n-1)\)th one, and use a damping parameter \(\alpha^n\) to get better global convergence behavior:

\begin{align*} F'(u^{n},\delta u^{n})&=- F(u^{n}) \\ u^{n+1}&=u^{n}+\alpha^n \delta u^{n} \end{align*}

with

\[ F(u) \dealcoloneq -\nabla \cdot \left( \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right) \]

and \(F'(u,\delta u)\) is the derivative of \(F\) in the direction of \(\delta u\):

\[ F'(u,\delta u)=\lim \limits_{\epsilon \rightarrow 0}{\frac{F(u+\epsilon \delta u)- F(u)}{\epsilon}}. \]
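Carrying out this limit for the operator \(F\) defined above is an exercise in the chain rule. A sketch of the computation, using that \(\left.\frac{d}{d\epsilon} |\nabla u+\epsilon\nabla \delta u|^{2} \right|_{\epsilon=0} = 2\,\nabla u \cdot \nabla \delta u\):

\begin{align*} F'(u,\delta u) &= \left.\frac{d}{d\epsilon} F(u+\epsilon \delta u) \right|_{\epsilon=0} \\ &= -\nabla \cdot \left( \frac{\nabla \delta u}{\left(1+|\nabla u|^{2}\right)^{\frac{1}{2}}} - \frac{\left(\nabla u \cdot \nabla \delta u\right)\nabla u}{\left(1+|\nabla u|^{2}\right)^{\frac{3}{2}}} \right). \end{align*}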

Going through these motions, we find that we have to solve a linear elliptic PDE in every Newton step, with \(\delta u^n\) as the solution of:

\[ - \nabla \cdot \left( \frac{1}{\left(1+|\nabla u^{n}|^{2}\right)^{\frac{1}{2}}}\nabla \delta u^{n} \right) + \nabla \cdot \left( \frac{\nabla u^{n} \cdot \nabla \delta u^{n}}{\left(1+|\nabla u^{n}|^{2}\right)^{\frac{3}{2}}} \nabla u^{n} \right) = -\left( - \nabla \cdot \left( \frac{1}{\left(1+|\nabla u^{n}|^{2}\right)^{\frac{1}{2}}} \nabla u^{n} \right) \right) \]

In order to solve the minimal surface equation, we have to solve this equation repeatedly, once per Newton step. To do so, we have to take a look at the boundary conditions of this problem. Assuming that \(u^{n}\) already has the right boundary values, the Newton update \(\delta u^{n}\) should have zero boundary values, so that the sum of the two has the right boundary values as well. In the first Newton step, however, we start with the solution \(u^{0}\equiv 0\), and the Newton update then still has to deliver the right boundary values to the solution \(u^{1}\).

Summing up, we have to solve the PDE above with the boundary condition \(\delta u^{0}=g\) in the first step and with \(\delta u^{n}=0\) in all the following steps.

Note
In some sense, one may argue that if the program already implements \(F(u)\), it is duplicative to also have to implement \(F'(u,\delta u)\). As always, duplication tempts bugs and we would like to avoid it. While we do not explore this issue in this program, we will come back to it at the end of the Possibilities for extensions section below, and specifically in step-72.

Weak formulation of the problem

Starting with the strong formulation above, we get the weak formulation by multiplying both sides of the PDE with a test function \(\varphi\) and integrating by parts on both sides:

\[ \left( \nabla \varphi , \frac{1}{\left(1+|\nabla u^{n}|^{2}\right)^{\frac{1}{2}}}\nabla \delta u^{n} \right)-\left(\nabla \varphi ,\frac{\nabla u^{n} \cdot \nabla \delta u^{n}}{\left(1+|\nabla u^{n}|^{2}\right)^{\frac{3}{2}}}\nabla u^{n} \right) = -\left(\nabla \varphi , \frac{1}{\left(1+|\nabla u^{n}|^{2}\right)^{\frac{1}{2}}} \nabla u^{n} \right). \]

Here the solution \(\delta u^{n}\) is a function in \(H^{1}(\Omega)\), subject to the boundary conditions discussed above. Reducing this space to a finite dimensional space with basis \(\left\{ \varphi_{0},\dots , \varphi_{N-1}\right\}\), we can write the solution:

\[ \delta u^{n}=\sum_{j=0}^{N-1} \delta U_{j} \varphi_{j}. \]

Using the basis functions as test functions and defining \(a_{n} \dealcoloneq \frac{1} {\sqrt{1+|\nabla u^{n}|^{2}}}\), we can rewrite the weak formulation:

\[ \sum_{j=0}^{N-1}\left[ \left( \nabla \varphi_{i} , a_{n} \nabla \varphi_{j} \right) - \left(\nabla u^{n}\cdot \nabla \varphi_{i} , a_{n}^{3} \nabla u^{n} \cdot \nabla \varphi_{j} \right) \right] \cdot \delta U_{j}=-\left( \nabla \varphi_{i} , a_{n} \nabla u^{n}\right) \qquad \forall i=0,\dots ,N-1, \]

where the solution \(\delta u^{n}\) is given by the coefficients \(\delta U^{n}_{j}\). This linear system of equations can be rewritten as:

\[ A^{n}\; \delta U^{n}=b^{n}, \]

where the entries of the matrix \(A^{n}\) are given by:

\[ A^{n}_{ij} \dealcoloneq \left( \nabla \varphi_{i} , a_{n} \nabla \varphi_{j} \right) - \left(\nabla u^{n}\cdot \nabla \varphi_{i} , a_{n}^{3} \nabla u^{n} \cdot \nabla \varphi_{j} \right), \]

and the right hand side \(b^{n}\) is given by:

\[ b^{n}_{i} \dealcoloneq -\left( \nabla \varphi_{i} , a_{n} \nabla u^{n}\right). \]

Questions about the appropriate solver

The matrix that corresponds to the Newton step above can be reformulated to show its structure a bit better. Rewriting it slightly, we get that it has the form

\[ A_{ij} = \left( \nabla \varphi_i, B \nabla \varphi_j \right), \]

where the matrix \(B\) (of size \(d \times d\) in \(d\) space dimensions) is given by the following expression:

\[ B = a_n \left\{ \mathbf I - a_n^2 [\nabla u^n] \otimes [\nabla u^n] \right\} = a_n \left\{ \mathbf I - \frac{\nabla u^n}{\sqrt{1+|\nabla u^{n}|^{2}}} \otimes \frac{\nabla u^n}{\sqrt{1+|\nabla u^{n}|^{2}}} \right\}. \]

From this expression, it is obvious that \(B\) is symmetric, and so \(A\) is symmetric as well. On the other hand, \(B\) is also positive definite, which confers the same property onto \(A\). This can be seen by noting that the vector \(v_1 = \frac{\nabla u^n}{|\nabla u^n|}\) is an eigenvector of \(B\) with eigenvalue \(\lambda_1=a_n \left(1-\frac{|\nabla u^n|^2}{1+|\nabla u^n|^2}\right) > 0\), while all vectors \(v_2,\ldots,v_d\) that are perpendicular to \(v_1\) and to each other are eigenvectors with eigenvalue \(a_n\). Since all eigenvalues are positive, \(B\) is positive definite and so is \(A\). We can thus use the CG method for solving the Newton steps. (The fact that the matrix \(A\) is symmetric and positive definite should not come as a surprise. It results from linearizing an operator that is itself the derivative of an energy functional: the minimal surface equation simply minimizes some non-quadratic energy. Consequently, the Newton matrix, as the matrix of second derivatives of a scalar energy, must be symmetric, since the derivatives with regard to the \(i\)th and \(j\)th degrees of freedom commute. Likewise, if the energy functional is convex, then the matrix of second derivatives must be positive definite, and the direct calculation above simply reaffirms this.)
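The eigenvalue claim for \(v_1\) is quickly verified using \(a_n^2=\frac{1}{1+|\nabla u^n|^2}\):

\[ B\, v_1 = a_n \left( v_1 - a_n^2 \left(\nabla u^n \cdot v_1\right) \nabla u^n \right) = a_n \left( 1 - \frac{|\nabla u^n|^2}{1+|\nabla u^n|^2} \right) v_1 = \frac{a_n}{1+|\nabla u^n|^2}\, v_1, \]

while for any \(v\) perpendicular to \(\nabla u^n\) the second term vanishes, leaving \(Bv = a_n v\).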

It is worth noting, however, that the positive definiteness degenerates for problems where \(\nabla u\) becomes large. In other words, if we simply multiply all boundary values by 2, then to first order \(u\) and \(\nabla u\) will also be multiplied by two, but as a consequence the smallest eigenvalue of \(B\) will become smaller and the matrix will become more ill-conditioned. (More specifically, for \(|\nabla u^n|\rightarrow\infty\) we have that \(\lambda_1 \propto a_n \frac{1}{|\nabla u^n|^2}\) whereas \(\lambda_2,\ldots,\lambda_d=a_n\); thus, the condition number of \(B\), which is a multiplicative factor in the condition number of \(A\), grows like \({\cal O}(|\nabla u^n|^2)\).) It is simple to verify that multiplying the boundary values used in the current program by larger and larger factors results in a problem that will ultimately no longer be solvable using the simple preconditioned CG method we use here.

Choice of step length and globalization

As stated above, Newton's method works by computing a direction \(\delta u^n\) and then performing the update \(u^{n+1} = u^{n}+\alpha^n \delta u^{n}\) with a step length \(0 < \alpha^n \le 1\). It is a common observation that for strongly nonlinear models, Newton's method does not converge if we always choose \(\alpha^n=1\) unless one starts with an initial guess \(u^0\) that is sufficiently close to the solution \(u\) of the nonlinear problem. In practice, we don't always have such an initial guess, and consequently taking full Newton steps (i.e., using \(\alpha=1\)) frequently does not work.

A common strategy therefore is to use a smaller step length for the first few steps while the iterate \(u^n\) is still far away from the solution \(u\) and as we get closer use larger values for \(\alpha^n\) until we can finally start to use full steps \(\alpha^n=1\) as we are close enough to the solution. The question is of course how to choose \(\alpha^n\). There are basically two widely used approaches: line search and trust region methods.

In this program, we simply always choose the step length equal to 0.1. This makes sure that for the testcase at hand we do get convergence, although it is clear that by not eventually reverting to full step lengths we forego the rapid, quadratic convergence that makes Newton's method so appealing. Obviously, this is a point one would eventually have to address if the program were to be made into one meant to solve more realistic problems. We will comment on this issue some more in the results section, and use an even better approach in step-77.

Summary of the algorithm and testcase

Overall, the program we have here is not unlike step-6 in many regards. The layout of the main class is essentially the same. On the other hand, the driving algorithm in the run() function is different and works as follows:

  1. Start with the function \(u^{0}\equiv 0\) and modify it in such a way that the values of \(u^0\) along the boundary equal the correct boundary values \(g\) (this happens in MinimalSurfaceProblem::set_boundary_values). Set \(n=0\).

  2. Compute the Newton update by solving the system \(A^{n}\;\delta U^{n}=b^{n}\) with boundary condition \(\delta u^{n}=0\) on \(\partial \Omega\).

  3. Compute a step length \(\alpha^n\). In this program, we always set \(\alpha^n=0.1\). To make things easier to extend later on, this happens in a function of its own, namely in MinimalSurfaceProblem::determine_step_length. (The strategy of always choosing \(\alpha^n=0.1\) is of course not optimal – we should choose a step length that works for a given search direction – but it requires a bit of work to do that. In the end, we leave these sorts of things to external packages: step-77 does that.)

  4. The new approximation of the solution is given by \(u^{n+1}=u^{n}+\alpha^n \delta u^{n}\).

  5. If \(n\) is a multiple of 5 then refine the mesh, transfer the solution \(u^{n+1}\) to the new mesh and set the values of \(u^{n+1}\) in such a way that along the boundary we have \(u^{n+1}|_{\partial\Omega}=g\) (again in MinimalSurfaceProblem::set_boundary_values). Note that this isn't automatically guaranteed even though by construction we had \(u^{n+1}|_{\partial\Omega}=g\) before mesh refinement: mesh refinement adds new nodes to the mesh, at which we have to interpolate the old solution when bringing it from the old to the new mesh. The values we obtain by interpolation may be close to the exact boundary values but are, in general, nonetheless not the correct values.

  6. Set \(n\leftarrow n+1\) and go to step 2.

The testcase we solve is chosen as follows: We seek to find the solution of minimal surface over the unit disk \(\Omega=\{\mathbf x: \|\mathbf x\|<1\}\subset {\mathbb R}^2\) where the surface attains the values \(u(x,y)|_{\partial\Omega} = g(x,y) \dealcoloneq \sin(2 \pi (x+y))\) along the boundary.

The commented program

Include files

The first few files have already been covered in previous examples and will thus not be further commented on.

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/function.h>
  #include <deal.II/base/utilities.h>
 
  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/precondition.h>
  #include <deal.II/lac/affine_constraints.h>
 
  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/grid_refinement.h>
 
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
 
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/fe/fe_q.h>
 
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/matrix_tools.h>
  #include <deal.II/numerics/data_out.h>
  #include <deal.II/numerics/error_estimator.h>
 
 
  #include <fstream>
  #include <iostream>
 

We will use adaptive mesh refinement between Newton iterations. To do so, we need to be able to work with a solution on the new mesh, although it was computed on the old one. The SolutionTransfer class transfers the solution from the old to the new mesh:

  #include <deal.II/numerics/solution_transfer.h>
 

We then open a namespace for this program and import everything from the dealii namespace into it, as in previous programs:

  namespace Step15
  {
  using namespace dealii;
 
 

The MinimalSurfaceProblem class template

The class template is basically the same as in step-6. Three additions are made:

  • There are two solution vectors, one for the Newton update \(\delta u^n\), and one for the current iterate \(u^n\).
  • The setup_system function takes an argument that denotes whether this is the first time it is called or not. The difference is that the first time around we need to distribute the degrees of freedom and set the solution vector for \(u^n\) to the correct size. The following times, the function is called after we have already done these steps as part of refining the mesh in refine_mesh.
  • We then also need new functions: set_boundary_values() takes care of setting the boundary values on the solution vector correctly, as discussed at the end of the introduction. compute_residual() is a function that computes the norm of the nonlinear (discrete) residual. We use this function to monitor convergence of the Newton iteration. The function takes a step length \(\alpha^n\) as argument to compute the residual of \(u^n + \alpha^n \; \delta u^n\). This is something one typically needs for step length control, although we will not use this feature here. Finally, determine_step_length() computes the step length \(\alpha^n\) in each Newton iteration. As discussed in the introduction, we here use a fixed step length and leave implementing a better strategy as an exercise. (step-77 does this differently: It simply uses an external package for the whole solution process, and a good line search strategy is part of what that package provides.)
  template <int dim>
  class MinimalSurfaceProblem
  {
  public:
  MinimalSurfaceProblem();
  void run();
 
  private:
  void setup_system(const bool initial_step);
  void assemble_system();
  void solve();
  void refine_mesh();
  void set_boundary_values();
  double compute_residual(const double alpha) const;
  double determine_step_length() const;
  void output_results(const unsigned int refinement_cycle) const;
 
 
  Triangulation<dim> triangulation;
 
  DoFHandler<dim> dof_handler;
  FE_Q<dim> fe;
 
  AffineConstraints<double> hanging_node_constraints;
 
  SparsityPattern sparsity_pattern;
  SparseMatrix<double> system_matrix;
 
  Vector<double> current_solution;
  Vector<double> newton_update;
  Vector<double> system_rhs;
  };
 

Boundary condition

The boundary condition is implemented just like in step-4. It is chosen as \(g(x,y)=\sin(2 \pi (x+y))\):

  template <int dim>
  class BoundaryValues : public Function<dim>
  {
  public:
  virtual double value(const Point<dim> & p,
  const unsigned int component = 0) const override;
  };
 
 
  template <int dim>
  double BoundaryValues<dim>::value(const Point<dim> &p,
  const unsigned int /*component*/) const
  {
  return std::sin(2 * numbers::PI * (p[0] + p[1]));
  }
 

The MinimalSurfaceProblem class implementation

MinimalSurfaceProblem::MinimalSurfaceProblem

The constructor of the class is the same as in the first few tutorials.

  template <int dim>
  MinimalSurfaceProblem<dim>::MinimalSurfaceProblem()
  : dof_handler(triangulation)
  , fe(2)
  {}
 
 

MinimalSurfaceProblem::setup_system

As always in the setup-system function, we set up the variables of the finite element method. There are some differences to step-6, because there we start solving the PDE from scratch in every refinement cycle, whereas here we need to take the solution from the previous mesh onto the current mesh. Consequently, we can't just reset solution vectors. The argument passed to this function thus indicates whether we need to distribute degrees of freedom (plus compute constraints) and set the solution vector to zero, or whether this has happened elsewhere already (specifically, in refine_mesh()).

  template <int dim>
  void MinimalSurfaceProblem<dim>::setup_system(const bool initial_step)
  {
  if (initial_step)
  {
  dof_handler.distribute_dofs(fe);
  current_solution.reinit(dof_handler.n_dofs());
 
  hanging_node_constraints.clear();
  DoFTools::make_hanging_node_constraints(dof_handler,
  hanging_node_constraints);
  hanging_node_constraints.close();
  }
 
 

The remaining parts of the function are the same as in step-6.

  newton_update.reinit(dof_handler.n_dofs());
  system_rhs.reinit(dof_handler.n_dofs());
 
  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler, dsp);
 
  hanging_node_constraints.condense(dsp);
 
  sparsity_pattern.copy_from(dsp);
  system_matrix.reinit(sparsity_pattern);
  }
 

MinimalSurfaceProblem::assemble_system

This function does the same as in the previous tutorials except that now, of course, the matrix and right hand side functions depend on the previous iteration's solution. As discussed in the introduction, we need to use zero boundary values for the Newton updates; we compute them at the end of this function.

The top of the function contains the usual boilerplate code, setting up the objects that allow us to evaluate shape functions at quadrature points and temporary storage locations for the local matrices and vectors, as well as for the gradients of the previous solution at the quadrature points. We then start the loop over all cells:

  template <int dim>
  void MinimalSurfaceProblem<dim>::assemble_system()
  {
  const QGauss<dim> quadrature_formula(fe.degree + 1);
 
  system_matrix = 0;
  system_rhs = 0;
 
  FEValues<dim> fe_values(fe,
  quadrature_formula,
  update_gradients | update_quadrature_points |
  update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
  const unsigned int n_q_points = quadrature_formula.size();
 
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);
 
  std::vector<Tensor<1, dim>> old_solution_gradients(n_q_points);
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  {
  cell_matrix = 0;
  cell_rhs = 0;
 
  fe_values.reinit(cell);
 

For the assembly of the linear system, we have to obtain the values of the previous solution's gradients at the quadrature points. There is a standard way of doing this: the FEValues::get_function_gradients function takes a vector that represents a finite element field defined on a DoFHandler, and evaluates the gradients of this field at the quadrature points of the cell with which the FEValues object has last been reinitialized. The values of the gradients at all quadrature points are then written into the second argument:

  fe_values.get_function_gradients(current_solution,
  old_solution_gradients);
 

With this, we can then do the integration loop over all quadrature points and shape functions. Having just computed the gradients of the old solution in the quadrature points, we are able to compute the coefficients \(a_{n}\) in these points. The assembly of the system itself then looks similar to what we always do with the exception of the nonlinear terms, as does copying the results from the local objects into the global ones:

  for (unsigned int q = 0; q < n_q_points; ++q)
  {
  const double coeff =
  1.0 / std::sqrt(1 + old_solution_gradients[q] *
  old_solution_gradients[q]);
 
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
  {
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
  cell_matrix(i, j) +=
  (((fe_values.shape_grad(i, q) // ((\nabla \phi_i
  * coeff // * a_n
  * fe_values.shape_grad(j, q)) // * \nabla \phi_j)
  - // -
  (fe_values.shape_grad(i, q) // (\nabla \phi_i
  * coeff * coeff * coeff // * a_n^3
  * (fe_values.shape_grad(j, q) // * (\nabla \phi_j
  * old_solution_gradients[q]) // * \nabla u_n)
  * old_solution_gradients[q])) // * \nabla u_n)))
  * fe_values.JxW(q)); // * dx
 
  cell_rhs(i) -= (fe_values.shape_grad(i, q) // \nabla \phi_i
  * coeff // * a_n
  * old_solution_gradients[q] // * \nabla u_n
  * fe_values.JxW(q)); // * dx
  }
  }
 
  cell->get_dof_indices(local_dof_indices);
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
  {
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
  system_matrix.add(local_dof_indices[i],
  local_dof_indices[j],
  cell_matrix(i, j));
 
  system_rhs(local_dof_indices[i]) += cell_rhs(i);
  }
  }
 

Finally, we remove hanging nodes from the system and apply zero boundary values to the linear system that defines the Newton updates \(\delta u^n\):

  hanging_node_constraints.condense(system_matrix);
  hanging_node_constraints.condense(system_rhs);
 
  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
  0,
  Functions::ZeroFunction<dim>(),
  boundary_values);
  MatrixTools::apply_boundary_values(boundary_values,
  system_matrix,
  newton_update,
  system_rhs);
  }
 
 
 

MinimalSurfaceProblem::solve

The solve function is the same as always. At the end of the solution process we update the current solution by setting \(u^{n+1}=u^n+\alpha^n\;\delta u^n\).

  template <int dim>
  void MinimalSurfaceProblem<dim>::solve()
  {
  SolverControl solver_control(system_rhs.size(),
  system_rhs.l2_norm() * 1e-6);
  SolverCG<Vector<double>> solver(solver_control);
 
  PreconditionSSOR<SparseMatrix<double>> preconditioner;
  preconditioner.initialize(system_matrix, 1.2);
 
  solver.solve(system_matrix, newton_update, system_rhs, preconditioner);
 
  hanging_node_constraints.distribute(newton_update);
 
  const double alpha = determine_step_length();
  current_solution.add(alpha, newton_update);
  }
 
 

MinimalSurfaceProblem::refine_mesh

The first part of this function is the same as in step-6. However, after refining the mesh, we have to transfer the old solution to the new one, which we do with the help of the SolutionTransfer class. The process is slightly convoluted, so let us describe it in detail:

  template <int dim>
  void MinimalSurfaceProblem<dim>::refine_mesh()
  {
  Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
 
  KellyErrorEstimator<dim>::estimate(
  dof_handler,
  QGauss<dim - 1>(fe.degree + 1),
  std::map<types::boundary_id, const Function<dim> *>(),
  current_solution,
  estimated_error_per_cell);
 
  GridRefinement::refine_and_coarsen_fixed_number(triangulation,
  estimated_error_per_cell,
  0.3,
  0.03);
 

Then we need an additional step: if, for example, you flag a cell that is once more refined than its neighbor, and that neighbor is not flagged for refinement, we would end up with a jump of two refinement levels across a cell interface. To avoid these situations, the library will silently also have to refine the neighbor cell once. It does so by calling the Triangulation::prepare_coarsening_and_refinement function before actually doing the refinement and coarsening. This function flags a set of additional cells for refinement or coarsening, to enforce rules like the one-hanging-node rule. The cells that are flagged for refinement and coarsening after calling this function are exactly the ones that will actually be refined or coarsened. Usually, you don't have to do this by hand (Triangulation::execute_coarsening_and_refinement does this for you). However, we need to initialize the SolutionTransfer class and it needs to know the final set of cells that will be coarsened or refined in order to store the data from the old mesh and transfer to the new one. Thus, we call the function by hand:

  triangulation.prepare_coarsening_and_refinement();
 

With this out of the way, we initialize a SolutionTransfer object with the present DoFHandler and attach the solution vector to it, followed by doing the actual refinement and distribution of degrees of freedom on the new mesh:

  SolutionTransfer<dim> solution_transfer(dof_handler);
  solution_transfer.prepare_for_coarsening_and_refinement(current_solution);
 
  triangulation.execute_coarsening_and_refinement();
 
  dof_handler.distribute_dofs(fe);
 

Finally, we retrieve the old solution interpolated to the new mesh. Since the SolutionTransfer function does not actually store the values of the old solution, but rather indices, we need to preserve the old solution vector until we have gotten the new interpolated values. Thus, we have the new values written into a temporary vector, and only afterwards write them into the solution vector object:

  Vector<double> tmp(dof_handler.n_dofs());
  solution_transfer.interpolate(current_solution, tmp);
  current_solution = tmp;
 

On the new mesh, there are different hanging nodes, for which we have to compute constraints again, after throwing away previous content of the object. To be on the safe side, we should then also make sure that the current solution's vector entries satisfy the hanging node constraints (see the discussion in the documentation of the SolutionTransfer class for why this is necessary). We could do this by calling hanging_node_constraints.distribute(current_solution) explicitly; we omit this step because this will happen at the end of the call to set_boundary_values() below, and it is not necessary to do it twice.

  hanging_node_constraints.clear();
 
  DoFTools::make_hanging_node_constraints(dof_handler,
  hanging_node_constraints);
  hanging_node_constraints.close();
 

Once we have the interpolated solution and all information about hanging nodes, we have to make sure that the \(u^n\) we now have actually has the correct boundary values. As explained at the end of the introduction, this is not automatically the case even if the solution before refinement had the correct boundary values, and so we have to explicitly make sure that it now has:

  set_boundary_values();
 

We end the function by updating all the remaining data structures, indicating to setup_system() that this is not the first go-around and that it needs to preserve the content of the solution vector:

  setup_system(false);
  }
 
 
 

MinimalSurfaceProblem::set_boundary_values

The next function ensures that the solution vector's entries respect the boundary values for our problem. Having refined the mesh (or just started computations), there might be new nodal points on the boundary. These have values that are simply interpolated from the previous mesh in refine_mesh(), instead of the correct boundary values. This is fixed by explicitly setting all boundary nodes of the current solution vector to the right value.

There is one issue we have to pay attention to, though: If we have a hanging node right next to a new boundary node, then its value must also be adjusted to make sure that the finite element field remains continuous. This is what the call in the last line of this function does.

  template <int dim>
  void MinimalSurfaceProblem<dim>::set_boundary_values()
  {
  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
  0,
  BoundaryValues<dim>(),
  boundary_values);
  for (auto &boundary_value : boundary_values)
  current_solution(boundary_value.first) = boundary_value.second;
 
  hanging_node_constraints.distribute(current_solution);
  }
 
 

MinimalSurfaceProblem::compute_residual

In order to monitor convergence, we need a way to compute the norm of the (discrete) residual, i.e., the norm of the vector \(\left<F(u^n),\varphi_i\right>\) with \(F(u)=-\nabla \cdot \left( \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right)\) as discussed in the introduction. It turns out that (although we don't use this feature in the current version of the program) one needs to compute the residual \(\left<F(u^n+\alpha^n\;\delta u^n),\varphi_i\right>\) when determining optimal step lengths, and so this is what we implement here: the function takes the step length \(\alpha^n\) as an argument. The original functionality is of course obtained by passing a zero as argument.

In the function below, we first set up a vector for the residual, and then a vector for the evaluation point \(u^n+\alpha^n\;\delta u^n\). This is followed by the same boilerplate code we use for all integration operations:

  template <int dim>
  double MinimalSurfaceProblem<dim>::compute_residual(const double alpha) const
  {
  Vector<double> residual(dof_handler.n_dofs());
 
  Vector<double> evaluation_point(dof_handler.n_dofs());
  evaluation_point = current_solution;
  evaluation_point.add(alpha, newton_update);
 
  const QGauss<dim> quadrature_formula(fe.degree + 1);
  FEValues<dim> fe_values(fe,
  quadrature_formula,
  update_gradients | update_quadrature_points |
  update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
  const unsigned int n_q_points = quadrature_formula.size();
 
  Vector<double> cell_residual(dofs_per_cell);
  std::vector<Tensor<1, dim>> gradients(n_q_points);
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  {
  cell_residual = 0;
  fe_values.reinit(cell);
 

The actual computation is much as in assemble_system(). We first evaluate the gradients of \(u^n+\alpha^n\,\delta u^n\) at the quadrature points, then compute the coefficient \(a_n\), and then plug it all into the formula for the residual:

  fe_values.get_function_gradients(evaluation_point, gradients);
 
 
  for (unsigned int q = 0; q < n_q_points; ++q)
  {
  const double coeff =
  1. / std::sqrt(1 + gradients[q] * gradients[q]);
 
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
  cell_residual(i) -= (fe_values.shape_grad(i, q) // \nabla \phi_i
  * coeff // * a_n
  * gradients[q] // * \nabla u_n
  * fe_values.JxW(q)); // * dx
  }
 
  cell->get_dof_indices(local_dof_indices);
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
  residual(local_dof_indices[i]) += cell_residual(i);
  }
 

At the end of this function we also have to deal with the hanging node constraints and with the issue of boundary values. With regard to the latter, we have to set to zero the elements of the residual vector for all entries that correspond to degrees of freedom that sit at the boundary. The reason is that because the value of the solution there is fixed, they are of course no "real" degrees of freedom and so, strictly speaking, we shouldn't have assembled entries in the residual vector for them. However, as we always do, we want to do exactly the same thing on every cell and so we didn't want to deal with the question of whether a particular degree of freedom sits at the boundary in the integration above. Rather, we will simply set to zero these entries after the fact. To this end, we need to determine which degrees of freedom do in fact belong to the boundary and then loop over all of those and set the residual entry to zero. This happens in the following lines which we have already seen used in step-11, using the appropriate function from namespace DoFTools:

  hanging_node_constraints.condense(residual);
 
  for (const types::global_dof_index i :
  DoFTools::extract_boundary_dofs(dof_handler))
  residual(i) = 0;
 

At the end of the function, we return the norm of the residual:

  return residual.l2_norm();
  }
 
 
 

MinimalSurfaceProblem::determine_step_length

As discussed in the introduction, Newton's method frequently does not converge if we always take full steps, i.e., compute \(u^{n+1}=u^n+\delta u^n\). Rather, one needs a damping parameter (step length) \(\alpha^n\) and sets \(u^{n+1}=u^n+\alpha^n\delta u^n\). This function is the one called to compute \(\alpha^n\).

Here, we simply always return 0.1. This is of course a sub-optimal choice: ideally, what one wants is that the step size goes to one as we get closer to the solution, so that we get to enjoy the rapid quadratic convergence of Newton's method. We will discuss better strategies below in the results section, and step-77 also covers this aspect.

  template <int dim>
  double MinimalSurfaceProblem<dim>::determine_step_length() const
  {
  return 0.1;
  }
 
 
 

MinimalSurfaceProblem::output_results

This last function to be called from run() outputs the current solution (and the Newton update) in graphical form as a VTU file. It is entirely the same as what has been used in previous tutorials.

  template <int dim>
  void MinimalSurfaceProblem<dim>::output_results(
  const unsigned int refinement_cycle) const
  {
  DataOut<dim> data_out;
 
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(current_solution, "solution");
  data_out.add_data_vector(newton_update, "update");
  data_out.build_patches();
 
  const std::string filename =
  "solution-" + Utilities::int_to_string(refinement_cycle, 2) + ".vtu";
  std::ofstream output(filename);
  data_out.write_vtu(output);
  }
 
 

MinimalSurfaceProblem::run

In the run function, we build the first grid and then have the top-level logic for the Newton iteration.

As described in the introduction, the domain is the unit disk around the origin, created in the same way as shown in step-6. The mesh is globally refined twice followed later on by several adaptive cycles.

Before starting the Newton loop, we also need to do a bit of setup work: We need to create the basic data structures and ensure that the first Newton iterate already has the correct boundary values, as discussed in the introduction.

  template <int dim>
  void MinimalSurfaceProblem<dim>::run()
  {
  GridGenerator::hyper_ball(triangulation);
  triangulation.refine_global(2);
 
  setup_system(/*first time=*/true);
  set_boundary_values();
 

The Newton iteration starts next. We iterate until the (norm of the) residual computed at the end of the previous iteration is less than \(10^{-2}\), as checked at the end of the do { ... } while loop that starts here. Because we don't have a reasonable value to initialize the variable with, we just use the largest value that can be represented as a double.

  double last_residual_norm = std::numeric_limits<double>::max();
  unsigned int refinement_cycle = 0;
  do
  {
  std::cout << "Mesh refinement step " << refinement_cycle << std::endl;
 
  if (refinement_cycle != 0)
  refine_mesh();
 

On every mesh we do exactly five Newton steps. We print the initial residual here and then start the iterations on this mesh.

In every Newton step the system matrix and the right hand side have to be computed first, after which we store the norm of the right hand side as the residual to check against when deciding whether to stop the iterations. We then solve the linear system (the function also updates \(u^{n+1}=u^n+\alpha^n\;\delta u^n\)) and output the norm of the residual at the end of this Newton step.

After the end of this loop, we then also output the solution on the current mesh in graphical form and increment the counter for the mesh refinement cycle.

  std::cout << " Initial residual: " << compute_residual(0) << std::endl;
 
  for (unsigned int inner_iteration = 0; inner_iteration < 5;
  ++inner_iteration)
  {
  assemble_system();
  last_residual_norm = system_rhs.l2_norm();
 
  solve();
 
  std::cout << " Residual: " << compute_residual(0) << std::endl;
  }
 
  output_results(refinement_cycle);
 
  ++refinement_cycle;
  std::cout << std::endl;
  }
  while (last_residual_norm > 1e-2);
  }
  } // namespace Step15
 

The main function

Finally the main function. This follows the scheme of all other main functions:

  int main()
  {
  try
  {
  using namespace Step15;
 
  MinimalSurfaceProblem<2> laplace_problem_2d;
  laplace_problem_2d.run();
  }
  catch (std::exception &exc)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Exception on processing: " << std::endl
  << exc.what() << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
 
  return 1;
  }
  catch (...)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Unknown exception!" << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
  return 1;
  }
  return 0;
  }

Results

The output of the program looks as follows:

Mesh refinement step 0
Initial residual: 1.53143
Residual: 1.08746
Residual: 0.966748
Residual: 0.859602
Residual: 0.766462
Residual: 0.685475
Mesh refinement step 1
Initial residual: 0.868959
Residual: 0.762125
Residual: 0.677792
Residual: 0.605762
Residual: 0.542748
Residual: 0.48704
Mesh refinement step 2
Initial residual: 0.426445
Residual: 0.382731
Residual: 0.343865
Residual: 0.30918
Residual: 0.278147
Residual: 0.250327
Mesh refinement step 3
Initial residual: 0.282026
Residual: 0.253146
Residual: 0.227414
Residual: 0.20441
Residual: 0.183803
Residual: 0.165319
Mesh refinement step 4
Initial residual: 0.154404
Residual: 0.138723
Residual: 0.124694
Residual: 0.112124
Residual: 0.100847
Residual: 0.0907222
....

Obviously, the scheme converges, if not very fast. We will come back to strategies for accelerating the method below.

One can visualize the solution after each set of five Newton iterations, i.e., on each of the meshes on which we approximate the solution. This yields the following set of images:

Solution after zero cycles with contour lines.
Solution after one cycle with contour lines.
Solution after two cycles with contour lines.
Solution after three cycles with contour lines.

It is clearly visible that the solution minimizes the surface after each refinement. The solution converges to the shape one would imagine a soap film to take when spanned by a wire loop bent like the boundary. It is also visible how the boundary is smoothed out with each refinement: on the coarse mesh, the boundary doesn't look like a sine curve, whereas it increasingly does the finer the mesh gets.

The mesh is mostly refined near the boundary, where the solution increases or decreases strongly, whereas it is coarsened on the inside of the domain, where nothing interesting happens, because there isn't much change in the solution. The ninth solution and mesh are shown here:

Grid and solution of the ninth cycle with contour lines.

Possibilities for extensions

The program shows the basic structure of a solver for a nonlinear, stationary problem. However, it does not converge particularly fast, for good reasons:

  • The program always takes a step size of 0.1. This precludes the rapid, quadratic convergence for which Newton's method is typically chosen.
  • It does not connect the nonlinear iteration with the mesh refinement iteration.

Obviously, a better program would have to address these two points. We will discuss them in the following.

Step length control

Newton's method has two well known properties:

  • It may not converge from arbitrarily chosen starting points. Rather, a starting point has to be close enough to the solution to guarantee convergence. However, we can enlarge the set of starting points from which Newton's method converges by damping the iteration using a step length \(0<\alpha^n\le 1\).
  • It exhibits rapid convergence of quadratic order if (i) the step length is chosen as \(\alpha^n=1\), and (ii) it does in fact converge with this choice of step length.

A consequence of these two observations is that a successful strategy is to choose \(\alpha^n<1\) for the initial iterations until the iterate has come close enough to allow for convergence with full step length, at which point we want to switch to \(\alpha^n=1\). The question is how to choose \(\alpha^n\) in an automatic fashion that satisfies these criteria.

We do not want to review the literature on this topic here, but only briefly mention that there are two fundamental approaches to the problem: backtracking line search and trust region methods. The former is more widely used for partial differential equations and essentially does the following:

  • Compute a search direction
  • See if the resulting residual of \(u^n + \alpha^n\;\delta u^n\) with \(\alpha^n=1\) is "substantially smaller" than that of \(u^n\) alone.
  • If so, then take \(\alpha^n=1\).
  • If not, try whether the residual is "substantially smaller" with \(\alpha^n=2/3\).
  • If so, then take \(\alpha^n=2/3\).
  • If not, try whether the residual is "substantially smaller" with \(\alpha^n=(2/3)^2\).
  • Etc. One can of course choose other factors \(r, r^2, \ldots\) than the \(2/3, (2/3)^2, \ldots\) chosen above, for \(0<r<1\). It is obvious where the term "backtracking" comes from: we try a long step, but if that doesn't work we try a shorter step, then an ever shorter step, and so on. The function determine_step_length() is written the way it is to support exactly this kind of use case.

Whether we accept a particular step length \(\alpha^n\) depends on how we define "substantially smaller". There are a number of ways to do so, but without going into detail let us just mention that the most common ones are to use the Wolfe and Armijo-Goldstein conditions. For these, one can show the following:

  • There is always a step length \(\alpha^n\) for which the conditions are satisfied, i.e., the iteration never gets stuck as long as the problem is convex.
  • If we are close enough to the solution, then the conditions allow for \(\alpha^n=1\), thereby enabling quadratic convergence.

We will not dwell on this here any further but leave the implementation of such algorithms as an exercise. We note, however, that when implemented correctly then it is a common observation that most reasonably nonlinear problems can be solved in anywhere between 5 and 15 Newton iterations to engineering accuracy — substantially fewer than we need with the current version of the program.
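That said, to illustrate the shape such an exercise might take, here is a minimal backtracking sketch built on the compute_residual() function this program already has. (This is only a sketch under simplifying assumptions: the acceptance factor 0.9 and the lower cutoff for \(\alpha\) are ad hoc placeholders for a genuine Armijo-type sufficient-decrease condition, not recommendations.)

  template <int dim>
  double MinimalSurfaceProblem<dim>::determine_step_length() const
  {
    // Residual at the current iterate u^n:
    const double current_residual = compute_residual(0);
 
    // Try alpha = 1, 2/3, (2/3)^2, ... until the residual of
    // u^n + alpha * delta u^n is "substantially smaller":
    double alpha = 1.;
    while (alpha > 0.01)
      {
        if (compute_residual(alpha) < 0.9 * current_residual)
          return alpha;
        alpha *= 2. / 3.;
      }
 
    // No step length was accepted; fall back to a strongly damped step:
    return alpha;
  }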

More details on globalization methods including backtracking can be found, for example, in [87] and [139].

A separate point, very much worthwhile making, however, is that in practice the implementation of efficient nonlinear solvers is about as complicated as the implementation of efficient finite element methods. One should not attempt to reinvent the wheel by implementing all of the necessary steps oneself. Substantial pieces of the puzzle are already available in the LineMinimization::line_search() function and could be used to this end. But, instead, just like building finite element solvers on libraries such as deal.II, one should be building nonlinear solvers on libraries such as SUNDIALS. In fact, deal.II has interfaces to SUNDIALS and in particular to its nonlinear solver sub-package KINSOL through the SUNDIALS::KINSOL class. It would not be very difficult to base the current problem on that interface – indeed, that is what step-77 does.

Integrating mesh refinement and nonlinear and linear solvers

We currently do exactly 5 iterations on each mesh. But is this optimal? One could ask the following questions:

  • Maybe it is worthwhile doing more iterations on the initial meshes since there, computations are cheap.
  • On the other hand, we do not want to do too many iterations on every mesh: yes, we could drive the residual to zero on every mesh, but that would only mean that the nonlinear iteration error is far smaller than the discretization error.
  • Should we solve the linear systems in each Newton step with higher or lower accuracy?

Ultimately, what this boils down to is that we somehow need to couple the discretization error on the current mesh with the nonlinear residual we want to achieve with the Newton iterations on that mesh, and with the tolerance to which we solve the linear systems with the CG method within each Newton iteration.

How to do this is, again, not entirely trivial, and we again leave it as a future exercise.
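One simple-minded step in this direction, shown here only as a hypothetical sketch, would be to replace the fixed five Newton steps per mesh in run() by a loop that iterates until the nonlinear residual has dropped by a fixed factor relative to the first residual on the current mesh – a crude stand-in for "nonlinear error below discretization error":

  // Hypothetical replacement for the inner loop in run(); the
  // reduction factor 0.1 is arbitrary and not tied to any actual
  // estimate of the discretization error.
  const double initial_residual = compute_residual(0);
  double       residual         = initial_residual;
  while (residual > 0.1 * initial_residual)
    {
      assemble_system();
      last_residual_norm = system_rhs.l2_norm();
      solve();
      residual = compute_residual(0);
    }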

Using automatic differentiation to compute the Jacobian matrix

As outlined in the introduction, when solving a nonlinear problem of the form

\[ F(u) \dealcoloneq -\nabla \cdot \left( \frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right) = 0 \]

we use a Newton iteration that requires us to repeatedly solve the linear partial differential equation

\begin{align*} F'(u^{n},\delta u^{n}) &=- F(u^{n}) \end{align*}

so that we can compute the update

\begin{align*} u^{n+1}&=u^{n}+\alpha^n \delta u^{n} \end{align*}

with the solution \(\delta u^{n}\) of the Newton step. For the problem here, we could compute the derivative \(F'(u,\delta u)\) by hand and obtained

\[ F'(u,\delta u) = - \nabla \cdot \left( \frac{1}{\left(1+|\nabla u|^{2}\right)^{\frac{1}{2}}}\nabla \delta u \right) + \nabla \cdot \left( \frac{\nabla u \cdot \nabla \delta u}{\left(1+|\nabla u|^{2}\right)^{\frac{3}{2}}} \nabla u \right). \]

But this is already a sizable expression that is cumbersome both to derive and to implement. It is also, in some sense, duplicative: If we implement what \(F(u)\) is somewhere in the code, then \(F'(u,\delta u)\) is not an independent piece of information but is something that, at least in principle, a computer should be able to infer itself. Wouldn't it be nice if that could actually happen? That is, if we really only had to implement \(F(u)\), and \(F'(u,\delta u)\) was then somehow done implicitly? That is in fact possible, and runs under the name "automatic differentiation". step-71 discusses this very concept in general terms, and step-72 illustrates how this can be applied in practice for the very problem we are considering here.

Storing the Jacobian matrix in lower-precision floating point variables

On modern computer systems, accessing data in main memory takes far longer than actually doing something with it: We can do many floating point operations for the time it takes to load one floating point number from memory onto the processor. Unfortunately, when we do things such as matrix-vector products, we only multiply each matrix entry once with another number (the corresponding entry of the vector) and then we add it to something else – so two floating point operations for one load. (Strictly speaking, we also have to load the corresponding vector entry, but at least sometimes we get to re-use that vector entry in doing the products that correspond to the next row of the matrix.) This is a fairly low "arithmetic intensity", and consequently we spend most of our time during matrix-vector products waiting for data to arrive from memory rather than actually doing floating point operations.

This is of course one of the rationales for the "matrix-free" approach to solving linear systems (see step-37, for example). But if you don't quite want to go all that way to change the structure of the program, then here is a different approach: Storing the system matrix (the "Jacobian") in single-precision instead of double-precision floating point numbers (i.e., using float instead of double as the data type). This reduces the amount of memory necessary by a factor of 1.5 (each matrix entry in a SparseMatrix object requires storing the column index – 4 bytes – and the actual value – either 4 or 8 bytes), and consequently will speed up matrix-vector products by a factor of around 1.5 as well because, as pointed out above, most of the time is spent loading data from memory, and loading 2/3 the amount of data should be roughly 3/2 times as fast. All of this could be done using SparseMatrix<float> as the data type for the system matrix. (In principle, we would then also want the SSOR preconditioner used in this program to compute and apply its relaxation sweeps in float arithmetic; since PreconditionSSOR is templated on the matrix type, this follows once the matrix type is changed.)
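At the level of declarations, this change is essentially just a different template argument. A sketch of the affected member variables (assuming the rest of the program is adapted where necessary; deal.II's SparseMatrix::vmult is templated on the vector type, so a SparseMatrix<float> can still be applied to Vector<double> objects):

  SparsityPattern sparsity_pattern;
  SparseMatrix<float> system_matrix; // single-precision Jacobian
 
  Vector<double> current_solution; // vectors stay in double precision
  Vector<double> newton_update;
  Vector<double> system_rhs;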

Of course, there is a downside to this: Lower precision data storage also implies that we will not solve the linear system of the Newton step as accurately as we might with double precision. At least while we are far away from the solution of the nonlinear problem, this may not be terrible: If we can do a Newton iteration in half the time, we can afford to do a couple more Newton steps if the search directions aren't as good. But it turns out that even that isn't typically necessary: Both theory and computational experience show that it is entirely sufficient to store the Jacobian matrix in single precision as long as one stores the right hand side in double precision. A great overview of why this is so, along with numerical experiments that also consider "half precision" floating point numbers, can be found in [107].

The plain program

/* ---------------------------------------------------------------------
*
* Copyright (C) 2012 - 2023 by the deal.II authors
*
* This file is part of the deal.II library.
*
* The deal.II library is free software; you can use it, redistribute
* it, and/or modify it under the terms of the GNU Lesser General
* Public License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
* The full text of the license can be found in the file LICENSE.md at
* the top level directory of deal.II.
*
* ---------------------------------------------------------------------
*
* Author: Sven Wetterauer, University of Heidelberg, 2012
*/
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/utilities.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/numerics/solution_transfer.h>
#include <fstream>
#include <iostream>
namespace Step15
{
using namespace dealii;
template <int dim>
class MinimalSurfaceProblem
{
public:
MinimalSurfaceProblem();
void run();
private:
void setup_system(const bool initial_step);
void assemble_system();
void solve();
void refine_mesh();
void set_boundary_values();
double compute_residual(const double alpha) const;
double determine_step_length() const;
void output_results(const unsigned int refinement_cycle) const;
Triangulation<dim> triangulation;
DoFHandler<dim> dof_handler;
FE_Q<dim> fe;
AffineConstraints<double> hanging_node_constraints;
SparsityPattern sparsity_pattern;
SparseMatrix<double> system_matrix;
Vector<double> current_solution;
Vector<double> newton_update;
Vector<double> system_rhs;
};
template <int dim>
class BoundaryValues : public Function<dim>
{
public:
virtual double value(const Point<dim> & p,
const unsigned int component = 0) const override;
};
template <int dim>
double BoundaryValues<dim>::value(const Point<dim> &p,
const unsigned int /*component*/) const
{
return std::sin(2 * numbers::PI * (p[0] + p[1]));
}
template <int dim>
MinimalSurfaceProblem<dim>::MinimalSurfaceProblem()
: dof_handler(triangulation)
, fe(2)
{}
template <int dim>
void MinimalSurfaceProblem<dim>::setup_system(const bool initial_step)
{
if (initial_step)
{
dof_handler.distribute_dofs(fe);
current_solution.reinit(dof_handler.n_dofs());
hanging_node_constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler,
hanging_node_constraints);
hanging_node_constraints.close();
}
newton_update.reinit(dof_handler.n_dofs());
system_rhs.reinit(dof_handler.n_dofs());
DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dsp);
hanging_node_constraints.condense(dsp);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(sparsity_pattern);
}
template <int dim>
void MinimalSurfaceProblem<dim>::assemble_system()
{
const QGauss<dim> quadrature_formula(fe.degree + 1);
system_matrix = 0;
system_rhs = 0;
FEValues<dim> fe_values(fe,
quadrature_formula,
update_gradients | update_quadrature_points |
update_JxW_values);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs(dofs_per_cell);
std::vector<Tensor<1, dim>> old_solution_gradients(n_q_points);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
cell_matrix = 0;
cell_rhs = 0;
fe_values.reinit(cell);
fe_values.get_function_gradients(current_solution,
old_solution_gradients);
for (unsigned int q = 0; q < n_q_points; ++q)
{
const double coeff =
1.0 / std::sqrt(1 + old_solution_gradients[q] *
old_solution_gradients[q]);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
cell_matrix(i, j) +=
(((fe_values.shape_grad(i, q) // ((\nabla \phi_i
* coeff // * a_n
* fe_values.shape_grad(j, q)) // * \nabla \phi_j)
- // -
(fe_values.shape_grad(i, q) // (\nabla \phi_i
* coeff * coeff * coeff // * a_n^3
* (fe_values.shape_grad(j, q) // * (\nabla \phi_j
* old_solution_gradients[q]) // * \nabla u_n)
* old_solution_gradients[q])) // * \nabla u_n)))
* fe_values.JxW(q)); // * dx
cell_rhs(i) -= (fe_values.shape_grad(i, q) // \nabla \phi_i
* coeff // * a_n
* old_solution_gradients[q] // * \nabla u_n
* fe_values.JxW(q)); // * dx
}
}
cell->get_dof_indices(local_dof_indices);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
system_matrix.add(local_dof_indices[i],
local_dof_indices[j],
cell_matrix(i, j));
system_rhs(local_dof_indices[i]) += cell_rhs(i);
}
}
hanging_node_constraints.condense(system_matrix);
hanging_node_constraints.condense(system_rhs);
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
0,
Functions::ZeroFunction<dim>(),
boundary_values);
MatrixTools::apply_boundary_values(boundary_values,
system_matrix,
newton_update,
system_rhs);
}
template <int dim>
void MinimalSurfaceProblem<dim>::solve()
{
SolverControl solver_control(system_rhs.size(),
system_rhs.l2_norm() * 1e-6);
SolverCG<Vector<double>> solver(solver_control);
PreconditionSSOR<SparseMatrix<double>> preconditioner;
preconditioner.initialize(system_matrix, 1.2);
solver.solve(system_matrix, newton_update, system_rhs, preconditioner);
hanging_node_constraints.distribute(newton_update);
const double alpha = determine_step_length();
current_solution.add(alpha, newton_update);
}
template <int dim>
void MinimalSurfaceProblem<dim>::refine_mesh()
{
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate(
dof_handler,
QGauss<dim - 1>(fe.degree + 1),
std::map<types::boundary_id, const Function<dim> *>(),
current_solution,
estimated_error_per_cell);
GridRefinement::refine_and_coarsen_fixed_number(triangulation,
estimated_error_per_cell,
0.3,
0.03);
triangulation.prepare_coarsening_and_refinement();
SolutionTransfer<dim> solution_transfer(dof_handler);
solution_transfer.prepare_for_coarsening_and_refinement(current_solution);
triangulation.execute_coarsening_and_refinement();
dof_handler.distribute_dofs(fe);
Vector<double> tmp(dof_handler.n_dofs());
solution_transfer.interpolate(current_solution, tmp);
current_solution = tmp;
hanging_node_constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler,
hanging_node_constraints);
hanging_node_constraints.close();
set_boundary_values();
setup_system(false);
}
template <int dim>
void MinimalSurfaceProblem<dim>::set_boundary_values()
{
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
0,
BoundaryValues<dim>(),
boundary_values);
for (auto &boundary_value : boundary_values)
current_solution(boundary_value.first) = boundary_value.second;
hanging_node_constraints.distribute(current_solution);
}
template <int dim>
double MinimalSurfaceProblem<dim>::compute_residual(const double alpha) const
{
Vector<double> residual(dof_handler.n_dofs());
Vector<double> evaluation_point(dof_handler.n_dofs());
evaluation_point = current_solution;
evaluation_point.add(alpha, newton_update);
const QGauss<dim> quadrature_formula(fe.degree + 1);
FEValues<dim> fe_values(fe,
quadrature_formula,
update_gradients | update_quadrature_points |
update_JxW_values);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
Vector<double> cell_residual(dofs_per_cell);
std::vector<Tensor<1, dim>> gradients(n_q_points);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
cell_residual = 0;
fe_values.reinit(cell);
fe_values.get_function_gradients(evaluation_point, gradients);
for (unsigned int q = 0; q < n_q_points; ++q)
{
const double coeff =
1. / std::sqrt(1 + gradients[q] * gradients[q]);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
cell_residual(i) -= (fe_values.shape_grad(i, q) // \nabla \phi_i
* coeff // * a_n
* gradients[q] // * \nabla u_n
* fe_values.JxW(q)); // * dx
}
cell->get_dof_indices(local_dof_indices);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
residual(local_dof_indices[i]) += cell_residual(i);
}
hanging_node_constraints.condense(residual);
for (const types::global_dof_index i :
DoFTools::extract_boundary_dofs(dof_handler))
residual(i) = 0;
return residual.l2_norm();
}
template <int dim>
double MinimalSurfaceProblem<dim>::determine_step_length() const
{
return 0.1;
}
template <int dim>
void MinimalSurfaceProblem<dim>::output_results(
const unsigned int refinement_cycle) const
{
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(current_solution, "solution");
data_out.add_data_vector(newton_update, "update");
data_out.build_patches();
const std::string filename =
"solution-" + Utilities::int_to_string(refinement_cycle, 2) + ".vtu";
std::ofstream output(filename);
data_out.write_vtu(output);
}
template <int dim>
void MinimalSurfaceProblem<dim>::run()
{
GridGenerator::hyper_ball(triangulation);
triangulation.refine_global(2);
setup_system(/*first time=*/true);
set_boundary_values();
double last_residual_norm = std::numeric_limits<double>::max();
unsigned int refinement_cycle = 0;
do
{
std::cout << "Mesh refinement step " << refinement_cycle << std::endl;
if (refinement_cycle != 0)
refine_mesh();
std::cout << " Initial residual: " << compute_residual(0) << std::endl;
for (unsigned int inner_iteration = 0; inner_iteration < 5;
++inner_iteration)
{
assemble_system();
last_residual_norm = system_rhs.l2_norm();
solve();
std::cout << " Residual: " << compute_residual(0) << std::endl;
}
output_results(refinement_cycle);
++refinement_cycle;
std::cout << std::endl;
}
while (last_residual_norm > 1e-2);
}
} // namespace Step15
int main()
{
try
{
using namespace Step15;
MinimalSurfaceProblem<2> laplace_problem_2d;
laplace_problem_2d.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}