This program was contributed by Liang Zhao and Timo Heister
This material is based upon work partially supported by National Science Foundation grant DMS1522191 and the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-0949446 and The University of California-Davis.
Note
If you use this program as a basis for your own work, please consider citing it in your list of references. The initial version of this work was contributed to the deal.II project by the authors listed in the following citation:
Introduction
Navier Stokes Equations
In this tutorial we show how to solve the incompressible Navier Stokes equations (NSE) by Newton's method. The flow we consider here is assumed to be steady. In a domain \(\Omega \subset \mathbb{R}^{d}\), \(d=2,3\), with a piecewise smooth boundary \(\partial \Omega\), and a given force field \(\textbf{f}\), we seek a velocity field \(\textbf{u}\) and a pressure field \(\textbf{p}\) satisfying
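Written out, these are the standard steady, incompressible Navier-Stokes equations with kinematic viscosity \(\nu\) (the program's sign conventions may differ slightly):
\begin{eqnarray*} - \nu \Delta\textbf{u} + (\textbf{u} \cdot \nabla)\textbf{u} + \nabla p &=& \textbf{f},\\ - \nabla \cdot \textbf{u} &=& 0. \end{eqnarray*}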
Different from the Stokes equations as discussed in step-22, the NSE are a nonlinear system because of the convective term \((\textbf{u} \cdot \nabla)\textbf{u}\). The first step of computing a numerical solution is to linearize the system and this will be done using Newton's method. A time-dependent problem is discussed in step-35, where the system is linearized using the solution from the last time step and no nonlinear solve is necessary.
Linearization of Navier-Stokes Equations
Moving the right-hand side terms to the left, we define the nonlinear function
\(F(\textbf{u}, p)\) is a nonlinear function whose root is the solution to the NSE. Assuming the initial guess is good enough to guarantee the convergence of Newton's iteration and denoting \(\textbf{x} = (\textbf{u}, p)\), Newton's iteration on a vector field can be defined as
where \(\textbf{x}^{k+1}\) is the approximate solution in step k+1, \(\textbf{x}^{k}\) represents the solution from the last step, and \(\nabla F(\textbf{x}^{k})\) is the Jacobian matrix evaluated at \(\textbf{x}^{k}\). A similar iteration can be found in step-15.
From Newton's iteration formula, we can observe that the new solution is obtained by adding an update term to the old solution. Instead of evaluating the Jacobian matrix and taking its inverse, we consider the update term as a whole, that is
Here, the left of the previous equation represents the directional gradient of \(F(\textbf{x})\) along \(\delta \textbf{x}^{k}\) at \(\textbf{x}^{k}\). By definition, the directional gradient is given by
where \(\textbf{g} =\textbf{f}+\nu \Delta\textbf{u}^k -(\textbf{u}^k \cdot \nabla)\textbf{u}^k -\nabla p^k\) and \(\textbf{u}^k\) and \(p^k\) are the solutions from the previous iteration. Additionally, the right-hand side of the second equation is not zero since the discrete solution is not exactly divergence free (the continuous solution is). The right-hand side here acts as a correction which drives the discrete velocity towards being divergence free along Newton's iteration. In this linear system, the only unknowns are the update terms \(\delta \textbf{u}^{k}\) and \(\delta p^{k}\), so we can use a strategy similar to the one in step-22; the weak form is derived as it is done there.
Now, Newton's iteration can be used to solve for the update terms:
Initialization: Initial guess \(u_0\) and \(p_0\), tolerance \(\tau\);
Linear solve to compute update term \(\delta\textbf{u}^{k}\) and \(\delta p^k\);
Update the approximation: \(\textbf{u}^{k+1} = \textbf{u}^{k} + \delta\textbf{u}^{k}\) and \(p^{k+1} = p^{k} + \delta p^{k}\); then check the residual norm and repeat from the linear solve until it falls below the tolerance \(\tau\) (see the sketch below).
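A minimal sketch of this loop follows. The helper functions (assemble_system(), solve(), compute_residual()) and the variables tolerance, max_newton_iterations, present_solution and newton_update are assumptions about the program's interface, not its literal code:
setup_dofs();
assemble_system(/*initial_step=*/true);
solve(/*initial_step=*/true);                 // Stokes-like solve provides u_0, p_0
double residual = compute_residual();
unsigned int outer_iteration = 0;
while (residual > tolerance && outer_iteration < max_newton_iterations)
  {
    assemble_system(/*initial_step=*/false);  // Jacobian and right-hand side at (u^k, p^k)
    solve(/*initial_step=*/false);            // newton_update holds (delta u^k, delta p^k)
    present_solution.add(1.0, newton_update); // u^{k+1} = u^k + delta u^k, same for p
    residual = compute_residual();            // stop once the residual is below tau
    ++outer_iteration;
  }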
For Newton's method to converge, the initial guess needs to be close enough to the solution, so finding a good starting value is crucial.
When the viscosity \(\nu\) is large, a good initial guess can be obtained by solving the Stokes equation with viscosity \(\nu\). While problem dependent, this works for \(\nu \geq 1/400\) for the test problem considered here.
However, the convective term \((\mathbf{u}\cdot\nabla)\mathbf{u}\) will be dominant if the viscosity is small, like 1/7500 in test case 2. In this situation, we use a continuation method to set up a series of auxiliary NSE with viscosity approaching the one in the target NSE. Correspondingly, we create a sequence \(\{\nu_{i}\}\) with \(\nu_{n}= \nu\), and accept that the solutions to two NSE with viscosity \(\nu_{i}\) and \(\nu_{i+1}\) are close if \(|\nu_{i} - \nu_{i+1}|\) is small. Then we use the solution to the NSE with viscosity \(\nu_{i}\) as the initial guess of the NSE with \(\nu_{i+1}\). This can be thought of as a staircase from the Stokes equations to the NSE we want to solve.
which also acts as the initial guess of the continuation method. Here \(\nu_{1}\) is relatively large so that the solution to the Stokes problem with viscosity \(\nu_{1}\) can be used as an initial guess for the NSE in Newton's iteration.
This process is repeated with a sequence of viscosities \(\{\nu_i\}\) that is determined experimentally so that the final solution can be used as a starting guess for the Newton iteration.
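A sketch of this staircase, assuming a member function newton_iteration() that runs Newton's method for the current value of the member variable viscosity (the signature and names are assumptions, not the program's literal interface):
template <int dim>
void StationaryNavierStokes<dim>::compute_initial_guess(double step_size)
{
  const double target_Re = 1.0 / viscosity;
  bool is_initial_step = true;
  // March the Reynolds number from 1000 towards the target in steps of
  // step_size, reusing each solution as the initial guess for the next solve.
  for (double Re = 1000.0; Re < target_Re;
       Re = std::min(Re + step_size, target_Re))
    {
      viscosity = 1.0 / Re;
      newton_iteration(1e-12, 50, 0, is_initial_step, /*output_result=*/false);
      is_initial_step = false;
    }
}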
At each step of Newton's iteration, the problem requires solving a saddle point system of the form
\begin{eqnarray*} \left( \begin{array}{cc} A & B^{T} \\ B & 0 \\ \end{array} \right) \left( \begin{array}{c} U \\ P \\ \end{array} \right) = \left( \begin{array}{c} F \\ 0 \\ \end{array} \right). \end{eqnarray*}
This system matrix has the same block structure as the one in step-22. However, the matrix \(A\) in the upper-left block is not symmetric because of the nonlinear term. Instead of solving the above system, we can solve the equivalent system
\begin{eqnarray*} \left( \begin{array}{cc} A + \gamma B^TW^{-1}B & B^{T} \\ B & 0 \\ \end{array} \right) \left( \begin{array}{c} U \\ P \\ \end{array} \right) = \left( \begin{array}{c} F \\ 0 \\ \end{array} \right) \end{eqnarray*}
with a parameter \(\gamma\) and an invertible matrix \(W\). Here \(\gamma B^TW^{-1}B\) is the augmented Lagrangian term; see [1] for details.
Denoting the system matrix of the new system by \(G\) and the right-hand side by \(b\), we solve it iteratively with right preconditioning \(P^{-1}\) as \(GP^{-1}y = b\), where
\begin{eqnarray*} P^{-1} = \left(\begin{array}{cc} \tilde{A}^{-1} & 0 \\ 0 & I \end{array}\right) \left(\begin{array}{cc} I & -B^T \\ 0 & I \end{array}\right) \left(\begin{array}{cc} I & 0 \\ 0 & \tilde{S}^{-1} \end{array}\right) \end{eqnarray*}
with \(\tilde{A} = A + \gamma B^TW^{-1}B\) and \(\tilde{S}\) the corresponding Schur complement \(\tilde{S} = B \tilde{A}^{-1} B^T\). We let \(W = M_p\), where \(M_p\) is the pressure mass matrix; then \(\tilde{S}^{-1}\) can be approximated by
\begin{eqnarray*} \tilde{S}^{-1} \approx -(\nu+\gamma)M_p^{-1}. \end{eqnarray*}
Here two inexact solvers will be needed for \(\tilde{A}^{-1}\) and \(\tilde{S}^{-1}\), respectively (see [1]). Since the pressure mass matrix is symmetric and positive definite, CG with ILU as a preconditioner is appropriate to use for \(\tilde{S}^{-1}\). For simplicity, we use the direct solver UMFPACK for \(\tilde{A}^{-1}\). The last ingredient is a sparse matrix-vector product with \(B^T\). Instead of computing the matrix product in the augmented Lagrangian term in \(\tilde{A}\), we assemble Grad-Div stabilization \((\nabla \cdot \phi _{i}, \nabla \cdot \phi _{j}) \approx (B^T M_p^{-1}B)_{ij}\), as explained in [2].
Test Case
Here we use the lid-driven cavity flow as our test case; see [3] for details. The computational domain is the unit square and the right-hand side is \(f=0\). The boundary condition is
When solving this problem, the error consists of the nonlinear error (from Newton's iteration) and the discretization error (depending on mesh size). The nonlinear part decreases with each Newton iteration and the discretization error reduces with mesh refinement. In this example, the solution from the coarse mesh is transferred to successively finer meshes and used as an initial guess. Therefore, the nonlinear error is always brought below the tolerance of Newton's iteration and the discretization error is reduced with each mesh refinement.
Inside the loop, three solvers are involved: one for \(\tilde{A}^{-1}\), one for \(M_p^{-1}\), and one for \(Gx=b\). The first two solvers are invoked in the preconditioner and the outer solver gives us the update term. Overall convergence is controlled by the nonlinear residual; since Newton's method does not require an exact Jacobian solve, we employ FGMRES with a relative tolerance of only 1e-4 for the outer linear solver. In effect, we use a truncated Newton solve for this system. As described in step-22, the inner linear solves are also not required to be done very accurately. Here we use CG with a relative tolerance of 1e-6 for the pressure mass matrix. As expected, we still see convergence of the nonlinear residual down to 1e-14. Also, we use a simple line search algorithm for globalization of the Newton method.
The cavity reference values for Re=400 and Re=7500 are from [4] and [5], respectively, where "Re" denotes the Reynolds number; see [8] for its definition. Here the viscosity is defined by 1/Re. Even though we can still find a solution for Re=10000 and the references contain results for comparison, we limit our discussion here to Re=7500. This is because the solution is no longer stationary starting around Re=8000 but instead becomes periodic; see [7] for details.
References
1. An Augmented Lagrangian-Based Approach to the Oseen Problem, M. Benzi and M. Olshanskii, SIAM J. Sci. Comput., 2006.
2. Efficient augmented Lagrangian-type preconditioning for the Oseen problem using Grad-Div stabilization, Timo Heister and Gerd Rapin.
This is the main class and its member functions. As explained in the introduction, what we obtain at each step is the Newton update, so we define two variables: the present solution and the update. Additionally, the evaluation point is for temporarily holding the Newton update during the line search. A sparse matrix for the pressure mass matrix is created for the operator of the block Schur complement preconditioner. We use one ConstraintMatrix for the Dirichlet boundary conditions at the initial step and a zero ConstraintMatrix for the Newton updates \(\delta u^k\). The viscosity is defined by 1/Re, as discussed in the introduction.
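A sketch of the corresponding member variables (the names are indicative only and follow the description above and step-22 conventions):
double                    viscosity;             // nu = 1/Re
const double              gamma;                 // augmented Lagrangian coefficient
ConstraintMatrix          nonzero_constraints;   // Dirichlet values, initial step only
ConstraintMatrix          zero_constraints;      // zero values for the Newton updates
BlockSparseMatrix<double> system_matrix;
SparseMatrix<double>      pressure_mass_matrix;  // operand for the block preconditioner
BlockVector<double>       present_solution;
BlockVector<double>       newton_update;
BlockVector<double>       evaluation_point;      // temporary iterate used in the line search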
In this problem we set the velocity along the upper surface of the cavity to be one and zero on the other three walls. The right-hand side function is zero, so we do not need to set it in this tutorial. The number of components of the boundary function is dim+1. In practice, the boundary values are applied to our solution through a ConstraintMatrix which is obtained by using VectorTools::interpolate_boundary_values. The components of the boundary value function are required to be chosen according to the finite element space. Therefore we have to define the boundary value of the pressure even though we actually do not need it.
The following function represents the boundary values:
template <int dim>
void BoundaryValues<dim>::vector_value(const Point<dim> &p, Vector<double> &values) const
{
  for (unsigned int c = 0; c < this->n_components; ++c)
    values(c) = BoundaryValues<dim>::value(p, c);
}
BlockSchurPreconditioner for Navier Stokes equations
The block Schur complement preconditioner is defined in this part. As discussed in the introduction, the preconditioner in Krylov iterative methods is implemented as a matrix-vector product operator. In practice, the Schur complement preconditioner is decomposed as a product of three matrices (as presented in the first section). The \(\tilde{A}^{-1}\) in the first factor involves a solve for the linear system \(\tilde{A}x=b\). Here we solve this system via a direct solver for simplicity. The computation involved in the second factor is a simple matrix-vector multiplication. The Schur complement \(\tilde{S}\) can be well approximated by the pressure mass matrix and its inverse can be obtained through an inexact solver. Because the pressure mass matrix is symmetric and positive definite, we can use CG to solve the corresponding linear system.
template <class PreconditionerMp>
class BlockSchurPreconditioner : public Subscriptor
Notice that the factorization of the matrix in the (0,0) block is performed in the constructor. This way, every application of the preconditioner no longer requires recomputing the matrix factors.
The constructor of this class looks very similar to the one in step-22. The only difference is the viscosity and the Augmented Lagrangian coefficient gamma.
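Once the factorization is available, applying the preconditioner amounts to the product of the three factors given in the introduction. A sketch of the corresponding vmult() follows; the member names stokes_matrix, pressure_mass_matrix, mp_preconditioner and A_inverse are assumptions about the class layout, not necessarily the program's exact names:
template <class PreconditionerMp>
void BlockSchurPreconditioner<PreconditionerMp>::vmult(BlockVector<double> &      dst,
                                                       const BlockVector<double> &src) const
{
  Vector<double> utmp(src.block(0));
  // Last factor: apply the approximate Schur complement inverse,
  // S~^{-1} ~ -(nu + gamma) M_p^{-1}, using CG on the pressure mass matrix.
  {
    SolverControl            solver_control(1000, 1e-6 * src.block(1).l2_norm());
    SolverCG<Vector<double>> cg(solver_control);
    dst.block(1) = 0.0;
    cg.solve(pressure_mass_matrix, dst.block(1), src.block(1), mp_preconditioner);
    dst.block(1) *= -(viscosity + gamma);
  }
  // Middle factor: utmp = src_u - B^T * dst_p.
  {
    stokes_matrix.block(0, 1).vmult(utmp, dst.block(1));
    utmp *= -1.0;
    utmp += src.block(0);
  }
  // First factor: solve with the factorized tilde-A block.
  A_inverse.vmult(dst.block(0), utmp);
}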
We renumber the components to have all velocity DoFs come before the pressure DoFs to be able to split the solution vector in two blocks which are separately accessed in the block preconditioner.
In Newton's scheme, we first apply the boundary condition on the solution obtained from the initial step. To make sure the boundary conditions remain satisfied during Newton's iteration, zero boundary conditions are used for the update \(\delta u^k\). Therefore we set up two different constraint objects.
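In code this might be set up roughly as follows (a sketch only; the boundary id, component mask and exact calls are assumptions consistent with step-22):
const FEValuesExtractors::Vector velocities(0);
// Nonzero constraints: hanging nodes plus the lid-driven cavity boundary values,
// used only for the initial (Stokes-like) solve.
nonzero_constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler, nonzero_constraints);
VectorTools::interpolate_boundary_values(dof_handler,
                                         0,
                                         BoundaryValues<dim>(),
                                         nonzero_constraints,
                                         fe.component_mask(velocities));
nonzero_constraints.close();
// Zero constraints: same structure, but homogeneous values, so that adding the
// update delta u^k never perturbs the boundary values of u^k.
zero_constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler, zero_constraints);
VectorTools::interpolate_boundary_values(dof_handler,
                                         0,
                                         ZeroFunction<dim>(dim + 1),
                                         zero_constraints,
                                         fe.component_mask(velocities));
zero_constraints.close();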
This function builds the system matrix and right-hand side that we currently work on. The initial_step argument is used to determine which set of constraints we apply (nonzero for the initial step and zero otherwise). The assemble_matrix flag determines whether to assemble the whole system or only the right-hand side vector.
For the linearized system, we create temporary storage for the present velocity, its gradient, and the present pressure. In practice, they are all evaluated at the quadrature points through the shape functions.
The assembly is similar to step-22. An additional term with gamma as a coefficient is the Augmented Lagrangian (AL), which is assembled via grad-div stabilization. As we discussed in the introduction, the bottom right block of the system matrix should be zero. Since the pressure mass matrix is used while creating the preconditioner, we assemble it here and then move it into a separate SparseMatrix at the end (same as in step-22).
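The cell contribution for the Jacobian might then look roughly like this inside the usual loop over quadrature points q and shape function indices i, j (a sketch; the variable names for the precomputed shape function values follow step-22 conventions and are assumptions):
local_matrix(i, j) +=
  (viscosity * scalar_product(grad_phi_u[j], grad_phi_u[i])   // nu (grad du, grad v)
   + present_velocity_gradients[q] * phi_u[j] * phi_u[i]      // (du . grad u^k, v)
   + grad_phi_u[j] * present_velocity_values[q] * phi_u[i]    // (u^k . grad du, v)
   - div_phi_u[i] * phi_p[j]                                  // -(dp, div v)
   - phi_p[i] * div_phi_u[j]                                  // -(q, div du)
   + gamma * div_phi_u[j] * div_phi_u[i]                      // grad-div / augmented Lagrangian term
   + phi_p[i] * phi_p[j])                                     // pressure mass matrix for the preconditioner
  * fe_values.JxW(q);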
Note that setting this pressure block to zero is not identical to not assembling anything in this block, because this operation here will (incorrectly) delete diagonal entries that come in from hanging node constraints for pressure DoFs. This means that our whole system matrix will have rows that are completely zero. Luckily, FGMRES handles these rows without any problem.
In this function, we use FGMRES together with the block preconditioner, which is defined at the beginning of the program, to solve the linear system. What we obtain at this step is the solution vector. If this is the initial step, the solution vector gives us an initial guess for the Navier Stokes equations. For the initial step, nonzero constraints are applied in order to make sure boundary conditions are satisfied. In the following steps, we will solve for the Newton update so zero constraints are used.
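A sketch of this solve follows; the constructor arguments of BlockSchurPreconditioner are an assumption about its interface, and the tolerances follow the discussion in the introduction:
const ConstraintMatrix &constraints_used =
  initial_step ? nonzero_constraints : zero_constraints;
// Outer FGMRES solve with a loose relative tolerance (truncated Newton).
SolverControl solver_control(system_matrix.m(), 1e-4 * system_rhs.l2_norm());
SolverFGMRES<BlockVector<double>> gmres(solver_control);
// ILU preconditioner for the pressure mass matrix solves inside the block preconditioner.
SparseILU<double> pmass_preconditioner;
pmass_preconditioner.initialize(pressure_mass_matrix,
                                SparseILU<double>::AdditionalData());
const BlockSchurPreconditioner<SparseILU<double>> preconditioner(
  gamma, viscosity, system_matrix, pressure_mass_matrix, pmass_preconditioner);
gmres.solve(system_matrix, newton_update, system_rhs, preconditioner);
constraints_used.distribute(newton_update);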
After finding a good initial guess on the coarse mesh, we hope to decrease the error through refining the mesh. Here we do adaptive refinement similar to step-15 except that we use the Kelly estimator on the velocity only. We also need to transfer the current solution to the next mesh using the SolutionTransfer class.
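The estimation and transfer could be sketched as follows (quadrature degree, refinement fractions, and the member name degree are assumptions):
// Estimate the error using the velocity components only, mark cells,
// and prepare to carry the current solution over to the refined mesh.
Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
const FEValuesExtractors::Vector velocity(0);
KellyErrorEstimator<dim>::estimate(dof_handler,
                                   QGauss<dim - 1>(degree + 1),
                                   typename FunctionMap<dim>::type(),
                                   present_solution,
                                   estimated_error_per_cell,
                                   fe.component_mask(velocity));
GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                estimated_error_per_cell,
                                                0.3,
                                                0.0);
triangulation.prepare_coarsening_and_refinement();
SolutionTransfer<dim, BlockVector<double>> solution_transfer(dof_handler);
solution_transfer.prepare_for_coarsening_and_refinement(present_solution);
triangulation.execute_coarsening_and_refinement();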
Then the DoFHandler is set up and constraints are generated. We also create a temporary vector "tmp", whose size is consistent with the solution on the new mesh.
Transfer the solution from the coarse to the fine mesh and apply boundary value constraints to the newly transferred solution. Note that present_solution is still a vector corresponding to the old mesh.
Finally set up matrix and vectors and set the present_solution to the interpolated data.
initialize_system();
present_solution = tmp;
}
StationaryNavierStokes<dim>::newton_iteration
This function implements the Newton iteration with given tolerance, maximum number of iterations, and the number of mesh refinements to do. "is_initial_step" is the flag to tell us whether "setup_system" is necessary, and which part, system matrix or right hand side vector, should be assembled. If we do a line search, the right hand side is already assembled while checking the residual norm in the last iteration. Therefore, we just need to assemble the system matrix at the current iteration. The last argument "output_result" determines whether output should be produced.
To make sure our solution gets closer to the exact solution, we update it with a weight \(\alpha\) such that the new residual is smaller than that of the last step; this is done in the following loop. The same line search method can be found in step-15.
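A sketch of this loop; the helper compute_residual(), which assembles the right-hand side at the trial point and returns its norm, is an assumed interface:
// Backtracking line search: halve alpha until the residual of the damped
// update is smaller than the residual of the previous iterate.
double last_res = current_res;
for (double alpha = 1.0; alpha > 1e-5; alpha *= 0.5)
  {
    evaluation_point = present_solution;
    evaluation_point.add(alpha, newton_update);   // trial point x^k + alpha * delta x^k
    const double res = compute_residual(evaluation_point);
    if (res < last_res)
      {
        present_solution = evaluation_point;      // accept the damped step
        last_res         = res;
        break;
      }
  }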
This function provides us with an initial guess by using the continuation method we discussed in the introduction. The Reynolds number is increased step by step until we reach the target value. By experiment, the solution to the Stokes equations is good enough as an initial guess for the NSE with Reynolds number 1000, so we start there. To make sure the solution from the previous problem is close enough to the next one, the step size must be small enough.
In our test case, we do not know the analytical solution. This function outputs the velocity components along x=0.5 and y from 0 to 1 so they can be compared with data from the literature.
This is the last step of this program. In this part, we generate the grid and run the other functions. The maximum number of refinements can be set by the argument.
If the viscosity is smaller than 1/1000, we have to first search for an initial guess via the continuation method. Note that the search is always done on the initial mesh, that is the \(8 \times 8\) mesh in this program. After that, we just do the same as we did when the viscosity is larger than 1/1000: run Newton's iteration, refine the mesh, transfer solutions, and repeat.
When the viscosity is larger than 1/1000, the solution to Stokes equations is good enough as an initial guess. If so, we do not need to search for the initial guess using a continuation method. Newton's iteration can be started directly.
Now we use the method we discussed above to solve Navier Stokes equations with viscosity 1/400 and 1/7500.
Test case 1: Re=400
In the first test case the viscosity is set to be 1/400. As we discussed in the introduction, the initial guess is the solution to the corresponding Stokes problem. In the following table, the residuals at each Newton iteration on every mesh are shown. The data in the table shows that Newton's iteration converges quadratically.
Re=400       | Mesh0              | Mesh1              | Mesh2              | Mesh3              | Mesh4
Newton iter  | Residual    FGMRES | Residual    FGMRES | Residual    FGMRES | Residual    FGMRES | Residual    FGMRES
1            | 3.7112e-03  5      | 6.4189e-03  3      | 2.4338e-03  3      | 1.0570e-03  3      | 4.9499e-04  3
2            | 7.0849e-04  5      | 9.9458e-04  5      | 1.1409e-04  6      | 1.3544e-05  6      | 1.4171e-06  6
3            | 1.9980e-05  5      | 4.5007e-05  5      | 2.9020e-08  5      | 4.4021e-10  6      | 6.3435e-11  6
4            | 2.3165e-09  6      | 1.6891e-07  5      | 1.2338e-14  7      | 1.8506e-14  8      | 8.8563e-15  8
5            | 1.2585e-13  7      | 1.4520e-11  6      | 1.9044e-13  8      |                    |
6            |                    | 1.3998e-15  8      |                    |                    |
The following figures show the sequence of generated grids. For the case of Re=400, the initial guess is obtained by solving Stokes on an \(8 \times 8\) mesh, and the mesh is refined adaptively. Between meshes, the solution from the coarse mesh is interpolated to the fine mesh to be used as an initial guess.
The following picture shows the streamlines of the lid-driven cavity flow at Re=400.
Then the solution is compared with a reference solution from [4]; the reference solution data can be found in the file "ref_2d_ghia_u.txt".
Test case 2: Re=7500
Newton's iteration requires a good initial guess. However, the nonlinear term dominates when the Reynolds number is large, so that the solution to the Stokes equations may be far away from the exact solution. If the Stokes solution acts as the initial guess, the convergence is lost. The following picture shows that the nonlinear iteration gets stuck and the residual no longer decreases in further iterations.
The initial guess, therefore, has to be obtained via the continuation method discussed in the introduction. Here the step size in the continuation method, that is the difference in Reynolds number between two consecutive auxiliary problems, is 2000, and the initial mesh is of size \(32 \times 32\). After obtaining an initial guess, the mesh is refined as in the previous test case. The following picture shows that at each refinement Newton's iteration has quadratic convergence. In total, 52 Newton iterations are executed to solve this test case.
We also show the residual from each step of Newton's iteration on every mesh. The quadratic convergence is clearly visible in the table.
Re=7500      | Mesh0              | Mesh1              | Mesh2              | Mesh3              | Mesh4
Newton iter  | Residual    FGMRES | Residual    FGMRES | Residual    FGMRES | Residual    FGMRES | Residual    FGMRES
1            | 1.8922e-06  6      | 4.2506e-03  3      | 1.4299e-03  3      | 4.8793e-04  2      | 1.8998e-04  2
2            | 3.1644e-09  8      | 1.3732e-03  7      | 4.1506e-04  7      | 9.1119e-05  8      | 1.3555e-05  8
3            | 1.7611e-14  9      | 2.1946e-04  6      | 1.7881e-05  6      | 5.2678e-07  7      | 9.3739e-09  7
4            |                    | 8.8269e-06  6      | 6.8210e-09  7      | 2.2770e-11  8      | 1.2588e-13  9
5            |                    | 1.2974e-07  7      | 1.2515e-13  9      | 1.7801e-14  1      |
6            |                    | 4.4352e-11  7      |                    |                    |
7            |                    | 6.2863e-15  9      |                    |                    |
The sequence of generated grids looks like this:
We compare our solution with the reference solution from [5].
The following picture presents the graphical result.
Furthermore, the error consists of the nonlinear error, which decreases as Newton's iteration proceeds, and the discretization error, which depends on the mesh size. That is why we have to refine the mesh and repeat Newton's iteration on the next finer mesh. From the table above, we can see that the residual (nonlinear error) is below 1e-12 on each mesh, but the following picture shows us the difference between solutions on subsequently finer meshes.
Possibilities for extensions
Compare to other solvers
It is easy to compare the currently implemented linear solver to just using UMFPACK for the whole linear system. You need to remove the null space consisting of the constant pressures, which is done in step-56. More interesting is the comparison to other state of the art preconditioners like PCD. It turns out that the preconditioner here is very competitive, as can be seen in the paper [2].
The following table shows the timing results between our iterative approach (FGMRES) compared to a direct solver (UMFPACK) for the whole system with viscosity set to 1/400. Even though we use the same direct solver for the velocity block in the iterative solver, it is considerably faster and consumes less memory. This will be even more pronounced in 3d.
Refinement level | DoFs   | Iterative: Total/s (Setup/s) | Direct: Total/s (Setup/s)
5                | 9539   | 0.10 (0.06)                  | 0.13 (0.12)
6                | 37507  | 0.58 (0.37)                  | 1.03 (0.97)
7                | 148739 | 3.59 (2.73)                  | 7.78 (7.53)
8                | 592387 | 29.17 (24.94)                | (>4GB RAM)
3D computations
The code is set up to also run in 3d. Of course the reference values are different, see [6] for example. High resolution computations are not doable with this example as is, because a direct solver for the velocity block does not work well in 3d. Rather, a parallel solver based on algebraic or geometric multigrid is needed. See below.
Parallelization
For larger computations, especially in 3D, it is necessary to implement MPI parallel solvers and preconditioners. A good starting point would be step-55, which uses algebraic multigrid for the velocity block for the Stokes equations.