This tutorial depends on step-15, step-22.
This program was contributed by Liang Zhao and Timo Heister.
This material is based upon work partially supported by National Science Foundation grant DMS1522191 and the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-0949446 and The University of California-Davis.
In this tutorial we show how to solve the incompressible Navier-Stokes equations (NSE) with Newton's method. The flow we consider here is assumed to be steady. In a domain \(\Omega \subset \mathbb{R}^{d}\), \(d=2,3\), with a piecewise smooth boundary \(\partial \Omega\), and a given force field \(\textbf{f}\), we seek a velocity field \(\textbf{u}\) and a pressure field \(p\) satisfying
\begin{eqnarray*} - \nu \Delta\textbf{u} + (\textbf{u} \cdot \nabla)\textbf{u} + \nabla p &=& \textbf{f}\\ - \nabla \cdot \textbf{u} &=& 0. \end{eqnarray*}
Unlike the Stokes equations as discussed in step-22, the NSE are a nonlinear system of equations because of the convective term \((\textbf{u} \cdot \nabla)\textbf{u}\). The first step of computing a numerical solution is to linearize the system and this will be done using Newton's method. A time-dependent problem is discussed in step-35, where the system is linearized using the solution from the last time step and no nonlinear solve is necessary.
We define a nonlinear function whose root is a solution to the NSE by
\begin{eqnarray*} F(\mathbf{u}, p) = \begin{pmatrix} - \nu \Delta\mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} + \nabla p - \mathbf{f} \\ - \nabla \cdot \mathbf{u} \end{pmatrix}. \end{eqnarray*}
Assuming the initial guess is good enough to guarantee the convergence of Newton's iteration and denoting \(\textbf{x} = (\textbf{u}, p)\), Newton's iteration on a vector function can be defined as
\begin{eqnarray*} \textbf{x}^{k+1} = \textbf{x}^{k} - (\nabla F(\textbf{x}^{k}))^{-1} F(\textbf{x}^{k}), \end{eqnarray*}
where \(\textbf{x}^{k+1}\) is the approximate solution in step \(k+1\), \(\textbf{x}^{k}\) represents the solution from the previous step, and \(\nabla F(\textbf{x}^{k})\) is the Jacobian matrix evaluated at \(\textbf{x}^{k}\). A similar iteration can be found in step-15.
The Newton iteration formula implies the new solution is obtained by adding an update term to the old solution. Instead of evaluating the Jacobian matrix and taking its inverse, we consider the update term as a whole, that is
\begin{eqnarray*} \delta \textbf{x}^{k} = - (\nabla F(\textbf{x}^{k}))^{-1} F(\textbf{x}^{k}), \end{eqnarray*}
where \(\textbf{x}^{k+1}=\textbf{x}^{k}+\delta \textbf{x}^{k}\).
We can find the update term by solving the system
\begin{eqnarray*} \nabla F(\textbf{x}^{k}) \delta \textbf{x}^{k} = -F(\textbf{x}^{k}). \end{eqnarray*}
Here, the left-hand side of the previous equation represents the directional gradient of \(F(\textbf{x})\) along \(\delta \textbf{x}^{k}\) at \(\textbf{x}^{k}\). By definition, the directional gradient is given by
\begin{eqnarray*} & &\nabla F(\mathbf{u}^{k}, p^{k}) (\delta \mathbf{u}^{k}, \delta p^{k}) \\ \\ &=& \lim_{\epsilon \to 0} \frac{1}{\epsilon} \left( F(\mathbf{u}^{k} + \epsilon \delta \mathbf{u}^{k}, p^{k} + \epsilon \delta p^{k}) - F(\mathbf{u}^{k}, p^{k}) \right)\\ \\ &=& \lim_{\epsilon \to 0} \frac{1}{\epsilon} \begin{pmatrix} - \epsilon \nu \Delta \delta \mathbf{u}^{k} + \epsilon \mathbf{u}^{k} \cdot \nabla \delta \mathbf{u}^{k} + \epsilon \delta \mathbf{u}^{k} \cdot \nabla \mathbf{u}^{k} + \epsilon^{2} \delta \mathbf{u}^{k} \cdot \nabla \delta \mathbf{u}^{k} + \epsilon \nabla \delta p^{k}\\ - \epsilon \nabla \cdot \delta \mathbf{u}^{k} \end{pmatrix} \\ \\ &=& \begin{pmatrix} - \nu \Delta \delta \mathbf{u}^{k} + \mathbf{u}^{k} \cdot \nabla \delta \mathbf{u}^{k} + \delta \mathbf{u}^{k} \cdot \nabla \mathbf{u}^{k} + \nabla \delta p^{k}\\ - \nabla \cdot \delta \mathbf{u}^{k} \end{pmatrix}. \end{eqnarray*}
Therefore, we arrive at the linearized system:
\begin{eqnarray*} -\nu \Delta \delta \mathbf{u}^{k} + \mathbf{u}^{k} \cdot \nabla \delta \mathbf{u}^{k} + \delta \mathbf{u}^{k} \cdot \nabla \mathbf{u}^{k} + \nabla \delta p^{k} = -F(\mathbf{x}^k), \\ -\nabla \cdot\delta \mathbf{u}^{k} = \nabla \cdot \mathbf{u}^{k}, \end{eqnarray*}
where \(\textbf{u}^k\) and \(p^k\) are the solutions from the previous iteration. Additionally, the right hand side of the second equation is not zero since the discrete solution is not exactly divergence free (the continuous solution is). The right hand side here acts as a correction which drives the discrete velocity toward being divergence free over the course of Newton's iteration. In this linear system, the only unknowns are the update terms \(\delta \textbf{u}^{k}\) and \(\delta p^{k}\), and we can use a similar strategy to the one used in step-22 (and derive the weak form in the same way).
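Written out in full, i.e., substituting the definition of \(F\) on the right hand side, the system for the updates reads
\begin{eqnarray*} -\nu \Delta \delta \mathbf{u}^{k} + \mathbf{u}^{k} \cdot \nabla \delta \mathbf{u}^{k} + \delta \mathbf{u}^{k} \cdot \nabla \mathbf{u}^{k} + \nabla \delta p^{k} &=& \mathbf{f} + \nu \Delta \mathbf{u}^{k} - \mathbf{u}^{k} \cdot \nabla \mathbf{u}^{k} - \nabla p^{k}, \\ -\nabla \cdot \delta \mathbf{u}^{k} &=& \nabla \cdot \mathbf{u}^{k}, \end{eqnarray*}
so the right hand side is nothing but the residual of the current approximation \((\mathbf{u}^{k}, p^{k})\).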
Now, Newton's iteration can be used to solve for the update terms: in each step, we assemble the Jacobian matrix and the residual \(-F(\textbf{x}^{k})\), solve the linearized system above for \(\delta \textbf{u}^{k}\) and \(\delta p^{k}\), set \(\textbf{x}^{k+1}=\textbf{x}^{k}+\delta \textbf{x}^{k}\), and repeat until the nonlinear residual is small enough.
The initial guess needs to be close enough to the solution for Newton's method to converge; hence, finding a good starting value is crucial to the nonlinear solver.
When the viscosity \(\nu\) is large, a good initial guess can be obtained by solving the Stokes equation with viscosity \(\nu\). While problem dependent, this works for \(\nu \geq 1/400\) for the test problem considered here.
However, the convective term \((\mathbf{u}\cdot\nabla)\mathbf{u}\) will be dominant if the viscosity is small, like \(1/7500\) in test case 2. In this situation, we use a continuation method to set up a series of auxiliary NSEs with viscosity approaching the one in the target NSE. Correspondingly, we create a sequence \(\{\nu_{i}\}\) with \(\nu_{n}= \nu\), and accept that the solutions to two NSE with viscosity \(\nu_{i}\) and \(\nu_{i+1}\) are close if \(|\nu_{i} - \nu_{i+1}|\) is small. Then we use the solution to the NSE with viscosity \(\nu_{i}\) as the initial guess of the NSE with \(\nu_{i+1}\). This can be thought of as a staircase from the Stokes equations to the NSE we want to solve.
That is, we first solve a Stokes problem
\begin{eqnarray*} -\nu_{1} \Delta \textbf{u} + \nabla p &=& \textbf{f}\\ -\nabla \cdot \textbf{u} &=& 0 \end{eqnarray*}
to get the initial guess for
\begin{eqnarray*} -\nu_{1} \Delta \textbf{u} + (\textbf{u} \cdot \nabla)\textbf{u} + \nabla p &=& \textbf{f},\\ -\nabla \cdot \textbf{u} &=& 0, \end{eqnarray*}
which also acts as the initial guess of the continuation method. Here \(\nu_{1}\) is relatively large so that the solution to the Stokes problem with viscosity \(\nu_{1}\) can be used as an initial guess for the NSE in Newton's iteration.
Then the solution to
\begin{eqnarray*} -\nu_{i} \Delta \textbf{u} + (\textbf{u} \cdot \nabla)\textbf{u} + \nabla p &=& \textbf{f},\\ -\nabla \cdot \textbf{u} &=& 0. \end{eqnarray*}
acts as the initial guess for
\begin{eqnarray*} -\nu_{i+1} \Delta \textbf{u} + (\textbf{u} \cdot \nabla)\textbf{u} + \nabla p &=& \textbf{f},\\ -\nabla \cdot \textbf{u} &=& 0. \end{eqnarray*}
This process is repeated with a sequence of viscosities \(\{\nu_i\}\) that is determined experimentally so that the final solution can be used as a starting guess for the Newton iteration.
At each step of Newton's iteration, the problem results in solving a saddle point system of the form
\begin{eqnarray*} \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix} \begin{pmatrix} U \\ P \end{pmatrix} = \begin{pmatrix} F \\ 0 \end{pmatrix}. \end{eqnarray*}
This system matrix has the same block structure as the one in step-22. However, the matrix \(A\) at the top left corner is not symmetric because of the nonlinear term. Instead of solving the above system, we can solve the equivalent system
\begin{eqnarray*} \begin{pmatrix} A + \gamma B^TW^{-1}B & B^{T} \\ B & 0 \end{pmatrix} \begin{pmatrix} U \\ P \end{pmatrix} = \begin{pmatrix} F \\ 0 \end{pmatrix} \end{eqnarray*}
with a parameter \(\gamma\) and an invertible matrix \(W\). Here \(\gamma B^TW^{-1}B\) is the Augmented Lagrangian term; see [1] for details.
Denoting the system matrix of the new system by \(G\) and the right-hand side by \(b\), we solve it iteratively with right preconditioning \(P^{-1}\) as \(GP^{-1}y = b\), where
\begin{eqnarray*} P^{-1} = \begin{pmatrix} \tilde{A} & B^T \\ 0 & \tilde{S} \end{pmatrix}^{-1} \end{eqnarray*}
with \(\tilde{A} = A + \gamma B^TW^{-1}B\) and \(\tilde{S}\) the corresponding Schur complement \(\tilde{S} = B \tilde{A}^{-1} B^T\). If we let \(W = M_p\), where \(M_p\) is the pressure mass matrix, then \(\tilde{S}^{-1}\) can be approximated by
\begin{eqnarray*} \tilde{S}^{-1} \approx -(\nu+\gamma)M_p^{-1}. \end{eqnarray*}
See [1] for details.
We decompose \(P^{-1}\) as
\begin{eqnarray*} P^{-1} = \begin{pmatrix} \tilde{A}^{-1} & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} I & -B^T \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & \tilde{S}^{-1} \end{pmatrix}. \end{eqnarray*}
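Applying this factorization to a block vector \((\textbf{v}_u, \textbf{v}_p)^T\) therefore amounts to one solve with \(\tilde{S}\), one multiplication with \(B^T\), and one solve with \(\tilde{A}\):
\begin{eqnarray*} \textbf{w}_p &=& \tilde{S}^{-1} \textbf{v}_p, \\ \textbf{w}_u &=& \tilde{A}^{-1} \left( \textbf{v}_u - B^T \textbf{w}_p \right), \end{eqnarray*}
and the result is \((\textbf{w}_u, \textbf{w}_p)^T\). This is exactly the sequence of operations the preconditioner class below performs.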
Here two inexact solvers will be needed for \(\tilde{A}^{-1}\) and \(\tilde{S}^{-1}\), respectively (see [1]). Since the pressure mass matrix is symmetric and positive definite, CG with ILU as a preconditioner is appropriate to use for \(\tilde{S}^{-1}\). For simplicity, we use the direct solver UMFPACK for \(\tilde{A}^{-1}\). The last ingredient is a sparse matrix-vector product with \(B^T\). Instead of computing the matrix product in the augmented Lagrangian term in \(\tilde{A}\), we assemble the grad-div stabilization \((\nabla \cdot \phi _{i}, \nabla \cdot \phi _{j}) \approx (B^T M_p^{-1}B)_{ij}\), as explained in [2].
We use the lid-driven cavity flow as our test case; see [3] for details. The computational domain is the unit square and the right-hand side is \(\textbf{f}=\textbf{0}\). The boundary condition is
\begin{eqnarray*} (u(x, y), v(x,y)) &=& (1,0) \qquad\qquad \textrm{if}\ y = 1 \\ (u(x, y), v(x,y)) &=& (0,0) \qquad\qquad \textrm{otherwise}. \end{eqnarray*}
When solving this problem, the error consists of the nonlinear error (from Newton's iteration) and the discretization error (dependent on mesh size). The nonlinear part decreases with each Newton iteration and the discretization error reduces with mesh refinement. In this example, the solution from the coarse mesh is transferred to successively finer meshes and used as an initial guess. Therefore, the nonlinear error is always brought below the tolerance of Newton's iteration and the discretization error is reduced with each mesh refinement.
Inside the loop, three solvers are needed: one for \(\tilde{A}^{-1}\), one for \(M_p^{-1}\), and one for \(Gx=b\). The first two are invoked in the preconditioner and the outer solver gives us the update term. Overall convergence is controlled by the nonlinear residual; since Newton's method does not require the linearized system to be solved exactly, we employ FGMRES with a relative tolerance of only 1e-4 for the outer linear solver. In effect, this is a truncated Newton solve for this system. As described in step-22, the inner linear solves are also not required to be done very accurately. Here we use CG with a relative tolerance of 1e-6 for the pressure mass matrix. As expected, we still see convergence of the nonlinear residual down to 1e-14. Also, we use a simple line search algorithm for globalization of the Newton method.
The cavity reference values for \(\mathrm{Re}=400\) and \(\mathrm{Re}=7500\) are from [4] and [5], respectively, where \(\mathrm{Re}\) is the Reynolds number (see also [8]). Here the viscosity is defined by \(1/\mathrm{Re}\). Even though we can still find a solution for \(\mathrm{Re}=10000\) and the references contain results for comparison, we limit our discussion here to \(\mathrm{Re}=7500\). This is because the solution is no longer stationary starting around \(\mathrm{Re}=8000\), but instead becomes periodic; see [7] for details.
As usual, we start by including some well-known files:
To transfer solutions between meshes, this file is included:
This file includes UMFPACK, the direct solver:
And the one for the ILU preconditioner:
The NavierStokesProblem class template

This class manages the matrices and vectors described in the introduction: in particular, we store a BlockVector for the current solution, current Newton update, and the line search update. We also store two AffineConstraints objects: one which enforces the Dirichlet boundary conditions and one that sets all boundary values to zero. The first constrains the solution vector while the second constrains the updates (i.e., we never update boundary values, so we force the relevant update vector values to be zero).
In this problem we set the velocity along the upper surface of the cavity to be one and zero on the other three walls. The right hand side function is zero so we do not need to set the right hand side function in this tutorial. The number of components of the boundary function is dim+1. We will ultimately use VectorTools::interpolate_boundary_values to set boundary values, which requires the boundary value functions to have the same number of components as the solution, even if all are not used. Put another way: to make this function happy we define boundary values for the pressure even though we will never actually use them.
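As an illustration, a minimal sketch of such a boundary-value function (assuming the class is called BoundaryValues) only returns one for the \(x\)-velocity component on the lid \(y=1\):

```cpp
template <int dim>
double BoundaryValues<dim>::value(const Point<dim> & p,
                                  const unsigned int component) const
{
  Assert(component < this->n_components,
         ExcIndexRange(component, 0, this->n_components));

  // u = 1 on the moving lid (y = 1), zero on the no-slip walls and for the
  // (unused) pressure component.
  if (component == 0 && std::abs(p[dim - 1] - 1.0) < 1e-10)
    return 1.0;

  return 0.0;
}
```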
As discussed in the introduction, the preconditioner in Krylov iterative methods is implemented as a matrix-vector product operator. In practice, the Schur complement preconditioner is decomposed as a product of three matrices (as presented in the first section). The \(\tilde{A}^{-1}\) in the first factor involves a solve for the linear system \(\tilde{A}x=b\). Here we solve this system via a direct solver for simplicity. The computation involved in the second factor is a simple matrix-vector multiplication. The Schur complement \(\tilde{S}\) can be well approximated by the pressure mass matrix and its inverse can be obtained through an inexact solver. Because the pressure mass matrix is symmetric and positive definite, we can use CG to solve the corresponding linear system.
Notice that the factorization of the matrix at the top left corner, \(\tilde{A}\), is computed once in the constructor; every application of the preconditioner then no longer requires recomputing the matrix factors.
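To illustrate, a sketch of the preconditioner's vmult() function is shown below. The class and member names (for example A_inverse for the UMFPACK factorization of \(\tilde{A}\) set up in the constructor) are assumptions consistent with the discussion above; the function carries out exactly the three steps derived in the introduction.

```cpp
template <class PreconditionerMp>
void BlockSchurPreconditioner<PreconditionerMp>::vmult(
  BlockVector<double> &      dst,
  const BlockVector<double> &src) const
{
  // Step 1: dst_p = S_tilde^{-1} src_p, approximated by -(nu + gamma) M_p^{-1} src_p,
  // computed with CG on the pressure mass matrix.
  {
    SolverControl            control(1000, 1e-6 * src.block(1).l2_norm());
    SolverCG<Vector<double>> cg(control);
    dst.block(1) = 0.0;
    cg.solve(pressure_mass_matrix, dst.block(1), src.block(1), mp_preconditioner);
    dst.block(1) *= -(viscosity + gamma);
  }

  // Step 2: tmp = src_u - B^T dst_p
  Vector<double> tmp(src.block(0).size());
  system_matrix.block(0, 1).vmult(tmp, dst.block(1));
  tmp *= -1.0;
  tmp += src.block(0);

  // Step 3: dst_u = A_tilde^{-1} tmp, using the UMFPACK factorization
  // computed once in the constructor.
  A_inverse.vmult(dst.block(0), tmp);
}
```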
The constructor of this class looks very similar to the one in step-22. The only difference is the viscosity and the Augmented Lagrangian coefficient gamma.
This function initializes the DoFHandler enumerating the degrees of freedom and constraints on the current mesh.
The first step is to associate DoFs with a given mesh.
We renumber the components to have all velocity DoFs come before the pressure DoFs to be able to split the solution vector into two blocks which are accessed separately in the block preconditioner.
In Newton's scheme, we first apply the boundary condition on the solution obtained from the initial step. To make sure the boundary conditions remain satisfied during Newton's iteration, zero boundary conditions are used for the update \(\delta u^k\). Therefore we set up two different constraint objects.
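A sketch of how the two AffineConstraints objects might be built is given below; the member names nonzero_constraints and zero_constraints and the BoundaryValues class are assumptions consistent with the discussion above, and boundary id 0 stands for the whole cavity boundary.

```cpp
const FEValuesExtractors::Vector velocities(0);

// Nonzero constraints: hanging nodes plus the lid-driven-cavity boundary
// values; used to constrain the solution vector itself.
nonzero_constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler, nonzero_constraints);
VectorTools::interpolate_boundary_values(dof_handler,
                                         0,
                                         BoundaryValues<dim>(),
                                         nonzero_constraints,
                                         fe.component_mask(velocities));
nonzero_constraints.close();

// Zero constraints: same structure, but homogeneous boundary values; used for
// the Newton updates so that boundary values are never changed.
zero_constraints.clear();
DoFTools::make_hanging_node_constraints(dof_handler, zero_constraints);
VectorTools::interpolate_boundary_values(dof_handler,
                                         0,
                                         Functions::ZeroFunction<dim>(dim + 1),
                                         zero_constraints,
                                         fe.component_mask(velocities));
zero_constraints.close();
```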
On each mesh the SparsityPattern and the size of the linear system are different. This function initializes them after mesh refinement.
This function builds the system matrix and right hand side that we currently work on. The initial_step argument is used to determine which set of constraints we apply (nonzero for the initial step and zero for the others). The assemble_matrix argument determines whether to assemble the whole system or only the right hand side vector.
For the linearized system, we create temporary storage for the current velocity values and gradients and the current pressure values. In practice, they are all evaluated through the shape functions at the quadrature points.
The assembly is similar to step-22. An additional term with gamma as a coefficient is the Augmented Lagrangian (AL), which is assembled via grad-div stabilization. As we discussed in the introduction, the bottom right block of the system matrix should be zero. Since the pressure mass matrix is used while creating the preconditioner, we assemble it here and then move it into a separate SparseMatrix at the end (same as in step-22).
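As a sketch, the contribution of one quadrature point q to the local matrix could look like the following; the names of the shape-function arrays and of the stored solution values are assumptions matching the previous paragraph.

```cpp
// Local matrix contribution at quadrature point q: viscous term, the two
// linearized convection terms, pressure-velocity coupling, the grad-div
// (Augmented Lagrangian) term with coefficient gamma, and the pressure mass
// matrix that is only needed for the preconditioner.
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
    local_matrix(i, j) +=
      (viscosity * scalar_product(grad_phi_u[j], grad_phi_u[i]) // -nu Laplace(du)
       + present_velocity_gradients[q] * phi_u[j] * phi_u[i]    // du . grad(u^k)
       + grad_phi_u[j] * present_velocity_values[q] * phi_u[i]  // u^k . grad(du)
       - div_phi_u[i] * phi_p[j]                                // grad(dp)
       - phi_p[i] * div_phi_u[j]                                // -div(du)
       + gamma * div_phi_u[j] * div_phi_u[i]                    // grad-div / AL term
       + phi_p[i] * phi_p[j])                                   // pressure mass matrix
      * fe_values.JxW(q);
```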
Finally we move the pressure mass matrix into a separate matrix:
Note that setting this pressure block to zero is not identical to not assembling anything in this block, because this operation here will (incorrectly) delete diagonal entries that come in from hanging node constraints for pressure DoFs. This means that our whole system matrix will have rows that are completely zero. Luckily, FGMRES handles these rows without any problem.
In this function, we use FGMRES together with the block preconditioner, which is defined at the beginning of the program, to solve the linear system. What we obtain at this step is the solution vector. If this is the initial step, the solution vector gives us an initial guess for the Navier-Stokes equations. For the initial step, nonzero constraints are applied in order to make sure boundary conditions are satisfied. In the following steps, we will solve for the Newton update so zero constraints are used.
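In outline, and with the class and member names being assumptions consistent with the earlier discussion, the solve might look like this:

```cpp
// Outer FGMRES solve with the block preconditioner; the relative tolerance of
// 1e-4 is enough because we only compute a Newton update.
const AffineConstraints<double> &constraints_used =
  initial_step ? nonzero_constraints : zero_constraints;

SolverControl solver_control(system_matrix.m(), 1e-4 * system_rhs.l2_norm());
SolverFGMRES<BlockVector<double>> gmres(solver_control);

// ILU on the pressure mass matrix, used inside the preconditioner for the
// Schur complement solve.
SparseILU<double> pmass_preconditioner;
pmass_preconditioner.initialize(pressure_mass_matrix,
                                SparseILU<double>::AdditionalData());

const BlockSchurPreconditioner<SparseILU<double>> preconditioner(
  gamma, viscosity, system_matrix, pressure_mass_matrix, pmass_preconditioner);

gmres.solve(system_matrix, newton_update, system_rhs, preconditioner);
constraints_used.distribute(newton_update);
```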
After finding a good initial guess on the coarse mesh, we hope to decrease the error through refining the mesh. Here we do adaptive refinement similar to step-15 except that we use the Kelly estimator on the velocity only. We also need to transfer the current solution to the next mesh using the SolutionTransfer class.
First the DoFHandler is set up and constraints are generated. Then we create a temporary BlockVector tmp, whose size matches the solution on the new mesh.
Transfer solution from coarse to fine mesh and apply boundary value constraints to the new transferred solution. Note that present_solution is still a vector corresponding to the old mesh.
Finally set up matrix and vectors and set the present_solution to the interpolated data.
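A condensed sketch of these steps, assuming members named as in this discussion, is:

```cpp
// After flagging cells for refinement (Kelly estimator on the velocity, not
// shown), carry the current solution over to the new mesh.
triangulation.prepare_coarsening_and_refinement();
SolutionTransfer<dim, BlockVector<double>> solution_transfer(dof_handler);
solution_transfer.prepare_for_coarsening_and_refinement(present_solution);
triangulation.execute_coarsening_and_refinement();

setup_dofs();                            // new DoFHandler and constraints
BlockVector<double> tmp(dofs_per_block); // sized for the new mesh

solution_transfer.interpolate(present_solution, tmp);
nonzero_constraints.distribute(tmp);     // re-impose the boundary values

initialize_system();                     // matrices and vectors for the new mesh
present_solution = tmp;
```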
This function implements the Newton iteration with given tolerance, maximum number of iterations, and the number of mesh refinements to do.
The argument is_initial_step tells us whether setup_system is necessary, and which part, system matrix or right hand side vector, should be assembled. If we do a line search, the right hand side is already assembled while checking the residual norm in the last iteration. Therefore, we just need to assemble the system matrix at the current iteration. The last argument output_result determines whether or not graphical output should be produced.
To make sure our solution is getting close to the exact solution, we update the solution with a weight alpha such that the new residual is smaller than that of the last step; this is done in the following loop. This is the same line search algorithm used in step-15.
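A sketch of this loop, with assumed member names and a hypothetical helper assemble_rhs() that reassembles only the right hand side, could look like:

```cpp
// Backtracking line search: halve alpha until the residual decreases.
double current_res = 0.0;
for (double alpha = 1.0; alpha > 1e-5; alpha *= 0.5)
  {
    evaluation_point = present_solution;
    evaluation_point.add(alpha, newton_update);
    nonzero_constraints.distribute(evaluation_point);

    assemble_rhs(/*initial_step=*/false); // hypothetical helper: rebuild F(x) only
    current_res = system_rhs.l2_norm();

    if (current_res < last_res)
      break; // accept this step length
  }
present_solution = evaluation_point;
last_res         = current_res;
```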
This function will provide us with an initial guess by using the continuation method we discussed in the introduction. The Reynolds number is increased step by step until we reach the target value. By experiment, the solution to the Stokes equations is good enough to be the initial guess of the NSE with Reynolds number 1000, so we start there. To make sure the solution from the previous problem is close enough to the next one, the step size must be small enough.
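In outline, and assuming a hypothetical helper newton_iteration() with the argument list suggested here, the continuation loop looks like:

```cpp
// Start from a Reynolds number where the Stokes solution is a good initial
// guess, then increase Re step by step up to the target value, reusing the
// previous solution as the initial guess each time.
const double target_Re = 1.0 / viscosity;

double Re = 1000.0;
viscosity = 1.0 / Re;
// newton_iteration(tolerance, max Newton steps, mesh refinements,
//                  is_initial_step, output_result): hypothetical signature.
newton_iteration(1e-12, 50, 0, /*is_initial_step=*/true, /*output_result=*/false);

while (Re < target_Re)
  {
    Re        = std::min(Re + 2000.0, target_Re); // experimentally chosen step size
    viscosity = 1.0 / Re;
    newton_iteration(1e-12, 50, 0, false, false);
  }
```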
This function is the same as in step-22 except that we choose a name for the output file that also contains the Reynolds number (i.e., the inverse of the viscosity in the current context).
In our test case, we do not know the analytical solution. This function outputs the velocity components along \(x=0.5\) and \(0 \leq y \leq 1\) so they can be compared with data from the literature.
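A sketch of how such a line of values can be extracted is shown below; the file name and the sampling resolution are arbitrary choices for illustration.

```cpp
// Sample the velocity along the vertical line x = 0.5 and write it to a text
// file for comparison with the reference data.
std::ofstream out("line-x0.5.txt");
out << "# y u_x u_y" << std::endl;

for (unsigned int i = 0; i <= 100; ++i)
  {
    Point<dim> p;
    p[0]       = 0.5;
    p[dim - 1] = i / 100.0;

    Vector<double> values(dim + 1);
    VectorTools::point_value(dof_handler, present_solution, p, values);

    out << p[dim - 1];
    for (unsigned int c = 0; c < dim; ++c)
      out << ' ' << values(c);
    out << std::endl;
  }
```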
This is the last step of this program. In this part, we generate the grid and then call the other functions. The maximum number of refinements can be set by the argument.
If the viscosity is smaller than \(1/1000\), we have to first search for an initial guess via a continuation method. Note that this search is always done on the initial mesh, that is, the \(8 \times 8\) mesh in this program. After that, we just do the same as we did when the viscosity is larger than \(1/1000\): run Newton's iteration, refine the mesh, transfer solutions, and repeat.
When the viscosity is larger than 1/1000, the solution to the Stokes equations is good enough as an initial guess. In that case, we do not need to search for the initial guess using a continuation method, and Newton's iteration can be started directly.
Now we use the method we discussed above to solve the Navier-Stokes equations with viscosity \(1/400\) and \(1/7500\).
In the first test case the viscosity is set to \(1/400\). As we discussed in the introduction, the initial guess is the solution to the corresponding Stokes problem. In the following table, the residuals at each Newton iteration on every mesh are shown. The data in the table shows that Newton's iteration converges quadratically.
\(\mathrm{Re}=400\):

Newton iter | Mesh0 Residual | Mesh0 FGMRES | Mesh1 Residual | Mesh1 FGMRES | Mesh2 Residual | Mesh2 FGMRES | Mesh3 Residual | Mesh3 FGMRES | Mesh4 Residual | Mesh4 FGMRES |
---|---|---|---|---|---|---|---|---|---|---|
1 | 3.7112e-03 | 5 | 6.4189e-03 | 3 | 2.4338e-03 | 3 | 1.0570e-03 | 3 | 4.9499e-04 | 3 |
2 | 7.0849e-04 | 5 | 9.9458e-04 | 5 | 1.1409e-04 | 6 | 1.3544e-05 | 6 | 1.4171e-06 | 6 |
3 | 1.9980e-05 | 5 | 4.5007e-05 | 5 | 2.9020e-08 | 5 | 4.4021e-10 | 6 | 6.3435e-11 | 6 |
4 | 2.3165e-09 | 6 | 1.6891e-07 | 5 | 1.2338e-14 | 7 | 1.8506e-14 | 8 | 8.8563e-15 | 8 |
5 | 1.2585e-13 | 7 | 1.4520e-11 | 6 | 1.9044e-13 | 8 | ||||
6 | 1.3998e-15 | 8 |
The following figures show the sequence of generated grids. For the case of \(\mathrm{Re}=400\), the initial guess is obtained by solving Stokes on an \(8 \times 8\) mesh, and the mesh is refined adaptively. Between meshes, the solution from the coarse mesh is interpolated to the fine mesh to be used as an initial guess.
The following picture shows the streamlines of the lid-driven cavity flow at \(\mathrm{Re}=400\).
Then the solution is compared with a reference solution from [4]; the reference solution data can be found in the file "ref_2d_ghia_u.txt".
Newton's iteration requires a good initial guess. However, the nonlinear term dominates when the Reynolds number is large, so that the solution to the Stokes equations may be far away from the exact solution. If the Stokes solution is used as the initial guess, convergence is lost. The following picture shows that the nonlinear iteration gets stuck and the residual no longer decreases in further iterations.
The initial guess, therefore, has to be obtained via the continuation method discussed in the introduction. Here the step size in the continuation method, that is the increment in the Reynolds number \(1/\nu\) between consecutive auxiliary problems, is 2000, and the initial mesh is of size \(32 \times 32\). After obtaining an initial guess, the mesh is refined as in the previous test case. The following picture shows that at each refinement Newton's iteration has quadratic convergence. A total of 52 Newton iterations are executed to solve this test case.
We also show the residual from each step of Newton's iteration on every mesh. The quadratic convergence is clearly visible in the table.
\(\mathrm{Re}=7500\):

Newton iter | Mesh0 Residual | Mesh0 FGMRES | Mesh1 Residual | Mesh1 FGMRES | Mesh2 Residual | Mesh2 FGMRES | Mesh3 Residual | Mesh3 FGMRES | Mesh4 Residual | Mesh4 FGMRES |
---|---|---|---|---|---|---|---|---|---|---|
1 | 1.8922e-06 | 6 | 4.2506e-03 | 3 | 1.4299e-03 | 3 | 4.8793e-04 | 2 | 1.8998e-04 | 2 |
2 | 3.1644e-09 | 8 | 1.3732e-03 | 7 | 4.1506e-04 | 7 | 9.1119e-05 | 8 | 1.3555e-05 | 8 |
3 | 1.7611e-14 | 9 | 2.1946e-04 | 6 | 1.7881e-05 | 6 | 5.2678e-07 | 7 | 9.3739e-09 | 7 |
4 | 8.8269e-06 | 6 | 6.8210e-09 | 7 | 2.2770e-11 | 8 | 1.2588e-13 | 9 | ||
5 | 1.2974e-07 | 7 | 1.2515e-13 | 9 | 1.7801e-14 | 1 | ||||
6 | 4.4352e-11 | 7 | ||||||||
7 | 6.2863e-15 | 9 |
The sequence of generated grids looks like this:
We compare our solution with the reference solution from [5]. The following picture presents the graphical result.
Furthermore, the error consists of the nonlinear error, which decreases as we perform Newton iterations, and the discretization error, which depends on the mesh size. That is why we have to refine the mesh and repeat Newton's iteration on the next finer mesh. From the table above, we can see that the residual (nonlinear error) is below 1e-12 on each mesh, but the following picture shows us the difference between solutions on successively finer meshes.
It is easy to compare the currently implemented linear solver to just using UMFPACK for the whole linear system; to do so, you need to remove the nullspace containing the constant pressures, as is done in step-56. More interesting is the comparison to other state-of-the-art preconditioners like PCD. It turns out that the preconditioner here is very competitive, as can be seen in the paper [2].
The following table shows the timing results between our iterative approach (FGMRES) compared to a direct solver (UMFPACK) for the whole system with viscosity set to 1/400. Even though we use the same direct solver for the velocity block in the iterative solver, it is considerably faster and consumes less memory. This will be even more pronounced in 3d.
Refinement Cycle | DoFs | Iterative: Total/s (Setup/s) | Direct: Total/s (Setup/s) |
---|---|---|---|
5 | 9539 | 0.10 (0.06) | 0.13 (0.12) |
6 | 37507 | 0.58 (0.37) | 1.03 (0.97) |
7 | 148739 | 3.59 (2.73) | 7.78 (7.53) |
8 | 592387 | 29.17 (24.94) | (>4GB RAM) |
The code is set up to also run in 3d. Of course the reference values are different, see [6] for example. High resolution computations are not doable with this example as is, because a direct solver for the velocity block does not work well in 3d. Rather, a parallel solver based on algebraic or geometric multigrid is needed. See below.
For larger computations, especially in 3d, it is necessary to implement MPI parallel solvers and preconditioners. A good starting point would be step-55, which uses algebraic multigrid for the velocity block for the Stokes equations. Another option would be to take a look at the list of codes in the deal.II code gallery, which already contains parallel Navier-Stokes solvers.