In this program, we use the interior penalty method and Nitsche's weak boundary conditions to solve Poisson's equation. We use multigrid methods on locally refined meshes, which are generated using a bulk criterion and a standard error estimator based on cell and face residuals. All operators are implemented using the MeshWorker interface.
As in step-12, the discretization relies on finite element spaces which are polynomial inside the mesh cells \(K\in \mathbb T_h\), but have no continuity requirements between cells. Since such functions have two values on each interior face \(F\in \mathbb F_h^i\), one from each side, we define mean value and jump operators as follows: let \(K_1\) and \(K_2\) be the two cells sharing a face, and let the traces of functions \(u_i\) and the outer normal vectors \(\mathbf n_i\) be labeled accordingly. Then, on the face, we let
\[ \{\!\{ u \}\!\} = \frac{u_1 + u_2}2 \]
Note that if such an expression contains a normal vector, the averaging operator turns into a jump; for instance, \(\{\!\{ u \mathbf n \}\!\} = \tfrac12(u_1 \mathbf n_1 + u_2 \mathbf n_2) = \tfrac12(u_1 - u_2)\,\mathbf n_1\), since \(\mathbf n_2 = -\mathbf n_1\). The interior penalty method for the problem
\[ -\Delta u = f \text{ in }\Omega \qquad u = u^D \text{ on } \partial\Omega \]
becomes
\begin{multline*} \sum_{K\in \mathbb T_h} (\nabla u, \nabla v)_K \\ + \sum_{F \in \mathbb F_h^i} \biggl\{4\sigma_F (\{\!\{ u \mathbf n\}\!\}, \{\!\{ v \mathbf n \}\!\})_F - 2 (\{\!\{ \nabla u \}\!\},\{\!\{ v\mathbf n \}\!\})_F - 2 (\{\!\{ \nabla v \}\!\},\{\!\{ u\mathbf n \}\!\})_F \biggr\} \\ + \sum_{F \in \mathbb F_h^b} \biggl\{2\sigma_F (u, v)_F - (\partial_n u,v)_F - (\partial_n v,u)_F \biggr\} \\ = (f, v)_\Omega + \sum_{F \in \mathbb F_h^b} \biggl\{ 2\sigma_F (u^D, v)_F - (\partial_n v,u^D)_F \biggr\}. \end{multline*}
Here, \(\sigma_F\) is the penalty parameter, which is chosen as follows: for a face \(F\) of a cell \(K\), compute the value
\[ \sigma_{F,K} = p(p+1) \frac{|F|_{d-1}}{|K|_d}, \]
where \(p\) is the polynomial degree of the finite element functions and \(|\cdot|_d\) and \(|\cdot|_{d-1}\) denote the \(d\)- and \((d-1)\)-dimensional Hausdorff measures of the corresponding object. If the face is at the boundary, choose \(\sigma_F = \sigma_{F,K}\). For an interior face, we take the average of the two values \(\sigma_{F,K_1}\) and \(\sigma_{F,K_2}\) obtained from the two adjacent cells.
In our finite element program, we distinguish three different integrals, corresponding to the sums over cells, interior faces and boundary faces above. Since the MeshWorker::loop organizes the sums for us, we only need to implement the integrals over each mesh element. The class MatrixIntegrator below has these three functions for the left hand side of the formula, the class RHSIntegrator for the right.
As we will see below, even the error estimate is of the same structure, since it can be written as
\[ \eta^2 = \sum_{K\in \mathbb T_h} \eta_K^2 + \sum_{F \in \mathbb F_h^i} \eta_F^2 + \sum_{F \in \mathbb F_h^b} \eta_F^2, \]
with one contribution from each cell, each interior face, and each boundary face.
Thus, the functions below for assembling the matrix, the right hand side, and the error estimates show that these loops are all generic and can be programmed in the same way.
This program is related to step-12, in that it uses MeshWorker and discontinuous Galerkin methods. While there we solved an advection problem, here we solve a diffusion problem. We also use multigrid preconditioning and a theoretically justified error estimator, see Karakashian and Pascal (2003). The multilevel scheme was discussed in detail in Kanschat (2004). The adaptive iteration and its convergence have been discussed (for triangular meshes) in Hoppe, Kanschat, and Warburton (2009).
The commented program
The include files for the linear algebra: a regular SparseMatrix, which in turn will include the necessary files for the SparsityPattern and Vector classes.
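For orientation, the headers in question are roughly the following (paths as in current deal.II releases):

#include <deal.II/lac/sparse_matrix.h>            // SparseMatrix; pulls in SparsityPattern
#include <deal.II/lac/dynamic_sparsity_pattern.h> // DynamicSparsityPattern used below
#include <deal.II/lac/vector.h>                   // Vector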
MeshWorker separates local integration from the loops over cells and faces. Thus, we have to write local integration classes for generating matrices, the right hand side and the error estimator.
All these classes have the same three functions for integrating over cells, boundary faces and interior faces, respectively. All the information needed for the local integration is provided by MeshWorker::IntegrationInfo<dim>. Note that the signature of the functions cannot be changed, because it is expected by MeshWorker::integration_loop().
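In outline, each of these classes therefore has the following interface, shown here for the matrix integrator discussed next. Note that face() receives two DoFInfo and two IntegrationInfo objects, one for each cell adjacent to the face.

template <int dim>
class MatrixIntegrator : public MeshWorker::LocalIntegrator<dim>
{
public:
  // Integration over a cell
  void cell(MeshWorker::DoFInfo<dim> &dinfo,
            MeshWorker::IntegrationInfo<dim> &info) const override;
  // Integration over a boundary face
  void boundary(MeshWorker::DoFInfo<dim> &dinfo,
                MeshWorker::IntegrationInfo<dim> &info) const override;
  // Integration over an interior face, seen from both adjacent cells
  void face(MeshWorker::DoFInfo<dim> &dinfo1,
            MeshWorker::DoFInfo<dim> &dinfo2,
            MeshWorker::IntegrationInfo<dim> &info1,
            MeshWorker::IntegrationInfo<dim> &info2) const override;
};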
The first class defining local integrators is responsible for computing cell and face matrices. It is used to assemble the global matrix as well as the level matrices.
On each cell, we integrate the Dirichlet form. We use the library of ready-made integrals in LocalIntegrators to avoid writing these loops ourselves. Similarly, we implement Nitsche boundary conditions and the interior penalty fluxes between cells.
The boundary and flux terms need a penalty parameter, which should be adjusted to the cell size and the polynomial degree. A safe choice of this parameter for constant coefficients can be found in LocalIntegrators::Laplace::compute_penalty() and we use this below.
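In outline, and close to the actual step-39 sources (details such as argument order may differ between library versions), the three functions then look like this:

template <int dim>
void MatrixIntegrator<dim>::cell(MeshWorker::DoFInfo<dim> &dinfo,
                                 MeshWorker::IntegrationInfo<dim> &info) const
{
  // The Dirichlet (Laplace) form on the cell.
  LocalIntegrators::Laplace::cell_matrix(dinfo.matrix(0, false).matrix,
                                         info.fe_values());
}

template <int dim>
void MatrixIntegrator<dim>::boundary(MeshWorker::DoFInfo<dim> &dinfo,
                                     MeshWorker::IntegrationInfo<dim> &info) const
{
  // Nitsche's weak Dirichlet boundary condition with the safe penalty.
  const unsigned int deg = info.fe_values().get_fe().tensor_degree();
  LocalIntegrators::Laplace::nitsche_matrix(
    dinfo.matrix(0, false).matrix,
    info.fe_values(),
    LocalIntegrators::Laplace::compute_penalty(dinfo, dinfo, deg, deg));
}

template <int dim>
void MatrixIntegrator<dim>::face(MeshWorker::DoFInfo<dim> &dinfo1,
                                 MeshWorker::DoFInfo<dim> &dinfo2,
                                 MeshWorker::IntegrationInfo<dim> &info1,
                                 MeshWorker::IntegrationInfo<dim> &info2) const
{
  // Interior penalty flux; the four matrices couple the two cells at the face.
  const unsigned int deg = info1.fe_values().get_fe().tensor_degree();
  LocalIntegrators::Laplace::ip_matrix(
    dinfo1.matrix(0, false).matrix,
    dinfo1.matrix(0, true).matrix,
    dinfo2.matrix(0, true).matrix,
    dinfo2.matrix(0, false).matrix,
    info1.fe_values(),
    info2.fe_values(),
    LocalIntegrators::Laplace::compute_penalty(dinfo1, dinfo2, deg, deg));
}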
The second local integrator builds the right hand side. In our example, the right hand side function is zero, such that only the boundary condition is set here in weak form.
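A sketch of the boundary part, implementing the boundary term of the right hand side in the weak formulation above. The function object exact_solution providing \(u^D\) is an assumption here, and the actual program may organize signs and the penalty factor slightly differently; the important point is that they must match the conventions used in the matrix integrator.

template <int dim>
void RHSIntegrator<dim>::boundary(MeshWorker::DoFInfo<dim> &dinfo,
                                  MeshWorker::IntegrationInfo<dim> &info) const
{
  const FEValuesBase<dim> &fe           = info.fe_values();
  Vector<double>          &local_vector = dinfo.vector(0).block(0);

  // Values of the Dirichlet datum u^D in the quadrature points.
  std::vector<double> boundary_values(fe.n_quadrature_points);
  exact_solution.value_list(fe.get_quadrature_points(), boundary_values);

  // The factor 2*sigma_F appearing in the weak form above.
  const unsigned int deg = fe.get_fe().tensor_degree();
  const double penalty =
    2. * LocalIntegrators::Laplace::compute_penalty(dinfo, dinfo, deg, deg);

  for (unsigned int k = 0; k < fe.n_quadrature_points; ++k)
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
      local_vector(i) += (penalty * fe.shape_value(i, k)                // 2 sigma_F v_i
                          - fe.normal_vector(k) * fe.shape_grad(i, k))  // - d_n v_i
                         * boundary_values[k] * fe.JxW(k);
}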
The third local integrator is responsible for the contributions to the error estimate. This is the standard energy estimator due to Karakashian and Pascal (2003).
At the boundary, we use simply a weighted form of the boundary residual, namely the norm of the difference between the finite element solution and the correct boundary condition.
Finally we have an integrator for the error. Since the energy norm for discontinuous Galerkin problems not only involves the difference of the gradient inside the cells, but also the jump terms across faces and at the boundary, we cannot just use VectorTools::integrate_difference(). Instead, we use the MeshWorker interface to compute the error ourselves.
There are several different ways to define this energy norm, but all of them are equivalent to each other uniformly with respect to the mesh size (some not uniformly with respect to the polynomial degree). Here, we choose
\[ \|u\|_{1,h}^2 = \sum_{K\in \mathbb T_h} \|\nabla u\|_K^2 + \sum_{F \in \mathbb F_h^i} 4\sigma_F \|\{\!\{ u \mathbf n \}\!\}\|_F^2 + \sum_{F \in \mathbb F_h^b} 2\sigma_F \|u\|_F^2. \]
Here we have the integration on cells. There is currently no good interface in MeshWorker that would allow us to access values of regular functions in the quadrature points. Thus, we have to create the vectors for the exact function's values and gradients inside the cell integrator. After that, everything is as before and we just add up the squares of the differences.
Additionally to computing the error in the energy norm, we use the capability of the mesh worker to compute two functionals at the same time and compute the L2-error in the same loop. Obviously, this one does not have any jump terms and only appears in the integration on cells.
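As a sketch of the cell part: the object exact_solution and the exact selector setup are assumptions here; block 0 of the local results collects the (squared) energy norm contributions and block 1 the (squared) L2 contributions.

template <int dim>
void ErrorIntegrator<dim>::cell(MeshWorker::DoFInfo<dim> &dinfo,
                                MeshWorker::IntegrationInfo<dim> &info) const
{
  const FEValuesBase<dim> &fe = info.fe_values();

  // Values and gradients of the exact solution in the quadrature points,
  // filled by hand because MeshWorker does not provide them for us.
  std::vector<Tensor<1, dim>> exact_gradients(fe.n_quadrature_points);
  std::vector<double>         exact_values(fe.n_quadrature_points);
  exact_solution.gradient_list(fe.get_quadrature_points(), exact_gradients);
  exact_solution.value_list(fe.get_quadrature_points(), exact_values);

  // Values and gradients of the finite element function, provided by the
  // info object because we selected them in the VectorSelector beforehand.
  const std::vector<Tensor<1, dim>> &Duh = info.gradients[0][0];
  const std::vector<double>         &uh  = info.values[0][0];

  for (unsigned int k = 0; k < fe.n_quadrature_points; ++k)
    {
      const Tensor<1, dim> diff_grad = exact_gradients[k] - Duh[k];
      const double         diff_val  = exact_values[k] - uh[k];
      dinfo.value(0) += (diff_grad * diff_grad) * fe.JxW(k); // gradient part of energy norm
      dinfo.value(1) += diff_val * diff_val * fe.JxW(k);     // L2 error
    }
}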
Finally, we have a group of sparsity patterns and sparse matrices related to the multilevel preconditioner. First, we have a level matrix and its sparsity pattern.
When we perform multigrid with local smoothing on locally refined meshes, additional matrices are required; see Kanschat (2004). Here is the sparsity pattern for these edge matrices. We only need one, because the pattern of the up matrix is the transpose of that of the down matrix. Actually, we do not care too much about these details, since the MeshWorker is filling these matrices.
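For reference, the corresponding member variables of the problem class look roughly like this; the names of the global objects (sparsity, matrix) are assumptions, while the mg_* names match the snippets below.

  SparsityPattern                     sparsity;                 // pattern of the global matrix
  SparseMatrix<double>                matrix;                   // global system matrix

  MGLevelObject<SparsityPattern>      mg_sparsity;              // one pattern per level
  MGLevelObject<SparsityPattern>      mg_sparsity_dg_interface; // patterns for the edge matrices
  MGLevelObject<SparseMatrix<double>> mg_matrix;                // level matrices
  MGLevelObject<SparseMatrix<double>> mg_matrix_dg_up;          // flux matrices at refinement edges
  MGLevelObject<SparseMatrix<double>> mg_matrix_dg_down;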
Next, we set up the sparsity pattern for the global matrix. Since we do not know the row sizes in advance, we first fill a temporary DynamicSparsityPattern object and copy it to the regular SparsityPattern once it is complete.
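A sketch of these two steps, using the member names from above and DoFTools::make_flux_sparsity_pattern() because the DG bilinear form couples neighboring cells through the face terms:

  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_flux_sparsity_pattern(dof_handler, dsp);
  sparsity.copy_from(dsp);
  matrix.reinit(sparsity);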
The global system is set up, now we attend to the level matrices. We resize all matrix objects to hold one matrix per level.
mg_matrix.resize(0, n_levels - 1);
mg_matrix.clear_elements();
mg_matrix_dg_up.resize(0, n_levels - 1);
mg_matrix_dg_up.clear_elements();
mg_matrix_dg_down.resize(0, n_levels - 1);
mg_matrix_dg_down.clear_elements();
It is important to update the sparsity patterns only after clear_elements() has been called for the level matrices, since the matrices lock the sparsity pattern through the SmartPointer and Subscriptor mechanism.
mg_sparsity.resize(0, n_levels - 1);
mg_sparsity_dg_interface.resize(0, n_levels - 1);
Now all objects are prepared to hold one sparsity pattern or matrix per level. What's left is setting up the sparsity patterns on each level.
for (unsigned int level = mg_sparsity.min_level();
level <= mg_sparsity.max_level();
++level)
{
These are roughly the same lines as above for the global matrix, now for each level.
Additionally, we need to initialize the transfer matrices at the refinement edge between levels. They are stored at the index referring to the finer of the two indices, thus there is no such object on level 0.
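Inside the loop, this amounts to roughly the following; the sizes of the interface pattern are an assumption, and DoFTools::make_flux_sparsity_pattern_edge() generates the couplings across the refinement edge.

      DynamicSparsityPattern dsp(dof_handler.n_dofs(level));
      DoFTools::make_flux_sparsity_pattern(dof_handler, dsp, level);
      mg_sparsity[level].copy_from(dsp);
      mg_matrix[level].reinit(mg_sparsity[level]);

      if (level > 0)
        {
          DynamicSparsityPattern dsp_edge(dof_handler.n_dofs(level - 1),
                                          dof_handler.n_dofs(level));
          DoFTools::make_flux_sparsity_pattern_edge(dof_handler, dsp_edge, level);
          mg_sparsity_dg_interface[level].copy_from(dsp_edge);
          mg_matrix_dg_up[level].reinit(mg_sparsity_dg_interface[level]);
          mg_matrix_dg_down[level].reinit(mg_sparsity_dg_interface[level]);
        }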
In this function, we assemble the global system matrix, where by "global" we indicate that this is the matrix of the discrete system we solve, and that it covers the whole mesh.
First, we need to set up the object providing the values we integrate. This object contains all FEValues and FEFaceValues objects needed and also maintains them automatically such that they always point to the current cell. To this end, we need to tell it first where and what to compute. Since we are not doing anything fancy, we can rely on its standard choice for quadrature rules.
Since their default update flags are minimal, we add what we need additionally, namely the values and gradients of shape functions on all objects (cells, boundary and interior faces). Afterwards, we are ready to initialize the container, which will create all necessary FEValuesBase objects for integration.
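In code, this is roughly the following, where fe and mapping denote the finite element and mapping objects of the problem class (names assumed):

  MeshWorker::IntegrationInfoBox<dim> info_box;
  const UpdateFlags update_flags = update_values | update_gradients;
  info_box.add_update_flags_all(update_flags);
  info_box.initialize(fe, mapping);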
This is the object into which we integrate local data. It is filled by the local integration routines in MatrixIntegrator and then used by the assembler to distribute the information into the global matrix.
Furthermore, we need an object that assembles the local matrix into the global matrix. These assembler objects have all the knowledge of the structures of the target object, in this case a SparseMatrix, possible constraints and the mesh structure.
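Sketched, these two objects are:

  MeshWorker::DoFInfo<dim> dof_info(dof_handler);

  MeshWorker::Assembler::MatrixSimple<SparseMatrix<double>> assembler;
  assembler.initialize(matrix);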
Now comes the part we coded ourselves, the local integrator. This is the only part which is problem dependent.
MatrixIntegrator<dim> integrator;
Now, we throw everything into a MeshWorker::loop(), which here traverses all active cells of the mesh, computes cell and face matrices and assembles them into the global matrix. We use the variable dof_handler here in order to use the global numbering of degrees of freedom.
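The corresponding call is roughly:

  MeshWorker::integration_loop<dim, dim>(dof_handler.begin_active(),
                                         dof_handler.end(),
                                         dof_info,
                                         info_box,
                                         integrator,
                                         assembler);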
Now, we do the same for the level matrices. Not too surprisingly, this function looks like a twin of the previous one. Indeed, there are only two minor differences.
Here is the other difference to the previous function: we run over all cells, not only the active ones, and we use the functions ending in _mg, since we need the degrees of freedom on each level rather than the global numbering.
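In outline, and assuming info_box and dof_info were set up as in the previous function, the level assembly uses the multigrid assembler and the cell iterators of all levels:

  MeshWorker::Assembler::MGMatrixSimple<SparseMatrix<double>> assembler;
  assembler.initialize(mg_matrix);
  assembler.initialize_fluxes(mg_matrix_dg_up, mg_matrix_dg_down);

  MatrixIntegrator<dim> integrator;
  MeshWorker::integration_loop<dim, dim>(dof_handler.begin_mg(),
                                         dof_handler.end_mg(),
                                         dof_info,
                                         info_box,
                                         integrator,
                                         assembler);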
Since this assembler allows us to fill several vectors, the interface is a little more complicated than above. The pointers to the vectors have to be stored in an AnyData object. While this seems to cause two extra lines of code here, it actually comes in handy in more complex applications.
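Concretely, and assuming the global right hand side vector is called right_hand_side, this looks roughly like:

  MeshWorker::Assembler::ResidualSimple<Vector<double>> assembler;
  AnyData data;
  data.add<Vector<double> *>(&right_hand_side, "RHS");
  assembler.initialize(data);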
Now we are setting up the components of the multilevel preconditioner. First, we need transfer between grid levels. The object we are using here generates sparse matrices for these transfers.
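A sketch of the transfer setup; in recent deal.II versions the setup call is build(), while older releases used build_matrices():

  MGTransferPrebuilt<Vector<double>> mg_transfer;
  mg_transfer.build(dof_handler);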
While transfer and coarse grid solver are pretty much generic, more flexibility is offered for the smoother. First, we choose Gauss-Seidel as our smoothing method.
Since the SOR method is not symmetric, but we use conjugate gradient iteration below, here is a trick to make the multilevel preconditioner a symmetric operator even for nonsymmetric smoothers.
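A sketch of the smoother setup, using PreconditionSOR on each level; set_symmetric() implements the trick just mentioned by post-smoothing with the transposed operator:

  using Smoother = PreconditionSOR<SparseMatrix<double>>;
  mg::SmootherRelaxation<Smoother, Vector<double>> mg_smoother;
  mg_smoother.initialize(mg_matrix);
  mg_smoother.set_steps(2);
  mg_smoother.set_symmetric(true);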
Another clone of the assemble function. The big difference to the previous ones is that here we also have an input vector.
template <int dim>
double InteriorPenaltyProblem<dim>::estimate()
{
The results of the estimator are stored in a vector with one entry per cell. Since cells in deal.II are not numbered, we have to create our own numbering in order to use this vector. For the assembler used below, the information in which component of the vector the result is stored is transmitted through the user_index variable of each cell. We need to set this numbering up here.
On the other hand, somebody might have used the user indices already. So, let's be good citizens and save them before tampering with them.
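Both steps together look roughly like this; estimates is assumed to be the block vector holding one entry per cell, and the saved indices are restored with load_user_indices() at the end of the function.

  std::vector<unsigned int> old_user_indices;
  triangulation.save_user_indices(old_user_indices);

  estimates.block(0).reinit(triangulation.n_active_cells());
  unsigned int i = 0;
  for (const auto &cell : triangulation.active_cell_iterators())
    cell->set_user_index(i++);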
Now we need to notify the info box of the finite element function we want to evaluate in the quadrature points. First, we create an AnyData object containing this vector, which is the solution we just computed.
Then, we tell the MeshWorker::VectorSelector for cells that we need the second derivatives of this solution (to compute the Laplacian). Therefore, the Boolean arguments selecting function values and first derivatives are false; only the last one, selecting second derivatives, is true.
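In code, this is roughly the following; the name of the solution vector is an assumption, and the boundary and face selectors request values and gradients because the face residuals need them.

  AnyData solution_data;
  solution_data.add<const Vector<double> *>(&solution, "solution");
  info_box.cell_selector.add("solution", false, false, true);
  info_box.boundary_selector.add("solution", true, true, false);
  info_box.face_selector.add("solution", true, true, false);
  info_box.initialize(fe, mapping, solution_data, solution);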
Here we compare our finite element solution with the (known) exact solution and compute the mean quadratic error of the gradient and the function itself. This function is a clone of the estimation function right above.
Since we compute the error in the energy and the L2-norm, respectively, our block vector needs two blocks here.
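For example:

  BlockVector<double> errors(2);
  errors.block(0).reinit(triangulation.n_active_cells()); // energy norm, one entry per cell
  errors.block(1).reinit(triangulation.n_active_cells()); // L2 norm, one entry per cell
  errors.collect_sizes();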
Create graphical output. We produce the filename by collating the name from its various components, including the refinement cycle that we output with two digits.
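A sketch using DataOut; the actual program may write a different output format, and the filename pattern is an assumption.

  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "u");
  data_out.build_patches();

  const std::string filename =
    "solution-" + Utilities::int_to_string(cycle, 2) + ".vtu";
  std::ofstream output(filename);
  data_out.write_vtu(output);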
The log of the program shows, for instance, that the number of conjugate gradient iteration steps stays roughly constant at approximately 15.
Postprocessing of the logfile
Using the perl script postprocess.pl, we extract the relevant data into output.dat, which can then be used to plot graphs with gnuplot. The graph above, for instance, was produced with the gnuplot script plot_errors.gpl.