Reference documentation for deal.II version 8.5.1
The main emphasis in this example is the handling of locally refined grids. The approach to adaptivity chosen in deal.II is to use grids in which neighboring cells may be refined a different number of times. This then results in nodes on the interfaces of cells which belong to one side, but are unbalanced on the other. The common term for these is “hanging nodes”.
To guarantee that the global solution is continuous at these nodes as well, we have to state some additional constraints on the values of the solution at these nodes. In the program below, we will show how we can get these constraints from the library, and how to use them in the solution of the linear system of equations. Before going over the details of the program below, you may want to take a look at the Constraints on degrees of freedom documentation module that explains how these constraints can be computed and what classes in deal.II work on them.
The locally refined grids are produced using an error estimator class which estimates the energy error with respect to the Laplace operator. This error estimator, although developed for Laplace's equation, has proven to be a suitable tool to generate locally refined meshes for a wide range of equations, not restricted to elliptic problems. Although it will create non-optimal meshes for other equations, it is often a good way to quickly produce meshes that are well adapted to the features of solutions, such as regions of great variation or discontinuities. Since it was developed by Kelly and co-workers, we often refer to it as the “Kelly refinement indicator” in the library, documentation, and mailing list. The class that implements it is called KellyErrorEstimator. Although the error estimator (and its implementation in the deal.II library) is capable of handling variable coefficients in the equation, we will not use this feature since we are only interested in a quick and simple way to generate locally refined grids.
Since the concepts used for locally refined grids are so important, we do not show much additional new material in this example. The most important exception is that we show how to use biquadratic elements instead of the bilinear ones which we have used in all previous examples. In fact, the use of higher order elements is accomplished by only replacing three lines of the program, namely the declaration of the fe
variable, and the use of an appropriate quadrature formula in two places. The rest of the program is unchanged.
The only other new thing is a method to catch exceptions in the main
function in order to output some information in case the program crashes for some reason.
As explained above, we are going to use an object called ConstraintMatrix that will store the constraints at the hanging nodes to ensure the solution is continuous at these nodes. We could first assemble the system as usual and then condense out the degrees of freedom that need to be constrained. This is also explained in the Constraints on degrees of freedom documentation module. Instead, we will go the more efficient route and eliminate the constrained entries while we are copying from the local to the global system. Because boundary conditions can be treated in the same way, we will incorporate them as constraints in the same ConstraintMatrix object. This way, we don't need to apply the boundary conditions after assembly (like we did in the earlier steps).
The first few files have already been covered in previous examples and will thus not be further commented on.
From the following include file we will import the declaration of H1-conforming finite element shape functions. This family of finite elements is called FE_Q
, and was used in all examples before already to define the usual bi- or tri-linear elements, but we will now use it for bi-quadratic elements:
We will not read the grid from a file as in the previous example, but generate it using a function of the library. However, we will want to write out the locally refined grids (just the grid, not the solution) in each step, so we need the following include file instead of grid_in.h
:
When using locally refined grids, we will get so-called hanging nodes
. However, the standard finite element method assumes that the discrete solution space is continuous, so we need to make sure that the degrees of freedom on hanging nodes conform to some constraints such that the global solution is continuous. We are also going to store the boundary conditions in this object. The following file contains a class which is used to handle these constraints:
In order to refine our grids locally, we need a function from the library that decides which cells to flag for refinement or coarsening based on the error indicators we have computed. This function is defined here:
Finally, we need a simple way to actually compute the refinement indicators based on some error estimate. While in general, adaptivity is very problem-specific, the error indicator in the following file often yields quite nicely adapted grids for a wide class of problems.
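Since the include directives themselves are not reproduced in this text, here is a sketch of what they could look like in deal.II 8.5; the exact file names below are assumptions based on the library's usual layout and should be checked against your installation:

```cpp
#include <deal.II/fe/fe_q.h>                  // FE_Q: H1-conforming Lagrange elements
#include <deal.II/grid/grid_out.h>            // writing (locally refined) grids to files
#include <deal.II/lac/constraint_matrix.h>    // ConstraintMatrix: hanging nodes and boundary values
#include <deal.II/grid/grid_refinement.h>     // flagging cells for refinement/coarsening
#include <deal.II/numerics/error_estimator.h> // KellyErrorEstimator
```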
Finally, this is as in previous programs:
The Step6 class template
The main class is again almost unchanged. Two additions, however, are made: we have added the refine_grid
function, which is used to adaptively refine the grid (instead of the global refinement in the previous examples), and a variable which will hold the constraints. In addition, we have added a destructor to the class for reasons that will become clear when we discuss its implementation.
This is the new variable in the main class. We need an object which holds a list of constraints to hold the hanging nodes and the boundary conditions.
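As a sketch (assuming the remaining members match the earlier tutorial programs), the relevant part of the class declaration could look like this; note that dof_handler is deliberately declared before fe, a choice that is discussed at the destructor below:

```cpp
Triangulation<dim>   triangulation;

DoFHandler<dim>      dof_handler;   // declared before fe on purpose, see the destructor
FE_Q<dim>            fe;

ConstraintMatrix     constraints;   // hanging node and boundary constraints

SparsityPattern      sparsity_pattern;
SparseMatrix<double> system_matrix;

Vector<double>       solution;
Vector<double>       system_rhs;
```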
The implementation of nonconstant coefficients is copied verbatim from step-5:
The Step6 class implementation
The constructor of this class is mostly the same as before, but this time we want to use the quadratic element. To do so, we only have to replace the constructor argument (which was 1
in all previous examples) by the desired polynomial degree (here 2
):
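A minimal sketch of the constructor, assuming the same initializer list as in the previous programs except for the polynomial degree:

```cpp
template <int dim>
Step6<dim>::Step6 ()
  :
  dof_handler (triangulation),
  fe (2)                       // bi-quadratic elements instead of fe(1)
{}
```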
Here comes the added destructor of the class. The reason why we want to add it is a subtle change in the order of data elements in the class as compared to all previous examples: the dof_handler
object was defined before and not after the fe
object. Of course we could have left this order unchanged, but we would like to show what happens if the order is reversed since this produces a rather nasty side-effect and results in an error which is difficult to track down if one does not know what happens.
Basically what happens is the following: when we distribute the degrees of freedom using the function call dof_handler.distribute_dofs()
, the dof_handler
also stores a pointer to the finite element in use. Since this pointer is used every now and then until either the degrees of freedom are re-distributed using another finite element object or until the dof_handler
object is destroyed, it would be unwise to allow the finite element object to be deleted before the dof_handler
object. To disallow this, the DoF handler increases a counter inside the finite element object which counts how many objects use that finite element (this is what the Subscriptor
/SmartPointer
class pair is used for, in case you want something like this for your own programs; see step-7 for a more complete discussion of this topic). The finite element object will refuse its destruction if that counter is larger than zero, since then some other objects might rely on the persistence of the finite element object. An exception will then be thrown and the program will usually abort upon the attempt to destroy the finite element.
To be fair, such exceptions about still used objects are not particularly popular among programmers using deal.II, since they only tell us that something is wrong, namely that some other object is still using the object that is presently being destructed, but most of the time not who this user is. It is therefore often rather time-consuming to find out where the problem exactly is, although it is then usually straightforward to remedy the situation. However, we believe that the effort to find invalid references to objects that no longer exist is smaller if the problem is detected at the moment the reference becomes invalid, rather than when the non-existent object is actually accessed again, since then usually only invalid data is accessed but no error is immediately raised.
Coming back to the present situation, if we did not write this destructor, the compiler would generate code that triggers exactly the behavior sketched above. The reason is that member variables of the Step6
class are destructed bottom-up (i.e. in reverse order of their declaration in the class), as always in C++. Thus, the finite element object will be destructed before the DoF handler object, since its declaration is below the one of the DoF handler. This triggers the situation above, and an exception will be raised when the fe
object is destructed. What needs to be done is to tell the dof_handler
object to release its lock to the finite element. Of course, the dof_handler
will only release its lock if it really does not need the finite element any more, i.e. when all finite element related data is deleted from it. For this purpose, the DoFHandler
class has a function clear
which deletes all degrees of freedom, and releases its lock to the finite element. After this, you can safely destruct the finite element object since its internal counter is then zero.
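The destructor therefore consists of a single call; a minimal sketch of what the text describes:

```cpp
template <int dim>
Step6<dim>::~Step6 ()
{
  // Release the DoFHandler's lock on the finite element so that the fe
  // member can be destroyed afterwards without triggering an exception.
  dof_handler.clear ();
}
```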
For completeness, we add the output of the exception that would have been triggered without this destructor, to the end of the results section of this example.
The next function is setting up all the variables that describe the linear finite element problem, such as the DoF handler, the matrices, and vectors. The difference to what we did in step-5 is only that we now also have to take care of hanging node constraints. These constraints are handled almost transparently by the library, i.e. you only need to know that they exist and how to get them, but you do not have to know how they are formed or what exactly is done with them.
At the beginning of the function, you find all the things that are the same as in step-5: setting up the degrees of freedom (this time we have quadratic elements, but there is no difference from a user code perspective to the linear – or cubic, for that matter – case), generating the sparsity pattern, and initializing the solution and right hand side vectors. Note that the sparsity pattern will have significantly more entries per row now, since there are now 9 degrees of freedom per cell, not only four, that can couple with each other.
After setting up all the degrees of freedoms, here are now the differences compared to step-5, all of which are related to constraints associated with the hanging nodes. In the class declaration, we have already allocated space for an object constraints
that will hold a list of these constraints (they form a matrix, which is reflected in the name of the class, but that is immaterial for the moment). Now we have to fill this object. This is done using the following function calls (the first clears the contents of the object that may still be left over from computations on the previous mesh before the last adaptive refinement):
Now we are ready to interpolate the ZeroFunction to our boundary with indicator 0 (the whole boundary) and store the resulting constraints in our constraints
object. Note that we do not need to apply the boundary conditions after assembly, like we did in earlier steps. As with almost everything else, the interpolation of boundary values also works for higher order elements without the need to change your code for that. We note that for proper results, it is important that the elimination of boundary nodes from the system of equations happens after the elimination of hanging nodes. For that reason we are filling the boundary values into the ConstraintMatrix after the hanging node constraints.
The next step is closing
this object. After all constraints have been added, they need to be sorted and rearranged to perform some actions more efficiently. This postprocessing is done using the close()
function, after which no further constraints may be added any more:
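Put together, the constraint handling described in the last few paragraphs could look like the following sketch (assuming boundary indicator 0 and homogeneous Dirichlet values via ZeroFunction, as stated above):

```cpp
constraints.clear ();
DoFTools::make_hanging_node_constraints (dof_handler, constraints);

VectorTools::interpolate_boundary_values (dof_handler,
                                          0,                    // boundary indicator
                                          ZeroFunction<dim>(),  // homogeneous Dirichlet values
                                          constraints);

constraints.close ();
```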
Now we first build our compressed sparsity pattern like we did in the previous examples. Nevertheless, we do not copy it to the final sparsity pattern immediately. Note that we call a variant of make_sparsity_pattern that takes the ConstraintMatrix as the third argument. We are letting the routine know that we will never write into the locations given by constraints
by setting the argument keep_constrained_dofs
to false (in other words, that we will never write into entries of the matrix that correspond to constrained degrees of freedom). If we were to condense the constraints after assembling, we would have to pass true
instead because then we would first write into these locations only to later set them to zero again during condensation.
Now all non-zero entries of the matrix are known (i.e. those from regularly assembling the matrix and those that were introduced by eliminating constraints). We can thus copy our intermediate object to the sparsity pattern:
Finally, the so-constructed sparsity pattern serves as the basis on top of which we will create the sparse matrix:
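A sketch of these steps, assuming a DynamicSparsityPattern named dsp is used as the intermediate object:

```cpp
DynamicSparsityPattern dsp (dof_handler.n_dofs());
DoFTools::make_sparsity_pattern (dof_handler,
                                 dsp,
                                 constraints,
                                 /*keep_constrained_dofs =*/ false);

sparsity_pattern.copy_from (dsp);
system_matrix.reinit (sparsity_pattern);
```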
Next, we have to assemble the matrix again. There are two code changes compared to step-5:
First, we have to use a higher-order quadrature formula to account for the higher polynomial degree in the finite element shape functions. This is easy to change: the constructor of the QGauss
class takes the number of quadrature points in each space direction. Previously, we had two points for bilinear elements. Now we should use three points for biquadratic elements.
Second, to copy the local matrix and vector on each cell into the global system, we are no longer using a hand-written loop. Instead, we use ConstraintMatrix::distribute_local_to_global
that internally executes this loop and eliminates all the constraints at the same time.
The rest of the code that forms the local contributions remains unchanged. It is worth noting, however, that under the hood several things are different than before. First, the variables dofs_per_cell
and n_q_points
now are 9 each, where they were 4 before. Introducing such variables as abbreviations is a good strategy to make code work with different elements without having to change too much code. Secondly, the fe_values
object of course needs to do other things as well, since the shape functions are now quadratic, rather than linear, in each coordinate variable. Again, however, this is something that is completely transparent to user code and nothing that you have to worry about.
Finally, transfer the contributions from cell_matrix
and cell_rhs
into the global objects.
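A sketch of that single call, which replaces the hand-written copy loop of the earlier programs (cell_matrix and cell_rhs are as before; local_dof_indices is assumed to be the usual vector of global indices of the cell's degrees of freedom):

```cpp
cell->get_dof_indices (local_dof_indices);
constraints.distribute_local_to_global (cell_matrix,
                                        cell_rhs,
                                        local_dof_indices,
                                        system_matrix,
                                        system_rhs);
```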
Now we are done assembling the linear system. The constraint matrix took care of applying the boundary conditions and also eliminated hanging node constraints. The constrained nodes are still in the linear system (there is a one on the diagonal of the matrix and all other entries for this line are set to zero) but the computed values are invalid. We compute the correct values for these nodes at the end of the solve
function.
We continue with gradual improvements. The function that solves the linear system again uses the SSOR preconditioner, and is again unchanged except that we have to incorporate hanging node constraints. As mentioned above, the degrees of freedom from the ConstraintMatrix corresponding to hanging node constraints and boundary values have been removed from the linear system by giving the rows and columns of the matrix a special treatment. This way, these degrees of freedom have wrong, but well-defined, values after solving the linear system. What we then have to do is to use the constraints to assign to them the values that they should have. This process, called distributing
constraints, computes the values of constrained nodes from the values of the unconstrained ones, and requires only a single additional function call that you find at the end of this function:
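A sketch of the resulting solve() function, assuming the same CG solver and SSOR preconditioner (relaxation parameter 1.2) as in step-5:

```cpp
template <int dim>
void Step6<dim>::solve ()
{
  SolverControl      solver_control (1000, 1e-12);
  SolverCG<>         solver (solver_control);

  PreconditionSSOR<> preconditioner;
  preconditioner.initialize (system_matrix, 1.2);

  solver.solve (system_matrix, solution, system_rhs, preconditioner);

  // Assign correct values to the constrained (hanging node and boundary) DoFs:
  constraints.distribute (solution);
}
```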
Instead of global refinement, we now use a slightly more elaborate scheme. We will use the KellyErrorEstimator
class which implements an error estimator for the Laplace equation; it can in principle handle variable coefficients, but we will not use these advanced features, but rather use its most simple form since we are not interested in quantitative results but only in a quick way to generate locally refined grids.
Although the error estimator derived by Kelly et al. was originally developed for the Laplace equation, we have found that it is also well suited to quickly generate locally refined grids for a wide class of problems. Basically, it looks at the jumps of the gradients of the solution over the faces of cells (which is a measure for the second derivatives) and scales them by the size of the cell. It is therefore a measure for the local smoothness of the solution at the place of each cell, and it is thus understandable that it yields reasonable grids for hyperbolic transport problems or the wave equation as well, although these grids are certainly suboptimal compared to approaches specially tailored to the problem. This error estimator may therefore be understood as a quick way to test an adaptive program.
The way the estimator works is to take a DoFHandler
object describing the degrees of freedom and a vector of values for each degree of freedom as input and compute a single indicator value for each active cell of the triangulation (i.e. one value for each of the triangulation.n_active_cells()
cells). To do so, it needs two additional pieces of information: a quadrature formula on the faces (i.e. quadrature formula on dim-1
dimensional objects). We use a 3-point Gauss rule again, a choice that is consistent with the bi-quadratic finite element shape functions used in this program. (What constitutes a suitable quadrature rule here of course depends on knowledge of the way the error estimator evaluates the solution field. As said above, the jump of the gradient is integrated over each face, which would be a quadratic function on each face for the quadratic elements in use in this example. In fact, however, it is the square of the jump of the gradient, as explained in the documentation of that class, and that is a quartic function, for which a 3-point Gauss formula is sufficient since it integrates polynomials up to degree 5 exactly.)
Secondly, the function wants a list of boundary indicators for those boundaries where we have imposed Neumann values of the kind \(\partial_n u(\mathbf x) = h(\mathbf x)\), along with a function \(h(\mathbf x)\) for each such boundary. This information is represented by an object of type FunctionMap::type
that is a typedef to a map from boundary indicators to function objects describing the Neumann boundary values. In the present example program, we do not use Neumann boundary values, so this map is empty, and in fact constructed using the default constructor of the map in the place where the function call expects the respective function argument.
The output, as mentioned, is a vector of values for all cells. While it may make sense to compute the value of a solution degree of freedom very accurately, it is usually not necessary to compute the error indicator corresponding to the solution on a cell particularly accurately. We therefore typically use a vector of floats instead of a vector of doubles to represent error indicators.
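Put together, the call could look like the following sketch; the empty FunctionMap argument encodes the absence of Neumann boundaries:

```cpp
Vector<float> estimated_error_per_cell (triangulation.n_active_cells());

KellyErrorEstimator<dim>::estimate (dof_handler,
                                    QGauss<dim-1>(3),                  // face quadrature
                                    typename FunctionMap<dim>::type(), // no Neumann data
                                    solution,
                                    estimated_error_per_cell);
```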
The above function returned one error indicator value for each cell in the estimated_error_per_cell
array. Refinement is now done as follows: refine those 30 per cent of the cells with the highest error values, and coarsen the 3 per cent of cells with the lowest values.
One can easily verify that if the second number were zero, this would approximately result in a doubling of the number of cells in each step in two space dimensions, since each of the 30 per cent of cells would be replaced by four new ones, while the remaining 70 per cent of cells remain untouched. In practice, some more cells are usually produced since it is disallowed that a cell is refined twice while its neighbor cell is not refined at all; in that case, the neighbor cell would be refined as well.
In many applications, the number of cells to be coarsened would be set to something larger than only three per cent. A non-zero value is useful especially if for some reason the initial (coarse) grid is already rather refined. In that case, it might be necessary to refine it in some regions, while coarsening in some other regions is useful. In our case here, the initial grid is very coarse, so coarsening is only necessary in a few regions where over-refinement may have taken place. Thus a small, non-zero value is appropriate here.
The following function now takes these refinement indicators and flags some cells of the triangulation for refinement or coarsening using the method described above. It is from a class that implements several different algorithms to refine a triangulation based on cell-wise error indicators.
After the previous function has exited, some cells are flagged for refinement, and some others for coarsening. The refinement or coarsening itself is not performed yet, however, since there are cases where further modification of these flags is useful. Here, we don't want to do any such thing, so we can tell the triangulation to perform the actions for which the cells are flagged:
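A sketch of both steps, the flagging and the actual mesh modification:

```cpp
GridRefinement::refine_and_coarsen_fixed_number (triangulation,
                                                 estimated_error_per_cell,
                                                 0.3,    // refine top 30 per cent
                                                 0.03);  // coarsen bottom 3 per cent

triangulation.execute_coarsening_and_refinement ();
```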
At the end of computations on each grid, and just before we continue the next cycle with mesh refinement, we want to output the results from this cycle.
In the present program, we will not write the solution (except for in the last step, see the next function), but only the meshes that we generated, as a two-dimensional Encapsulated Postscript (EPS) file.
We have already seen in step-1 how this can be achieved. The only thing we have to change is the generation of the file name, since it should contain the number of the present refinement cycle provided to this function as an argument. The most general way is to use the std::stringstream class as shown in step-5, but here's a little hack that makes it simpler if we know that we have less than 10 iterations: assume that the numbers ‘0’ through ‘9’ are represented consecutively in the character set used on your machine (this is in fact the case in all known character sets), then '0'+cycle gives the character corresponding to the present cycle number. Of course, this will only work if the number of cycles is actually less than 10, and rather than waiting for the disaster to happen, we safeguard our little hack with an explicit assertion at the beginning of the function. If this assertion is triggered, i.e. when cycle
is larger than or equal to 10, an exception of type ExcNotImplemented
is raised, indicating that some functionality is not implemented for this case (the functionality that is missing, of course, is the generation of file names for that case):
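A sketch of the file name hack and the grid output described above:

```cpp
Assert (cycle < 10, ExcNotImplemented());

std::string filename = "grid-";
filename += ('0' + cycle);          // works because cycle < 10
filename += ".eps";

std::ofstream output (filename.c_str());
GridOut grid_out;
grid_out.write_eps (triangulation, output);
```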
The final function before main()
is again the main driver of the class, run()
. It is similar to the one of step-5, except that we again generate the mesh in the program instead of reading it from disk, that we refine the mesh adaptively instead of globally, and that we output the solution on the final mesh in the present function.
The first block in the main loop of the function deals with mesh generation. If this is the first cycle of the program, instead of reading the grid from a file on disk as in the previous example, we now again create it using a library function. The domain is again a circle, which is why we have to provide a suitable boundary object as well. We place the center of the circle at the origin and have the radius be one (these are the two hidden arguments to the function, which have default values).
You will notice by looking at the coarse grid that it is of inferior quality compared to the one which we read from the file in the previous example: the cells are less equally shaped. However, using the library function, this program works in any space dimension, which was not the case before.
In case we find that this is not the first cycle, we want to refine the grid. Unlike the global refinement employed in the last example program, we now use the adaptive procedure described above.
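A sketch of this first block of the loop; the manifold attachment shown here is an assumption consistent with the SphericalManifold discussion in the results section below, and the exact code may differ:

```cpp
if (cycle == 0)
  {
    GridGenerator::hyper_ball (triangulation);

    static const SphericalManifold<dim> boundary;
    triangulation.set_all_manifold_ids_on_boundary (0);
    triangulation.set_manifold (0, boundary);

    triangulation.refine_global (1);
  }
else
  refine_grid ();
```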
The rest of the loop looks as before:
After we have finished computing the solution on the finest mesh, and writing all the grids to disk, we want to also write the actual solution on this final mesh to a file. As already done in one of the previous examples, we use the EPS format for output, and to obtain a reasonable view on the solution, we rescale the z-axis by a factor of four.
The main function
The main function is unaltered in its functionality from the previous example, but we have taken a step of additional caution. Sometimes, something goes wrong (such as insufficient disk space upon writing an output file, not enough memory when trying to allocate a vector or a matrix, or if we can't read from or write to a file for whatever reason), and in these cases the library will throw exceptions. Since these are run-time problems, not programming errors that can be fixed once and for all, this kind of exception is not switched off in optimized mode, in contrast to the Assert
macro which we have used to test against programming errors. If uncaught, these exceptions propagate the call tree up to the main
function, and if they are not caught there either, the program is aborted. In many cases, like if there is not enough memory or disk space, we can't do anything but we can at least print some text trying to explain the reason why the program failed. A way to do so is shown in the following. It is certainly useful to write any larger program in this way, and you can do so by more or less copying this function except for the try
block that actually encodes the functionality particular to the present application.
The general idea behind the layout of this function is as follows: let's try to run the program as we did before...
...and if this should fail, try to gather as much information as possible. Specifically, if the exception that was thrown is an object of a class that is derived from the C++ standard class exception
, then we can use the what
member function to get a string which describes the reason why the exception was thrown.
The deal.II exception classes are all derived from the standard class, and in particular, the exc.what()
function will return approximately the same string as would be generated if the exception was thrown using the Assert
macro. You have seen the output of such an exception in the previous example, and you then know that it contains the file and line number of where the exception occurred, and some other information. This is also what the following statements would print.
Apart from this, there isn't much that we can do except exiting the program with an error code (this is what the return 1;
does):
If the exception that was thrown somewhere was not an object of a class derived from the standard exception
class, then we can't do anything at all. We then simply print an error message and exit.
If we got to this point, there was no exception which propagated up to the main function (there may have been exceptions, but they were caught somewhere in the program or the library). Therefore, the program performed as was expected and we can return without error.
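A condensed sketch of the structure just described (the actual program prints a somewhat more elaborate error banner around these messages):

```cpp
int main ()
{
  try
    {
      Step6<2> laplace_problem_2d;
      laplace_problem_2d.run ();
    }
  catch (std::exception &exc)
    {
      std::cerr << "Exception on processing: " << std::endl
                << exc.what () << std::endl
                << "Aborting!" << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl;
      return 1;
    }

  return 0;
}
```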
The output of the program looks as follows:
As intended, the number of cells roughly doubles in each cycle. The number of degrees of freedom is slightly more than four times the number of cells; one would expect a factor of exactly four in two spatial dimensions on an infinite grid (since the spacing between the degrees of freedom is half the cell width: one additional degree of freedom on each edge and one in the middle of each cell), but it is larger than that factor due to the finite size of the mesh and due to additional degrees of freedom which are introduced by hanging nodes and local refinement.
The final solution, as written by the program at the end of the run()
function, looks as follows:
In each cycle, the program furthermore writes the grid in EPS format. These are shown in the following:
It is clearly visible that the region where the solution has a kink, i.e. the circle at radial distance 0.5 from the center, is refined most. Furthermore, the central region where the solution is very smooth and almost flat, is almost not refined at all, but this results from the fact that we did not take into account that the coefficient is large there. The region outside is refined rather randomly, since the second derivative is constant there and refinement is therefore mostly based on the size of the cells and their deviation from the optimal square.
For completeness, we show what happens if the code we commented about in the destructor of the Step6
class is omitted from this example.
From the above error message, we conclude that something is still using an object with type N6dealii4FE_QILi2EE
. This is of course the "mangled" name for FE_Q
. The mangling works as follows: the N6
indicates a namespace with six characters (i.e., dealii
) and the 4
indicates the number of characters of the template class (i.e., FE_Q
).
The rest of the text is then template arguments. From this we can already glean a little bit who's the culprit here, and who the victim: The one object that still uses the finite element is the dof_handler
object.
The stacktrace gives an indication of where the problem happened. We see that the exception was triggered in the destructor of the FiniteElement
class that was called through a few more functions from the destructor of the Step6
class, exactly where we have commented out the call to DoFHandler::clear()
.
One thing that is always worth playing around with if one solves problems of appreciable size (much bigger than the one we have here) is to try different solvers or preconditioners. In the current case, the linear system is symmetric and positive definite, which makes the CG algorithm pretty much the canonical choice for solving. However, the SSOR preconditioner we use in the solve()
function is up for grabs.
In deal.II, it is relatively simple to change the preconditioner. For example, by changing the existing lines of code
into
we can try out different relaxation parameters for SSOR. By using
we can use Jacobi as a preconditioner. And by using (you also have to add the header file lac/sparse_ilu.h to the include list at the top of the file)
we can use a very simple incomplete LU decomposition without any thresholding or strengthening of the diagonal.
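Since the code snippets themselves are not reproduced in this text, the following sketch shows what the alternatives could look like inside solve(), assuming the SSOR variant with relaxation parameter 1.2 is the original; only one alternative would be active at a time:

```cpp
// 1) SSOR with a different relaxation parameter (1.0 instead of 1.2):
PreconditionSSOR<> preconditioner;
preconditioner.initialize (system_matrix, 1.0);

// 2) Jacobi preconditioning:
// PreconditionJacobi<> preconditioner;
// preconditioner.initialize (system_matrix);

// 3) Incomplete LU decomposition (needs #include <deal.II/lac/sparse_ilu.h>):
// SparseILU<double> preconditioner;
// preconditioner.initialize (system_matrix);
```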
Using these various different preconditioners, we can compare the number of CG iterations needed (available through the solver_control.last_step()
call, see step-4) as well as CPU time needed (using the Timer class, discussed, for example, in step-12) and get the following results (left: iterations; right: CPU time):
As we can see, all preconditioners behave pretty much the same on this simple problem, with the number of iterations growing like \({\cal O}(N^{1/2})\) and because each iteration requires around \({\cal O}(N)\) operations the total CPU time grows like \({\cal O}(N^{3/2})\) (for the few smallest meshes, the CPU time is so small that it is not recorded). Note that even though it is the simplest method, Jacobi is the fastest for this problem.
The situation changes slightly when the finite element is not a bi-quadratic one as set in the constructor of this program, but a bi-linear one. If one makes this change, the results are as follows:
In other words, while the increase in iterations and CPU time is as before, Jacobi is now the method that requires the most iterations; it is still the fastest one, however, owing to the simplicity of the operations it has to perform. This is not to say that Jacobi is actually a good preconditioner – for problems of appreciable size, it is definitely not, and other methods will be substantially better – but really only that it is fast because its implementation is so simple that it can compensate for a larger number of iterations.
The message to take away from this is not that simplicity in preconditioners is always best. While this may be true for the current problem, it definitely is not once we move to more complicated problems (elasticity or Stokes, for examples step-8 or step-22). Secondly, all of these preconditioners still lead to an increase in the number of iterations as the number \(N\) of degrees of freedom grows, for example \({\cal O}(N^\alpha)\); this, in turn, leads to a total growth in effort as \({\cal O}(N^{1+\alpha})\) since each iteration takes \({\cal O}(N)\) work. This behavior is undesirable: we would really like to solve linear systems with \(N\) unknowns in a total of \({\cal O}(N)\) work; there is a class of preconditioners that can achieve this, namely geometric (step-16, step-37, step-39) or algebraic multigrid (step-31, step-40, and several others) preconditioners. They are, however, significantly more complex than the preconditioners outlined above.
Finally, the last message to take home is that when the data shown above was generated (in 2008), linear systems with 100,000 unknowns are easily solved on a desktop machine in well under 10 seconds, making the solution of relatively simple 2d problems even to very high accuracy not that big a task as it used to be even in the recent past. At the time, the situation for 3d problems was entirely different, but even that has changed substantially in the intervening time – though solving problems in 3d to high accuracy remains a challenge.
If you look at the meshes above, you will see that even though the domain is the unit disk, and the jump in the coefficient lies along a circle, the cells that make up the mesh do not track this geometry well. The reason, already hinted at in step-1, is that by default the Triangulation class only sees a bunch of coarse grid cells but has, of course, no real idea what kind of geometry they might represent when looked at together. For this reason, we need to tell the Triangulation what to do when a cell is refined: where should the new vertices at the edge midpoints and the cell midpoint be located so that the child cells better represent the desired geometry than the parent cell.
In the code above, we already do this for faces that sit at the boundary: we use the code
to tell the Triangulation where to ask when refining a boundary face. To make the mesh interior also track a circular domain, we need to work a bit harder, though.
First, recall that our coarse mesh consists of a central square cell and four cells around it. Now consider what would happen if we attached the SphericalManifold object not only to the four exterior faces but also to the four cells at the perimeter, as well as all of their faces. We can do this by replacing the existing mesh creation code
by the following snippet (testing whether the center of a cell is farther than a small multiple, say one tenth, of the cell diameter away from the center of the mesh only fails for the central square of the mesh):
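A sketch of what such a replacement could look like; the distance test in the loop is the one described in the parenthesis above, and the explicit SphericalManifold attachment is an assumption consistent with the rest of this program:

```cpp
GridGenerator::hyper_ball (triangulation);

static const SphericalManifold<dim> boundary;
triangulation.set_all_manifold_ids_on_boundary (0);
triangulation.set_manifold (0, boundary);

// Attach the spherical manifold to all cells except the central square:
const Point<dim> mesh_center;
for (typename Triangulation<dim>::active_cell_iterator
       cell = triangulation.begin_active ();
     cell != triangulation.end (); ++cell)
  if (mesh_center.distance (cell->center ()) > cell->diameter () / 10)
    cell->set_all_manifold_ids (0);

triangulation.refine_global (1);
```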
After a few global refinement steps, this would lead to a mesh of the following kind:
This is not a good mesh: the central cell has been refined in such a way that the children located in the four corners of the original central cell degenerate: they all tend towards triangles as mesh refinement continues. This means that the Jacobian matrix of the transformation from reference cell to actual cell degenerates for these cells, and because all error estimates for finite element solutions contain the norm of the inverse of the Jacobian matrix, you will get very large errors on these cells and, in the limit of mesh refinement, a loss of convergence order, because the cells in these corners become worse and worse under mesh refinement.
So we need something smarter. To this end, consider the following solution originally developed by Konstantin Ladutenko. We will use the following code:
This code then generates the following, much better sequence of meshes:
Creating good meshes, and in particular making them fit the geometry you want, is a complex topic in itself. You can find much more on this in step-49, step-53, and step-54, among other tutorial programs that cover the issue. Information on curved domains can also be found in the documentation module on Manifold descriptions.
From a mathematical perspective, solutions of the Laplace equation
\[ -\Delta u = f \]
on smoothly bounded, convex domains are known to be smooth themselves. The exact degree of smoothness, i.e., the function space in which the solution lives, depends on how smooth exactly the boundary of the domain is, and how smooth the right hand side is. Some regularity of the solution may be lost at the boundary, but one generally has that the solution is twice more differentiable in compact subsets of the domain than the right hand side. If, in particular, the right hand side satisfies \(f\in C^\infty(\Omega)\), then
\(u \in C^\infty(\Omega_i)\) where \(\Omega_i\) is any compact subset of \(\Omega\) ( \(\Omega\) is an open domain, so a compact subset needs to keep a positive distance from \(\partial\Omega\)).
The situation we chose for the current example is different, however: we look at an equation with a non-constant coefficient \(a(\mathbf x)\):
\[ -\nabla \cdot (a \nabla u) = f. \]
Here, if \(a\) is not smooth, then the solution will not be smooth either, regardless of \(f\). In particular, we expect that wherever \(a\) is discontinuous along a line (or along a plane in 3d), the solution will have a kink. This is easy to see: if for example \(f\) is continuous, then \(f=-\nabla \cdot (a \nabla u)\) needs to be continuous. This means that \(a \nabla u\) must be continuously differentiable (not have a kink). Consequently, if \(a\) has a discontinuity, then \(\nabla u\) must have an opposite discontinuity so that the two exactly cancel and their product yields a function without a discontinuity. But for \(\nabla u\) to have a discontinuity, \(u\) must have a kink. This is of course exactly what is happening in the current example, and easy to observe in the pictures of the solution.
In general, if the coefficient \(a(\mathbf x)\) is discontinuous along a line in 2d, or a plane in 3d, then the solution may have a kink, but the gradient of the solution will not go to infinity. That means that the solution is at least still in the space \(W^{1,\infty}\). On the other hand, we know that in the most extreme cases – i.e., where the domain has reentrant corners, the right hand side only satisfies \(f\in H^{-1}\), or the coefficient \(a\) is only in \(L^\infty\) – all we can expect is that \(u\in H^1\), a much larger space than \(W^{1,\infty}\). It is not very difficult to create cases where the solution is in a space \(H^{1+s}\) where we can get \(s\) to become as small as we want. Such cases are often used to test adaptive finite element methods because the mesh will have to resolve the singularity that causes the solution to not be in \(W^{1,\infty}\) any more.
The typical example one uses for this is called the Kellogg problem (referring to the paper "On the Poisson equation with intersecting interfaces" by R. B. Kellogg, Applicable Analysis, vol. 4, pp. 101-129, 1974), which in the commonly used form has a coefficient \(a(\mathbf x)\) that has different values in the four quadrants of the plane (or eight different values in the octants of \({\mathbb R}^3\)). The exact degree of regularity (the \(s\) in the index of the Sobolev space above) depends on the values of \(a(\mathbf x)\) coming together at the origin, and by choosing the jumps large enough, the regularity of the solution can be made as close as desired to \(H^1\).
To implement something like this, one could replace the coefficient function by the following (shown here only for the 2d case):
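A sketch of such a coefficient function with four different values in the four quadrants; the specific values 1, 10, 100, 1000 are purely illustrative, and in this program the logic would go into the value() function of the Coefficient class:

```cpp
template <int dim>
double coefficient (const Point<dim> &p)
{
  if ((p[0] < 0) && (p[1] < 0))          // lower left quadrant
    return 1;
  else if ((p[0] >= 0) && (p[1] < 0))    // lower right quadrant
    return 10;
  else if ((p[0] < 0) && (p[1] >= 0))    // upper left quadrant
    return 100;
  else if ((p[0] >= 0) && (p[1] >= 0))   // upper right quadrant
    return 1000;
  else
    {
      // We should never get here, but guard against forgotten cases:
      Assert (false, ExcInternalError());
      return 0;
    }
}
```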
(Adding the Assert
at the end ensures that an exception is thrown if we ever get to that point – which of course we shouldn't, but this is a good way to insure yourself: we all make mistakes by sometimes not thinking of all cases, for example by checking for p[0]
to be less than and greater than zero, rather than greater-or-equal to zero, and thereby forgetting some cases that would otherwise lead to bugs that are awkward to find. The return 0;
at the end is only there to avoid compiler warnings that the function does not end in a return
statement – the compiler cannot see that the function would never actually get to that point because of the preceding Assert
statement.)
By playing with such cases where four or more sectors come together and on which the coefficient has different values, one can construct cases where the solution has singularities at the origin. One can also see how the meshes are refined in such cases.