This tutorial depends on step-4.
This example does not show revolutionary new things, but it shows many small improvements over the previous examples, and also many small things that can usually be found in finite element programs. Among them are:
- computations on a sequence of successively refined meshes,
- reading a triangulation from a disk file,
- using a spatially variable coefficient in the elliptic operator,
- using a preconditioned solver,
- using assertions to check the sanity of the program, and
- writing a separate output file for each refinement cycle.
The equation to solve here is as follows:
\begin{align*} -\nabla \cdot a(\mathbf x) \nabla u(\mathbf x) &= 1 \qquad\qquad & \text{in}\ \Omega, \\ u &= 0 \qquad\qquad & \text{on}\ \partial\Omega. \end{align*}
If \(a(\mathbf x)\) were a constant coefficient, this would simply be the Poisson equation. However, if it is indeed spatially variable, it is a more complex equation (often referred to as the "extended Poisson equation"). Depending on what the variable \(u\) refers to, it models a variety of situations with wide applicability: for example, \(u\) may be the electrostatic potential in a medium whose permittivity \(a\) varies in space, or the temperature in a body whose heat conductivity \(a\) varies in space.
Since the Laplace/Poisson equation appears in so many contexts, there are many more interpretations than just the two mentioned above.
When assembling the linear system for this equation, we need the weak form which here reads as follows:
\begin{align*} (a \nabla \varphi, \nabla u) &= (\varphi, 1) \qquad \qquad \forall \varphi. \end{align*}
The implementation in the assemble_system
function follows immediately from this.
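Written out for a finite element basis \(\{\varphi_i\}\), the entries of the matrix and right hand side that this function has to compute are therefore
\begin{align*} A_{ij} = (a \nabla \varphi_i, \nabla \varphi_j), \qquad\qquad F_i = (\varphi_i, 1). \end{align*}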
Again, the first few include files are already known, so we won't comment on them:
This one is new. We want to read a triangulation from disk, and the class which does this is declared in the following file:
We will use a circular domain, and the object describing the boundary of it comes from this file:
This is C++ ...
Finally, this has been discussed in previous tutorial programs before:
The Step5 class template

The main class is mostly as in the previous example. The most visible change is that the function make_grid_and_dofs has been removed, since creating the grid is now done in the run function and the rest of its functionality is now in setup_system. Apart from this, everything is as before.
In step-4, we showed how to use non-constant boundary values and right hand side. In this example, we want to use a variable coefficient in the elliptic operator instead. Since we have a function which just depends on the point in space, we can do things a bit more simply and use a plain function instead of inheriting from Function.
This is the implementation of the coefficient function for a single point. We let it return 20 if the distance to the origin is less than 0.5, and 1 otherwise.
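A minimal sketch of such a plain function might look as follows; the name coefficient is the one referred to later on, and the Point class's square() member (the squared distance to the origin) is used to avoid a square root:

  template <int dim>
  double coefficient(const Point<dim> &p)
  {
    // Return 20 inside a disk of radius 0.5 around the origin, and 1 elsewhere.
    if (p.square() < 0.5 * 0.5)
      return 20;
    else
      return 1;
  }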
The Step5 class implementation

This function is as before.
This is the function make_grid_and_dofs
from the previous example, minus the generation of the grid. Everything else is unchanged:
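For reference, a typical setup_system function in a program of this kind might look like the following sketch; the member names (fe, dof_handler, sparsity_pattern, system_matrix, solution, system_rhs) are assumptions carried over from the previous tutorial programs:

  template <int dim>
  void Step5<dim>::setup_system()
  {
    dof_handler.distribute_dofs(fe);

    std::cout << "   Number of degrees of freedom: "
              << dof_handler.n_dofs() << std::endl;

    // Build the sparsity pattern and size the linear algebra objects accordingly.
    DynamicSparsityPattern dsp(dof_handler.n_dofs());
    DoFTools::make_sparsity_pattern(dof_handler, dsp);
    sparsity_pattern.copy_from(dsp);

    system_matrix.reinit(sparsity_pattern);
    solution.reinit(dof_handler.n_dofs());
    system_rhs.reinit(dof_handler.n_dofs());
  }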
As in the previous examples, this function is not changed much with regard to its functionality, but there are still some optimizations which we will show. For this, it is important to note that if efficient solvers are used (such as the preconditioned CG method), assembling the matrix and right hand side can take a comparable amount of time, and it is therefore worth thinking about one or two optimizations in some places.
What we will show here is how we can avoid calls to the shape_value, shape_grad, and quadrature_point functions of the FEValues object. The way to do so will be explained in the following, while those parts of this function that are not changed with respect to the previous example are not commented on.
The first parts of the function are completely unchanged from before:
Next is the typical loop over all cells to compute local contributions and then to transfer them into the global matrix and vector. The only change in this part, compared to step-4, is that we will use the coefficient
function defined above to compute the coefficient value at each quadrature point.
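Leaving aside the further optimizations hinted at above, a plain version of this cell loop might look like the sketch below; the local objects (fe_values, cell_matrix, cell_rhs, local_dof_indices) and the loop bounds are assumed to be set up as in step-4:

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell);
      cell_matrix = 0;
      cell_rhs    = 0;

      for (unsigned int q = 0; q < n_q_points; ++q)
        {
          // Evaluate the coefficient only once per quadrature point.
          const double current_coefficient =
            coefficient<dim>(fe_values.quadrature_point(q));

          for (unsigned int i = 0; i < dofs_per_cell; ++i)
            {
              for (unsigned int j = 0; j < dofs_per_cell; ++j)
                cell_matrix(i, j) += (current_coefficient *
                                      fe_values.shape_grad(i, q) *
                                      fe_values.shape_grad(j, q) *
                                      fe_values.JxW(q));

              cell_rhs(i) += (fe_values.shape_value(i, q) *
                              1.0 *
                              fe_values.JxW(q));
            }
        }

      // Transfer the local contributions into the global matrix and vector.
      cell->get_dof_indices(local_dof_indices);
      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        {
          for (unsigned int j = 0; j < dofs_per_cell; ++j)
            system_matrix.add(local_dof_indices[i],
                              local_dof_indices[j],
                              cell_matrix(i, j));
          system_rhs(local_dof_indices[i]) += cell_rhs(i);
        }
    }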
With the matrix so built, we use zero boundary values again:
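As a reminder, the usual way of doing this (unchanged from step-4) uses VectorTools::interpolate_boundary_values and MatrixTools::apply_boundary_values; the member names are again assumptions:

  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
                                           0,
                                           ZeroFunction<dim>(),
                                           boundary_values);
  MatrixTools::apply_boundary_values(boundary_values,
                                     system_matrix,
                                     solution,
                                     system_rhs);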
The solution process again looks mostly like in the previous examples. However, we will now use a preconditioned conjugate gradient algorithm. It is not very difficult to make this change. In fact, the only thing we have to alter is that we need an object which will act as a preconditioner. We will use SSOR (symmetric successive overrelaxation), with a relaxation factor of 1.2. For this purpose, the SparseMatrix
class has a function which does one SSOR step, and we need to package the address of this function together with the matrix on which it should act (which is the matrix to be inverted) and the relaxation factor into one object. The PreconditionSSOR
class does this for us. (The PreconditionSSOR class takes a template argument denoting the matrix type it is supposed to work on. The default value is SparseMatrix<double>, which is exactly what we need here, so we simply stick with the default and do not specify anything in the angle brackets.)
Note that for the present case, SSOR doesn't really perform much better than most other preconditioners (though better than no preconditioning at all). A brief comparison of different preconditioners is presented in the Results section of the next tutorial program, step-6.
With this, the rest of the function is trivial: instead of the PreconditionIdentity
object we have created before, we now use the preconditioner we have declared, and the CG solver will do the rest for us:
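A sketch of what this looks like in code; the solver control settings (at most 1000 iterations, a tolerance of 1e-12) are merely plausible example values, not prescribed by the text above:

  SolverControl solver_control(1000, 1e-12);
  SolverCG<>    solver(solver_control);

  // SSOR preconditioner with relaxation factor 1.2, acting on the system matrix.
  PreconditionSSOR<> preconditioner;
  preconditioner.initialize(system_matrix, 1.2);

  solver.solve(system_matrix, solution, system_rhs, preconditioner);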
Writing output to a file is mostly the same as for the previous example, but here we will show how to modify some output options and how to construct a different filename for each refinement cycle.
For this example, we would like to write the output directly to a file in Encapsulated Postscript (EPS) format. The library supports this, but things may be a bit more difficult sometimes, since EPS is a printing format, unlike most other supported formats which serve as input for graphical tools. Therefore, you can't scale or rotate the image after it has been written to disk, and you have to decide about the viewpoint or the scaling in advance.
The defaults in the library are usually quite reasonable, and regarding viewpoint and scaling they coincide with the defaults of Gnuplot. However, since this is a tutorial, we will demonstrate how to change them. For this, we first have to generate an object describing the flags for EPS output (similar flag classes exist for all supported output formats):
They are initialized with the default values, so we only have to change those that we don't like. For example, we would like to scale the z-axis differently (stretch each data point in z-direction by a factor of four):
Then we would also like to alter the viewpoint from which we look at the solution surface. The default is at an angle of 60 degrees down from the vertical axis, and 30 degrees rotated against it in the mathematically positive sense. We raise our viewpoint a bit and look more along the y-axis:
That shall suffice. There are more flags, for example whether to draw the mesh lines, which data vectors to use for colorization of the interior of the cells, and so on. You may want to take a look at the documentation of the EpsFlags structure to get an overview of what is possible.
The only thing still to be done is to tell the output object to use these flags:
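Put together, these three steps might read as follows; the members z_scaling, azimut_angle and turn_angle belong to the DataOutBase::EpsFlags structure, the DataOut object is assumed to be called data_out, and the particular angles chosen here (40 and 10 degrees) are merely illustrative:

  DataOutBase::EpsFlags eps_flags;

  // Stretch each data point in z-direction by a factor of four.
  eps_flags.z_scaling = 4.;

  // Raise the viewpoint a bit and look more along the y-axis
  // (the defaults are 60 and 30 degrees, respectively).
  eps_flags.azimut_angle = 40.;
  eps_flags.turn_angle   = 10.;

  // Tell the output object to use these flags.
  data_out.set_flags(eps_flags);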
The above way to modify flags requires recompilation each time we would like to use different flags. This is inconvenient, and we will see more advanced ways in step-19 where the output flags are determined at run time using an input file (step-19 doesn't show many other things; you should feel free to read over it even if you haven't done step-6 to step-18 yet).
Finally, we need the filename to which the results are to be written. We would like to have it of the form solution-N.eps
, where N is the number of the refinement cycle. Thus, we have to convert an integer to a part of a string; this is most easily done using the C++ function std::to_string
. With the so-constructed filename, we can then open an output stream and write the data to that file:
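With C++11 this is a one-liner; assuming the DataOut object is called data_out and the loop variable is called cycle, it might look like this:

  std::ofstream output("solution-" + std::to_string(cycle) + ".eps");
  data_out.write_eps(output);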
The second to last thing in this program is the definition of the run()
function. In contrast to the previous programs, we will compute on a sequence of meshes that after each iteration is globally refined. The function therefore consists of a loop over 6 cycles. In each cycle, we first print the cycle number, and then have to decide what to do with the mesh. If this is not the first cycle, we simply refine the existing mesh once globally. Before running through these cycles, however, we have to generate a mesh:
In previous examples, we have already used some of the functions from the GridGenerator
class. Here we would like to read a grid from a file where the cells are stored and which may originate from someone else, or may be the product of a mesh generator tool.
In order to read a grid from a file, we generate an object of data type GridIn and associate the triangulation to it (i.e. we tell it to fill our triangulation object when we ask it to read the file). Then we open the respective file and initialize the triangulation with the data in the file:
We would now like to read the file. However, the input file is only for a two-dimensional triangulation, while this function is a template for arbitrary dimension. Since this is only a demonstration program, we will not use different input files for the different dimensions, but rather quickly kill the whole program if we are not in 2D. Of course, since the main function below assumes that we are working in two dimensions, we could skip this check in this version of the program without any ill effects.
It turns out that more than 90 per cent of programming errors are invalid function parameters such as invalid array sizes, etc., so we use assertions heavily throughout deal.II to catch such mistakes. For this, the Assert
macro is a good choice, since it makes sure that the condition which is given as first argument is valid, and if not throws an exception (its second argument) which will usually terminate the program giving information where the error occurred and what the reason was. (A longer discussion of what exactly the Assert
macro does can be found in the exception documentation module.) This generally reduces the time to find programming errors dramatically and we have found assertions an invaluable means to program fast.
On the other hand, all these checks (there are over 10,000 of them in the library at present) should not slow down the program too much if you want to do large computations. To this end, the Assert
macro is only used in debug mode and expands to nothing if in optimized mode. Therefore, while you test your program on small problems and debug it, the assertions will tell you where the problems are. Once your program is stable, you can switch off debugging and the program will run your real computations without the assertions and at maximum speed. More precisely: turning off all the checks in the library (which prevent you from calling functions with wrong arguments, walking off of arrays, etc.) by compiling your program in optimized mode usually makes things run about four times faster. Even though optimized programs are more performant, we still recommend developing in debug mode since it allows the library to find lots of common programming errors automatically. For those who want to try: The way to switch from debug mode to optimized mode is to recompile your program with the command make release
. The output of the make
program should now indicate to you that the program is now compiled in optimized mode, and it will later also be linked to libraries that have been compiled for optimized mode. In order to switch back to debug mode, simply recompile with the command make debug
.
ExcInternalError is a globally defined exception, which may be thrown whenever something is terribly wrong. Usually, one would like to use more specific exceptions, and in particular in this case one would of course try to do something else if dim is not equal to two, e.g. create a grid using library functions. Aborting a program is usually not a good idea, and assertions should really only be used for exceptional cases which should not occur, but might occur due to the stupidity of the programmer, the user, or someone else. The situation above is not a very clever use of Assert, but again: this is a tutorial and it might be worth showing what not to do, after all.
So if we got past the assertion, we know that dim==2, and we can now actually read the grid. It is in UCD (unstructured cell data) format (though the convention is to use the suffix inp
for UCD files):
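A sketch of the whole sequence, including the assertion discussed above; the file name circle-grid.inp is only an example of what such an input file might be called:

  GridIn<dim> grid_in;
  grid_in.attach_triangulation(triangulation);
  std::ifstream input_file("circle-grid.inp");

  // Abort (in debug mode) if this template is instantiated for anything but 2d.
  Assert(dim == 2, ExcInternalError());

  grid_in.read_ucd(input_file);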
If you like to use another input format, you have to use one of the other grid_in.read_xxx
functions. (See the documentation of the GridIn
class to find out what input formats are presently supported.)
The grid in the file describes a circle. Therefore we have to use a manifold object which tells the triangulation where to put new points on the boundary when the grid is refined. Unlike in step-1, GridIn does not know that the domain has a circular boundary (the way GridGenerator::hyper_shell does), so we have to explicitly attach a manifold to the boundary after creating the triangulation to get the correct result when we refine the mesh.
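One way to do this, assuming the circle is centered at the origin and its boundary carries manifold id 0, is the following sketch:

  // The manifold object must live at least as long as the triangulation uses it.
  static const SphericalManifold<dim> boundary;
  triangulation.set_all_manifold_ids_on_boundary(0);
  triangulation.set_manifold(0, boundary);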
Now that we have a mesh for sure, we write some output and do all the things that we have already seen in the previous examples.
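Putting the pieces together, the cycle loop of run() might be sketched as follows; the member function names solve() and output_results() are assumptions modeled on the previous tutorial programs:

  // ... read the mesh and attach the manifold as sketched above ...

  for (unsigned int cycle = 0; cycle < 6; ++cycle)
    {
      std::cout << "Cycle " << cycle << ':' << std::endl;

      // Refine globally in every cycle but the first.
      if (cycle != 0)
        triangulation.refine_global(1);

      std::cout << "   Number of active cells: "
                << triangulation.n_active_cells() << std::endl;

      setup_system();
      assemble_system();
      solve();
      output_results(cycle);
    }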
The main function

The main function looks mostly like the one in the previous example, so we won't comment on it further:
Here is the console output:
In each cycle, the number of cells quadruples and the number of CG iterations roughly doubles. Also, in each cycle, the program writes one output graphic file in EPS format. They are depicted in the following:
Due to the variable coefficient (the curvature there is reduced by the same factor by which the coefficient is increased), the top region of the solution is flattened. The gradient of the solution is discontinuous there, although this is not very clearly visible in the pictures above. We will look at this in more detail in the next example.