This tutorial depends on step-7.
This is a rather short example which only shows some aspects of using higher order mappings. By mapping we mean the transformation between the unit cell (i.e. the unit line, square, or cube) and the cells in real space. In all the previous examples, we have implicitly used linear or d-linear mappings; you will not have noticed this at all, since this is what happens if you do not do anything special. However, if your domain has curved boundaries, there are cases where the piecewise linear approximation of the boundary (i.e. by straight line segments) is not sufficient, and you want your computational domain to approximate the real domain using curved boundaries as well. If the boundary approximation uses piecewise quadratic parabolas to approximate the true boundary, then we say that this is a quadratic or \(Q_2\) approximation. If we use piecewise graphs of cubic polynomials, then this is a \(Q_3\) approximation, and so on.
For some differential equations, it is known that piecewise linear approximations of the boundary, i.e. \(Q_1\) mappings, are not sufficient if the boundary of the exact domain is curved. Examples are the biharmonic equation using \(C^1\) elements, or the Euler equations of gas dynamics on domains with curved reflective boundaries. In these cases, it is necessary to compute the integrals using a higher order mapping. If we do not use such a higher order mapping, the order of approximation of the boundary dominates the order of convergence of the entire numerical scheme, irrespective of the order of convergence of the discretization in the interior of the domain.
Rather than demonstrating the use of higher order mappings with one of these more complicated examples, we do only a brief computation: calculating the value of \(\pi=3.141592653589793238462643\ldots\) by two different methods.
The first method uses a triangulated approximation of the circle with unit radius and integrates a constant function of unit magnitude (\(f=1\)) over it. Of course, if the domain were the exact unit circle, then the area would be \(\pi\), but since we only use an approximation by piecewise polynomial segments, the value of the area we integrate over is not exactly \(\pi\). However, it is known that as we refine the triangulation, a \(Q_p\) mapping approximates the boundary with an order \(h^{p+1}\), where \(h\) is the mesh size. We will check the values of the computed area of the circle and their convergence towards \(\pi\) under mesh refinement for different mappings. We will also find a convergence behavior that is surprising at first, but has a good explanation.
The second method works similarly, but this time does not use the area of the triangulated unit circle, but rather its perimeter. \(\pi\) is then approximated by half of the perimeter, as we choose the radius equal to one.
The first few of the following include files are probably well known by now and need no further explanation.
This include file is new. Even though we are not solving a PDE in this tutorial, we want to use a dummy finite element with zero degrees of freedom, as provided by the FE_Nothing class.
The following header file is also new: in it, we declare the MappingQ class which we will use for polynomial mappings of arbitrary order:
And this again is C++:
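Collecting what the preceding comments describe, the include section plausibly looks like the following sketch (the exact list is inferred from the classes used in the rest of the program):

```cpp
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/convergence_table.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/mapping_q.h>

#include <iostream>
#include <fstream>
#include <cmath>
```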
The last step is as in previous programs:
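That is, we open a namespace for the program and import the deal.II namespace into it; the namespace name is an assumption following the convention of the other tutorial programs:

```cpp
namespace Step10
{
  using namespace dealii;

  // ... the functions discussed below ...
} // namespace Step10
```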
Then, the first task will be to generate some output. Since this program is so small, we do not employ object oriented techniques in it and do not declare classes (although, of course, we use the object oriented features of the library). Rather, we just pack the functionality into separate functions. We make these functions templates on the number of space dimensions to conform to usual practice when using deal.II, although we will only use them for two space dimensions and throw an exception if one attempts to use them for any other space dimension.
The first of these functions just generates a triangulation of a circle (hyperball) and outputs the \(Q_p\) mapping of its cells for different values of \(p\). Then we refine the grid once and produce the same output again.
So first generate a coarse triangulation of the circle and associate a suitable boundary description to it. By default, GridGenerator::hyper_ball attaches a SphericalManifold to the boundary (and uses FlatManifold for the interior) so we simply call that function and move on:
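In code, this amounts to no more than the following sketch:

```cpp
// Create the coarse mesh of the unit disk; hyper_ball attaches a
// SphericalManifold to the boundary by default.
Triangulation<dim> triangulation;
GridGenerator::hyper_ball(triangulation);
```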
Then alternate between generating output on the current mesh for \(Q_1\), \(Q_2\), and \(Q_3\) mappings, and (at the end of the loop body) refining the mesh once globally.
For this, first set up an object describing the mapping. This is done using the MappingQ class, which takes as argument to the constructor the polynomial degree which it shall use.
As a side note, for a piecewise linear mapping, you could give a value of 1 to the constructor of MappingQ, but there is also a class MappingQ1 that achieves the same effect. Historically, it did a lot of things in a simpler way than MappingQ but is today just a wrapper around the latter. It is, however, still the class that is used implicitly in many places of the library if you do not specify another mapping explicitly.
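For a loop variable degree running over the desired polynomial degrees, the construction might simply read:

```cpp
const MappingQ<dim> mapping(degree); // degree = 1, 2, 3 in this loop
```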
In order to actually write out the present grid with this mapping, we set up an object which we will use for output. We will generate Gnuplot output, which consists of a set of lines describing the mapped triangulation. By default, only one line is drawn for each face of the triangulation, but since we want to explicitly see the effect of the mapping, we want to have the faces in more detail. This can be done by passing the output object a structure which contains some flags. In the present case, since Gnuplot can only draw straight lines, we output a number of additional points on the faces so that each face is drawn by 30 small lines instead of only one. This is sufficient to give us the impression of seeing a curved line, rather than a set of straight lines.
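A sketch of this setup; the two flag values (do not write cell numbers, use 30 extra points per face) follow the description above, while the exact parameter names are assumptions:

```cpp
GridOut               grid_out;
GridOutFlags::Gnuplot gnuplot_flags(/*write_cell_numbers=*/false,
                                    /*n_extra_curved_line_points=*/30);
grid_out.set_flags(gnuplot_flags);
```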
Finally, generate a filename and a file for output:
Then write out the triangulation to this file. The last argument of the function is a pointer to a mapping object. This argument has a default value, and if no value is given a simple MappingQ1 object is taken, which we briefly described above. This would then result in a piecewise linear approximation of the true boundary in the output.
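Putting the file creation and the output call together might look like this, where the file name pattern is an assumption chosen to encode both the refinement step and the mapping degree:

```cpp
std::ofstream gnuplot_file("ball_" + std::to_string(refinement) +
                           "_mapping_q_" + std::to_string(degree) + ".dat");
grid_out.write_gnuplot(triangulation, gnuplot_file, &mapping);
```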
At the end of the loop, refine the mesh globally.
Now we proceed with the main part of the code, the approximation of \(\pi\). The area of a circle is of course given by \(\pi r^2\), so having a circle of radius 1, the area represents just the number that is searched for. The numerical computation of the area is performed by integrating the constant function of value 1 over the whole computational domain, i.e. by computing the areas \(\int_K 1 \, dx = \int_{\hat K} 1\ \textrm{det}\,J(\hat x) \, d\hat x \approx \sum_i \textrm{det}\,J(\hat x_i)\,w(\hat x_i)\), where the sum extends over all quadrature points on all active cells in the triangulation, with \(w(\hat x_i)\) being the weight of quadrature point \(\hat x_i\). The integrals on each cell are approximated by numerical quadrature, hence the only additional ingredient we need is to set up a FEValues object that provides the corresponding JxW values of each cell. (Note that JxW is meant to abbreviate Jacobian determinant times weight; since in numerical quadrature the two factors always occur at the same places, we only offer the combined quantity, rather than two separate ones.) We note that here we won't use the FEValues object in its original purpose, i.e. for the computation of values of basis functions of a specific finite element at certain quadrature points. Rather, we use it only to gain the JxW values at the quadrature points, irrespective of the (dummy) finite element we will give to the constructor of the FEValues object. The actual finite element given to the FEValues object is not used at all, so we could give any.
For the numerical quadrature on all cells we employ a quadrature rule of sufficiently high degree. We choose a QGauss rule that is of order 8 (with 4 points in each space direction), to be sure that the errors due to numerical quadrature are of higher order than the order (maximal 6) that will occur due to the order of the approximation of the boundary, i.e. the order of the mappings employed. Note that the integrand, the Jacobian determinant, is not a polynomial function (rather, it is a rational one), so we do not use Gauss quadrature in order to obtain the exact value of the integral, as is often done in finite element computations; we could as well have used any quadrature formula of like order.
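In deal.II terms, this is a single declaration:

```cpp
// 4 Gauss points per space direction: exact for polynomials up to degree 7.
const QGauss<dim> quadrature(4);
```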
Now start by looping over polynomial mapping degrees \(p=1,\ldots,4\):
First generate the triangulation, the boundary and the mapping object as already seen.
We now create a finite element. Unlike the rest of the example programs, we do not actually need to do any computations with shape functions; we only need the JxW values from an FEValues object. Hence we use the special finite element class FE_Nothing which has exactly zero degrees of freedom per cell (as the name implies, the local basis on each cell is the empty set). A more typical usage of FE_Nothing is shown in step-46.
Likewise, we need to create a DoFHandler object. We do not actually use it, but it will provide us with active_cell_iterators that are needed to reinitialize the FEValues object on each cell of the triangulation.
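A minimal sketch of both objects, given the triangulation created above:

```cpp
const FE_Nothing<dim> fe;                         // dummy element, zero DoFs
DoFHandler<dim>       dof_handler(triangulation); // only used for its cell iterators
```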
Now we set up the FEValues object, giving the mapping, the dummy finite element and the quadrature object to the constructor, together with the update flags asking for the JxW values at the quadrature points only. This tells the FEValues object that it need not compute other quantities upon calling the reinit function, thus saving computation time.
The most important difference in the construction of the FEValues object compared to previous example programs is that we pass a mapping object as first argument, which is to be used in the computation of the mapping from unit to real cell. In previous examples, this argument was omitted, resulting in the implicit use of an object of type MappingQ1.
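With the names introduced above, the construction might look like:

```cpp
FEValues<dim> fe_values(mapping, fe, quadrature, update_JxW_values);
```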
We employ an object of the ConvergenceTable class to store all important data like the approximated values for \(\pi\) and the error with respect to the true value of \(\pi\). We will also use functions provided by the ConvergenceTable class to compute convergence rates of the approximations to \(\pi\).
Now we loop over several refinement steps of the triangulation.
In this loop we first add the number of active cells of the current triangulation to the table. This function automatically creates a table column with the heading cells, in case this column was not created before.
Then we distribute the degrees of freedom for the dummy finite element. Strictly speaking we do not need this function call in our special case but we call it to make the DoFHandler happy – otherwise it would throw an assertion in the FEValues::reinit function below.
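Both steps together, assuming the ConvergenceTable object is called table:

```cpp
table.add_value("cells", triangulation.n_active_cells());
dof_handler.distribute_dofs(fe);
```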
Now we loop over all cells, reinitialize the FEValues object for each cell, and add up all the JxW values for this cell to the variable area...
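A sketch of this loop, using the objects created above:

```cpp
double area = 0;
for (const auto &cell : dof_handler.active_cell_iterators())
  {
    fe_values.reinit(cell);
    for (unsigned int i = 0; i < fe_values.n_quadrature_points; ++i)
      area += fe_values.JxW(i); // sum of JxW values = measure of the cell
  }
```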
...and store the resulting area values and the errors in the table:
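The column names used here ("eval.pi" and "error") are assumptions; deal.II provides the constant numbers::PI for the comparison:

```cpp
table.add_value("eval.pi", area);
table.add_value("error", std::fabs(area - numbers::PI));
```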
We want to compute the convergence rates of the error column. Therefore we need to omit the other columns from the convergence rate evaluation before calling evaluate_all_convergence_rates:
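A sketch of these calls, reusing the column names assumed above:

```cpp
table.omit_column_from_convergence_rate_evaluation("cells");
table.omit_column_from_convergence_rate_evaluation("eval.pi");
table.evaluate_all_convergence_rates(ConvergenceTable::reduction_rate_log2);
```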
Finally we set the precision and scientific mode for output of some of the quantities...
...and write the whole table to std::cout.
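For example:

```cpp
table.set_precision("eval.pi", 16);
table.set_scientific("error", true);
table.write_text(std::cout);
```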
The following, second function also computes an approximation of \(\pi\) but this time via the perimeter \(2\pi r\) of the domain instead of the area. This function is only a variation of the previous function. So we will mainly give documentation for the differences.
We take the same order of quadrature but this time a dim-1 dimensional quadrature, as we will integrate over (boundary) lines rather than over cells.
We loop over all degrees, create the triangulation, the boundary, the mapping, the dummy finite element and the DoFHandler object as seen before.
Then we create a FEFaceValues object instead of a FEValues object as in the previous function. Again, we pass a mapping as first argument.
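A sketch of both objects; note the dim - 1 in the quadrature type:

```cpp
const QGauss<dim - 1> quadrature(4); // line quadrature on the faces
FEFaceValues<dim> fe_face_values(mapping, fe, quadrature, update_JxW_values);
```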
Now we run over all cells and over all faces of each cell. Only the contributions of the JxW values on boundary faces are added to the variable perimeter.
We reinit the FEFaceValues object with the cell iterator and the number of the face.
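The whole double loop might then look as follows:

```cpp
double perimeter = 0;
for (const auto &cell : dof_handler.active_cell_iterators())
  for (const unsigned int face_no : cell->face_indices())
    if (cell->face(face_no)->at_boundary())
      {
        fe_face_values.reinit(cell, face_no);
        for (unsigned int i = 0; i < fe_face_values.n_quadrature_points; ++i)
          perimeter += fe_face_values.JxW(i); // length of this boundary face
      }
```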
Then store the evaluated values in the table...
...and end this function as we did in the previous one:
The following main function just calls the above functions in the order of their appearance. Apart from this, it looks just like the main functions of previous tutorial programs.
The program performs two tasks, the first being to generate a visualization of the mapped domain, the second to compute \(\pi\) by the two methods described. Let us first take a look at the generated graphics. They are generated in Gnuplot format, and can be viewed with the commands
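A plausible script, assuming the program wrote a file named ball_0_mapping_q_1.dat (the file name follows the naming scheme sketched earlier):

```
set style data lines
set size ratio -1
unset key
unset xtics
unset ytics
plot [-1:1][-1:1] "ball_0_mapping_q_1.dat" lw 4 lt rgb "black"
```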
or using one of the other filenames. The second line makes sure that the aspect ratio of the generated output is actually 1:1, i.e. a circle is drawn as a circle on your screen, rather than as an ellipse. The third line switches off the key in the graphic, as that will only print information (the filename) which is not that important right now. Similarly, the fourth and fifth disable tick marks. The plot is then generated with a specific line width ("lw", here set to 4) and line type ("lt", here chosen by saying that the line should be drawn using the RGB color "black").
The following table shows the triangulated computational domain for \(Q_1\), \(Q_2\), and \(Q_3\) mappings, for the original coarse grid (left), and a once uniformly refined grid (right).
These pictures show the obvious advantage of higher order mappings: they approximate the true boundary quite well even on rather coarse meshes. To demonstrate this a little further, here is part of the upper right quarter circle of the coarse meshes with \(Q_2\) and \(Q_3\) mappings, where the dashed red line marks the actual circle:
Obviously the quadratic mapping approximates the boundary quite well, while for the cubic mapping the difference between the approximated domain and the true one is hardly visible even on the coarse grid. You can also see that the mapping only changes something at the outer boundaries of the triangulation. In the interior, all lines are still represented by linear functions, resulting in additional computations only on cells at the boundary. Higher order mappings are therefore usually not noticeably slower than lower order ones, because the additional computations are only performed on a small subset of all cells.
The second purpose of the program was to compute the value of pi to good accuracy. This is the output of this part of the program the last time it was updated:
One of the immediate observations from the output above is that in all cases the values converge quickly to the true value of \(\pi=3.141592653589793238462643\). Note that for the \(Q_4\) mapping, we are already in the regime of round-off errors and the convergence rate levels off; the accuracy reached at that point is already quite remarkable. However, also note that for the \(Q_1\) mapping, even on the finest grid the accuracy is significantly worse than on the coarse grid for a \(Q_3\) mapping!
The last column of the output shows the convergence order, in powers of the mesh width \(h\). In the introduction, we had stated that the convergence order for a \(Q_p\) mapping should be \(h^{p+1}\). However, in the example shown, the order is rather \(h^{2p}\)! This at first surprising fact is explained by the properties of the \(Q_p\) mapping. At order \(p\), it uses support points that are based on the \(p+1\)-point Gauss-Lobatto quadrature rule, which selects the support points in such a way that the quadrature rule converges at order \(2p\). Even though these points are here only used for interpolation of a \(p\)th order polynomial, we get a superconvergence effect when numerically evaluating the integral, resulting in the observed high order of convergence. (This effect is also discussed in detail in the following publication: A. Bonito, A. Demlow, and J. Owen: "A priori error estimates for finite element approximations to eigenvalues and eigenfunctions of the Laplace-Beltrami operator", submitted, 2018.)
As the table of numbers copied from the output of the program shows above, it is not very difficult to compute the value of \(\pi\) to 13 or 15 digits. But, the output also shows that once we approach the level of accuracy with which double precision numbers store information (namely, with roughly 16 digits of accuracy), we no longer see the expected convergence order and the error no longer decreases with mesh refinement as anticipated. This is because both within this code and within the many computations that happen within deal.II itself, each operation incurs an error on the order of \(10^{-16}\); adding such errors many times over then results in an error that may be on the order of \(10^{-14}\), which will dominate the discretization error after a number of refinement steps and consequently destroy the convergence rate.
The question is whether one can do anything about this. One thought is to use a higher-precision data type. For example, one could think of declaring both the area and perimeter variables in compute_pi_by_area() and compute_pi_by_perimeter() with data type long double. long double is a data type that is not well specified in the C++ standard but at least on Intel processors has around 19, instead of around 16, digits of accuracy. If we were to do that, we would get results that differ from the ones shown above. However, maybe counter-intuitively, they are not uniformly better. For example, when computing \(\pi\) by the area, at the time of writing these sentences we get these values with double precision for degree 4:
On the other hand, the results are as follows when using long double:
Indeed, here we get results that are approximately 50 times as accurate. On the other hand, when computing \(\pi\) by the perimeter, we get this with double precision:
Whereas we get the following with long double:
Here, using double precision is more accurate by about a factor of two. (Of course, in all cases, we have computed \(\pi\) with more accuracy than any engineer would ever want to know.)
What explains this unpredictability? In general, round-off errors can be thought of as random, and add up in ways that are not worth thinking too much about; we should therefore always treat any accuracy beyond, say, thirteen digits as suspect. Thus, it is probably not worth spending too much time on wondering why we get different winners and losers when trading double for long double. The accuracy of the results is also largely not determined by the precision of the data type in which we accumulate each cell's (or face's) contributions, but by the accuracy of what deal.II gives us via FEValues::JxW() and FEFaceValues::JxW(), which always use double precision and which we cannot directly affect.
But there are cases where one can do something about the precision, and it is worth at least mentioning the name of the most well-known algorithm in this area. Specifically, when we add contributions into the area and perimeter values, we are adding together positive numbers. In general, the round-off errors associated with each of these numbers are random, and if we add up contributions of substantially different sizes, then we will likely be dominated by the error in the largest contributions. One can reduce this effect by keeping track of the round-off error committed in each addition and feeding it back into subsequent additions; the algorithm that implements this idea is typically called Kahan's summation algorithm. While one could play with it in the current context, it is likely not going to improve the accuracy in ways that will truly matter.
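For illustration, here is a minimal sketch of Kahan summation applied to the area computation above (same variable names as before; this is not part of the original program):

```cpp
double area = 0; // running sum
double c    = 0; // running compensation: low-order bits lost so far
for (const auto &cell : dof_handler.active_cell_iterators())
  {
    fe_values.reinit(cell);
    for (unsigned int i = 0; i < fe_values.n_quadrature_points; ++i)
      {
        const double y = fe_values.JxW(i) - c; // re-inject previous round-off
        const double t = area + y;             // new sum, possibly rounded
        c    = (t - area) - y;                 // what was lost in 'area + y'
        area = t;
      }
  }
```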