This tutorial depends on step-6.
This program implements the heat equation
\begin{align*} \frac{\partial u(\mathbf x, t)}{\partial t} - \Delta u(\mathbf x, t) &= f(\mathbf x, t), \qquad\qquad && \forall \mathbf x \in \Omega, t\in (0,T), \\ u(\mathbf x, 0) &= u_0(\mathbf x) && \forall \mathbf x \in \Omega, \\ u(\mathbf x, t) &= g(\mathbf x,t) && \forall \mathbf x \in \partial\Omega, t \in (0,T). \end{align*}
In some sense, this equation is simpler than the ones we have discussed in the preceding programs step-23, step-24, and step-25, namely the wave equation and its variants. This is due to the fact that the heat equation smoothes out the solution over time, and is consequently more forgiving in many regards: for example, when using implicit time stepping methods, we can take large time steps, and we have less trouble with the small disturbances we introduce through adapting the mesh every few time steps.
Our goal here will be to solve the equations above using the theta-scheme that discretizes the equation in time using the following approach, where we would like \(u^n(\mathbf x)\) to approximate \(u(\mathbf x, t_n)\) at some time \(t_n\):
\begin{align*} \frac{u^n(\mathbf x)-u^{n-1}(\mathbf x)}{k_n} - \left[ (1-\theta)\Delta u^{n-1}(\mathbf x) + \theta\Delta u^n(\mathbf x) \right] &= \left[ (1-\theta)f(\mathbf x, t_{n-1}) + \theta f(\mathbf x, t_n) \right]. \end{align*}
Here, \(k_n=t_n-t_{n-1}\) is the time step size. The theta-scheme generalizes the explicit Euler (\(\theta=0\)), implicit Euler (\(\theta=1\)) and Crank-Nicolson (\(\theta=\frac 12\)) time discretizations. Since the latter has the highest convergence order, we will choose \(\theta=\frac 12\) in the program below, but make it so that playing with this parameter remains simple. (If you are interested in playing with higher order methods, take a look at step-52.)
Given this time discretization, space discretization happens as it always does, by multiplying with test functions, integrating by parts, and then restricting everything to a finite dimensional subspace. This yields the following set of fully discrete equations after multiplying through with \(k_n\):
\begin{align*} M U^n-MU^{n-1} + k_n \left[ (1-\theta)A U^{n-1} + \theta A U^n \right] &= k_n \left[ (1-\theta)F^{n-1} + \theta F^n \right], \end{align*}
where \(M\) is the mass matrix and \(A\) is the stiffness matrix that results from discretizing the Laplacian. Bringing all known quantities to the right hand side yields the linear system we have to solve in every step:
\begin{align*} (M + k_n \theta A) U^n &= MU^{n-1} - k_n (1-\theta)A U^{n-1} + k_n \left[ (1-\theta)F^{n-1} + \theta F^n \right]. \end{align*}
The matrix on the left hand side, \(M + k_n \theta A\), is symmetric and positive definite, so we should have no trouble solving this linear system with the Conjugate Gradient method.
We can start the iteration above if we have the set of nodal coefficients \(U^0\) at the initial time. Here, we take the ones we get by interpolating the initial values \(u_0(\mathbf x)\) onto the mesh used for the first time step. We will also need to choose a time step; we will here just choose it as fixed, but clearly advanced simulators will want to choose it adaptively. We will briefly come back to this in the results section below.
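In deal.II, this interpolation is a single function call. A minimal sketch, assuming the member variables of the program discussed below and the zero initial values we will end up choosing:

```cpp
// Interpolate the initial values u_0 onto the mesh used for the first
// time step. Any Function<dim> object would do in place of the zero
// function used in this program.
VectorTools::interpolate(dof_handler,
                         Functions::ZeroFunction<dim>(),
                         old_solution);
solution = old_solution;
```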
When solving the wave equation and its variants in the previous few programs, we kept the mesh fixed. Just as for stationary equations, one can make a good case that this is not the smartest approach and that significant savings can be had by adapting the mesh. There are, however, significant difficulties compared to the stationary case. Let us go through them in turn:
Time step size and minimal mesh size: For stationary problems, the general approach is "make the mesh as fine as necessary". For problems with singularities, this often leads to situations where we get many levels of refinement into corners or along interfaces. The very first tutorial to use adaptive meshes, step-6, is a case in point already.
However, for time dependent problems, we typically need to choose the time step related to the mesh size. For explicit time discretizations, this is obvious, since we need to respect a CFL condition that ties the time step size to the smallest mesh size. For implicit time discretizations, no such hard restriction exists, but in practice we still want to make the time step smaller if we make the mesh size smaller since we typically have error estimates of the form \(\|e\| \le {\cal O}(k^p + h^q)\) where \(p,q\) are the convergence orders of the time and space discretization, respectively. We can only make the error small if we decrease both terms. Ideally, an estimate like this would suggest to choose \(k \propto h^{q/p}\). Because, at least for problems with non-smooth solutions, the error is typically localized in the cells with the smallest mesh size, we indeed have to choose \(k \propto h_{\text{min}}^{q/p}\), using the smallest mesh size.
The consequence is that refining the mesh further in one place implies not only the moderate additional effort of increasing the number of degrees of freedom slightly, but also the much larger effort of having to solve the global linear system more often because of the smaller time step.
In practice, one typically deals with this by acknowledging that we can not make the time step arbitrarily small, and consequently can not make the local mesh size arbitrarily small. Rather, we set a maximal level of refinement and when we flag cells for refinement, we simply do not refine those cells whose children would exceed this maximal level of refinement.
There is a similar problem in that we will choose a right hand side that will switch on in different parts of the domain at different times. To avoid being caught flat-footed with too coarse a mesh in areas where we suddenly need a finer mesh, we will also enforce in our program a minimal mesh refinement level.
Test functions from different meshes: Let us consider again the semi-discrete equations we have written down above:
\begin{align*} \frac{u^n(\mathbf x)-u^{n-1}(\mathbf x)}{k_n} - \left[ (1-\theta)\Delta u^{n-1}(\mathbf x) + \theta\Delta u^n(\mathbf x) \right] &= \left[ (1-\theta)f(\mathbf x, t_{n-1}) + \theta f(\mathbf x, t_n) \right]. \end{align*}
We can here consider \(u^{n-1}\) as data since it has presumably been computed before. Now, let us replace
\begin{align*} u^n(\mathbf x)\approx u_h^n(\mathbf x) = \sum_j U^n_j \varphi_j(\mathbf x), \end{align*}
multiply with test functions \(\varphi_i(\mathbf x)\) and integrate by parts where necessary. Following the process outlined above, this yields
\begin{align*} \sum_j (M + k_n \theta A)_{ij} U^n_j &= (\varphi_i, u_h^{n-1}) - k_n (1-\theta)(\nabla \varphi_i, \nabla u_h^{n-1}) + k_n \left[ (1-\theta)F^{n-1} + \theta F^n \right]. \end{align*}
Now imagine that we have changed the mesh between time steps \(n-1\) and \(n\). Then the problem is that the basis functions we use for \(u_h^n\) and \(u_h^{n-1}\) are different! This pertains to the terms on the right hand side, the first of which we could more clearly write as (the second follows the same pattern)
\begin{align*} (\varphi_i, u_h^{n-1}) = (\varphi_i^n, u_h^{n-1}) = \sum_{j=1}^{N_{n-1}} (\varphi_i^n, \varphi_j^{n-1}) U^{n-1}_j, \qquad\qquad i=1\ldots N_n. \end{align*}
If the meshes used in these two time steps are the same, then \((\varphi_i^n, \varphi_j^{n-1})\) forms a square mass matrix \(M_{ij}\). However, if the meshes are not the same, then in general the matrix is rectangular. Worse, it is difficult to even compute these integrals because if we loop over the cells of the mesh at time step \(n\), then we need to evaluate \(\varphi_j^{n-1}\) at the quadrature points of these cells, but they do not necessarily correspond to the cells of the mesh at time step \(n-1\) and \(\varphi_j^{n-1}\) is not defined via these cells; the same of course applies if we wanted to compute the integrals via integration on the cells of mesh \(n-1\).
In any case, what we have to face is a situation where we need to integrate shape functions defined on two different meshes. This can be done, and is in fact demonstrated in step-28, but the process is at best described by the word "awkward".
In practice, one does not typically want to do this. Rather, we avoid the whole situation by interpolating the solution from the old to the new mesh every time we adapt the mesh. In other words, rather than solving the equations above, we instead solve the problem
\begin{align*} \sum_j (M + k_n \theta A)_{ij} U^n_j &= (\varphi_i, I_h^n u_h^{n-1}) - k_n (1-\theta)(\nabla \varphi_i, \nabla I_h^n u_h^{n-1}) + k_n \left[ (1-\theta)F^{n-1} + \theta F^n \right], \end{align*}
where \(I_h^n\) is the interpolation operator onto the finite element space used in time step \(n\). This is not the optimal approach since it introduces an additional error besides those of the time and space discretizations, but it is a pragmatic one that makes it feasible to adapt the mesh from one time step to the next.
There are a number of things one can typically get wrong when implementing a finite element code. In particular, for time dependent problems, the following are common sources of bugs:
- getting the time integration scheme wrong, for example through incorrect factors of \(\theta\) or \(k_n\) in front of the various terms;
- getting the handling of the right hand side wrong;
- getting the handling of the boundary values wrong.
A less common problem is getting the initial conditions wrong because one can typically see that it is wrong by just outputting the first time step. In any case, in order to verify the correctness of the code, it is helpful to have a testing protocol that allows us to verify each of these components separately. This means:
- first testing the code with nonzero initial conditions but zero right hand side and boundary values, to verify the time integration;
- then testing with zero initial conditions and boundary values but a nonzero right hand side, to verify the right hand side handling;
- and finally testing with zero initial conditions and right hand side but nonzero boundary values, to verify the boundary value handling.
This sounds complicated, but fortunately, for linear partial differential equations without coefficients (or constant coefficients) like the one here, there is a fairly standard protocol that rests on the following observation: if you choose as your domain a square \([0,1]^2\) (or, with slight modifications, a rectangle), then the exact solution can be written as
\begin{align*} u(x,y,t) = a(t) \sin(n_x \pi x) \sin(n_y \pi y) \end{align*}
(with integer constants \(n_x,n_y\)) if only the initial condition, right hand side and boundary values are all of the form \(\sin(n_x \pi x) \sin(n_y \pi y)\) as well. This is due to the fact that the function \(\sin(n_x \pi x) \sin(n_y \pi y)\) is an eigenfunction of the Laplace operator and allows us to compute things like the time factor \(a(t)\) analytically and, consequently, compare with what we get numerically.
As an example, let us consider the situation where we have \(u_0(x,y)=\sin(n_x \pi x) \sin(n_y \pi y)\) and \(f(x,y,t)=0\). With the ansatz for \(u(x,y,t)\) above, we get that
\begin{align*} \left(\frac{\partial}{\partial t} -\Delta\right) u(x,y,t) &= \left(\frac{\partial}{\partial t} -\Delta\right) a(t) \sin(n_x \pi x) \sin(n_y \pi y) \\ &= \left(a'(t) + (n_x^2+n_y^2)\pi^2 a(t) \right) \sin(n_x \pi x) \sin(n_y \pi y). \end{align*}
For this to be equal to \(f(x,y,t)=0\), we need that
\begin{align*} a'(t) + (n_x^2+n_y^2)\pi^2 a(t) = 0 \end{align*}
and due to the initial conditions, \(a(0)=1\). This differential equation can be integrated to yield
\begin{align*} a(t) = e^{-(n_x^2+n_y^2)\pi^2 t}. \end{align*}
In other words, if the initial condition is a product of sines, then the solution retains exactly this shape of a product of sines, decaying to zero with a known time dependence. This is something that is easy to test if you have a sufficiently fine mesh and sufficiently small time step.
What is typically going to happen if you get the time integration scheme wrong (e.g., by having the wrong factors of \(\theta\) or \(k\) in front of the various terms) is that you don't get the right temporal behavior of the solution. Double check the various factors until you get the right behavior. You may also want to verify that the temporal decay rate (as determined, for example, by plotting the value of the solution at a fixed point) does not double or halve each time you double or halve the time step or mesh size. You know that it's not the handling of the boundary conditions or right hand side because these were both zero.
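One simple way to track this temporal behavior is to record the value of the solution at a fixed point after every time step. A sketch for the 2d case, where the evaluation point and the output format are illustrative choices rather than part of the program:

```cpp
// Evaluate the scalar solution at a fixed interior point and print it
// together with the current time; plotting these pairs and fitting an
// exponential gives the observed decay rate of a(t).
const double u_at_point =
  VectorTools::point_value(dof_handler, solution, Point<dim>(0.25, 0.25));
std::cout << time << ' ' << u_at_point << std::endl;
```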
If you have so verified that the time integrator is correct, take the situation where the right hand side is nonzero but the initial conditions are zero: \(u_0(x,y)=0\) and \(f(x,y,t)=\sin(n_x \pi x) \sin(n_y \pi y)\). Again,
\begin{align*} \left(\frac{\partial}{\partial t} -\Delta\right) u(x,y,t) &= \left(\frac{\partial}{\partial t} -\Delta\right) a(t) \sin(n_x \pi x) \sin(n_y \pi y) \\ &= \left(a'(t) + (n_x^2+n_y^2)\pi^2 a(t) \right) \sin(n_x \pi x) \sin(n_y \pi y), \end{align*}
and for this to be equal to \(f(x,y,t)\), we need that
\begin{align*} a'(t) + (n_x^2+n_y^2)\pi^2 a(t) = 1 \end{align*}
and due to the initial conditions, \(a(0)=0\). Integrating this equation in time yields
\begin{align*} a(t) = \frac{1}{(n_x^2+n_y^2)\pi^2} \left[ 1 - e^{-(n_x^2+n_y^2)\pi^2 t} \right]. \end{align*}
Again, if you have the wrong factors of \(\theta\) or \(k\) in front of the right hand side terms you will either not get the right temporal behavior of the solution, or it will converge to a maximum value other than \(\frac{1}{(n_x^2+n_y^2)\pi^2}\).
Once we have verified that the time integration and right hand side handling are correct using this scheme, we can go on to verifying that we have the boundary values correct, using a very similar approach.
Solving the heat equation on a simple domain with a simple right hand side almost always leads to solutions that are exceedingly boring, since they become very smooth very quickly and then do not move very much any more. Rather, we here solve the equation on the L-shaped domain with zero Dirichlet boundary values and zero initial conditions, but as right hand side we choose
\begin{align*} f(\mathbf x, t) = \left\{ \begin{array}{ll} \chi_1(\mathbf x) & \text{if \(0\le t \le 0.2\tau\) or \(\tau\le t \le 1.2\tau\) or \(2\tau\le t \le 2.2\tau\), etc} \\ \chi_2(\mathbf x) & \text{if \(0.5\tau\le t \le 0.7\tau\) or \(1.5\tau\le t \le 1.7\tau\) or \(2.5\tau\le t \le 2.7\tau\), etc} \\ 0 & \text{otherwise} \end{array} \right. \end{align*}
Here,
\begin{align*} \chi_1(\mathbf x) &= \left\{ \begin{array}{ll} 1 & \text{if \(x>0.5\) and \(y>-0.5\)} \\ 0 & \text{otherwise} \end{array} \right. \\ \chi_2(\mathbf x) &= \left\{ \begin{array}{ll} 1 & \text{if \(x>-0.5\) and \(y>0.5\)} \\ 0 & \text{otherwise} \end{array} \right. \end{align*}
In other words, in every period of length \(\tau\), the right hand side first flashes on in domain 1, then off completely, then on in domain 2, then off completely again. This pattern is probably best observed via the little animation of the solution shown in the results section.
If you interpret the heat equation as finding the spatially and temporally variable temperature distribution of a conducting solid, then the test case above corresponds to an L-shaped body where we keep the boundary at zero temperature, and heat alternatingly in two parts of the domain. While heating is in effect, the temperature rises in these places, after which it diffuses and diminishes again. The point of this test case is that it provides us with a solution that has singularities both in time (when sources switch on and off) as well as in space (at the reentrant corner as well as at the edges and corners of the regions where the source acts).
The program starts with the usual include files, all of which you should have seen before by now:
Then the usual placing of all content of this program into a namespace and the importation of the deal.II namespace into the one we will work in:
The HeatEquation class

The next piece is the declaration of the main class of this program. It follows the well trodden path of previous examples. If you have looked at step-6, for example, the only thing worth noting here is that we need to build two matrices (the mass and Laplace matrix) and keep the current and previous time step's solution. We then also need to store the current time, the size of the time step, and the number of the current time step. The last of the member variables denotes the theta parameter discussed in the introduction that allows us to treat the explicit and implicit Euler methods as well as the Crank-Nicolson method and other generalizations all in one program.
As far as member functions are concerned, the only possible surprise is that the refine_mesh function takes arguments for the minimal and maximal mesh refinement level. The purpose of this is discussed in the introduction.
In the following classes and functions, we implement the various pieces of data that define this problem (right hand side and boundary values) and for which we need function objects. The right hand side is chosen as discussed at the end of the introduction. For boundary values, we choose zero values, but this is easily changed below.
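The right hand side is implemented as a class derived from Function<dim> whose value() function evaluates the formula from the introduction. A sketch along these lines, where period is a member variable holding \(\tau\):

```cpp
template <int dim>
double RightHandSide<dim>::value(const Point<dim> &p,
                                 const unsigned int /*component*/) const
{
  Assert(dim == 2, ExcNotImplemented());

  // Where within the current period of length tau are we?
  const double time = this->get_time();
  const double point_within_period =
    (time / period - std::floor(time / period));

  if ((point_within_period >= 0.0) && (point_within_period <= 0.2))
    return ((p[0] > 0.5) && (p[1] > -0.5)) ? 1. : 0.; // chi_1
  else if ((point_within_period >= 0.5) && (point_within_period <= 0.7))
    return ((p[0] > -0.5) && (p[1] > 0.5)) ? 1. : 0.; // chi_2
  else
    return 0.;
}
```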
The HeatEquation implementation

It is time now for the implementation of the main class. Let's start with the constructor, which selects a linear element, a time step constant at 1/500 (remember that one period of the source on the right hand side was set to 0.2 above, so we resolve each period with 100 time steps) and chooses the Crank-Nicolson method by setting \(\theta=1/2\).
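In sketch form, such a constructor looks as follows (with the member variables as declared above):

```cpp
template <int dim>
HeatEquation<dim>::HeatEquation()
  : fe(1)               // (bi)linear elements
  , dof_handler(triangulation)
  , time_step(1. / 500) // 100 time steps per source period of 0.2
  , theta(0.5)          // Crank-Nicolson
{}
```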
HeatEquation::setup_system
The next function is the one that sets up the DoFHandler object, computes the constraints, and sets the linear algebra objects to their correct sizes. We also compute the mass and Laplace matrix here by simply calling two functions in the library.
Note that we do not take the hanging node constraints into account when assembling the matrices (both functions have an AffineConstraints argument that defaults to an empty object). This is because we are going to condense the constraints in run() after combining the matrices for the current time-step.
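The two library calls in question are MatrixCreator::create_mass_matrix and MatrixCreator::create_laplace_matrix. In sketch form:

```cpp
// Build M and A once per mesh. We pass no AffineConstraints object,
// i.e., hanging node constraints are not eliminated here but
// condensed later in run().
MatrixCreator::create_mass_matrix(dof_handler,
                                  QGauss<dim>(fe.degree + 1),
                                  mass_matrix);
MatrixCreator::create_laplace_matrix(dof_handler,
                                     QGauss<dim>(fe.degree + 1),
                                     laplace_matrix);
```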
HeatEquation::solve_time_step
The next function is the one that solves the actual linear system for a single time step. There is nothing surprising here:
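Since the system matrix \(M + k_n\theta A\) is symmetric and positive definite, a preconditioned Conjugate Gradient solver is the natural choice. A sketch, in which the iteration limit and tolerance are illustrative:

```cpp
// Solve (M + k_n theta A) U^n = system_rhs with CG and SSOR.
SolverControl            solver_control(1000, 1e-8 * system_rhs.l2_norm());
SolverCG<Vector<double>> cg(solver_control);

PreconditionSSOR<SparseMatrix<double>> preconditioner;
preconditioner.initialize(system_matrix, 1.0);

cg.solve(system_matrix, solution, system_rhs, preconditioner);

// Make hanging node degrees of freedom consistent again:
constraints.distribute(solution);
```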
HeatEquation::output_results
Neither is there anything new in generating graphical output other than the fact that we tell the DataOut object what the current time and time step number is, so that this can be written into the output file:
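Attaching the time and step number happens through the DataOutBase::VtkFlags structure; the rest is the usual output sequence. A sketch:

```cpp
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "U");
data_out.build_patches();

// Record the current time and time step number in the VTK file:
data_out.set_flags(DataOutBase::VtkFlags(time, timestep_number));

const std::string filename =
  "solution-" + Utilities::int_to_string(timestep_number, 3) + ".vtk";
std::ofstream output(filename);
data_out.write_vtk(output);
```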
HeatEquation::refine_mesh
This function is the interesting part of the program. It takes care of the adaptive mesh refinement. The three tasks this function performs are to first find out which cells to refine/coarsen, then to actually do the refinement, and eventually to transfer the solution vectors between the two different grids. The first task is simply achieved by using the well-established Kelly error estimator on the solution. The second task is to actually do the remeshing. That involves only basic functions as well, such as refine_and_coarsen_fixed_fraction that refines those cells with the largest estimated error that together make up 60 per cent of the error, and coarsens those cells with the smallest error that together make up 40 per cent of the error. Note that for problems such as the current one where the areas where something is going on are shifting around, we want to aggressively coarsen so that we can move cells around to where it is necessary.
As already discussed in the introduction, too small a mesh leads to too small a time step, whereas too large a mesh leads to too little resolution. Consequently, after the first two steps, we have two loops that limit refinement and coarsening to an allowable range of cells:
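In sketch form, the two loops simply clear refinement flags on cells that already sit on the maximal level, and coarsening flags on cells on the minimal level:

```cpp
// Do not refine beyond max_grid_level ...
if (triangulation.n_levels() > max_grid_level)
  for (const auto &cell :
       triangulation.active_cell_iterators_on_level(max_grid_level))
    cell->clear_refine_flag();

// ... and do not coarsen below min_grid_level:
for (const auto &cell :
     triangulation.active_cell_iterators_on_level(min_grid_level))
  cell->clear_coarsen_flag();
```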
These two loops above are slightly different but this is easily explained. In the first loop, instead of calling triangulation.end() we may as well have called triangulation.end_active(max_grid_level). The two calls should yield the same iterator since iterators are sorted by level and there should not be any cells on levels higher than max_grid_level. In fact, this very piece of code makes sure that this is the case.
As part of mesh refinement we need to transfer the solution vectors from the old mesh to the new one. To this end we use the SolutionTransfer class and we have to prepare the solution vectors that should be transferred to the new grid (we will lose the old grid once we have done the refinement so the transfer has to happen concurrently with refinement). At the point where we call this function, we will have just computed the solution, so we no longer need the old_solution variable (it will be overwritten by the solution just after the mesh may have been refined, i.e., at the end of the time step; see below). In other words, we only need the one solution vector, and we copy it to a temporary object where it is safe from being reset when we further down below call setup_system().
Consequently, we initialize a SolutionTransfer object by attaching it to the old DoF handler. We then prepare the triangulation and the data vector for refinement (in this order).
Now everything is ready, so do the refinement and recreate the DoF structure on the new grid, and finally initialize the matrix structures and the new vectors in the setup_system function. Next, we actually perform the interpolation of the solution from old to new grid. The final step is to apply the hanging node constraints to the solution vector, i.e., to make sure that the values of degrees of freedom located on hanging nodes are so that the solution is continuous. This is necessary since SolutionTransfer only operates on cells locally, without regard to the neighborhood.
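Put together, the sequence of operations looks roughly like this:

```cpp
SolutionTransfer<dim> solution_trans(dof_handler);

// Keep a safe copy of the solution: setup_system() will re-size
// 'solution' when the mesh changes.
Vector<double> previous_solution = solution;
triangulation.prepare_coarsening_and_refinement();
solution_trans.prepare_for_coarsening_and_refinement(previous_solution);

// Refine, rebuild DoFs/matrices/vectors, then interpolate:
triangulation.execute_coarsening_and_refinement();
setup_system();
solution_trans.interpolate(previous_solution, solution);

// Make the interpolated field continuous across hanging nodes:
constraints.distribute(solution);
```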
HeatEquation::run
This is the main driver of the program, where we loop over all time steps. At the top of the function, we set the number of initial global mesh refinements and the number of initial cycles of adaptive mesh refinement by repeating the first time step a few times. Then we create a mesh, initialize the various objects we will work with, set a label for where we should start when re-running the first time step, and interpolate the initial solution onto our mesh (we choose the zero function here, which of course we could do in a simpler way by just setting the solution vector to zero). We also output the initial time step once.
Note the use of the goto statement in this piece of code! goto statements are not particularly well liked any more since Edsger Dijkstra, one of the greats of computer science, wrote a letter in 1968 called "Go To Statement Considered Harmful" (see here). The author of this code subscribes to this notion whole-heartedly: goto is hard to understand. In fact, deal.II contains virtually no occurrences: excluding code that was essentially transcribed from books and not counting duplicated code pieces, there are 3 locations in about 600,000 lines of code at the time this note is written; we also use it in 4 tutorial programs, in exactly the same context as here. Instead of trying to justify the occurrence here, let's first look at the code and we will come back to the issue at the end of the function.

Then we start the main loop until the computed time exceeds our end time of 0.5. The first task is to build the right hand side of the linear system we need to solve in each time step. Recall that it contains the term \(MU^{n-1}-(1-\theta)k_n AU^{n-1}\). We put these terms into the variable system_rhs, with the help of a temporary vector:
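In sketch form, with tmp as the temporary vector:

```cpp
// system_rhs = M U^{n-1} - (1-theta) k_n A U^{n-1}
mass_matrix.vmult(system_rhs, old_solution);

laplace_matrix.vmult(tmp, old_solution);
system_rhs.add(-(1 - theta) * time_step, tmp);
```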
The second piece is to compute the contributions of the source terms. This corresponds to the term \(k_n \left[ (1-\theta)F^{n-1} + \theta F^n \right]\). The following code calls VectorTools::create_right_hand_side to compute the vectors \(F\), where we set the time of the right hand side (source) function before we evaluate it. The result of this all ends up in the forcing_terms variable:
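A sketch of this step:

```cpp
RightHandSide<dim> rhs_function;

// theta * k_n * F^n ...
rhs_function.set_time(time);
VectorTools::create_right_hand_side(dof_handler,
                                    QGauss<dim>(fe.degree + 1),
                                    rhs_function,
                                    tmp);
forcing_terms = tmp;
forcing_terms *= time_step * theta;

// ... plus (1-theta) * k_n * F^{n-1}:
rhs_function.set_time(time - time_step);
VectorTools::create_right_hand_side(dof_handler,
                                    QGauss<dim>(fe.degree + 1),
                                    rhs_function,
                                    tmp);
forcing_terms.add(time_step * (1 - theta), tmp);
```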
Next, we add the forcing terms to the ones that come from the time stepping, and also build the matrix \(M+k_n\theta A\) that we have to invert in each time step. The final piece of these operations is to eliminate hanging node constrained degrees of freedom from the linear system:
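In sketch form:

```cpp
system_rhs += forcing_terms;

// system_matrix = M + k_n theta A
system_matrix.copy_from(mass_matrix);
system_matrix.add(theta * time_step, laplace_matrix);

// Eliminate hanging node constrained DoFs from matrix and right hand side:
constraints.condense(system_matrix, system_rhs);
```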
There is one more operation we need to do before we can solve it: boundary values. To this end, we create a boundary value object, set the proper time to the one of the current time step, and evaluate it as we have done many times before. The result is used to also set the correct boundary values in the linear system:
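A sketch of this step, assuming the BoundaryValues class of this program:

```cpp
BoundaryValues<dim> boundary_values_function;
boundary_values_function.set_time(time);

std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
                                         0, // boundary indicator
                                         boundary_values_function,
                                         boundary_values);

// Modify matrix and right hand side so that the solution satisfies
// the boundary values exactly:
MatrixTools::apply_boundary_values(boundary_values,
                                   system_matrix,
                                   solution,
                                   system_rhs);
```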
With this out of the way, all we have to do is solve the system, generate graphical data, and...
...take care of mesh refinement. Here, what we want to do is (i) refine the requested number of times at the very beginning of the solution procedure, after which we jump to the top to restart the time iteration, and (ii) refine every fifth time step after that.
The time loop and, indeed, the main part of the program ends with starting into the next time step by setting old_solution to the solution we have just computed.
Now that you have seen what the function does, let us come back to the issue of the goto. In essence, what the code does is something like this:
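In C++-flavored pseudocode:

```
void run()
{
  initialize;
start_time_iteration:
  for (timestep = 1, 2, ...)
    {
      solve timestep;
      if ((timestep == 1) && (not happy with the result))
        {
          adjust some data structures;
          goto start_time_iteration; // simply try again
        }
      postprocess;
    }
}
```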
Here, the condition "happy with the result" is whether we'd like to keep the current mesh or would rather refine the mesh and start over on the new mesh. We could of course replace the use of the goto by the following:
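Again in pseudocode:

```
void run()
{
  initialize;

  // the first time step, possibly repeated on ever finer meshes:
  while (true)
    {
      solve timestep;
      if (happy with the result)
        break;
      adjust some data structures;
    }
  postprocess;

  // all subsequent time steps:
  for (timestep = 2, 3, ...)
    {
      solve timestep;
      postprocess;
    }
}
```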
This has the advantage of getting rid of the goto but the disadvantage of having to duplicate the code that implements the "solve timestep" and "postprocess" operations in two different places. This could be countered by putting these parts of the code (sizable chunks in the actual implementation above) into their own functions, but a while(true) loop with a break statement is not really all that much easier to read or understand than a goto.
In the end, one might simply agree that in general goto statements are a bad idea but be pragmatic and state that there may be occasions where they can help avoid code duplication and awkward control flow. This may be one of these places, and it matches the position Steve McConnell takes in his excellent book about good programming practices, "Code Complete" [87] (see the mention of this book in the introduction of step-1), which spends a surprising ten pages on the question of goto in general.
The main function

Having made it this far, there is, again, nothing much to discuss for the main function of this program: it looks like all such functions since step-6.
As in many of the tutorials, the actual output of the program matters less than how we arrived there. Nonetheless, here it is:
Maybe of more interest is a visualization of the solution and the mesh on which it was computed:
The movie shows how the two sources switch on and off and how the mesh reacts to this. It is quite obvious that the current mesh is probably not the best we could come up with. We'll get back to this in the next section.
There are at least two areas where one can improve this program significantly: adaptive time stepping and a better choice of the mesh.
Having chosen an implicit time stepping scheme, we are not bound by any CFL-like condition on the time step. Furthermore, because the time scales on which change happens on a given cell in the heat equation are not bound to the cell's diameter (unlike the case with the wave equation, where we had a fixed speed of information transport that couples the temporal and spatial scales), we can choose the time step as we please. Or, better, choose it as we deem necessary for accuracy.
Looking at the solution, it is clear that the action does not happen uniformly over time: a lot is changing around the time we switch on a source, things become less dramatic once a source is on for a little while, and we enter a long phase of decline when both sources are off. During these times, we could surely get away with a larger time step than before without sacrificing too much accuracy.
The literature has many suggestions on how to choose the time step size adaptively. Much can be learned, for example, from the way ODE solvers choose their time steps. One can also be inspired by a posteriori error estimators that can, ideally, be written in a way that they consist of a temporal and a spatial contribution to the overall error. If the temporal one is too large, we should choose a smaller time step. Ideas in this direction can be found, for example, in the PhD thesis of a former principal developer of deal.II, Ralf Hartmann, published by the University of Heidelberg, Germany, in 2002.
We here use one of the simpler time stepping methods, namely the second order in time Crank-Nicolson method. However, more accurate methods such as Runge-Kutta methods are available and should be used as they do not represent much additional effort. It is not difficult to implement this for the current program, but a more systematic treatment is also given in step-52.
If you look at the meshes in the movie above, it is clear that they are not particularly well suited to the task at hand. In fact, they look rather random.
There are two factors at play. First, there are some islands where cells have been refined but that are surrounded by non-refined cells (and there are probably also a few occasional coarsened islands). These are not terrible, as they most of the time do not affect the approximation quality of the mesh, but they also don't help because so many of their additional degrees of freedom are in fact constrained by hanging node constraints. That said, this is easy to fix: the Triangulation class takes an argument to its constructor indicating a level of "mesh smoothing". Passing one of many possible flags, this instructs the triangulation to refine some additional cells, or not to refine some cells, so that the resulting mesh does not have these artifacts.
The second problem is more severe: the mesh appears to lag the solution. The underlying reason is that we only adapt the mesh once every fifth time step, and only allow for a single refinement in these cases. Whenever a source switches on, the solution had been very smooth in this area before and the mesh was consequently rather coarse. This implies that the next time step when we refine the mesh, we will get one refinement level more in this area, and five time steps later another level, etc. But this is not enough: first, we should refine immediately when a source switches on (after all, in the current context we at least know what the right hand side is), and we should allow for more than one refinement level. Of course, all of this can be done using deal.II, it just requires a bit of algorithmic thinking in how to make this work!
To increase the accuracy and resolution of your simulation in time, one typically decreases the time step size \(k_n\). If you start playing around with the time step in this particular example, you will notice that the solution becomes partly negative if \(k_n\) is below a certain threshold. This is not what we would expect to happen (in nature).
To get an idea of this behavior mathematically, let us consider a general, fully discrete problem:
\begin{align*} A u^{n} = B u^{n-1}. \end{align*}
The general form of the \(i\)th equation then reads:
\begin{align*} a_{ii} u^{n}_i &= b_{ii} u^{n-1}_i + \sum\limits_{j \in S_i} \left( b_{ij} u^{n-1}_j - a_{ij} u^{n}_j \right), \end{align*}
where \(S_i\) is the set of degrees of freedom that DoF \(i\) couples with (i.e., for which either the matrix \(A\) or matrix \(B\) has a nonzero entry at position \((i,j)\)). If all coefficients fulfill the following conditions:
\begin{align*} a_{ii} &> 0, & b_{ii} &\geq 0, & a_{ij} &\leq 0, & b_{ij} &\geq 0, & \forall j &\in S_i, \end{align*}
all solutions \(u^{n}\) keep their sign from the previous ones \(u^{n-1}\), and consequently from the initial values \(u^0\). See e.g. Kuzmin, Hämäläinen for more information on positivity preservation.
Depending on the PDE to solve and the time integration scheme used, one is able to deduce conditions for the time step \(k_n\). For the heat equation with the Crank-Nicolson scheme, Schatz et al. have translated it into the following conditions:
\begin{align*} (1 - \theta) k a_{ii} &\leq m_{ii},\qquad \forall i, & \theta k \left| a_{ij} \right| &\geq m_{ij},\qquad j \neq i, \end{align*}
where \(M = (m_{ij})\) denotes the mass matrix and \(A = (a_{ij})\) the stiffness matrix, with \(a_{ij} \leq 0\) for \(j \neq i\). With \(a_{ij} \leq 0\), we can formulate bounds for the global time step \(k\) as follows:
\begin{align*} k_{\text{max}} &= \frac{ 1 }{ 1 - \theta } \min_i \left( \frac{ m_{ii} }{ a_{ii} } \right), & k_{\text{min}} &= \frac{ 1 }{ \theta } \max_{j \neq i} \left( \frac{ m_{ij} }{ \left|a_{ij}\right| } \right). \end{align*}
In other words, the time step is constrained by both a lower and upper bound in case of a Crank-Nicolson scheme. These bounds should be considered along with the CFL condition to ensure significance of the performed simulations.
Being unable to make the time step as small as we want to get more accuracy without losing the positivity property is annoying. It raises the question of whether we can at least compute the minimal time step we can choose to ensure positivity preservation in this particular tutorial. Indeed, we can use the SparseMatrix objects for both mass and stiffness matrix that are created via the MatrixCreator functions. Iterating through each entry via SparseMatrixIterators lets us check for diagonal and off-diagonal entries to set a proper time step dynamically. For quadratic matrices, the diagonal element is stored as the first member of a row (see the SparseMatrix documentation). An exemplary code snippet on how to grab the entries of interest from the mass_matrix is shown below.
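A sketch of this iteration; the bookkeeping that combines these entries with those of the stiffness matrix to evaluate the bounds above is left out:

```cpp
// Walk all rows of the mass matrix; within each row, the diagonal
// entry m_ii is stored first, followed by the off-diagonal entries m_ij.
for (unsigned int row = 0; row < mass_matrix.m(); ++row)
  {
    SparseMatrix<double>::const_iterator entry = mass_matrix.begin(row);

    const double diagonal_mass_entry = entry->value(); // m_ii

    for (++entry; entry != mass_matrix.end(row); ++entry)
      {
        const double off_diagonal_mass_entry = entry->value(); // m_ij, j != i
        // ... combine with the corresponding stiffness matrix entries
        //     a_ii and a_ij to evaluate k_max and k_min from above ...
      }
  }
```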
Using the information so computed, we can bound the time step via the formulas above.