This tutorial depends on step-4, step-8.
This program was contributed by Sam Scheuerman and Wolfgang Bangerth.
In all previous tutorial programs, we have only ever dealt with discretizations of partial differential equations posed on a Triangulation where every degree of freedom was associated with a local geometric element (vertices, lines, quadrangles, etc.) which is clearly part of one cell or another (or, in the worst case, some small number of adjacent cells). This was true whether we were solving a single differential equation (say, in step-4 or step-6), or a system of coupled equations (step-8, step-22, and many others). In other words, the unknowns we were considering were always functions of space (and sometimes time) that we discretized on a Triangulation.
There are also situations where we have unknowns in the equations we consider that are not functions but are scalars instead. The example we will consider here to illustrate this kind of situation is an optimization problem: Let's assume we have a body to which we have attached four heaters. How should we set the power levels \(C_1,\ldots, C_4\) of these four heaters so that the (steady state) temperature \(u(\mathbf x)\) of the body is as close as possible to a desired state? The issue here is that we have one unknown that is a function (namely, the temperature \(u(\mathbf x)\)) for which we can apply the usual finite element approximation, but we also have four scalars \(C_i\) that are not functions and that, after discretization of the whole problem, are not tied to one of the cells of the mesh. As a consequence, we call these unknowns non-local degrees of freedom to represent that they are not tied to a specific locality: They represent something that does not depend on the spatial variable \(\mathbf x\) at all, and is not the result of a discretization process.
Before going into how we solve this issue, let us first state the concrete set of equations.
We want to find a solution \(u\) to the Poisson equation
\begin{align*} -\Delta u =& f \text{ in }\Omega\\ u =&0 \text{ on } \partial\Omega, \end{align*}
which is as close as possible to a desired solution \(\overline{u}\). We can formulate this as an optimization problem
\begin{align*} \min_{u, f} & \frac 12 ||u-\overline{u}||^2\\ \text{s.t.} & -\Delta u = f. \end{align*}
Instead of allowing any source function \(f\), we will assume that the heat source is described by four heating pads \(f_k(\mathbf x)\) that are characteristic functions of a subset of the domain, times a heater setting \(C^k\) (perhaps the power level you set each heating pad to). In other words, we have that \(f(\mathbf x) = \sum_{k=0}^m C^k f_k(\mathbf x)\). The optimization problem then reads as follows:
\begin{align*} \min_{u, C^k} & \frac 12 ||u-\overline{u}||^2\\ \text{s.t.} & -\Delta u = \sum_k C^k f_k. \end{align*}
To write this optimization problem in a more compact form, we introduce a Lagrangian
\begin{align*} L = \frac{1}{2}||u - \overline{u}||^2 - \left(\lambda, \left(-\Delta u - \sum_k C^k f_k\right)\right), \end{align*}
where \(\lambda = \lambda(\mathbf{x})\) is a Lagrange multiplier and \((\cdot, \cdot)\) denotes the usual \(L^2\) scalar product (note that \(\lambda\) is a function that lives in the space of test functions, not just a scalar). We do this because finding a critical point of \(L\) is equivalent to solving the original optimization problem, and we can find a critical point of \(L\) by finding roots of its derivatives. At this point, we have a choice: we can either derive the optimality conditions first, then discretize the problem (in the literature this is called the optimize then discretize approach), or we can first discretize \(L\) and then determine the optimality conditions for the discrete system (discretize then optimize). We will use the second approach here, because the formulation is more straightforward. The discretized Lagrangian is
\begin{align*} L =& \frac{1}{2}||u_h - \overline{u}||^2 - (\lambda_h, (-\Delta u_h - f))\\ =& \frac{1}{2}\left[\sum_{i=0}^n\sum_{j=0}^n U^iU^j\int_{\Omega}\varphi_i\varphi_j - 2\sum_{i=0}^n U^i\int_\Omega \varphi_i \overline{u} + \int_\Omega\overline{u}^2\right] - \sum_{i=0}^n\sum_{j=0}^n\left[\Lambda^iU^j\int_{\Omega}\nabla\varphi_i\cdot\nabla\varphi_j\right] + \sum_{i=0}^n \sum_{k=0}^m \left[\Lambda^i C^k \int_{\Omega}\varphi_i f_k\right]. \end{align*}
where
\begin{align*} u_h(x) =& \sum_{i=0}^n U^i\varphi_i(x),\\ \lambda_h(x) =& \sum_{j=0}^n \Lambda^j\varphi_j(x),\text{ and}\\ f(x) =& \sum_{k=0}^m C^k f_k(x). \end{align*}
Note the application of integration by parts above, which makes use of the Dirichlet boundary conditions to eliminate the boundary integral. In practice, we assume that each \(f_k\) is something simple. In this example step, each \(f_k\) is a characteristic function as mentioned, and \(m=4\). These functions, plotted together, look like this:
(Figure: the four characteristic functions \(f_k\), plotted together.)
Note: At this point, we can clearly see where the nonlocal degrees of freedom are coming from. The term \(\int_\Omega \varphi_i f_k\) is an integral which cannot a priori be confined to a single cell or small collection of cells, but rather must be evaluated on all cells that lie in the domain of \(f\).
To find a critical point, we take a partial derivative with respect to each \(U^i\), \(\Lambda^j\), and \(C^k\). The derivatives have the following form:
\begin{align*} \frac{\partial L}{\partial U^i} =& \sum_{j=0}^nU^j\int_{\Omega}\varphi_i\varphi_j - \int_\Omega \varphi_i\overline{u} - \sum_{j=0}^n\left[\Lambda^j\int_{\Omega}\nabla\varphi_i\cdot\nabla\varphi_j\right]\\ \frac{\partial L}{\partial \Lambda^j} =& -\sum_{i=0}^n\left[U^i\int_{\Omega}\nabla\varphi_i\cdot\nabla\varphi_j\right] + \sum_{k=0}^m C^k\int_{\Omega}\varphi_j f_k\\ \frac{\partial L}{\partial C^k} =& \sum_{j=0}^n \Lambda^j\int_{\Omega}\varphi_j f_k. \end{align*}
Note the paired indices on the left- and right-hand sides. As with many other minimization problems, we can observe that for a critical point to occur, these derivatives must all be 0. This gives us \(n + n + m\) linear equations to solve for the coefficient vectors \(U\), \(\Lambda\), and \(C\):
\begin{align*} \frac{\partial L}{\partial U^i} =& \sum_{j=0}^nU^j\int_{\Omega}\varphi_i\varphi_j - \int_\Omega\varphi_i\overline{u} - \sum_{j=0}^n\left[\Lambda^j\int_{\Omega}\nabla\varphi_i\cdot\nabla\varphi_j\right] = 0\\ \frac{\partial L}{\partial \Lambda^j} =& -\sum_{i=0}^n\left[U^i\int_{\Omega}\nabla\varphi_i\cdot\nabla\varphi_j\right] + \sum_{k=0}^m C^k\int_{\Omega}\varphi_j f_k = 0\\ \frac{\partial L}{\partial C^k} =& \sum_{j=0}^n \Lambda^j\int_{\Omega}\varphi_j f_k = 0.\\ \end{align*}
The summations are suggestive of matrix-vector multiplication, and in fact we can write the equations in such a form. Take \((\mathcal{M}_{i, j}) = \int_\Omega\varphi_i\varphi_j\), \((\mathcal{N}_{i, j}) = \int_\Omega \nabla\varphi_i\cdot\nabla\varphi_j\), \((\overline{\mathcal{U}}_i) = \int_\Omega\varphi_i\overline{u}\), and \((\mathcal{F}_{j, k}) = \int_\Omega\varphi_j f_k\), and note that \(\mathcal{F}\) is an \(n\times m\) matrix. Then we get the matrix system
\begin{align*} \mathcal{M}U - \overline{\mathcal{U}} - \mathcal{N}^T\Lambda =& 0\\ -\mathcal{N}U + \mathcal{F}C =& 0\\ \mathcal{F}^T\Lambda =& 0. \end{align*}
Moving known quantities to the right-hand-side and combining the matrix equations gives us an idealized system to solve:
\begin{align*} \left(\begin{array}{c c c} \mathcal{M} & -\mathcal{N}^T & 0\\ -\mathcal{N} & 0 & \mathcal{F}\\ 0 & \mathcal{F}^T & 0 \end{array}\right) \left(\begin{array}{c} U\\ \Lambda \\ C \end{array}\right) = \left(\begin{array}{c} \overline{\mathcal{U}}\\0\\0 \end{array}\right). \end{align*}
Unfortunately, setting up this type of block matrix, where some of the dofs couple in a nonlocal way, does not work well with deal.II's internal structure. So, we will need to modify how this problem is constructed to work well with deal.II, which is discussed below.
The straightforward way to deal with the equations above is to define an FESystem object with two components, one for the temperature \(u(\mathbf x)\) and one for the Lagrange multiplier \(\lambda(\mathbf x)\). This would work in much the same way as you define system elements in all other vector valued problems. You will then end up with \(N_U\) variables for the finite element approximation of the temperature, and \(N_\Lambda\) variables for the finite element approximation of the Lagrange multiplier, and all of these \(N_U+N_\Lambda\) unknowns live on the triangulation in the form of a DoFHandler object. While in principle \(N_U\) and \(N_\Lambda\) can be different (if you chose different element types, or different polynomial degrees to discretize \(u\) and \(\lambda\)), in practice it is best to keep them the same, so we have that \(N_U = N_\Lambda = n\).
Now the problem is that what you really want is a linear system of \(N=n + n + m\) unknowns that includes the \(m = 4\) degrees of freedom for the source strengths we are trying to optimize for: We need a matrix of size \(N\times N\), and solution and right hand side vectors of size \(N\). Moreover, if you try to build a block linear system because that's how your linear solver is structured, you probably want a \(3\times 3\) block structure for the matrix, and three blocks for the vectors, even though the finite element and the DoFHandler only know of two solution components.
Conceptually, of course, there is nothing wrong with creating matrices and vectors that have sizes that don't match the number of unknowns in a DoFHandler. You just need a way to translate between the index of a degree of freedom in a DoFHandler and the corresponding row and column in a linear system. Here, perhaps one would make the choice that the mesh-based degrees of freedom (which the DoFHandler numbers from zero to \(n+n-1\)) come first, and the four \(C_k\) additional scalar unknowns come last. However, the fact that we need to spell out such a convention should give you a hint that something with this kind of design is wrong. Most codes which would use a convention like this will, almost certainly, leave this convention implicit (something for the programmer to know about) rather than enforcing it explicitly (say, in the form of a function mesh_dof_to_linear_system_index()). It is then very easy for a number of bugs to emerge. For example, we cannot renumber all degrees of freedom with the functions in DoFRenumbering in arbitrary ways if we want the \(m\) additional variables always at the end.
Precisely because it is so easy to make mistakes and because it is so restricting to have implicit conventions like "Yes, your matrix can be larger than the number of DoFs in the DoFHandler; but it can't be smaller", deal.II a long time ago made the decision that a lot of functions test that the sizes of vectors and matrices are equal to the number of unknowns in a DoFHandler. For example, DoFTools::make_sparsity_pattern() takes a DoFHandler and expects that the sparsity pattern has as many rows and columns as there are DoFs in the DoFHandler. So does DoFTools::make_hanging_node_constraints(). VectorTools::interpolate_boundary_values() uses the same convention. Many, many other functions do too, and they all in fact check that this is true and will terminate the program if it is not. This is because one can find many, many bugs in codes this way, and it also allows for an assumption that is so obvious that we have never stated it so far: The degree of freedom with index \(i\) corresponds to row \(i\) of the linear system and column \(i\) of the matrix. If we allowed sizes to differ, we would need to have a way to make this connection. Further, that way would need to be explicit (by way of a function that translates from DoF index to row and column, as already mentioned above) or there would be no end to the possibility of introducing bugs.
In other words, the fact that we want to have \(2n\) degrees of freedom in the DoFHandler, but matrices and vectors with \(2n + m\) rows and columns does not work with deal.II's underlying design decisions. Creating matrices and vectors that happen to be larger is going to trigger lots of assertions, and for good reasons.
So then how are we going to address this? At its core, what needs to happen is that the DoFHandler knows about the additional degrees of freedom. The question is how we are going to teach it about these "nonlocal" degrees of freedom – they are, after all, not ones that are associated with vertices, edges, faces, or cells of the mesh as the usual finite element degrees of freedom are. The approach we are going to use is not what one would generally call "obvious" or "natural", but it does fit into the infrastructure deal.II provides (even though this infrastructure was originally built for other purposes). In fact, at the time this program is written in 2024, there is substantial debate on whether we can implement better ways to achieve the same result that do not require this kind of "abuse" of features intended for other purposes; if consensus is ever reached on these alternative approaches, this program may also be converted.
Whatever the future, let us turn to how our goal can be achieved today. The key to the following is that deal.II (i) allows using different finite elements on different cells, originally intended to support \(hp\) finite elements; and (ii) has elements that have no degrees of freedom. We have used these features in other tutorial programs: step-27 and step-75 make use of feature (i) to implement the \(hp\) finite element method, and step-46 makes use of feature (ii) for multiphysics problems where some variables live only on some cells.
We will (ab)use the general approach used in step-46 for our purposes here. Specifically, on the first four cells of the mesh, we will define a finite element that has three components: a temperature component and a Lagrange multiplier component (both of which are the usual FE_Q element), plus a third vector component that is piecewise constant (FE_DGQ(0)). On all other cells, the finite element used has the same temperature and Lagrange multiplier components, but the third component is of type FE_Nothing and so has no degrees of freedom at all. Note that with this setup, the first four cells will each have one additional degree of freedom. The end result of this approach is that nearly every function you have ever used will work as one would expect.
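To make this concrete, here is a sketch of what such an element collection could look like in code; the name fe_collection and the polynomial degree 2 anticipate the constructor discussed below.

```cpp
// A sketch (names assumed) of the two-element collection described above.
// Index 0: used on ordinary cells; the third component is FE_Nothing and
//          contributes no degrees of freedom.
// Index 1: used on the "special" cells; the third component is FE_DGQ(0)
//          and contributes exactly one degree of freedom per cell.
hp::FECollection<dim> fe_collection;
fe_collection.push_back(FESystem<dim>(FE_Q<dim>(2), 1,        // u
                                      FE_Q<dim>(2), 1,        // lambda
                                      FE_Nothing<dim>(), 1)); // no dofs
fe_collection.push_back(FESystem<dim>(FE_Q<dim>(2), 1,        // u
                                      FE_Q<dim>(2), 1,        // lambda
                                      FE_DGQ<dim>(0), 1));    // one C^k anchor
```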
What we have with this setup is a good way to not break things in deal.II. However, it is not so natural that we can use the full extent of features in deal.II without issue. In particular, to the DoFHandler the nonlocal degrees of freedom only live on one cell each. On the other hand, from the perspective of the PDE, we need these nonlocal DoFs on all cells that intersect the areas in which the heating is applied. So, we cannot use cell->get_dof_indices() to find DoF indices for the third vector component on most cells, because the DoF handler does not see that these dofs reach beyond the cell they are defined on. As a consequence, we will have to do some extra work when building sparsity patterns, and later when building the system matrix/system right hand side.
There are four places in the standard finite element code where we will need to take special care: assigning FE_DGQ elements, constructing the sparsity pattern, constructing the system matrix/system right hand side, and outputting results.
- Assigning FE_DGQ elements: This happens in setup_system(), before dofs are distributed. Since we are using FE_DGQ elements, we don't need to worry about boundary conditions affecting the nonlocal dofs. We simply enumerate the cells until we have collected enough dofs, which in this program depends on the dimension of the problem. Each time we encounter a new cell, we change the hp finite element from an FESystem with a third FE_Nothing component to an FESystem with a third FE_DGQ component. Calling DoFHandler::distribute_dofs() after this is all we need to do to have the system set up correctly. Next, we need to grab the indices of the FE_DGQ dofs. We do this by calling DoFTools::extract_dofs(), using a component mask to pick just the third component of the FESystem. We store these indices in a standard vector, and use them for the three parts below.
- Constructing the sparsity pattern: For every cell, we add entries to the sparsity pattern at the positions (dof_index, nonlocal_dof_index) and (nonlocal_dof_index, dof_index). Note that we need to add two entries because it doesn't matter whether the nonlocal dof is the row or column index, the entry could be nonzero. In this example, we further refine this by only adding entries where the \(\Lambda\) component of the FESystem is nonzero, because we only deal with the nonlocal functions in the equations stemming from the \(\Lambda^j\) and \(C^k\) derivatives.
- Constructing the system matrix and right hand side: The terms coupling the nonlocal dofs with the \(\lambda\) shape functions are added to the global matrix by hand in assemble_system(), at the same index pairs used for the sparsity pattern.
- Outputting results: Because the nonlocal dofs are coefficients of heat source functions rather than values of a spatial field, visualizing them requires interpolating the heat sources onto the mesh in output_results().

To see how we actually do these things, let's look at the code.
We start by defining a function class for \(\bar u\); it represents the heat profile we want to match.
This class has two member variables:
- center: A Point object representing the center of the function.
- radius: A double representing the radius of the circular step, or the standard deviation of a Gaussian.

Next, we define an overloaded constructor for TargetFunction, so we can choose to set the center and radius.
Parameters:
- center: A constant Point pointer used to set the member variable center.
- radius: A constant double used to set the member variable radius.

The value() function returns the value of the function at point p and component index component. In this case, if the component corresponds to the solution \(u\), then the function returns a value based on either a step function or a Gaussian. If the component corresponds to any of the other variables, the function always returns 0.
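As a rough sketch of the step variant (the exact cutoff logic is an assumption; only the component == 0 structure is fixed by the note below):

```cpp
// Hypothetical sketch of the step variant of TargetFunction::value():
// a circular step of height 1 around `center` for the u component,
// and 0 for the lambda and nonlocal components.
template <int dim>
double TargetFunction<dim>::value(const Point<dim>  &p,
                                  const unsigned int component) const
{
  if (component == 0)
    return (p.distance(center) < radius) ? 1.0 : 0.0;
  else
    return 0.0;
}
```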
Note: In the documentation we discuss using a Gaussian target function, but this is not included in the code for readability. To run the code with a Gaussian target function, simply replace the code in the component == 0 braces by
return std::exp(-((p-center).norm()*(p-center).norm())/(radius*radius));
The next class we define is one for a circular indicator function. Unlike the target function, this does not need a component argument because we have to manually address where it gets used in the code. Objects of this function type correspond to the nonlocal dofs.
Parameters:
- center: A constant Point object giving the center of the indicator region.
- radius: The radius of the region.

The main class is very similar to step-4 in structure, given that this is a relatively simple program that does not use adaptive mesh refinement. However, there are three new member variables:
- nonlocal_dofs: A std::vector of dof indices that stores the dof index for the nonlocal dofs.
- heat_functions: A std::vector of CircularIndicatorFunction objects; these are the heat sources.
- target_function: This is the function we want to match. We store it as a class variable because it is used both in assemble_system() and output_results().

In this constructor, we set up several of the fundamental data structures this program needs. Specifically, we generate the finite element collection, which is basically a list of all the possible finite elements we could use on each cell. This collection has two elements: one FESystem that has two degree 2 FE_Q elements and one FE_Nothing element (to be used on all cells that are not "special"), and one FESystem that has two degree 2 FE_Q elements and one degree 0 FE_DGQ element (to be used on those "special" cells we use to anchor the non-local degrees of freedom, as discussed in the introduction).
Where we have a collection of elements, we then also need a collection of quadratures – which here has only a single element because we can use the same quadrature on all cells.
The default constructor also initializes dof_handler and target_function, and it generates a vector of CircularIndicatorFunction objects which represent the heat functions.
The quadrature collection is just one degree 3 QGauss element.
Here, we choose the center points for the heat functions to be the vertices of a lattice, in this case the corners of a hypercube centered at 0, with side length 1. The block of code below then creates the CircularIndicatorFunction objects for the main class. To make these, we check the dimension of the problem and create 2, 4, or 8 function objects.
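A sketch of how such a lattice of centers could be generated; the loop over hypercube corners and the radius value are illustrations, not necessarily the program's exact code:

```cpp
// Sketch: one heat source at each corner of the hypercube [-0.5,0.5]^dim,
// i.e., 2, 4, or 8 CircularIndicatorFunction objects in 1d, 2d, or 3d.
// The radius of 0.2 is a made-up value for illustration.
for (unsigned int corner = 0; corner < Utilities::pow(2u, dim); ++corner)
  {
    Point<dim> center;
    for (unsigned int d = 0; d < dim; ++d)
      center[d] = ((corner >> d) & 1) ? 0.5 : -0.5;

    heat_functions.emplace_back(center, /*radius=*/0.2);
  }
```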
The make_grid() function makes a hypercube grid, see step-4:
The setup_system() function is similar to step-4, except we have to add a few steps to prepare for the nonlocal dofs.
Here, we loop over the cells and set the FESystem index to 1, which corresponds to the system with 2 FE_Q elements and one FE_DGQ element. We do this until we have enough dofs for each heat function. Note that we use the global active cell index to measure when to stop the loop. This allows the loop to be run in parallel with no alteration. Then, we call DoFHandler::distribute_dofs() to actually enumerate all degrees of freedom.
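A sketch of this loop, under the assumption that one cell per heat function is enough; the names dof_handler, fe_collection, and heat_functions follow the rest of the text:

```cpp
// Give the first heat_functions.size() cells (in global active cell
// index order) the FESystem that carries the FE_DGQ(0) component...
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->global_active_cell_index() < heat_functions.size())
    cell->set_active_fe_index(1);

// ...and only then enumerate all degrees of freedom:
dof_handler.distribute_dofs(fe_collection);
```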
Once we've assigned dofs, the code block below counts the number of dofs in the system and outputs them to the console. In other contexts, we might want to use block matrices (see, for example, step-20 or step-22) to build more efficient linear solvers; here, we will just put everything into one big matrix, and so knowing the number of unknowns for each of the variables is purely for informational purposes.
Finally, we need to extract the indices of the finite elements which correspond to the non-local dofs.
First, we make a component mask, which is false except for the third component. This will extract only the dofs from the third component of the FE system. Next, we actually extract the dofs, and store them in an IndexSet variable. Finally, we add each extracted index to the member array nonlocal_dofs.
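In code, these three steps could look like the following sketch (component index 2 denotes the third component of the FESystem):

```cpp
// A mask that is false everywhere except for the third component:
const ComponentMask nonlocal_mask =
  fe_collection.component_mask(FEValuesExtractors::Scalar(2));

// Extract the matching dof indices and copy them into the member array:
const IndexSet nonlocal_dof_set =
  DoFTools::extract_dofs(dof_handler, nonlocal_mask);
for (const types::global_dof_index index : nonlocal_dof_set)
  nonlocal_dofs.push_back(index);
```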
The mesh we created above is not locally refined, and so there are no hanging node constraints to keep track of. But it does not hurt to just use the same setup we have used starting in step-6 of building a constraints object that contains hanging node constraints (anticipating that perhaps we'd want to do adaptive mesh refinement in a later step) into which we then also put the constraints for boundary values on both the \(u\) and \(\lambda\) variables.
Because the nonlocal degrees of freedom use discontinuous elements, they do not contribute to boundary values (discontinuous elements do not have degrees of freedom logically located on the boundary that could be interpolated), and so we do not need to exclude these solution components explicitly when calling VectorTools::interpolate_boundary_values().
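A minimal sketch of that call, assuming homogeneous Dirichlet values on boundary id 0 and the three-component FESystem described above:

```cpp
// Zero Dirichlet values for all three components on boundary id 0; the
// FE_DGQ/FE_Nothing component simply has no boundary dofs to constrain.
VectorTools::interpolate_boundary_values(dof_handler,
                                         /* boundary id = */ 0,
                                         Functions::ZeroFunction<dim>(3),
                                         constraints);
```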
The remainder of the function deals with building the sparsity pattern. It consists of two parts: The entries that result from the usual integration of products of shape functions, and then the entries that result from integrals that contain nonlocal degrees of freedom (which one can think of as associated with shape functions that are constant across the entire domain). The first part is easily built using standard tools:
The other part is more awkward. We have matrix entries that result from terms such as \(\int_{\Omega}\varphi_j f_k\) where each \(\varphi_j\) is a shape function associated to \(\lambda\) and each \(f_k\) is a characteristic function of a part of the domain. As a consequence, we end up with a nonzero matrix entry \(A_{jk}\) (along with its transpose \(A_{kj}\)) if there is overlap between \(\varphi_j\) and \(f_k\). In practice, because we will use quadrature, this means that we end up with a quadrature point on a cell on which \(\varphi_j\) lives and at which \(f_k\) is not zero. (We will implicitly assume that a shape function that lives on the current cell is nonzero at all quadrature points – an assumption that is generally true unless one chooses specific quadrature formulas.) Determining which sparsity pattern entries we need to add then essentially comes down to "simulating" what would happen if we actually computed integrals and which matrix entries would end up being non-zero in the process. As a consequence, the following code's general structure looks very similar to what we will do for the nonlocal contributions in the assemble_system() function below.
To get this started, we create an hp_fe_values object that we will only use to query the quadrature point locations. The non-local dofs will need to interact with the second component of the fe system (namely, \(\lambda\)), so we also declare a variable that will help us extract this scalar field for use below.
Then, we loop over the cells, then over the quadrature points, and finally over the indices, as if we were constructing a mass matrix. However, what we instead do here is check two things: whether the quadrature point is within the radius of one of the circular indicator functions that represent our non-local dofs, and whether the shape function we are looking at belongs to the \(\lambda\) component. If both are true, then we add an entry to the sparsity pattern at the (nonlocal dof index, lambda dof index) position and at the (lambda dof index, nonlocal dof index) position. (Because the matrix we solve with has both the lambda-nonlocal interacting block and its transpose, we need to add two entries each time.)
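A sketch of this loop follows; dsp stands for the DynamicSparsityPattern assembled by the standard tools above, and the remaining names are assumptions consistent with the text:

```cpp
const FEValuesExtractors::Scalar     lambda(1);
std::vector<types::global_dof_index> local_dof_indices;

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    hp_fe_values.reinit(cell);
    const FEValues<dim> &fe_values = hp_fe_values.get_present_fe_values();

    local_dof_indices.resize(cell->get_fe().n_dofs_per_cell());
    cell->get_dof_indices(local_dof_indices);

    for (const unsigned int q : fe_values.quadrature_point_indices())
      for (unsigned int k = 0; k < heat_functions.size(); ++k)
        if (heat_functions[k].value(fe_values.quadrature_point(q)) != 0.)
          for (const unsigned int i : fe_values.dof_indices())
            if (fe_values[lambda].value(i, q) != 0.)
              {
                dsp.add(nonlocal_dofs[k], local_dof_indices[i]);
                dsp.add(local_dof_indices[i], nonlocal_dofs[k]);
              }
  }
```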
The rest (below) is standard setup code, see step-4:
The assemble_system() function works very similarly to how it does in other tutorial programs (cf. step-4, step-6, step-8, and for the vector-valued case see step-22). However, there is an additional component to constructing the system matrix, because we need to handle the nonlocal dofs manually.
First, we do a standard loop setup for constructing the system matrix.
In the loop over quadrature points, we start by building all of the usual terms that are bilinear in shape functions corresponding to the \(u\) and \(\lambda\) variables:
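These are the \(\mathcal M\) and \(\mathcal N\) blocks from the introduction; a sketch of the corresponding cell-local loop, with variable names that are assumptions in the style of other tutorial programs:

```cpp
const FEValuesExtractors::Scalar u(0), lambda(1);
for (const unsigned int q : fe_values.quadrature_point_indices())
  for (const unsigned int i : fe_values.dof_indices())
    {
      for (const unsigned int j : fe_values.dof_indices())
        cell_matrix(i, j) +=
          (fe_values[u].value(i, q) * fe_values[u].value(j, q)  // M
           - fe_values[u].gradient(i, q) *
               fe_values[lambda].gradient(j, q)                 // -N^T
           - fe_values[lambda].gradient(i, q) *
               fe_values[u].gradient(j, q))                     // -N
          * fe_values.JxW(q);

      // Right hand side: (phi_i, u-bar), i.e., the vector U-bar above.
      cell_rhs(i) += fe_values[u].value(i, q) *
                     target_function.value(fe_values.quadrature_point(q)) *
                     fe_values.JxW(q);
    }
```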
For the integrals that involve the nonlocal dofs, we make use of the quadrature point again. To compute the integrals, we loop over the heat functions, adding the numeric integral of each heat function against each \(\lambda\) shape function, at the appropriate indices (which we found in setup_system()). Note that if we try to add 0 to a matrix entry we have not previously indicated should be nonzero, there will not be a problem; but if we try to add a nonzero value to an entry not previously added to the sparsity pattern, we will get an error. In other words, the following lines of the code check that we adjusted the sparsity pattern correctly in the previous function.
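A sketch of these contributions, written directly into the global matrix because they cannot be confined to the cell-local matrix (whether the actual program writes them one by one like this is an assumption):

```cpp
for (const unsigned int q : fe_values.quadrature_point_indices())
  for (unsigned int k = 0; k < heat_functions.size(); ++k)
    {
      const double f_k =
        heat_functions[k].value(fe_values.quadrature_point(q));
      if (f_k != 0.)
        for (const unsigned int i : fe_values.dof_indices())
          {
            // (phi_i, f_k) for the lambda component: the entries of F.
            const double value =
              fe_values[lambda].value(i, q) * f_k * fe_values.JxW(q);
            system_matrix.add(local_dof_indices[i], nonlocal_dofs[k], value);
            system_matrix.add(nonlocal_dofs[k], local_dof_indices[i], value);
          }
    }
```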
Finally, we copy the local contributions to the linear system into the global matrix and right hand side vector, taking into account hanging node and boundary values constraints:
The solve() function works similarly to how it is done in step-6 and step-8, except we need to use a different solver because the linear problem we are trying to solve is a saddle point problem for which the Conjugate Gradient algorithm is not applicable. But, because the matrix is symmetric, we can use SolverMinRes, an iterative solver specialized for symmetric indefinite problems. This solver could be improved with the use of preconditioners, but we don't do that here for simplicity (see the Possibilities for Extensions section below).
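A sketch of the solver call; the tolerance and variable names are assumptions, while the limit of one million iterations is mentioned below:

```cpp
SolverControl                solver_control(1000000,
                                            1e-10 * system_rhs.l2_norm());
SolverMinRes<Vector<double>> solver(solver_control);
solver.solve(system_matrix, solution, system_rhs, PreconditionIdentity());
constraints.distribute(solution);
```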
As you will see in the output, given that we are not using a preconditioner, we need a lot of iterations to solve this linear system. We set the maximum to one million, more than we need of course, but an indication that this is not an efficient solver. For smaller problems, one can also use a direct solver (see step-29) for which you would just replace the main part of this function by the following three lines of code:
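Those three lines would, following step-29's use of SparseDirectUMFPACK, presumably be:

```cpp
SparseDirectUMFPACK A_direct;
A_direct.initialize(system_matrix);
A_direct.vmult(solution, system_rhs);
```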
The output_results() function is a bit more robust for this program than is typical. This is because, in order to visualize the heat sources we have optimized, we need to do extra work and interpolate them onto a mesh. We do this by instantiating a new DoFHandler object and then using the helper function VectorTools::interpolate().
The top of the function is as always when using vector-valued elements (see, for example, step-22) and simply outputs all of the solution variables on the mesh cells they are defined on:
The non-local degrees of freedom are of course defined on the first several cells, but at least logically are considered to live everywhere (or, if you prefer, nowhere at all since they do not represent spatial functions). But, conceptually, we use them as multipliers for the heat sources, and so while the coefficients \(C^k\) are non-local, the heat source \(\sum_k C^k f_k(\mathbf x)\) is a spatially variable function. It would be nice if we could visualize that as well. The same is true for the target heat distribution \(\bar u\) we are trying to match.
To do so, we create a new dof handler to output the target function and heat plate values, and associate it with a finite element with a degree that matches what we used to solve for \(u\) and \(\lambda\), although in reality this is an arbitrary choice:
To get started with the visualization, we need a vector which stores the interpolated target function. We create the vector, interpolate the target function \(\bar u\) onto the mesh, then add the data to our data_out object.
In order to visualize the sum of the heat sources \(\sum_k C^k f_k(\mathbf x)\), we create a vector which will store the interpolated values of this function. Then, we loop through the heat functions, create a vector to store the interpolated data, call the VectorTools::interpolate() function to fill the vector, multiply the interpolated data by the nonlocal dof value \(C^k\) (so that the heat plate is set to the correct temperature), and then add this data to the sum of heat sources. Because we can, we also add the vector for each source individually to the DataOut object, so that they can be visualized individually.
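A sketch of this loop, where nonlocal_dof_handler denotes the auxiliary DoFHandler created above (the vector names are assumptions):

```cpp
// Accumulate sum_k C^k f_k, interpolated onto the auxiliary dof handler.
// The individual vectors must outlive the final data_out.write_*() call,
// hence the container holding all of them.
Vector<double>              full_heat_profile(nonlocal_dof_handler.n_dofs());
std::vector<Vector<double>> scaled_sources(heat_functions.size());

for (unsigned int k = 0; k < heat_functions.size(); ++k)
  {
    scaled_sources[k].reinit(nonlocal_dof_handler.n_dofs());
    VectorTools::interpolate(nonlocal_dof_handler,
                             heat_functions[k],
                             scaled_sources[k]);

    scaled_sources[k] *= solution(nonlocal_dofs[k]); // scale by C^k
    full_heat_profile += scaled_sources[k];

    data_out.add_data_vector(nonlocal_dof_handler,
                             scaled_sources[k],
                             "heat_source_" + std::to_string(k));
  }
```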
Once all the heat functions have been combined, we add them to the data_out object, and output everything into a file:
Finally, we output the nonlocal coefficient values to the console:
The run() function runs through each step of the program, nothing new here:
The main() function looks essentially like that of most other tutorial programs.
When you run the program with the step target function (in 2D), the output looks something like this:
When you run the program with the Gaussian target function (in 2D), the output should look like this:
The goal of this program is to determine which temperature settings best match the target function, so first let's see what these targets look like:
(Figure: the target functions \(\overline{u}\): the step target, left, and the Gaussian target, right.)
After solving the Lagrangian system, we arrive at solutions \(U_\text{step}\) and \(U_\text{gauss}\) that look like this:
(Figure: the computed solutions \(U_\text{step}\), left, and \(U_\text{gauss}\), right.)
Notice that \(U_\text{gauss}\) matches the target much better than \(U_\text{step}\). Intuitively, this makes sense: in general, solutions to the heat equation look something like Gaussians, so the Gaussian target function is a much more "natural" thing to match than a sharp step function. We can also see this in the optimal heat profiles.
(Figure: the optimized heat profiles for the step target, left, and the Gaussian target, right.)
Notice that for the Gaussian target, the 4 plates are set to less extreme values. In contrast, to try to match the step function, higher and lower temperatures must be applied.
While it does not contain much useful information, we can also plot the Lagrange multiplier \(\Lambda\), which has an interesting shape:
(Figure: the Lagrange multiplier \(\Lambda\) for the step target, left, and the Gaussian target, right.)
There are a few ways that this program could be extended; the most obvious one is to use a better linear solver. The blocks of the matrix
\begin{align*} \left(\begin{array}{c c c} \mathcal{M} & -\mathcal{N}^T & 0\\ -\mathcal{N} & 0 & \mathcal{F}\\ 0 & \mathcal{F}^T & 0 \end{array}\right) \end{align*}
can often individually be solved quite efficiently; for example, \(\mathcal{M}\) is a mass matrix that is easily solved using a CG iteration, and \(\mathcal N\) is a Laplace matrix for which CG with a geometric or algebraic multigrid preconditioner is very effective. Using the ideas of step-20 and step-22, we should then create a \(3\times 3\) block preconditioner in which some blocks correspond to the inverses of \(\mathcal{M}\) or \(\mathcal{N}\), or some kind of Schur complement. A starting point for this kind of consideration is [17].