This tutorial depends on step-6.
This program was contributed by Luca Heltai and Giovanni Alzetta, SISSA, Trieste.
In this tutorial we consider the case of two domains, \(\Omega\) in \(R^{\text{spacedim}}\) and \(\Gamma\) in \(R^{\text{dim}}\), where \(\Gamma\) is embedded in \(\Omega\) ( \(\Gamma \subseteq \Omega\)). We want to solve a partial differential equation on \(\Omega\), enforcing some conditions on the solution of the problem on the embedded domain \(\Gamma\).
There are two interesting scenarios:
- the geometrical dimension dim of the embedded domain \(\Gamma\) is the same as that of the domain \(\Omega\) (spacedim), that is, the spacedim-dimensional measure of \(\Gamma\) is not zero, or
- the embedded domain \(\Gamma\) has an intrinsic dimension dim which is smaller than that of \(\Omega\) (spacedim), thus its spacedim-dimensional measure is zero; for example, it is a curve embedded in a two-dimensional domain, or a surface embedded in a three-dimensional domain.

In both cases, we define the restriction operator \(\gamma\) as the operator that, given a continuous function on \(\Omega\), returns its (continuous) restriction on \(\Gamma\), i.e.,
\[ \gamma : C^0(\Omega) \mapsto C^0(\Gamma), \quad \text{ s.t. } \gamma u = u|_{\Gamma} \in C^0(\Gamma), \quad \forall u \in C^0(\Omega). \]
It is well known that the operator \(\gamma\) can be extended to a continuous operator on \(H^1(\Omega)\), mapping functions in \(H^1(\Omega)\) to functions in \(H^1(\Gamma)\), when the intrinsic dimension of \(\Gamma\) is the same as that of \(\Omega\).
The same is true, with a less regular range space (namely \(H^{1/2}(\Gamma)\)), when the dimension of \(\Gamma\) is one less than that of \(\Omega\), and \(\Gamma\) does not have a boundary. In this second case, the operator \(\gamma\) is also known as the trace operator, and it is well defined for Lipschitz co-dimension one curves and surfaces \(\Gamma\) embedded in \(\Omega\) (see the Wikipedia article on the trace operator for further details).
The co-dimension two case is a little more complicated, and in general it is not possible to construct a continuous trace operator, not even from \(H^1(\Omega)\) to \(L^2(\Gamma)\), when the dimension of \(\Gamma\) is zero or one respectively in two and three dimensions.
In this tutorial program we're not interested in further details on \(\gamma\): we take the extension \(\gamma\) for granted, assuming that the dimension of the embedded domain (dim) is always either equal to or one less than the dimension of the embedding domain \(\Omega\) (spacedim).
We are going to solve the following differential problem: given a sufficiently regular function \(g\) on \(\Gamma\), a forcing term \(f \in L^2(\Omega)\) and a Dirichlet boundary condition \(u_D\) on \(\partial \Omega\), find the solution \(u\) to
\begin{eqnarray*} - \Delta u + \gamma^T \lambda &=& f \text{ in } \Omega\\ \gamma u &=& g \text{ in } \Gamma \\ u & = & u_D \text{ on } \partial\Omega. \end{eqnarray*}
This is a constrained problem, where we are looking for a function \(u\) that solves the Poisson equation and that satisfies Dirichlet boundary conditions \(u=u_D\) on \(\partial \Omega\), subject to the constraint \(\gamma u = g\) using a Lagrange multiplier.
When \(f=0\) this problem has a physical interpretation: harmonic functions, i.e., functions that satisfy the Laplace equation, can be thought of as the displacements of a membrane whose boundary values are prescribed. The current situation then corresponds to finding the shape of a membrane for which not only the displacement at the boundary, but also on \(\Gamma\) is prescribed. For example, if \(\Gamma\) is a closed curve in 2d space, then that would model a soap film that is held in place by a wire loop along \(\partial \Omega\) as well as a second loop along \(\Gamma\). In cases where \(\Gamma\) is a whole area, you can think of this as a membrane that is stretched over an obstacle where \(\Gamma\) is the contact area. (If the contact area is not known we have a different problem – called the "obstacle problem" – which is modeled in step-41.)
As a first example we study the zero Dirichlet boundary condition on \(\partial\Omega\). The same equations apply if we impose zero Neumann boundary conditions on \(\partial\Omega\), or a mix of the two.
The variational formulation can be derived by introducing two infinite dimensional spaces \(V(\Omega)\) and \(Q^*(\Gamma)\), respectively for the solution \(u\) and for the Lagrange multiplier \(\lambda\).
Multiplying the first equation by \(v \in V(\Omega)\) and the second by \(q \in Q(\Gamma)\), integrating by parts when possible, and exploiting the boundary conditions on \(\partial\Omega\), we obtain the following variational problem:
Given a sufficiently regular function \(g\) on \(\Gamma\), find the solution \(u\) to
\begin{eqnarray*} (\nabla u, \nabla v)_{\Omega} + (\lambda, \gamma v)_{\Gamma} &=& (f,v)_{\Omega} \qquad \forall v \in V(\Omega) \\ (\gamma u, q)_{\Gamma} &=& (g,q)_{\Gamma} \qquad \forall q \in Q(\Gamma), \end{eqnarray*}
where \((\cdot, \cdot)_{\Omega}\) and \((\cdot, \cdot)_{\Gamma}\) represent, respectively, \(L^2\) scalar products in \(\Omega\) and in \(\Gamma\).
Inspection of the variational formulation tells us that the space \(V(\Omega)\) can be taken to be \(H^1_0(\Omega)\). The space \(Q(\Gamma)\), in the co-dimension zero case, should be taken as \(H^1(\Gamma)\), while in the co-dimension one case it should be taken as \(H^{1/2}(\Gamma)\).
The function \(g\) should therefore be either in \(H^1(\Gamma)\) (for the co-dimension zero case) or \(H^{1/2}(\Gamma)\) (for the co-dimension one case). This leaves us with a Lagrange multiplier \(\lambda\) in \(Q^*(\Gamma)\), which is either \(H^{-1}(\Gamma)\) or \(H^{-1/2}(\Gamma)\).
There are two options for the discretization of the problem above. One could choose matching discretizations, where the Triangulation for \(\Gamma\) is aligned with the Triangulation for \(\Omega\), or one could choose to discretize the two domains in a completely independent way.
The first option is clearly better suited to the simple problem we proposed above: it is sufficient to use a single Triangulation for \(\Omega\) and then impose certain constraints depending on \(\Gamma\). An example of this approach is studied in step-41, where the solution has to stay above an obstacle and this is achieved by imposing constraints on \(\Omega\).
To solve more complex problems, for example one where the domain \(\Gamma\) is time dependent, the second option could be a more viable solution. Handling non-aligned meshes is complex by itself: to illustrate how it is done, we study a simple problem.
The technique we describe here is presented in the literature using one of many names: the immersed finite element method, the fictitious boundary method, the distributed Lagrange multiplier method, and others. The main principle is that the discretization of the two grids and of the two finite element spaces are kept completely independent. This technique is particularly efficient for the simulation of fluid-structure interaction problems, where the configuration of the embedded structure is part of the problem itself, and one solves a (possibly non-linear) elastic problem to determine the (time dependent) configuration of \(\Gamma\), and a (possibly non-linear) flow problem in \(\Omega \setminus \Gamma\), plus coupling conditions on the interface between the fluid and the solid.
In this tutorial program we keep things a little simpler, and we assume that the configuration of the embedded domain is given in one of two possible ways:
- as a deformation mapping \(\psi\) from a reference domain \(\Gamma_0\) to \(\Gamma \subseteq \Omega\), giving for each point of \(\Gamma_0\) its actual position in \(\Omega\), or
- as a displacement mapping \(\delta\psi(x) = \psi(x)-x\) defined on \(\Gamma_0\), giving for each point \(x\) the displacement vector that deforms it to its actual position \(\psi(x) = x + \delta\psi(x)\).
We define the embedded reference domain \(\Gamma_0\) embedded_grid: on this triangulation we construct a finite dimensional space (embedded_configuration_dh) to describe either the deformation or the displacement through a FiniteElement system of FE_Q objects (embedded_configuration_fe). This finite dimensional space is used only to interpolate a user supplied function (embedded_configuration_function) representing either \(\psi\) (if the parameter use_displacement is set to false) or \(\delta\psi\) (if the parameter use_displacement is set to true).
The Lagrange multiplier \(\lambda\) and the user supplied function \(g\) are defined through another finite dimensional space embedded_dh, and through another FiniteElement embedded_fe, using the same reference domain. In order to take into account the deformation of the domain, either a MappingFEField or a MappingQEulerian object is initialized with the embedded_configuration vector.
In the embedding space, a standard finite dimensional space space_dh is constructed on the embedding grid space_grid, using the FiniteElement space_fe, following almost verbatim the approach taken in step-6.
We represent the discretizations of the spaces \(V\) and \(Q\) with
\[ V_h(\Omega) = \text{span} \{v_i\}_{i=1}^n \]
and
\[ Q_h(\Gamma) = \text{span} \{q_i\}_{i=1}^m \]
respectively, where \(n\) is the dimension of space_dh, and \(m\) the dimension of embedded_dh.
Once all the finite dimensional spaces are defined, the variational formulation of the problem above leaves us with the following finite dimensional system of equations:
\[ \begin{pmatrix} K & C^T \\ C & 0 \end{pmatrix} \begin{pmatrix} u \\ \lambda \end{pmatrix} = \begin{pmatrix} F \\ G \end{pmatrix} \]
where
\begin{eqnarray*} K_{ij} &\dealcoloneq& (\nabla v_j, \nabla v_i)_\Omega \qquad i,j=1,\dots,n \\ C_{\alpha j} &\dealcoloneq& (v_j, q_\alpha)_\Gamma \qquad j=1,\dots,n, \alpha = 1,\dots, m \\ F_{i} &\dealcoloneq& (f, v_i)_\Omega \qquad i=1,\dots,n \\ G_{\alpha} &\dealcoloneq& (g, q_\alpha)_\Gamma \qquad \alpha = 1,\dots, m. \end{eqnarray*}
While the matrix \(K\) is the standard stiffness matrix for the Poisson problem on \(\Omega\), and the vector \(G\) is a standard right-hand-side vector for a finite element problem with forcing term \(g\) on \(\Gamma\), (see, for example, step-3), the matrix \(C\) or its transpose \(C^T\) are non-standard since they couple information on two non-matching grids.
In particular, the integral that appears in the computation of a single entry of \(C\) is computed on \(\Gamma\). As usual in finite elements, we split this integral into contributions from all cells of the triangulation used to discretize \(\Gamma\), transform the integral on \(K\) to an integral on the reference element \(\hat K\), where \(F_{K}\) is the mapping from \(\hat K\) to \(K\), and compute the integral on \(\hat K\) using a quadrature formula:
\[ C_{\alpha j} \dealcoloneq (v_j, q_\alpha)_\Gamma = \sum_{K\in \Gamma} \int_{\hat K} \hat q_\alpha(\hat x) (v_j \circ F_{K}) (\hat x) J_K (\hat x) \mathrm{d} \hat x = \sum_{K\in \Gamma} \sum_{i=1}^{n_q} \big(\hat q_\alpha(\hat x_i) (v_j \circ F_{K}) (\hat x_i) J_K (\hat x_i) w_i \big) \]
Computing this sum is non-trivial because we have to evaluate \((v_j \circ F_{K}) (\hat x_i)\). In general, if \(\Gamma\) and \(\Omega\) are not aligned, the point \(F_{K}(\hat x_i)\) is completely arbitrary with respect to \(\Omega\), and unless we figure out a way to interpolate all basis functions of \(V_h(\Omega)\) on an arbitrary point on \(\Omega\), we cannot compute the integral needed for an entry of the matrix \(C\).
To evaluate \((v_j \circ F_{K}) (\hat x_i)\) the following steps need to be taken (as shown in the picture below):
- for a given cell \(K\) in \(\Gamma\), compute the real point \(y_i \dealcoloneq F_{K} (\hat x_i)\), where \(\hat x_i\) is one of the quadrature points used for the integral on \(K \subseteq \Gamma\);
- find the cell of \(\Omega\) in which \(y_i\) lies, say \(T\);
- compute the coordinates of \(y_i\) in the reference cell \(\hat T\) of \(T\), and evaluate the basis function \(v_j\) there.
The three steps above can be computed by calling, in turn, GridTools::find_active_cell_around_point() to locate the cell of \(\Omega\) containing the point, Mapping::transform_real_to_unit_cell() to obtain its reference coordinates, and finally an FEValues object, constructed with a quadrature formula containing that single reference point, to evaluate the basis functions.
This is what the deal.II function VectorTools::point_value() does when evaluating a finite element field (not just a single shape function) at an arbitrary point; but this would be inefficient in this case.
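As an illustration, here is a minimal sketch, not taken from the tutorial program, of how these steps could be carried out by hand for a single point; the helper name is ours, and we assume a scalar finite element:

```cpp
#include <deal.II/base/geometry_info.h>
#include <deal.II/base/quadrature.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/grid_tools.h>

using namespace dealii;

// Evaluate all shape functions of the embedding space at an arbitrary
// point p of Omega (hypothetical helper, for illustration only).
template <int spacedim>
std::vector<double>
shape_values_at_point(const Mapping<spacedim>    &mapping,
                      const DoFHandler<spacedim> &space_dh,
                      const Point<spacedim>      &p)
{
  // Steps 1-2: find the cell T containing p, together with the
  // coordinates of p in the reference cell of T.
  const auto cell_and_ref_point =
    GridTools::find_active_cell_around_point(mapping, space_dh, p);

  // Step 3: a one-point quadrature located at those reference coordinates,
  // used by FEValues to evaluate every shape function of T at p.
  const Quadrature<spacedim> quadrature(
    GeometryInfo<spacedim>::project_to_unit_cell(cell_and_ref_point.second));
  FEValues<spacedim> fe_values(mapping,
                               space_dh.get_fe(),
                               quadrature,
                               update_values);
  fe_values.reinit(cell_and_ref_point.first);

  std::vector<double> values(space_dh.get_fe().n_dofs_per_cell());
  for (unsigned int i = 0; i < values.size(); ++i)
    values[i] = fe_values.shape_value(i, 0);
  return values;
}
```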
A better solution is to use a convenient wrapper to perform the first three steps on a collection of points: GridTools::compute_point_locations(). If one is actually interested in computing the full coupling matrix, then it is possible to call the method NonMatching::create_coupling_mass_matrix(), that performs the above steps in an efficient way, reusing all possible data structures, and gathering expensive steps together. This is the function we'll be using later in this tutorial.
We solve the final saddle point problem by an iterative solver, applied to the Schur complement \(S\) (whose construction is described, for example, in step-20), and we construct \(S\) using LinearOperator classes.
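In code, this will take roughly the following shape: a sketch under the assumption that the stored coupling matrix represents \(C^T\) (rows indexed by embedding DoFs, columns by embedded DoFs); matrix, vector, and preconditioner names are placeholders.

```cpp
#include <deal.II/lac/linear_operator.h>
#include <deal.II/lac/packaged_operation.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// Solve the saddle point system via the Schur complement S = C K^{-1} C^T.
void solve_via_schur_complement(const SparseMatrix<double> &stiffness_matrix, // K
                                const SparseMatrix<double> &coupling_matrix,  // stores C^T
                                const Vector<double>       &embedding_rhs,    // F
                                const Vector<double>       &embedded_rhs,     // G
                                Vector<double>             &solution,         // u
                                Vector<double>             &lambda)
{
  const auto K  = linear_operator(stiffness_matrix);
  const auto Ct = linear_operator(coupling_matrix); // C^T
  const auto C  = transpose_operator(Ct);           // C

  SolverControl            inner_control(1000, 1e-12);
  SolverCG<Vector<double>> inner_solver(inner_control);
  PreconditionIdentity     prec_K; // placeholder preconditioner for K
  const auto K_inv = inverse_operator(K, inner_solver, prec_K);

  SolverControl            outer_control(1000, 1e-10);
  SolverCG<Vector<double>> outer_solver(outer_control);
  PreconditionIdentity     prec_S; // no preconditioner on S (see extensions)
  const auto S     = C * K_inv * Ct;
  const auto S_inv = inverse_operator(S, outer_solver, prec_S);

  // lambda = S^{-1} (C K^{-1} F - G),  u = K^{-1} (F - C^T lambda)
  lambda   = S_inv * (C * K_inv * embedding_rhs - embedded_rhs);
  solution = K_inv * (embedding_rhs - Ct * lambda);
}
```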
The problem we solve here is identical to step-4, with the difference that we impose some constraints on an embedded domain \(\Gamma\). The tutorial is written in a dimension independent way, and in the results section we show how to vary both dim and spacedim.
The tutorial is compiled for dim equal to one and spacedim equal to two. If you want to run the program in embedding dimension spacedim equal to three, you will most likely want to change the reference domain for \(\Gamma\) to be, for example, something you read from a file, or a closed sphere that you later deform to something more interesting.
In the default scenario, \(\Gamma\) has co-dimension one, and this tutorial program implements the Fictitious Boundary Method. As it turns out, the same techniques are used in the Variational Immersed Finite Element Method, and the coupling operator \(C\) defined above is the same in almost all of these non-matching methods.
The embedded domain is assumed to be included in \(\Omega\), which we take as the unit square \([0,1]^2\). The definition of the fictitious domain \(\Gamma\) can be modified through the parameter file, and can be given as a mapping from the reference interval \([0,1]\) to a curve in \(\Omega\).
If the curve is closed, then the results will be similar to running the same problem on a grid whose boundary is \(\Gamma\). The program will happily run also with a non-closed \(\Gamma\), although in those cases the mathematical formulation of the problem is more difficult, since \(\Gamma\) will have a boundary by itself that has co-dimension two with respect to the domain \(\Omega\).
Most of these have been introduced elsewhere; we'll comment only on the new ones.
The parameter acceptor class is the first novelty of this tutorial program: in general parameter files are used to steer the execution of a program at run time. While even a simple approach saves compile time, as the same executable can be run with different parameter settings, it can become difficult to handle hundreds of parameters simultaneously while maintaining compatibility between different programs. This is where the class ParameterAcceptor proves useful.
This class is used to define a public interface for classes that want to use a single global ParameterHandler to handle parameters. The class provides a static ParameterHandler member, namely ParameterAcceptor::prm, and implements the "Command design pattern" (see, for example, E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley Professional, 1994. https://goo.gl/FNYByc).
ParameterAcceptor provides a global subscription mechanism. Whenever an object of a class derived from ParameterAcceptor is constructed, a pointer to that object-of-derived-type is registered, together with a section entry in the parameter file. Such registry is traversed upon invocation of the single function ParameterAcceptor::initialize("file.prm"), which in turn makes sure that all classes stored in the global registry declare the parameters they will be using, and after having declared them, it reads the content of file.prm to parse the actual parameters.
If you call the method ParameterHandler::add_parameter() for each of the parameters you want to use in your code, there is nothing else you need to do. If you are using an already existing class that provides the two functions declare_parameters and parse_parameters, you can still use ParameterAcceptor, by encapsulating the existing class into a ParameterAcceptorProxy class.
In this example, we'll use both strategies, using ParameterAcceptorProxy for deal.II classes, and deriving our own parameter classes directly from ParameterAcceptor.
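A minimal, self-contained sketch of both strategies; the class, section, and parameter names here are made up for illustration:

```cpp
#include <deal.II/base/parameter_acceptor.h>
#include <deal.II/base/parsed_function.h>

using namespace dealii;

// Strategy 1: derive from ParameterAcceptor and call add_parameter()
// in the constructor.
class ProblemParameters : public ParameterAcceptor
{
public:
  ProblemParameters()
    : ParameterAcceptor("Problem parameters")
  {
    add_parameter("Initial refinement", initial_refinement);
    add_parameter("Finite element degree", fe_degree);
  }

  unsigned int initial_refinement = 4;
  unsigned int fe_degree          = 1;
};

int main()
{
  ProblemParameters parameters;

  // Strategy 2: wrap an existing class that only provides
  // declare_parameters()/parse_parameters() into a ParameterAcceptorProxy.
  ParameterAcceptorProxy<Functions::ParsedFunction<2>> forcing_term(
    "Forcing term");

  // Declare all registered parameters, then parse them from the given file
  // (the file is created with default values if it does not exist).
  ParameterAcceptor::initialize("example.prm");
}
```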
The other new include file is the one that contains the GridTools::Cache class. The structure of deal.II, like many modern numerical libraries, is organized following a Directed Acyclic Graph (DAG). A DAG is a directed graph with topological ordering: each node structurally represents an object, and is connected to non-root nodes by one (or more) oriented edges, from the parent to the child. The most significant example of this structure is the Triangulation and its Triangulation::cell_iterator structure. From a Triangulation (the main node), we can access each cell (children nodes of the triangulation). From the cells themselves we can access all vertices of the cell. In this simple example, the DAG structure can be represented as three node types (the triangulation, the cell iterator, and the vertex) connected by oriented edges from the triangulation to the cell iterators, and from the cell iterator to the vertices. This has several advantages, but it intrinsically creates “asymmetries”, making certain operations fast and their inverse very slow: finding the vertices of a cell has low computational cost, and can be done by simply traversing the DAG, while finding all the cells that share a vertex requires a non-trivial computation unless a new DAG data structure is added that represents the inverse search.
Since inverse operations are usually not needed in a finite element code, GridTools implements them without using extra data structures attached to the Triangulation; such data structures, however, would make them much faster. One example is a map from the vertices of a Triangulation to all cells that share those vertices, which would drastically reduce the computations needed to answer the previous question.
Some methods, for example GridTools::find_active_cell_around_point, make heavy usage of these non-standard operations. If you need to call these methods more than once, it becomes convenient to store those data structures somewhere. GridTools::Cache does exactly this, giving you access to previously computed objects, or computing them on the fly (and then storing them inside the class for later use), and making sure that whenever the Triangulation is updated, also the relevant data structures are recomputed.
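As a small standalone illustration of the caching idea (not part of the tutorial program):

```cpp
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools_cache.h>
#include <deal.II/grid/tria.h>

using namespace dealii;

int main()
{
  Triangulation<2> tria;
  GridGenerator::hyper_cube(tria);
  tria.refine_global(4);

  // The cache builds expensive "inverse" data structures lazily, stores
  // them, and rebuilds them whenever the triangulation changes.
  GridTools::Cache<2> cache(tria);

  // Map from each vertex to the set of cells sharing it: computed on the
  // first call, reused afterwards.
  const auto &vertex_to_cells = cache.get_vertex_to_cell_map();

  (void)vertex_to_cells; // silence unused-variable warnings in this sketch
}
```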
In this example, we will be using a reference domain to describe an embedded Triangulation, deformed through a finite element vector field.
The next two include files contain the definitions of two classes that can be used in these cases. MappingQEulerian allows one to describe a domain through a displacement field, based on a FESystem[FE_Q(p)^spacedim] finite element space. The second, MappingFEField, is a little more generic, and allows you to use arbitrary vector FiniteElement spaces, as long as they provide a continuous description of your domain. In this case, the description is done through the actual deformation field, rather than a displacement field.
Which one is used depends on how the user wants to specify the reference domain, and/or the actual configuration. We'll provide both options, and experiment a little in the results section of this tutorial program.
The parsed function class is another new entry. It allows one to create a Function object, starting from a string in a parameter file which is parsed into an object that you can use anywhere deal.II accepts a Function (for example, for interpolation, boundary conditions, etc.).
This is the last new entry for this tutorial program. The namespace NonMatching contains a few methods that are useful when performing computations on non-matching grids, or on curves that are not aligned with the underlying mesh.
We'll discuss its use in detail later on in the setup_coupling method.
In the DistributedLagrangeProblem, we need two parameters describing the dimensions of the domain \(\Gamma\) (dim) and of the domain \(\Omega\) (spacedim).
These will be used to initialize a Triangulation<dim,spacedim> (for \(\Gamma\)) and a Triangulation<spacedim,spacedim> (for \(\Omega\)).
A novelty with respect to other tutorial programs is the heavy use of std::unique_ptr. These behave like classical pointers, with the advantage of doing automatic house-keeping: the contained object is automatically destroyed as soon as the unique_ptr goes out of scope, even if it is inside a container or there's an exception. Moreover it does not allow for duplicate pointers, which prevents ownership problems. We do this, because we want to be able to i) construct the problem, ii) read the parameters, and iii) initialize all objects according to what is specified in a parameter file.
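A minimal illustration (not from the program) of this construct-first, initialize-later pattern:

```cpp
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>

#include <memory>

using namespace dealii;

struct ProblemSketch
{
  // Empty after construction; nothing is built until the parameters are known.
  std::unique_ptr<Triangulation<2>> grid;

  void setup_after_parsing(const unsigned int n_refinements)
  {
    grid = std::make_unique<Triangulation<2>>();
    GridGenerator::hyper_cube(*grid);
    grid->refine_global(n_refinements);
  }
}; // the Triangulation is destroyed automatically with the ProblemSketch object

int main()
{
  ProblemSketch problem;          // step i): construct
  problem.setup_after_parsing(3); // steps ii)-iii): parse, then initialize
}
```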
We construct the parameters of our problem in the internal class Parameters, derived from ParameterAcceptor. The DistributedLagrangeProblem class takes a const reference to a Parameters object, so that it is not possible to modify the parameters from within the DistributedLagrangeProblem class itself.
We could have initialized the parameters first, and then passed the parameters to the DistributedLagrangeProblem assuming all entries are set to the desired values, but this has some disadvantages, among them the fact that part of the initialization logic would have to live inside the DistributedLagrangeProblem instead of inside the Parameters class.

Here we assume that upon construction, the classes that build up our problem are not usable yet. Parsing the parameter file is what ensures we have all ingredients to build up our classes, and we design them so that if parsing fails, or is not executed, the run is aborted.
The Parameters class is derived from ParameterAcceptor. This allows us to use the ParameterAcceptor::add_parameter() method in its constructor.
The members of this class are all non-const, but the DistributedLagrangeProblem class takes a const reference to a Parameters object: this ensures that parameters are not modified from within the DistributedLagrangeProblem class.
The parameters now described can all be set externally using a parameter file: if no parameter file is present when running the executable, the program will create a "parameters.prm" file with the default values defined here, and then abort to give the user a chance to modify the parameters.prm file.
Initial refinement for the embedding grid, corresponding to the domain \(\Omega\).
The interaction between the embedding grid \(\Omega\) and the embedded grid \(\Gamma\) is handled through the computation of \(C\), which involves all cells of \(\Omega\) overlapping with parts of \(\Gamma\): a higher refinement of such cells might improve the quality of our computations. For this reason we define delta_refinement: if it is greater than zero, then we mark each cell of the space grid that contains a vertex of the embedded grid and its neighbors, execute the refinement, and repeat this process delta_refinement times.
Starting refinement of the embedded grid, corresponding to the domain \(\Gamma\).
The list of boundary ids where we impose (possibly inhomogeneous) Dirichlet boundary conditions. On the remaining boundary ids (if any), we impose homogeneous Neumann boundary conditions. As a default problem we have zero Dirichlet boundary conditions on \(\partial \Omega\).
FiniteElement degree of the embedding space: \(V_h(\Omega)\)
FiniteElement degree of the embedded space: \(Q_h(\Gamma)\)
FiniteElement degree of the space used to describe the deformation of the embedded domain
Order of the quadrature formula used to integrate the coupling
If set to true, then the embedded configuration function is interpreted as a displacement function
Level of verbosity to use in the output
A flag to keep track if we were initialized or not
Entry point for the DistributedLagrangeProblem
Object containing the actual parameters
The following functions are similar to all other tutorial programs, with the exception that we now need to set up things for two different families of objects, namely the ones related to the embedding grid, and the ones related to the embedded one.
The only unconventional function we have here is the setup_coupling() method, used to generate the sparsity pattern for the coupling matrix \(C\).
First we gather all the objects related to the embedding space geometry.
Then the ones related to the embedded grid, with the DoFHandler associated to the Lagrange multiplier lambda
And finally, everything that is needed to deform the embedded triangulation
The ParameterAcceptorProxy class is a "transparent" wrapper derived from both ParameterAcceptor and the type passed as its template parameter. At construction, the arguments are split into two parts: the first argument is an std::string, forwarded to the ParameterAcceptor class, and containing the name of the section that should be used for this class, while all the remaining arguments are forwarded to the constructor of the templated type, in this case, to the Functions::ParsedFunction constructor.
This class allows you to use existing classes in conjunction with the ParameterAcceptor registration mechanism, provided that those classes have the members declare_parameters() and parse_parameters().
This is the case here, making it fairly easy to exploit the Functions::ParsedFunction class: instead of requiring users to create new Function objects in their code for the RHS, boundary functions, etc. (as is done in most of the other tutorials), here we allow the user to use the deal.II interface to muParser (http://muparser.beltoforion.de), where the specification of the function is not done at compile time, but at run time, using a string that is parsed into an actual Function object.
In this case, the embedded_configuration_function is a vector valued Function that can be interpreted as either a deformation or a displacement, according to the boolean value of parameters.use_displacement. The number of components is specified later on in the construction.
We do the same thing to specify the value of the forcing term \(f\). In this case the Function is a scalar one.
Then, we specify the value of the function \(g\), which is what we want our solution to be in the embedded space. Again, in this case the Function is a scalar one.
Finally, the value of the Dirichlet boundary conditions on \(\partial \Omega\) is specified.
Similarly to what we have done with the Functions::ParsedFunction class, we repeat the same for the ReductionControl class, allowing us to specify all possible stopping criteria for the Schur complement iterative solver we'll use later on.
Next we gather all SparsityPattern, SparseMatrix, and Vector objects we'll need
The TimerOutput class is used to provide some statistics on the performance of our program.
At construction time, we initialize also the ParameterAcceptor class, with the section name we want our problem to use when parsing the parameter file.
Parameter files can be organized into section/subsection/etc.: this has the advantage that defined objects share parameters when sharing the same section/subsection/etc. ParameterAcceptor allows you to specify the section name using Unix conventions on paths. If the section name starts with a slash ("/"), then the section is interpreted as an absolute path: ParameterAcceptor enters a subsection for each directory in the path, using the last name it encounters as the landing subsection for the current class.
For example, if you construct your class using ParameterAcceptor("/first/second/third/My Class"), the parameters will be organized as follows:

  subsection first
    subsection second
      subsection third
        subsection My Class
          # all the parameters of My Class
        end
      end
    end
  end
Internally, the current path stored in ParameterAcceptor is now considered to be "/first/second/third/", i.e. when you specify an absolute path, ParameterAcceptor changes the current section to the current path, i.e. to the path of the section name until the last "/".
You can now construct another class derived from ParameterAcceptor using a relative path (e.g., ParameterAcceptor("My Other Class")) instead of the absolute one (e.g., ParameterAcceptor("/first/second/third/My Other Class")), obtaining:

  subsection first
    subsection second
      subsection third
        subsection My Class
          # all the parameters of My Class
        end
        subsection My Other Class
          # all the parameters of My Other Class
        end
      end
    end
  end
If the section name ends with a slash, then subsequent classes will interpret it as a full path: for example, similar to the one above, if we have two classes, one initialized with ParameterAcceptor("/first/second/third/My Class/") and the other with ParameterAcceptor("My Other Class"), then the resulting parameter file will look like:

  subsection first
    subsection second
      subsection third
        subsection My Class
          # all the parameters of My Class
          subsection My Other Class
            # all the parameters of My Other Class
          end
        end
      end
    end
  end
We are going to exploit this, by making our Parameters class the parent of all subsequently constructed classes. Since most of the other classes are members of DistributedLagrangeProblem, this allows, for example, constructing two DistributedLagrangeProblem objects for two different dimensions, without having conflicts in the parameters for the two problems.
The ParameterAcceptor::add_parameter() function does a few things: it enters the subsection specified above for this class, calls ParameterAcceptor::prm.add_parameter() to declare the given variable, and attaches an action that updates the variable passed to add_parameter() by setting it to whatever was specified in the input file (of course, after the input file has been parsed and the text representation converted to the type of the variable).

Once the parameter file has been parsed, the parameters are good to go, and we set the internal variable initialized to true.
The constructor is pretty standard, with the exception of the ParameterAcceptorProxy objects, as explained earlier.
Here is a way to set default values for a ParameterAcceptor class that was constructed using ParameterAcceptorProxy.
In this case, we set the default deformation of the embedded grid to be a circle with radius \(R\) and center \((Cx, Cy)\), we set the default value for the embedded_value_function to be the constant one, and specify some sensible values for the SolverControl object.
It is fundamental for \(\Gamma\) to be embedded: from the definition of \(C_{\alpha j}\) it is clear that, if \(\Gamma \not\subseteq \Omega\), certain rows of the matrix \(C\) will be zero. This would be a problem, as the Schur complement method requires \(C\) to have full row rank.
The function DistributedLagrangeProblem::setup_grids_and_dofs() is used to set up the finite element spaces. Notice how std::make_unique is used to create objects wrapped inside std::unique_ptr objects.
Initializing \(\Omega\): constructing the Triangulation and wrapping it into a std::unique_ptr object.
Next, we actually create the triangulation using GridGenerator::hyper_cube(). The last argument is set to true: this activates colorization (i.e., assigning different boundary indicators to different parts of the boundary), which we use to assign the Dirichlet and Neumann conditions.
Once we constructed a Triangulation, we refine it globally according to the specifications in the parameter file, and construct a GridTools::Cache with it.
The same is done with the embedded grid. Since the embedded grid is deformed, we first need to set up the deformation mapping. We do so in the following few lines:
Once we have defined a finite dimensional space for the deformation, we interpolate the embedded_configuration_function defined in the parameter file:
Now we can interpret it according to what the user has specified in the parameter file: as a displacement, in which case we construct a mapping that displaces the position of each support point of our configuration finite element space by the amount specified in the corresponding entries of the configuration vector, or as an absolute position.
In the first case, the class MappingQEulerian offers its services, while in the second one, we'll use the class MappingFEField. They are in fact very similar. MappingQEulerian will only work for systems of FE_Q finite element spaces, where the displacement vector is stored in the first spacedim components of the FESystem, and the degree given as a parameter at construction time must match the degree of the first spacedim components.
The class MappingFEField is slightly more general, in that it allows you to select arbitrary FiniteElement types when constructing your approximation. Naturally some choices may (or may not) make sense, according to the type of FiniteElement you choose. MappingFEField implements the pure iso-parametric concept, and can be used, for example, to implement iso-geometric analysis codes in deal.II, by combining it with the FE_Bernstein finite element class. In this example, we'll use the two interchangeably, by taking into account the fact that one configuration will be a displacement, while the other will be an absolute deformation field.
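A sketch of how this choice can be made, assuming the variable and parameter names used in the description above:

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/mapping_fe_field.h>
#include <deal.II/fe/mapping_q_eulerian.h>
#include <deal.II/lac/vector.h>

#include <memory>

using namespace dealii;

template <int dim, int spacedim>
std::unique_ptr<Mapping<dim, spacedim>>
make_embedded_mapping(const bool                       use_displacement,
                      const unsigned int               configuration_degree,
                      const DoFHandler<dim, spacedim> &configuration_dh,
                      const Vector<double>            &configuration)
{
  if (use_displacement)
    // The configuration vector stores the displacement delta psi; the degree
    // must match that of the first spacedim components of the FESystem.
    return std::make_unique<MappingQEulerian<dim, Vector<double>, spacedim>>(
      configuration_degree, configuration_dh, configuration);
  else
    // The configuration vector stores the absolute position psi.
    return std::make_unique<MappingFEField<dim, spacedim, Vector<double>>>(
      configuration_dh, configuration);
}
```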
In this tutorial program we not only refine \(\Omega\) globally, but also allow a local refinement depending on the position of \(\Gamma\), according to the value of parameters.delta_refinement, which we use to decide how many rounds of local refinement we should do on \(\Omega\), corresponding to the position of \(\Gamma\).
With the mapping in place, it is now possible to query the location of all support points associated with the embedded_dh, by calling the method DoFTools::map_dofs_to_support_points(). This method has two variants: one that does not take a Mapping, and one that does. If you use the second one, as we do here, the support points are computed through the specified mapping, which can manipulate them accordingly. This is precisely what the embedded_mapping is there for.
Once we have the support points of the embedded finite element space, we would like to identify what cells of the embedding space contain what support point, to get a chance at refining the embedding grid where it is necessary, i.e., where the embedded grid is. This can be done manually, by looping over each support point, and then calling the method Mapping::transform_real_to_unit_cell for each cell of the embedding space, until we find one that returns points in the unit reference cell, or it can be done in a more intelligent way.
GridTools::find_active_cell_around_point() is a possible option that performs the above task in a cheaper way, by first identifying the closest vertex of the embedding Triangulation to the target point, and then by calling Mapping::transform_real_to_unit_cell only for those cells that share the found vertex.
In fact, there are algorithms in the GridTools namespace that exploit a GridTools::Cache object, and possibly a KDTree object to speed up these operations as much as possible.
The simplest way to exploit the maximum speed is by calling a specialized method, GridTools::compute_point_locations, that will store a lot of useful information and data structures during the first point search, and then reuse all of this for subsequent points.
GridTools::compute_point_locations returns a tuple where the first element is a vector of cells containing the input points, in this case support_points. For refinement, this is the only information we need, and this is exactly what happens now.
When we need to assemble a coupling matrix, however, we'll also need the reference location of each point to evaluate the basis functions of the embedding space. The other elements of the tuple returned by GridTools::compute_point_locations allow you to reconstruct, for each point, which cell contains it, and what its location in the reference cell of that cell is. Since this information is better grouped into cells, this is what the algorithm returns: a tuple containing a vector of all cells that have at least one point in them, together with a list of all reference points and their corresponding indices in the original vector.
In the following loop, we will be ignoring all returned objects except the first, identifying all cells that contain at least one support point of the embedded space. This allows for a simple adaptive refinement strategy: refining these cells and their neighbors.
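The following is a sketch of one such refinement round, using the names introduced above (all of them should be treated as assumptions here):

```cpp
#include <deal.II/base/geometry_info.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/grid_tools_cache.h>

using namespace dealii;

template <int dim, int spacedim>
void refine_around_embedded_grid(
  Triangulation<spacedim>          &space_grid,
  const GridTools::Cache<spacedim> &space_grid_tools_cache,
  const DoFHandler<dim, spacedim>  &embedded_dh,
  const Mapping<dim, spacedim>     &embedded_mapping)
{
  // Support points of the embedded space, in the deformed configuration.
  std::vector<Point<spacedim>> support_points(embedded_dh.n_dofs());
  DoFTools::map_dofs_to_support_points(embedded_mapping,
                                       embedded_dh,
                                       support_points);

  // First tuple element: the cells of the embedding grid that contain at
  // least one of the support points.
  const auto point_locations =
    GridTools::compute_point_locations(space_grid_tools_cache, support_points);
  const auto &cells = std::get<0>(point_locations);

  for (const auto &cell : cells)
    {
      cell->set_refine_flag();
      for (unsigned int f = 0; f < GeometryInfo<spacedim>::faces_per_cell; ++f)
        if (!cell->at_boundary(f))
          cell->neighbor(f)->set_refine_flag();
    }
  space_grid.execute_coarsening_and_refinement();
}
```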
Notice that we need to do some sanity checks, in the sense that we want to have an embedding grid which is well refined around the embedded grid, but where two consecutive support points lie either in the same cell, or in neighbor embedding cells.
This is only possible if we ensure that the smallest cell size of the embedding grid is nonetheless bigger than the largest cell size of the embedded grid. Since users can modify both levels of refinements, as well as the amount of local refinement they want around the embedded grid, we make sure that the resulting meshes satisfy our requirements, and if this is not the case, we bail out with an exception.
In order to construct a well posed coupling interpolation operator \(C\), there are some constraints on the relative dimension of the grids between the embedding and the embedded domains. The coupling operator \(C\) and the spaces \(V\) and \(Q\) have to satisfy an inf-sup condition in order for the problem to have a solution. It turns out that the non-matching \(L^2\) projection satisfies such inf-sup, provided that the spaces \(V\) and \(Q\) are compatible between each other (for example, provided that they are chosen to be the ones described in the introduction).
However, the discrete inf-sup condition must also hold. No complications arise here, but it turns out that the discrete inf-sup constant deteriorates when the non-matching grids have local diameters that are too far away from each other. In particular, it turns out that if you choose an embedding grid which is finer with respect to the embedded grid, the inf-sup constant deteriorates much more than if you let the embedded grid be finer.
In order to avoid issues, in this tutorial we will throw an exception if the parameters chosen by the user are such that the maximal diameter of the embedded grid is greater than the minimal diameter of the embedding grid.
This choice guarantees that almost every cell of the embedded grid spans no more than two cells of the embedding grid, with some rare exceptions, that are negligible in terms of the resulting inf-sup.
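Such a check could be written, for example, as follows (a sketch with assumed names):

```cpp
#include <deal.II/base/exceptions.h>
#include <deal.II/grid/grid_tools.h>

using namespace dealii;

template <int dim, int spacedim>
void check_grid_compatibility(const Triangulation<spacedim>      &space_grid,
                              const Triangulation<dim, spacedim> &embedded_grid,
                              const Mapping<dim, spacedim>       &embedded_mapping)
{
  // Largest cell of the (deformed) embedded grid ...
  const double embedded_max_diameter =
    GridTools::maximal_cell_diameter(embedded_grid, embedded_mapping);
  // ... must not exceed the smallest cell of the embedding grid.
  const double embedding_min_diameter =
    GridTools::minimal_cell_diameter(space_grid);

  AssertThrow(embedded_max_diameter < embedding_min_diameter,
              ExcMessage("The embedding grid is too refined (or the embedded "
                         "grid is too coarse). Adjust the refinement levels."));
}
```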
\(\Omega\) has been refined and we can now set up its DoFs
We now set up the DoFs of \(\Omega\) and \(\Gamma\): since they are fundamentally independent (except for the fact that \(\Omega\)'s mesh is more refined "around" \(\Gamma\)) the procedure is standard.
By definition the stiffness matrix involves only \(\Omega\)'s DoFs
By definition the rhs of the system we're solving involves only a zero vector and \(G\), which is computed using only \(\Gamma\)'s DoFs
Creating the coupling sparsity pattern is a complex operation, but it can be easily done using NonMatching::create_coupling_sparsity_pattern, which requires the two DoFHandler objects, the quadrature points for the coupling, a DynamicSparsityPattern (which then needs to be copied into the actual SparsityPattern, as usual), the component masks for the embedding and embedded Triangulation (which we leave empty), and the mappings for both the embedding and the embedded Triangulation.
The following function creates the matrices: as noted before computing the stiffness matrix and the rhs is a standard procedure.
Embedding stiffness matrix \(K\) and right hand sides \(F\) and \(G\).
To compute the coupling matrix we use the NonMatching::create_coupling_mass_matrix tool, which works similarly to NonMatching::create_coupling_sparsity_pattern.
All parts have been assembled: we solve the system using the Schur complement method
Start by creating the inverse stiffness matrix
Initializing the operators, as described in the introduction
Using the Schur complement method
The following function simply generates standard result output on two separate files, one for each mesh.
The only difference between the two output routines is that in the second case, we want to output the data on the current configuration, and not on the reference one. This is possible by passing the actual embedded_mapping to the DataOut::build_patches function. The mapping will take care of outputting the result on the actual deformed configuration.
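A sketch of the output routine for the embedded grid (variable names are assumptions):

```cpp
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_out.h>

#include <fstream>

using namespace dealii;

template <int dim, int spacedim>
void output_embedded_results(const DoFHandler<dim, spacedim> &embedded_dh,
                             const Mapping<dim, spacedim>    &embedded_mapping,
                             const Vector<double>            &lambda,
                             const Vector<double>            &embedded_value)
{
  DataOut<dim, spacedim> data_out;
  data_out.attach_dof_handler(embedded_dh);
  data_out.add_data_vector(lambda, "lambda");
  data_out.add_data_vector(embedded_value, "g");

  // Passing the mapping makes the patches follow the deformed configuration
  // rather than the reference one.
  data_out.build_patches(embedded_mapping, 1);

  std::ofstream output("embedded.vtu");
  data_out.write_vtu(output);
}
```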
Similar to all other tutorial programs, the run() function simply calls all other methods in the correct order. Nothing special to note, except that we check if parsing was done before we actually attempt to run our program.
Differently from what happens in other tutorial programs, here we use the ParameterAcceptor style of initialization, i.e., all objects are first constructed, and then a single call to the static method ParameterAcceptor::initialize is issued to fill all parameters of the classes that are derived from ParameterAcceptor.
We check if the user has specified a parameter file name to use when the program was launched. If so, try to read that parameter file, otherwise, try to read the file "parameters.prm".
If the parameter file that was specified (implicitly or explicitly) does not exist, ParameterAcceptor::initialize will create one for you, and exit the program.
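Putting the previous paragraphs together, the main() function can be sketched as follows; it uses the DistributedLagrangeProblem class discussed above, and the exact names should be treated as assumptions:

```cpp
#include <deal.II/base/parameter_acceptor.h>

#include <string>

int main(int argc, char **argv)
{
  using namespace dealii;

  // i) Construct everything first; nothing is initialized yet.
  DistributedLagrangeProblem<1, 2>::Parameters parameters;
  DistributedLagrangeProblem<1, 2>             problem(parameters);

  // ii) Read all parameters in one shot (creating the file, and exiting,
  //     if it does not exist), writing the used values to a second file.
  const std::string parameter_file = (argc > 1) ? argv[1] : "parameters.prm";
  ParameterAcceptor::initialize(parameter_file, "used_parameters.prm");

  // iii) Only now is the problem fully initialized and ready to run.
  problem.run();
}
```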
The directory in which this program is run does not contain a parameter file by default. On the other hand, this program wants to read its parameters from a file called parameters.prm – and so, when you execute it the first time, you will get an exception that no such file can be found:
However, as the error message already states, the code that triggers the exception will also generate a parameters.prm file that simply contains the default values for all parameters this program cares about. By inspection of the parameter file, we see the following:
If you now run the program, you will get a file called used_parameters.prm, containing a shorter version of the above parameters (without comments and documentation), documenting all parameters that were used to run your program:
The rationale behind creating first the parameters.prm file (the first time the program is run) and then a used_parameters.prm file (every subsequent time you run the program) is that you may want to leave most parameters at their default values, and only modify a handful of them.
For example, you could use the following (perfectly valid) parameter file with this tutorial program:
and you would obtain exactly the same results as in test case 1 below.
For the default problem the value of \(u\) on \(\Gamma\) is set to the constant \(1\): this is like imposing a constant Dirichlet boundary condition on \(\Gamma\), seen as boundary of the portion of \(\Omega\) inside \(\Gamma\). Similarly on \(\partial \Omega\) we have zero Dirichlet boundary conditions.
The output of the program will look like the following:
If the problem was set in a three-dimensional setting, and the immersed mesh was time dependent, it would be much more expensive to recreate the mesh at each step rather than use the technique we present here. Moreover, you may be able to create a very fast and optimized solver on a uniformly refined square or cubic grid, and embed the domain where you want to perform your computation using the technique presented here. This would require you to only have a surface representation of your domain (a much cheaper and easier mesh to produce).
To play around a little bit, we are going to complicate the fictitious domain a little, as well as the boundary conditions we impose on it.
If we use the following parameter file:
We get a "flowery" looking domain, where we impose a linear boundary condition \(g=x-.5\). This test shows that the method is actually quite accurate in recovering an exactly linear function from its boundary conditions, and even though the meshes are not aligned, we obtain a pretty good result.
Replacing \(x-.5\) with \(2(x-.5)^2-2(y-.5)^2\), i.e., modifying the parameter file such that we have
produces the saddle on the right.
Running with spacedim equal to three

While the current tutorial program is written for spacedim equal to two, there are only minor changes you have to make in order for the program to run in different combinations of dimensions.
If you want to run with spacedim equal to three and dim equal to two, then you will almost certainly want to perform the following changes:
We have seen in other tutorials (for example in step-5 and step-54) how to read grids from input files. A nice generalization for this tutorial program would be to allow the user to select a grid to read from the parameter file itself, instead of hardcoding the mesh type in the tutorial program itself.
At the moment, we have no preconditioner on the Schur complement. This is ok for two dimensional problems, where a few hundred iterations bring the residual down to the machine precision, but it's not going to work in three dimensions.
It is not obvious what a good preconditioner would be here. The physical problem we are solving with the Schur complement is to associate to the Dirichlet data \(g\) the value of the Lagrange multiplier \(\lambda\). \(\lambda\) can be interpreted as the jump in the normal gradient that needs to be imposed on \(u\) across \(\Gamma\), in order to obtain the Dirichlet data \(g\).
So \(S\) is some sort of Neumann to Dirichlet map, and we would like to have a good approximation for the Dirichlet to Neumann map. A possibility would be to use a Boundary Element approximation of the problem on \(\Gamma\), and construct a rough approximation of the hyper-singular operator for the Poisson problem associated to \(\Gamma\), which is precisely a Dirichlet to Neumann map.
The simple code proposed here can serve as a starting point for more complex problems which, to be solved, need to be run on parallel code, possibly using distributed meshes (see step-17, step-40, and the documentation for parallel::shared::Triangulation and parallel::distributed::Triangulation).
When using non-matching grids in parallel a problem arises: to compute the matrix \(C\) a process needs information about both meshes on the same portion of real space but, when working with distributed meshes, this information may not be available, because the locally owned part of the \(\Omega\) triangulation stored on a given processor may not be physically co-located with the locally owned part of the \(\Gamma\) triangulation stored on the same processor.
Various strategies can be implemented to tackle this problem:
- distribute both meshes, and implement the communication needed so that each process obtains the information about the other mesh on the portion of real space it owns; or
- distribute the embedding Triangulation, while keeping the embedded Triangulation shared (i.e., replicated on all processors).
The latter strategy is clearly the easiest to implement, as most of the functions used in this tutorial program will work unchanged also in the parallel case. Of course one could use the reversal strategy (that is, have a distributed embedded Triangulation and a shared embedding Triangulation).
However, this strategy is most likely going to be more expensive, since by definition the embedding grid is larger than the embedded grid, and it makes more sense to distribute the largest of the two grids, maintaining the smallest one shared among all processors.