This tutorial depends on step-47.
This program was contributed by Andrea Bonito (Texas A&M University) and Diane Guignard (University of Ottawa).
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1817691. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
In this example, we consider the local discontinuous Galerkin (LDG) method for approximating the solution to the bi-Laplacian problem,
\begin{align*} \Delta^2 u & = f \quad \mbox{in } \Omega, \\ \nabla u & = \mathbf{0} \quad \mbox{on } \partial\Omega, \\ u & = 0 \quad \mbox{on } \partial\Omega, \end{align*}
where \(\Omega\subset\mathbb{R}^d\) \((d=2,3)\) is an open bounded Lipschitz domain and \(f\in L^2(\Omega)\). This is the same problem we have already considered in step-47, but we will take here a different approach towards solving it: Rather than using continuous finite elements and the interior penalty method, we consider discontinuous finite elements and the local discontinuous Galerkin method defined using lifting operators.
The weak formulation of this problem reads as follows: find \(u\in H_0^2(\Omega)\) such that
\[ \int_{\Omega}D^2u:D^2v = \int_{\Omega}fv \qquad \forall \, v\in H_0^2(\Omega), \]
where \(D^2v\) denotes the Hessian of \(v\) and \(H_0^2(\Omega)\dealcoloneq\{v\in H^2(\Omega): \,\, v=0 \mbox{ and } \nabla v=\mathbf{0} \,\, \mbox{ on } \partial\Omega\}\). Using so-called lifting operators as well as the Nitsche approach to impose the homogeneous Dirichlet boundary conditions, the LDG approximation of this problem consists of replacing the Hessians by discrete Hessians (see below) and adding penalty terms involving properly scaled jumps. The versatility of the method described below is of particular interest for nonlinear problems or problems with intricate weak formulations for which the design of discrete algorithms is challenging.
For \(h>0\), let \(\mathcal{T}_h\) be a partition of \(\Omega\) into quadrilateral (hexahedral if \(d=3\)) elements \(K\) of diameter \(h_{K}\leq h\) and let \(\mathcal{E}_h=\mathcal{E}_h^0\cup\mathcal{E}_h^b\) denote the set of (interior and boundary) faces. We restrict the discussion to conforming subdivisions to avoid technicalities already addressed in previous tutorials. The diameter of \(e \in \mathcal{E}_h\) is denoted \(h_e\). For any integer \(k\ge 2\), we introduce the (discontinuous) finite element space
\[ \mathbb{V}_h\dealcoloneq\left\{v_h\in L^2(\Omega): \,\, v_h|_K\circ F_{K}\in\mathbb{Q}_k \quad \forall \, K \in\mathcal{T}_h \right\}, \]
where \(F_{K}\) is the map from the reference element \(\hat{K}\) (unit square/cube) to the physical element \(K\). For \(v_h\in\mathbb{V}_h\), the piecewise differential operators are denoted with a subscript \(h\), for instance \(\nabla_h v_h|_K=\nabla(v_h|_K)\) and \(D_h^2 v_h=\nabla_h\nabla_h v_h\). For \(e\in\mathcal{E}_h\), we assign a normal \(\mathbf{n}_e\). The choice of normal is irrelevant, except that when \(e\) is a boundary face, \(\mathbf{n}_e\) is the normal pointing out of \(\Omega\).
The piecewise differential operators do not have enough information to be accurate approximations of their continuous counterparts. They are missing inter-element information.
This leads to the introduction of jump and average operators:
\[ \jump{v_h}|_e \dealcoloneq \left\{\begin{array}{ll} v_h|_{K_1}-v_h|_{K_2} & e\in\mathcal{E}_h^0 \\ v_h|_{K_1} & e\in\mathcal{E}_h^b \end{array}\right. \quad \mbox{and} \quad \average{v_h}|_e \dealcoloneq \left\{\begin{array}{ll} \frac{1}{2}(v_h|_{K_1}+v_h|_{K_2}) & e\in\mathcal{E}_h^0 \\ v_h|_{K_1} & e\in\mathcal{E}_h^b, \end{array}\right. \]
respectively, where \(K_1\) and \(K_2\) are the two elements adjacent to \(e\) so that \(\mathbf{n}_e\) points from \(K_1\) to \(K_2\) (with obvious modification when \(e\) is a boundary edge). These are the same operators that we have previously used not only in step-47, but also in other tutorials related to discontinuous Galerkin methods (e.g., step-12).
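For instance, if \(v_h\) is piecewise constant with \(v_h|_{K_1}=1\) and \(v_h|_{K_2}=2\), then on the shared interior face \(e\)
\[ \jump{v_h}|_e = 1-2 = -1 \quad \mbox{and} \quad \average{v_h}|_e = \frac{1}{2}(1+2) = \frac{3}{2}, \]
while on a boundary face both operators reduce to the trace \(v_h|_{K_1}\).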
With these notations, we are now in position to define the discrete/reconstructed Hessian \(H_h(v_h)\in\left[L^2(\Omega)\right]^{d\times d}\) of \(v_h\in\mathbb{V}_h\); that is, something that will take the role of \(D^2 v\) in the definition of the weak formulation above when moving from the continuous to the discrete formulation. We first consider two local lifting operators \(r_e:[L^2(e)]^d\rightarrow[\mathbb{V}_h]^{d\times d}\) and \(b_e:L^2(e)\rightarrow[\mathbb{V}_h]^{d\times d}\) defined for \(e\in\mathcal{E}_h\) by, respectively,
\[ r_e\left(\boldsymbol{\phi}\right) \in [\mathbb{V}_h]^{d\times d}: \, \int_{\Omega} \tau_h : r_e\left(\boldsymbol{\phi}\right) = \int_e\average{\tau_h}\mathbf{n}_e\cdot\boldsymbol{\phi} \qquad \forall \, \tau_h\in [\mathbb{V}_h]^{d\times d} \]
and
\[ b_e(\phi) \in [\mathbb{V}_h]^{d\times d}: \, \int_{\Omega} \tau_h : b_e(\phi) = \int_e\average{{\rm div}\, \tau_h}\cdot\mathbf{n}_e\phi \qquad \forall \, \tau_h\in [\mathbb{V}_h]^{d\times d}. \]
We have \({\rm supp}\,(r_e\left(\boldsymbol{\phi}\right))={\rm supp}\,(b_e(\phi))=\omega_e\), where \(\omega_e\) denotes the patch of (one or two) elements having \(e\) as part of their boundaries.
The discrete Hessian operator \(H_h:\mathbb{V}_h\rightarrow\left[L^2(\Omega)\right]^{d\times d}\) is then given by
\[ H_h(v_h) \dealcoloneq D_h^2 v_h -R_h(\jump{\nabla_h v_h})+B_h(\jump{v_h}) \dealcoloneq D_h^2 v_h - \sum_{e\in\mathcal{E}_h}r_e\left(\jump{\nabla_h v_h}\right)+\sum_{e\in\mathcal{E}_h}b_e\left(\jump{v_h}\right). \]
Note that other differential operators (e.g., gradient or divergence) can be reconstructed in a similar fashion, see for instance [179].
The discrete Hessian \(H_h\) is designed such that it weakly converges to the continuous Hessian \(D^2\), see the note in the next section for a precise statement. As already mentioned above, the broken Hessian is not a suitable candidate as it contains no information about inter-element jumps. We provide here an informal discussion motivating the definition of the two lifting operators and we refer to [182] and [37] for more details (although the definitions are slightly different unless the mesh is affine). The goal is then to construct a discrete operator \(H_h\) such that for all \(\tau\in [C_0^{\infty}(\Omega)]^{d\times d}\) we have
\[ \int_{\Omega}H_h(v_h):\tau\longrightarrow \int_{\Omega}D^2v:\tau \qquad \mbox{as } \,\, h\rightarrow 0 \]
for any sequence \(\{v_h\}_{h>0}\) in \(\mathbb{V}_h\) such that \(v_h\rightarrow v\) in \(L^2(\Omega)\) as \(h\rightarrow 0\) for some \(v\in H^2(\Omega)\). Let \(\tau\in [C_0^{\infty}(\Omega)]^{d\times d}\). Integrating by parts twice we get
\[ \int_{\Omega}D^2v:\tau = -\int_{\Omega}\nabla v\cdot \mbox{div}(\tau) = \int_{\Omega}v \mbox{ div}(\mbox{div}(\tau)) \]
while
\[ \int_{\Omega}v_h \mbox{ div}(\mbox{div}(\tau)) \longrightarrow \int_{\Omega}v \mbox{ div}(\mbox{div}(\tau)) \qquad \mbox{as } \,\, h\rightarrow 0. \]
Now we integrate the term on the left-hand side by parts twice, taking into account that \(v_h\) is not necessarily continuous across interior faces. For any \(K\in\mathcal{T}_h\) we have
\[ \int_K v_h \mbox{ div}(\mbox{div}(\tau)) = -\int_K \nabla v_h\cdot \mbox{div}(\tau) + \int_{\partial K} v_h \mbox{ div}(\tau)\cdot \mathbf{n}_K =\int_K D^2v_h:\tau - \int_{\partial K}\nabla v_h\cdot (\tau\mathbf{n}_K) + \int_{\partial K} v_h \mbox{ div}(\tau)\cdot \mathbf{n}_K, \]
where \(\mathbf{n}_K\) denotes the outward unit normal to \(K\). Then, summing over the elements \(K\in\mathcal{T}_h\) and using that \(\tau\) is smooth, we obtain
\[ \int_{\Omega} v_h \mbox{ div}(\mbox{div}(\tau)) = \int_{\Omega} D_h^2v_h:\tau - \sum_{e\in\mathcal{E}_h}\int_e\jump{\nabla_h v_h}\cdot \average{\tau}\mathbf{n}_e + \sum_{e\in\mathcal{E}_h}\int_e v_h \average{\mbox{div}(\tau)}\cdot \mathbf{n}_e \]
which reveals the motivation for the definition of the two lifting operators: if \(\tau\) were an admissible test function, then the right-hand side would be equal to \(\int_{\Omega}H_h(v_h):\tau\) and we would have shown the desired (weak) convergence. Actually, if we add and subtract \(\tau_h\), the Lagrange interpolant of \(\tau\) in \([\mathbb{V}_h\cap H_0^1(\Omega)]^{d\times d}\), we can show that the right-hand side is indeed equal to \(\int_{\Omega}H_h(v_h):\tau\) up to terms that tend to zero as \(h\rightarrow 0\) under appropriate assumptions on \(v_h\).
It is worth mentioning that defining \(H_h\) without the lifting operators \(r_e\) and \(b_e\) for \(e\in\mathcal{E}_h^b\) would not affect the weak convergence property (the integrals over boundary faces are zero since \(\tau\) is compactly supported in \(\Omega\)). However, they are included in \(H_h\) to ensure that the solution of the discrete problem introduced in the next section satisfies the homogeneous Dirichlet boundary conditions in the limit \(h\rightarrow 0\).
The proposed LDG approximation of the bi-Laplacian problem reads: find \(u_h\in\mathbb{V}_h\) such that
\[ A_h(u_h,v_h)\dealcoloneq a_h(u_h,v_h)+j_h(u_h,v_h) = F_h(v_h) \qquad \forall \, v_h\in\mathbb{V}_h, \]
where
\begin{align*} a_h(u_h,v_h) & \dealcoloneq \int_{\Omega}H_h(u_h):H_h(v_h), \\ j_h(u_h,v_h) & \dealcoloneq \gamma_1\sum_{e\in\mathcal{E}_h}h_e^{-1}\int_e\jump{\nabla_h u_h}\cdot\jump{\nabla_h v_h}+\gamma_0\sum_{e\in\mathcal{E}_h}h_e^{-3}\int_e\jump{u_h}\jump{v_h}, \\ F_h(v_h) & \dealcoloneq \int_{\Omega}fv_h. \end{align*}
Here, \(\gamma_0,\gamma_1>0\) are penalty parameters.
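To connect this to the implementation, the cell contribution to \(a_h\) can be accumulated as in the following sketch. It assumes that the values \(H_h(\varphi_i)(x_q)\) have already been computed and stored in a variable discrete_hessians (as described later in this introduction) and that fe_values has been reinitialized on the current cell; the matrix name stiffness_matrix_cc is illustrative. The penalty terms of \(j_h\) are assembled analogously, face by face.

```cpp
// Sketch: cell contribution to a_h(u_h,v_h) = (H_h(u_h), H_h(v_h))_{K_c}.
// discrete_hessians[i][q] is assumed to hold H_h(phi_i)(x_q) as a Tensor<2,dim>.
FullMatrix<double> stiffness_matrix_cc(n_dofs, n_dofs); // "cell-cell" couplings

for (unsigned int q = 0; q < n_q_points; ++q)
  for (unsigned int i = 0; i < n_dofs; ++i)
    for (unsigned int j = 0; j < n_dofs; ++j)
      stiffness_matrix_cc(i, j) +=
        scalar_product(discrete_hessians[i][q],  // H_h(phi_i)(x_q)
                       discrete_hessians[j][q])  // H_h(phi_j)(x_q)
        * fe_values.JxW(q);
```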
Let \(\{\varphi_i\}_{i=1}^{N_h}\) be the standard basis functions that generate \(\mathbb{V}_h\). We can then express the solution as \(u_h=\sum_{j=1}^{N_h}U_j\varphi_j\) and the problem reads: find \(\boldsymbol{U}=(U_j)_{j=1}^{N_h}\in\mathbb{R}^{N_h}\) such that
\[ A\boldsymbol{U} = \boldsymbol{F}, \]
where \(A=(A_{ij})_{i,j=1}^{N_h}\in\mathbb{R}^{N_h\times N_h}\) and \(\boldsymbol{F}=(F_i)_{i=1}^{N_h}\in\mathbb{R}^{N_h}\) are defined by
\[ A_{ij}\dealcoloneq A_h(\varphi_j,\varphi_i) \quad \text{and} \quad F_i\dealcoloneq F_h(\varphi_i), \qquad 1\leq i,j \leq N_h. \]
A key property of this formulation is that the bilinear form \(A_h(\cdot,\cdot)\) is coercive (stable) with respect to the mesh-dependent norm
\[ \|v_h\|_{H_h^2(\Omega)}^2\dealcoloneq\|D_h^2v_h\|_{L^2(\Omega)}^2+\sum_{e\in\mathcal{E}_h}h_e^{-1}\|\jump{\nabla_h v_h}\|_{L^2(e)}^2+\sum_{e\in\mathcal{E}_h}h_e^{-3}\|\jump{v_h}\|_{L^2(e)}^2 \]
for any choice of penalty parameters \(\gamma_0,\gamma_1>0\). In other words, the stability of the method is ensured for any positive parameters. This is in contrast with interior penalty methods, for which the penalty parameters need to be large enough. (See also the discussions about penalty parameters in the step-39, step-47, and step-74 programs.)

As in step-47, we could consider \(C^0\) finite element approximations by replacing FE_DGQ<dim> by FE_Q<dim> (and including the appropriate header file deal.II/fe/fe_q.h) in the program below. In this case, the jump of the basis functions across any interior face is zero, and thus \(b_e\left(\jump{\varphi_i}\right)=\mathbf{0}\) for all \(e\in\mathcal{E}_h^0\); these terms could therefore be dropped to save computational time. While this would be overkill for the bi-Laplacian problem, the flexibility of fully discontinuous methods combined with reconstructed differential operators is advantageous for nonlinear problems.
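A sketch of the two alternatives (the variable names and the constant degree are illustrative):

```cpp
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_q.h>

// Sketch of the two element choices, here for polynomial degree k = 2 and d = 2:
const unsigned int degree = 2;

FE_DGQ<2> fe_discontinuous(degree); // fully discontinuous space used in this program
FE_Q<2>   fe_continuous(degree);    // C^0 alternative: [[v_h]] = 0 on interior faces,
                                    // so the liftings b_e([[phi_i]]) vanish there
```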
As customary, we assemble the matrix \(A\) and the right-hand side \(\boldsymbol{F}\) by looping over the elements \(K\in\mathcal{T}_h\). Since we are using discontinuous finite elements, the support of each \(\varphi_i\) is only one element \(K\in\mathcal{T}_h\). However, due to the lifting operators, the support of \(H_h(\varphi_i)\) is \(K\) plus all the neighbors of \(K\) (recall that for \(e\in \mathcal{E}_h\), the support of the lifting operators \(r_e\) and \(b_e\) is \(\omega_e\)). Therefore, when integrating over a cell \(K_c\), we need to consider the following interactions (case \(d=2\)):

- the DoFs of \(K_c\) with themselves;
- the DoFs of \(K_c\) with the DoFs of a neighboring cell;
- the DoFs of a neighboring cell with the DoFs of another neighboring cell.
The last of these accounts for the fact that the lifted shape functions from one of the neighbor cells may overlap on \(K_c\) with the lifted shape functions of another neighbor cell, as mentioned above. In other words, we need to compute the discrete Hessian of all the basis functions with support on \(K_c\) as well as all the basis functions with support on the neighboring cells of \(K_c\). This is done in the function compute_discrete_hessians. A cell \(K_c\) can have fewer than four neighbors (six when \(d=3\)) when at least one face \(e\subset\partial K_c\) is part of the boundary of the domain. It can also have more neighbors when hanging nodes are present. To simplify the presentation we do not discuss the latter.
Due to the local support of the basis functions, many of the terms of the discrete Hessian are zero. For any basis function \(\varphi^c\) with support on \(K_c\) we have \(r_e\left(\jump{\nabla_h\varphi^c}\right)\not\equiv 0\) only if \(e\subset\partial K_c\), and similarly for \(b_e\left(\jump{\varphi^c}\right)\). Therefore, the discrete Hessian of \(\varphi^c\) reduces to
\[ H_h(\varphi^c)=D_h^2\varphi^c-\sum_{e\subset\partial K_c}r_e\left(\jump{\nabla_h \varphi^c}\right)+\sum_{e\subset\partial K_c}b_e\left(\jump{\varphi^c}\right). \]
Furthermore, since we integrate on \(K_c\), we only need to evaluate the discrete Hessian at quadrature points \(x_q\) that belong to \(K_c\), namely \(H_h(\varphi^c)(x_q)\). We store this information in
\[ {\rm discrete\_hessians[i][q]}, \qquad 0\leq {\rm i} < {\rm n\_dofs}, \,\, 0\leq {\rm q} < {\rm n\_q\_points}, \]
where n_dofs = fe_values.dofs_per_cell is the number of degrees of freedom per cell and n_q_points = quad.size() is the number of quadrature points on \(K_c\). For any basis function \(\varphi^n\) with support on a neighboring cell, the discrete Hessian \(H_h(\varphi^n)\) evaluated on \(K_c\) contains only the two lifting terms, but not the term involving \(D^2_h\varphi^n\), since \(\varphi^n|_{K_c}\equiv 0\). Moreover, only the lifting over the common face \(e\) is nonzero on \(K_c\), namely for all \(x_q\in K_c\)
\[ H_h(\varphi^n)(x_q)=-r_e\left(\jump{\nabla_h\varphi^n}\right)(x_q)+b_e\left(\jump{\varphi^n}\right)(x_q). \]
This information is stored in
\[ {\rm discrete\_hessians\_neigh[face\_no][i][q]}, \qquad 0\leq {\rm face\_no} < {\rm n\_faces}, \,\, 0\leq {\rm i} < {\rm n\_dofs}, \,\, 0\leq {\rm q} < {\rm n\_q\_points}, \]
where n_dofs and n_q_points are as above, and n_faces = GeometryInfo<dim>::faces_per_cell is the number of faces of \(K_c\). As we shall see in the next section, we will only need to solve half of the local problems for the lifting terms.
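In terms of data structures, these two containers can be thought of as nested vectors of rank-2 tensors; a sketch (the exact declarations in the program may differ):

```cpp
// Sketch: nested containers holding H_h(phi_i)(x_q), indexed as described above.
std::vector<std::vector<Tensor<2, dim>>> discrete_hessians(
  n_dofs, std::vector<Tensor<2, dim>>(n_q_points));

std::vector<std::vector<std::vector<Tensor<2, dim>>>> discrete_hessians_neigh(
  n_faces, discrete_hessians);
```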
Note that the variable discrete_hessians_neigh is of size n_faces x n_dofs x n_q_points. However, we only need to consider the interior faces, namely we do not need to fill discrete_hessians_neigh[face_no][i][q] whenever face_no corresponds to a boundary face. We could then save a little bit of storage by considering \(0\leq {\rm face\_no} < {\rm n\_faces}\) with n_faces the actual number of neighboring cells, i.e., not counting the boundary faces. By doing so, we could also avoid testing whether a face lies on the boundary in the assembly of the matrix.

We now describe the computation of the lifting operators \(r_e\) and \(b_e\) on each cell \(K_c\). This turns out to be a bit cumbersome, but it follows similar schemes as other reconstruction operators – see, for example, the "weak Galerkin" approach of step-61 or the "hybridizable discontinuous Galerkin" method of step-51. We focus on \(b_e\) for an interior face \(e\in\mathcal{E}_h^0\), but the other cases are treated similarly.
We have \(e=\partial K_c\cap \partial K_n\) for some neighbor \(K_n\) of \(K_c\). For a basis function \(\varphi\in\mathbb{V}_h\) with support on \(K_c\) or \(K_n\) (for the other basis functions we have \(b_e\left(\jump{\varphi}\right)\equiv 0\)), we write \(b_e\left(\jump{\varphi}\right)\in[\mathbb{V}_h]^{d\times d}\) as
\[ b_e\left(\jump{\varphi}\right)=\sum_{n=1}^{N_c+N_n}B_n\psi_n, \]
where \(\{\psi_n\}_{n=1}^{N_c}\) and \(\{\psi_n\}_{n=N_c+1}^{N_c+N_n}\) are the basis functions of \([\mathbb{V}_h]^{d\times d}\) which have support on \(K_c\) and \(K_n\), respectively. The coefficients \(\boldsymbol{B}=(B_n)_{n=1}^{N_c+N_n}\in\mathbb{R}^{N_c+N_n}\) of the lifting operator \(b_e\) are obtained by solving the linear system
\[ M\boldsymbol{B}=\boldsymbol{G}, \]
where the components of the (local) mass matrix and the right-hand side are given respectively by
\[ M_{mn}\dealcoloneq\int_{\Omega}\psi_n:\psi_m \quad \mbox{and} \quad G_m\dealcoloneq\int_e\average{{\rm div}\, \psi_m}\cdot \mathbf{n}_e\jump{\varphi}, \qquad 1\leq m,n \leq N_c+N_n. \]
Note that this system has the decoupled form
\[ \left[\begin{array}{cc} M_c & \mathbf{0} \\ \mathbf{0} & M_n \end{array}\right]\left[\begin{array}{c} \boldsymbol{B}_c \\ \boldsymbol{B}_n \end{array}\right]=\left[\begin{array}{c} \boldsymbol{G}_c \\ \boldsymbol{G}_n \end{array}\right] \]
with \(M_c\in\mathbb{R}^{N_c\times N_c}\), \(M_n\in\mathbb{R}^{N_n\times N_n}\), \(\boldsymbol{B}_c,\boldsymbol{G}_c\in\mathbb{R}^{N_c}\), and \(\boldsymbol{B}_n,\boldsymbol{G}_n\in\mathbb{R}^{N_n}\).
In fact, since we evaluate the discrete Hessians at quadrature points \(x_q\in K_c\) and \(\psi_n|_{K_c}\equiv 0\) for \(n=N_c+1,\ldots,N_c+N_n\), we have
\[ b_e\left(\jump{\varphi}\right)(x_q)=\sum_{n=1}^{N_c+N_n}B_n\psi_n(x_q)=\sum_{n=1}^{N_c}B_n\psi_n(x_q). \]
As a consequence, only the coefficients \(B_n\), \(n=1,\ldots,N_c\), are needed.
To compute the components \(G_m\), \(m=1,\ldots,N_c\), we take advantage of the relation
\[ \mathbf{n}_e\jump{\varphi}=\mathbf{n}_{K_c}\varphi|_{K_c}+\mathbf{n}_{K_n}\varphi|_{K_n}, \]
where \(\mathbf{n}_{K_c}\) (resp. \(\mathbf{n}_{K_n}\)) denotes the outward unit normal to \(K_c\) (resp. \(K_n\)). Therefore, if \(\varphi=\varphi^c\), namely \(\varphi\) has support on the current cell \(K_c\), then
\[ G_m=\int_e\average{{\rm div}\, \psi_m}\cdot\mathbf{n}_e\jump{\varphi^c}=\frac{1}{2}\int_e{\rm div}\, \psi_m\cdot\mathbf{n}_{K_c}\varphi^c, \]
while if \(\varphi=\varphi^n\), namely \(\varphi\) has support on the neighboring cell \(K_n\), then
\[ G_m=\int_e\average{{\rm div}\, \psi_m}\cdot\mathbf{n}_e\jump{\varphi^n}=\frac{1}{2}\int_e{\rm div}\, \psi_m\cdot\mathbf{n}_{K_n}\varphi^n. \]
The factor \(\frac{1}{2}\) comes from the average operator as \(e\) is assumed to be an interior face.
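Translated into code, for each face one assembles \(\boldsymbol{G}_c\) from the integrals above, solves the small symmetric positive definite system \(M_c\boldsymbol{B}_c=\boldsymbol{G}_c\), and evaluates the lifting at the volume quadrature points of \(K_c\). A sketch of the last two steps, with illustrative names: local_matrix_lift stands for \(M_c\), local_rhs_be for \(\boldsymbol{G}_c\), n_dofs_lift for \(N_c\), and fe_values_lift is an FEValues object for the lifting space reinitialized on \(K_c\):

```cpp
// Sketch: solve M_c B_c = G_c and evaluate b_e([[phi]]) on K_c.
const FEValuesExtractors::Tensor<2> lift(0);

Vector<double> coeffs_be(n_dofs_lift); // the coefficients B_1,...,B_{N_c}

SolverControl            solver_control(1000, 1e-12);
SolverCG<Vector<double>> solver(solver_control);
solver.solve(local_matrix_lift, coeffs_be, local_rhs_be, PreconditionIdentity());

// Evaluate b_e([[phi]]) at the volume quadrature points of K_c; only the
// lifting basis functions psi_n with support on K_c contribute.
for (unsigned int q = 0; q < n_q_points; ++q)
  {
    Tensor<2, dim> b_e_value;
    for (unsigned int n = 0; n < n_dofs_lift; ++n)
      b_e_value += coeffs_be(n) * fe_values_lift[lift].value(n, q);
    // b_e_value equals b_e([[phi]])(x_q) and enters H_h(phi)(x_q) with a + sign.
  }
```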
The performance of the numerical algorithm will be assessed using a manufactured solution \(u:(0,1)^d\rightarrow\mathbb{R}\) given by
\[ u(x,y)=x^2(1-x)^2y^2(1-y)^2 \]
if \(d=2\), while if \(d=3\) we take
\[ u(x,y,z)=x^2(1-x)^2y^2(1-y)^2z^2(1-z)^2. \]
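As a minimal sketch (the class name is illustrative, and the program also provides the gradient and Hessian), the value of this solution can be coded once for both dimensions as a product over the coordinates:

```cpp
#include <deal.II/base/function.h>
#include <deal.II/base/point.h>

// Sketch: u(x) = prod_d x_d^2 (1-x_d)^2, value only.
template <int dim>
class ExactSolution : public Function<dim>
{
public:
  virtual double value(const Point<dim> &p,
                       const unsigned int /*component*/ = 0) const override
  {
    double val = 1.0;
    for (unsigned int d = 0; d < dim; ++d)
      val *= p[d] * p[d] * (1.0 - p[d]) * (1.0 - p[d]);
    return val;
  }
};
```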
For different values of \(h\), we will report the error \(u-u_h\) measured in the discrete \(H^2\) metric (defined above but extended to piecewise \(H^2\) functions), the discrete \(H^1\) metric
\[ \|v\|_{H_h^1(\Omega)}^2 \dealcoloneq \|\nabla_h v\|_{L^2(\Omega)}^2+\sum_{e\in\mathcal{E}_h}h_e^{-1}\|\jump{v}\|_{L^2(e)}^2, \quad v\in \prod_{K\in\mathcal{T}_h}H^1(K), \]
as well as the \(L^2\) metric.
All the include files have already been discussed in previous tutorials.
The following three header files are for the solvers. The linear system is solved using a direct method while the conjugate gradient method is used to solve the local problems for the lifting terms.
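These three headers are most likely the following (a sketch; consult the actual source file for the authoritative list):

```cpp
#include <deal.II/lac/sparse_direct.h> // SparseDirectUMFPACK for the global system
#include <deal.II/lac/solver_cg.h>     // SolverCG for the local lifting problems
#include <deal.II/lac/precondition.h>  // PreconditionIdentity
```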
The BiLaplacianLDGLift class template

The main class of this program is similar to that of step-3 or step-20, as well as many other tutorial programs. The key function here is compute_discrete_hessians() which, as its name suggests, computes the discrete Hessians needed for the assembly of the matrix \(A\).
As indicated by its name, the function assemble_local_matrix() is used for the assembly of the (local) mass matrix used to compute the two lifting terms (see the matrix \(\boldsymbol{M}_c\) introduced in the introduction when describing the computation of \(b_e\)). The function compute_discrete_hessians() computes the required discrete Hessians: the discrete Hessians of the basis functions with support on the current cell (stored in the output variable discrete_hessians) and of the basis functions with support on a neighbor of the current cell (stored in the output variable discrete_hessians_neigh). More precisely, discrete_hessians[i][q_point] stores \(H_h(\varphi_i)(x_q)\), where \(\varphi_i\) is a basis function with support on cell, while discrete_hessians_neigh[face_no][i][q_point] stores \(H_h(\varphi_i)(x_q)\), where \(\varphi_i\) is a basis function of the neighboring cell adjacent to the face face=cell->face(face_no).
We also need a variable that describes the finite element space \([\mathbb{V}_h]^{d\times d}\) used for the two lifting operators. The other member variables below are as in most of the other tutorial programs.
Finally, the last two variables correspond to the penalty coefficients \(\gamma_1\) and \(\gamma_0\) for the jump of \(\nabla_hu_h\) and \(u_h\), respectively.
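The corresponding member declarations might look roughly as follows (a sketch; names and details may differ from the program):

```cpp
// Sketch of the member variables described above:
Triangulation<dim> triangulation;

FE_DGQ<dim>     fe;          // the discontinuous space V_h for u_h
DoFHandler<dim> dof_handler;

FESystem<dim> fe_lift;       // [V_h]^{d x d}, used for the liftings r_e and b_e

SparsityPattern      sparsity_pattern;
SparseMatrix<double> matrix;   // the matrix A
Vector<double>       rhs;      // the vector F
Vector<double>       solution; // the vector U

double penalty_jump_grad; // gamma_1, multiplies the jumps of grad_h u_h
double penalty_jump_val;  // gamma_0, multiplies the jumps of u_h
```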
This class implements the right-hand side \(f=\Delta^2 u\) corresponding to the manufactured solution \(u\) defined in the introduction.
This class implements the manufactured (exact) solution \(u\). To compute the errors, we need the value of \(u\) as well as its gradient and its Hessian.
Implementation of the BiLaplacianLDGLift class

In the constructor, we set the polynomial degree of the two finite element spaces, we associate the corresponding DoF handlers to the triangulation, and we set the two penalty coefficients.
To build a mesh for \(\Omega=(0,1)^d\), we simply call the function GridGenerator::hyper_cube and then refine it using refine_global. The number of refinements is hard-coded here.
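A minimal sketch of what this function does (with n_refinements a hard-coded constant):

```cpp
GridGenerator::hyper_cube(triangulation, 0.0, 1.0); // Omega = (0,1)^dim
triangulation.refine_global(n_refinements);
```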
In the following function, we set up the degrees of freedom, the sparsity pattern, the size of the matrix \(A\), and the sizes of the solution and right-hand side vectors \(\boldsymbol{U}\) and \(\boldsymbol{F}\). For the sparsity pattern, we cannot directly use the function DoFTools::make_flux_sparsity_pattern (as we would do, for instance, for the SIPG method) because we need to take into account the interactions of a neighboring cell with another neighboring cell, as described in the introduction. The extended sparsity pattern is built by iterating over all the active cells. For the current cell, we collect all its degrees of freedom as well as the degrees of freedom of all its neighboring cells, and then couple everything with everything.
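A sketch of this loop, assuming a DynamicSparsityPattern dsp of size dof_handler.n_dofs() by dof_handler.n_dofs() has already been created, and ignoring hanging nodes as in the rest of the program:

```cpp
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();

std::vector<types::global_dof_index> dofs_cell(dofs_per_cell);
std::vector<types::global_dof_index> dofs_neighbor(dofs_per_cell);

for (const auto &cell : dof_handler.active_cell_iterators())
  {
    // Collect the DoF indices of the cell and of all its neighbors...
    std::vector<types::global_dof_index> dofs_patch;

    cell->get_dof_indices(dofs_cell);
    dofs_patch.insert(dofs_patch.end(), dofs_cell.begin(), dofs_cell.end());

    for (const unsigned int face_no : cell->face_indices())
      if (!cell->face(face_no)->at_boundary())
        {
          cell->neighbor(face_no)->get_dof_indices(dofs_neighbor);
          dofs_patch.insert(dofs_patch.end(),
                            dofs_neighbor.begin(),
                            dofs_neighbor.end());
        }

    // ...and couple everything with everything.
    for (const types::global_dof_index i : dofs_patch)
      for (const types::global_dof_index j : dofs_patch)
        dsp.add(i, j);
  }
```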
At the end of the function, we output this sparsity pattern as a scalable vector graphic. You can visualize it by loading this file in most web browsers.
This function simply calls the two functions responsible for the assembly of the matrix and the right-hand side.
This function assembles the matrix \(A\) whose entries are defined by \(A_{ij}=A_h(\varphi_j,\varphi_i)\) which involves the product of discrete Hessians and the penalty terms.
As indicated in the introduction, the following matrices are used for the contributions of the products of the discrete Hessians.
The following matrices are used for the contributions of the two penalty terms:
We now compute all the discrete Hessians that are not vanishing on the current cell, i.e., the discrete Hessian of all the basis functions with support on the current cell or on one of its neighbors.
First, we compute and add the interactions of the degrees of freedom of the current cell.
Next, we compute and add the interactions of the degrees of freedom of the current cell with those of its neighbors. Note that the interactions of the degrees of freedom of a neighbor with those of the same neighbor are included here.
There is nothing to be done if the face is a boundary face: the liftings of the Dirichlet BCs are accounted for in the assembly of the RHS (and, in fact, nothing needs to be done in this program since we prescribe homogeneous BCs).
We now compute and add the interactions of the degrees of freedom of a neighboring cell with those of another neighboring cell (this is where we need the extended sparsity pattern).
As before, the Dirichlet BCs are accounted for in the assembly of the RHS; in fact, there is nothing to be done in this program since we prescribe homogeneous BCs.
Finally, we compute and add the two penalty terms.
In the next step, we need to have a global way to compare the cells in order to not calculate the same jump term twice:
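One way to achieve this (a sketch of the idea; the actual test used in the program may differ) is to compare the CellId of the two cells sharing a face and let only the cell with the smaller id assemble the corresponding jump terms:

```cpp
// Sketch: visit each interior face only once by letting the cell with the
// smaller CellId be responsible for the jump terms of that face.
for (const unsigned int face_no : cell->face_indices())
  {
    const auto face = cell->face(face_no);

    if (face->at_boundary() ||
        cell->id() < cell->neighbor(face_no)->id())
      {
        // ...assemble the penalty contributions associated with this face
      }
  }
```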
This function assembles the right-hand side of the system. Since we consider homogeneous Dirichlet boundary conditions, imposed weakly in the bilinear form using the Nitsche approach, it only involves the contribution of the forcing term \(\int_{\Omega}fv_h\).
To solve the linear system \(A\boldsymbol{U}=\boldsymbol{F}\), we proceed as in step-74 and use a direct method. We could as well use an iterative method, for instance the conjugate gradient method as in step-3.
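A sketch of such a direct solve with SparseDirectUMFPACK (member names as in the sketch of the class above):

```cpp
// Factorize A once, then apply A^{-1} to F.
SparseDirectUMFPACK A_direct;
A_direct.initialize(matrix);
A_direct.vmult(solution, rhs); // solution = A^{-1} rhs
```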
This function computes the discrete \(H^2\), \(H^1\) and \(L^2\) norms of the error \(u-u_h\), where \(u\) is the exact solution and \(u_h\) is the approximate solution. See the introduction for the definition of the norms.
We introduce some variables for the exact solution...
...and for the approximate solution:
We first add the bulk terms.
We then add the face contributions.
In the next step, we need to have a global way to compare the cells in order to not calculate the same jump term twice:
To compute the jump term, we use the fact that \(\jump{u}=0\) and \(\jump{\nabla u}=\mathbf{0}\) since \(u\in H^2(\Omega)\).
This function, which writes the solution to a vtk file, is copied from step-3.
As already mentioned above, this function is used to assemble the (local) mass matrices needed for the computations of the lifting terms. We reiterate that only the basis functions with support on the current cell are considered.
This function is the main novelty of this program. It computes the discrete Hessian \(H_h(\varphi)\) for all the basis functions \(\varphi\) of \(\mathbb{V}_h\) supported on the current cell and those supported on a neighboring cell. The first argument indicates the current cell (referring to the global DoFHandler object), while the other two arguments are output variables that are filled by this function.
In the following, we need to evaluate finite element shape functions for the fe_lift finite element on the current cell. Like for example in step-61, this "lift" space is defined on every cell individually; as a consequence, there is no global DoFHandler associated with it because we simply have no need for such a DoFHandler. That leaves the question of what we should initialize the FEValues and FEFaceValues objects with when we ask them to evaluate shape functions of fe_lift on a concrete cell. If we simply provide the first argument to this function, cell, to FEValues::reinit(), we will receive an error message that the given cell belongs to a DoFHandler that has a different finite element associated with it than the fe_lift object we want to evaluate. Fortunately, there is a relatively easy solution: we can call FEValues::reinit() with a cell that points into a triangulation – the same cell, but not associated with a DoFHandler, and consequently no finite element space. In that case, FEValues::reinit() will skip the check that would otherwise lead to an error message. All we have to do is to convert the DoFHandler cell iterator into a Triangulation cell iterator; see the first couple of lines of the function below to see how this can be done.
The information we need from the basis functions of \(\mathbb{V}_h\): fe_values is needed to add the broken Hessian part of the discrete Hessian, while fe_face and fe_face_neighbor are used to compute the right-hand sides for the local problems.

The information needed from the basis functions of the finite element space for the lifting terms: fe_values_lift is used for the (local) mass matrix (see \(\boldsymbol{M}_c\) in the introduction), while fe_face_lift is used to compute the right-hand sides (see \(\boldsymbol{G}_c\) for \(b_e\)).
We start by assembling the (local) mass matrix used for the computation of the lifting terms \(r_e\) and \(b_e\).
In this loop, we compute the discrete Hessian at each quadrature point \(x_q\) of cell for each basis function supported on cell, namely we fill in the variable discrete_hessians[i][q]. For the lifting terms, we need to add the contribution of all the faces of cell.
Recall that by convention, the average of a function across a boundary face \(e\) reduces to the trace of the function on the only element adjacent to \(e\), namely there is no factor \(\frac{1}{2}\). We distinguish between the two cases (the current face lies in the interior or on the boundary of the domain) using the variable factor_avg.
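A sketch of how factor_avg could be set:

```cpp
// On a boundary face the average is just the trace of the single adjacent
// cell, so no factor 1/2 is applied there.
const double factor_avg = (cell->face(face_no)->at_boundary() ? 1.0 : 0.5);
```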
Here, local_rhs_be(m) corresponds to \(G_m\) introduced in the comments about the implementation of the lifting \(b_e\) in the case \(\varphi=\varphi^c\).
In this loop, we compute the discrete Hessian at each quadrature point \(x_q\) of cell for each basis function supported on a neighboring cell neighbor_cell of cell, namely we fill in the variable discrete_hessians_neigh[face_no][i][q]. For the lifting terms, we only need to add the contribution of the face adjacent to cell and neighbor_cell.
For non-homogeneous Dirichlet BCs, we would need to compute the lifting of the prescribed BC (see the "Possible Extensions" section for more details).
Here, local_rhs_be(m) corresponds to \(G_m\) introduced in the comments about the implementation of the lifting \(b_e\) in the case \(\varphi=\varphi^n\).
The main function

This is the main function. We define here the number of mesh refinements, the polynomial degree for the two finite element spaces (for the solution and the two liftings), and the two penalty coefficients. We can also change the dimension to run the code in 3d.
When running the program, the sparsity pattern is written to an svg file, the solution is written to a vtk file, and some results are printed to the console. With the current setup, the output should read
This corresponds to the bi-Laplacian problem with the manufactured solution mentioned above for \(d=2\), 3 refinements of the mesh, degree \(k=2\), and \(\gamma_0=\gamma_1=1\) for the penalty coefficients. By changing the number of refinements, we get the following results:
n_ref | n_cells | n_dofs | error H2 | rate | error H1 | rate | error L2 | rate |
---|---|---|---|---|---|---|---|---|
1 | 4 | 36 | 5.651e-02 | – | 3.366e-03 | – | 3.473e-04 | – |
2 | 16 | 144 | 3.095e-02 | 0.87 | 1.284e-03 | 1.39 | 1.369e-04 | 1.34 |
3 | 64 | 576 | 1.511e-02 | 1.03 | 3.997e-04 | 1.68 | 5.339e-05 | 1.36 |
4 | 256 | 2304 | 7.353e-03 | 1.04 | 1.129e-04 | 1.82 | 1.691e-05 | 1.66 |
5 | 1024 | 9216 | 3.609e-03 | 1.03 | 3.024e-05 | 1.90 | 4.789e-06 | 1.82 |
6 | 4096 | 36864 | 1.785e-03 | 1.02 | 7.850e-06 | 1.95 | 1.277e-06 | 1.91 |
This matches the expected optimal convergence rates for the \(H^2\) and \(H^1\) norms, but is sub-optimal for the \(L^2\) norm. Incidentally, this also matches the results seen in step-47 when using polynomial degree \(k=2\).
Indeed, just like in step-47, we can regain the optimal convergence order if we set the polynomial degree of the finite elements to \(k=3\) or higher. Here are the numbers for \(k=3\):
n_ref | n_cells | n_dofs | error H2 | rate | error H1 | rate | error L2 | rate |
---|---|---|---|---|---|---|---|---|
1 | 4 | 36 | 1.451e-02 | – | 5.494e-04 | – | 3.035e-05 | – |
2 | 16 | 144 | 3.565e-03 | 2.02 | 6.870e-05 | 3.00 | 2.091e-06 | 3.86 |
3 | 64 | 576 | 8.891e-04 | 2.00 | 8.584e-06 | 3.00 | 1.352e-07 | 3.95 |
4 | 256 | 2304 | 2.223e-04 | 2.00 | 1.073e-06 | 3.00 | 8.594e-09 | 3.98 |
5 | 1024 | 9216 | 5.560e-05 | 2.00 | 1.341e-07 | 3.00 | 5.418e-10 | 3.99 |
6 | 4096 | 36864 | 1.390e-05 | 2.00 | 1.676e-08 | 3.00 | 3.245e-11 | 4.06 |
The code can be easily adapted to deal with the following cases:

- Non-homogeneous Dirichlet boundary conditions, which require a modification of the right-hand side \(\boldsymbol{F}\);
- meshes containing hanging nodes, which require adapting the lifting terms in compute_discrete_hessians and the penalty terms in assemble_matrix.

We give below additional details for the first of these points.
If we prescribe non-homogeneous Dirichlet conditions, say
\[ \nabla u=\mathbf{g} \quad \mbox{and} \quad u=g \qquad \mbox{on } \partial \Omega, \]
then the right-hand side \(\boldsymbol{F}\) of the linear system needs to be modified as follows
\[ F_i:=\int_{\Omega}f\varphi_i-\sum_{e\in\mathcal{E}_h^b}\int_{\Omega}r_e(\mathbf{g}):H_h(\varphi_i)+\sum_{e\in\mathcal{E}_h^b}\int_{\Omega}b_e(g):H_h(\varphi_i)+\gamma_1\sum_{e\in\mathcal{E}_h^b}h_e^{-1}\int_e\mathbf{g}\cdot\nabla\varphi_i+\gamma_0\sum_{e\in\mathcal{E}_h^b}h_e^{-3}\int_e g\varphi_i, \qquad 1\leq i \leq N_h. \]
Note that for any given index \(i\), many of the terms are zero. Indeed, for \(e\in \mathcal{E}_h^b\) we have \({\rm supp}\,(r_e(\mathbf{g}))={\rm supp}\,(b_e(g))=K\), where \(K\) is the element for which \(e\subset\partial K\). Therefore, the liftings \(r_e(\mathbf{g})\) and \(b_e(g)\) contribute to \(F_i\) only if \(\varphi_i\) has support on \(K\) or a neighbor of \(K\). In other words, when integrating on a cell \(K\), we need to add
\[ \int_{K}f\varphi_i+\sum_{e\in\mathcal{E}_h^b, e\subset\partial K}\left[-\int_{K}r_e(\mathbf{g}):H_h(\varphi_i)+\int_{K}b_e(g):H_h(\varphi_i)+\gamma_1h_e^{-1}\int_e\mathbf{g}\cdot\nabla\varphi_i+\gamma_0h_e^{-3}\int_e g\varphi_i\right] \]
to \(F_i\) for the indices \(i\) such that \(\varphi_i\) has support on \(K\) and
\[ \sum_{e\in\mathcal{E}_h^b, e\subset\partial K}\left[-\int_{K}r_e(\mathbf{g}):H_h(\varphi_i)+\int_{K}b_e(g):H_h(\varphi_i)\right] \]
to \(F_i\) for the indices \(i\) such that \(\varphi_i\) has support on a neighbor of \(K\).