This program was contributed by Jean-Paul Pelteret.
The aim of this tutorial is, quite simply, to introduce the fundamentals of both automatic and symbolic differentiation (respectively abbreviated as AD and SD): Ways in which one can, in source code, describe a function \(\mathbf f(\mathbf x)\) and automatically also obtain a representation of derivatives \(\nabla \mathbf f(\mathbf x)\) (the "Jacobian"), \(\nabla^2 \mathbf f(\mathbf x)\) (the "Hessian"), etc., without having to write additional lines of code. Doing this is quite helpful in solving nonlinear or optimization problems where one would like to only describe the nonlinear equation or the objective function in the code, without having to also provide their derivatives (which are necessary for a Newton method for solving a nonlinear problem, or for finding a minimizer).
Since AD and SD tools are somewhat independent of finite elements and boundary value problems, this tutorial is going to be different to the others that you may have read beforehand. It will focus specifically on how these frameworks work and the principles and thinking behind them, and will forgo looking at them in the direct context of a finite element simulation.
We will, in fact, look at two different sets of problems that have greatly different levels of complexity, but when framed properly hold sufficient similarity that the same AD and SD frameworks can be leveraged. With these examples the aim is to build up an understanding of the steps that are required to use the AD and SD tools, the differences between them, and hopefully identify where they could immediately be used in order to improve or simplify existing code.
It's plausible that you're wondering what AD and SD are, in the first place. Well, that question is easy to answer but without context is not very insightful. So we're not going to cover that in this introduction, but will rather defer this until the first introductory example where we lay out the key points as this example unfolds. To complement this, we should mention that the core theory for both frameworks is extensively discussed in the Automatic and symbolic differentiation module, so it bears little repeating here.
Since we have to pick some sufficiently interesting topic to investigate and identify where AD and SD can be used effectively, the main problem that's implemented in the second half of the tutorial is one of modeling a coupled constitutive law, specifically a magneto-active material (with hysteretic effects). As an introduction to that, some grounding theory for this class of materials will be presented later in this introduction. Naturally, this is not a field (or even a class of materials) that is of interest to a wide audience. Therefore, the author wishes to express up front that this theory and any subsequent derivations mustn't be considered the focus of this tutorial. Instead, keep in mind the complexity of the problem that arises from the relatively innocuous description of the constitutive law, and what we might (in the context of a boundary value problem) need to derive from that. We will perform some computations with these constitutive laws at the level of a representative continuum point (so, remaining in the realm of continuum mechanics), and will produce some benchmark results around which we can frame a final discussion on the topic of computational performance.
Once we have the foundation upon which we can build further concepts, we will see how AD in particular can be exploited at a finite element (rather than continuum) level: this is a topic that is covered in step-72, as well as step-33. But before then, let's take a moment to think about why we might want to consider using these sorts of tools, and what benefits they can potentially offer you.
The primary driver for using AD or SD is typically that there is some situation that requires differentiation to be performed, and that doing so is sufficiently challenging to make the prospect of using an external tool to perform that specific task appealing. The circumstances under which AD or SD can be rendered most useful include (but are probably not limited to) the following:
This tutorial program will have two parts: One where we just introduce the basic ideas of automatic and symbolic differentiation support in deal.II using a simple set of examples; and one where we apply this to a realistic but much more complicated case. For that second half, the next section will provide some background on magneto-mechanical materials – you can skip this section if all you want to learn about is what AD and SD actually are, but you probably want to read over this section if you are interested in how to apply AD and SD for concrete situations.
As a prelude to introducing the coupled magneto-mechanical material law that we'll use to model a magneto-active polymer, we'll start with a very concise summary of the salient thermodynamics to which these constitutive laws must subscribe. The basis for the theory, as summarized here, is described in copious detail by Truesdell and Toupin [169] and Coleman and Noll [56], and follows the logic laid out by Holzapfel [100].
Starting from the first law of thermodynamics, and following a few technical assumptions, it can be shown that the balance between the kinetic plus internal energy rates and the power supplied to the system from external sources is given by the following relationship that equates the rate of change of the energy in an (arbitrary) volume \(V\) on the left, and the sum of forces acting on that volume on the right:
\[ D_{t} \int\limits_{V} \left[ \frac{1}{2} \rho_{0} \mathbf{v} \cdot \mathbf{v} + U^{*}_{0} \right] dV = \int\limits_{V} \left[ \rho_{0} \mathbf{v} \cdot \mathbf{a} + \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} + \mathbb{E} \cdot \dot{\mathbb{D}} - D_{t} M^{*}_{0} - \nabla_{0} \cdot \mathbf{Q} + R_{0} \right] dV . \]
Here \(D_{t}\) represents the total time derivative, \(\rho_{0}\) is the material density as measured in the Lagrangian reference frame, \(\mathbf{v}\) is the material velocity and \(\mathbf{a}\) its acceleration, \(U^{*}_{0}\) is the internal energy per unit reference volume, \(\mathbf{P}^{\text{tot}}\) is the total Piola stress tensor and \(\dot{\mathbf{F}}\) is the time rate of the deformation gradient tensor, \(\boldsymbol{\mathbb{H}}\) and \(\boldsymbol{\mathbb{B}}\) are, respectively, the magnetic field vector and the magnetic induction (or magnetic flux density) vector, \(\mathbb{E}\) and \(\mathbb{D}\) are the electric field vector and electric displacement vector, and \(\mathbf{Q}\) and \(R_{0}\) represent the referential thermal flux vector and thermal source. The material differential operator \(\nabla_{0} (\bullet) \dealcoloneq \frac{d(\bullet)}{d\mathbf{X}}\) where \(\mathbf{X}\) is the material position vector. With some rearrangement of terms, invoking the arbitrariness of the integration volume \(V\), the total internal energy density rate \(\dot{E}_{0}\) can be identified as
\[ \dot{E}_{0} = \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} + \mathbb{E} \cdot \dot{\mathbb{D}} - \nabla_{0} \cdot \mathbf{Q} + R_{0} . \]
The total internal energy includes contributions that arise not only due to mechanical deformation (the first term), and thermal fluxes and sources (the fourth and fifth terms), but also due to the intrinsic energy stored in the magnetic and electric fields themselves (the second and third terms, respectively).
The second law of thermodynamics, known also as the entropy inequality principle, informs us that certain thermodynamic processes are irreversible. After accounting for the total entropy and rate of entropy input, the Clausius-Duhem inequality can be derived. In local form (and in the material configuration), this reads
\[ \theta \dot{\eta}_{0} - R_{0} + \nabla_{0} \cdot \mathbf{Q} - \frac{1}{\theta} \nabla_{0} \theta \cdot \mathbf{Q} \geq 0 . \]
The quantity \(\theta\) is the absolute temperature, and \(\eta_{0}\) represents the entropy per unit reference volume.
Using this to replace \(R_{0} - \nabla_{0} \cdot \mathbf{Q}\) in the result stemming from the first law of thermodynamics, we now have the relation
\[ \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} + \mathbb{E} \cdot \dot{\mathbb{D}} + \theta \dot{\eta}_{0} - \dot{E}_{0} - \frac{1}{\theta} \nabla_{0} \theta \cdot \mathbf{Q} \geq 0 . \]
On the basis of Fourier's law, which informs us that heat flows from regions of high temperature to low temperature, the last term is always positive and can be ignored. This renders the local dissipation inequality
\[ \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} + \mathbb{E} \cdot \dot{\mathbb{D}} - \left[ \dot{E}_{0} - \theta \dot{\eta}_{0} \right] \geq 0 . \]
It is postulated [100] that the Legendre transformation
\[ \psi^{*}_{0} = \psi^{*}_{0} \left( \mathbf{F}, \boldsymbol{\mathbb{B}}, \mathbb{D}, \theta \right) = E_{0} - \theta \eta_{0} , \]
from which we may define the free energy density function \(\psi^{*}_{0}\) with the stated parameterization, exists and is valid. Taking the material rate of this equation and substituting it into the local dissipation inequality results in the generic expression
\[ \mathcal{D}_{\text{int}} = \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} + \mathbb{E} \cdot \dot{\mathbb{D}} - \dot{\theta} \eta_{0} - \dot{\psi}^{*}_{0} \left( \mathbf{F}, \boldsymbol{\mathbb{B}}, \mathbb{D}, \theta \right) \geq 0 . \]
Under the assumption of isothermal conditions, and that the electric field does not excite the material in a manner that is considered non-negligible, then this dissipation inequality reduces to
\[ \mathcal{D}_{\text{int}} = \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} - \dot{\psi}^{*}_{0} \left( \mathbf{F}, \boldsymbol{\mathbb{B}} \right) \geq 0 . \]
When considering materials that exhibit mechanically dissipative behavior, it can be shown that this can be captured within the dissipation inequality through the augmentation of the material free energy density function with additional parameters that represent internal variables [99]. Consequently, we write it as
\[ \mathcal{D}_{\text{int}} = \mathbf{P}^{\text{tot}} : \dot{\mathbf{F}} + \boldsymbol{\mathbb{H}} \cdot \dot{\boldsymbol{\mathbb{B}}} - \dot{\psi}^{*}_{0} \left( \mathbf{F}, \mathbf{F}_{v}^{i}, \boldsymbol{\mathbb{B}} \right) \geq 0 . \]
where \(\mathbf{F}_{v}^{i} = \mathbf{F}_{v}^{i} \left( t \right)\) represents the internal variable (which acts like a measure of the deformation gradient) associated with the i-th mechanical dissipative (viscous) mechanism. As can be inferred from its parameterization, each of these internal parameters is considered to evolve in time. Currently the free energy density function \(\psi^{*}_{0}\) is parameterized in terms of the magnetic induction \(\boldsymbol{\mathbb{B}}\). This is the natural parameterization that comes as a consequence of the considered balance laws. Should such a class of materials be incorporated within a finite-element model, a certain formulation of the magnetic problem, known as the magnetic vector potential formulation, would need to be adopted. This has its own set of challenges, so where possible the simpler magnetic scalar potential formulation may be preferred. In that case, the magnetic problem needs to be parameterized in terms of the magnetic field \(\boldsymbol{\mathbb{H}}\). To make this re-parameterization, we execute a final Legendre transformation
\[ \tilde{\psi}_{0} \left( \mathbf{F}, \mathbf{F}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) = \psi^{*}_{0} \left( \mathbf{F}, \mathbf{F}_{v}^{i}, \boldsymbol{\mathbb{B}} \right) - \boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{B}} . \]
At the same time, we may take advantage of the principle of material frame indifference in order to express the energy density function in terms of symmetric deformation measures:
\[ \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) = \tilde{\psi}_{0} \left( \mathbf{F}, \mathbf{F}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) . \]
The upshot of these two transformations (leaving out considerable explicit and hidden details) renders the final expression for the reduced dissipation inequality as
\[ \mathcal{D}_{\text{int}} = \mathbf{S}^{\text{tot}} : \frac{1}{2} \dot{\mathbf{C}} - \boldsymbol{\mathbb{B}} \cdot \dot{\boldsymbol{\mathbb{H}}} - \dot{\psi}_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) \geq 0 . \]
(Notice the sign change on the second term on the right hand side, and the transfer of the time derivative to the magnetic induction vector.) The stress quantity \(\mathbf{S}^{\text{tot}}\) is known as the total Piola-Kirchhoff stress tensor, its energy conjugate \(\mathbf{C} = \mathbf{F}^{T} \cdot \mathbf{F}\) is the right Cauchy-Green deformation tensor, and \(\mathbf{C}_{v}^{i} = \mathbf{C}_{v}^{i} \left( t \right)\) is the re-parameterized internal variable associated with the i-th mechanical dissipative (viscous) mechanism.
Expansion of the material rate of the energy density function, and rearrangement of the various terms, results in the expression
\[ \mathcal{D}_{\text{int}} = \left[ \mathbf{S}^{\text{tot}} - 2 \frac{\partial \psi_{0}}{\partial \mathbf{C}} \right] : \frac{1}{2} \dot{\mathbf{C}} - \sum\limits_{i}\left[ 2 \frac{\partial \psi_{0}}{\partial \mathbf{C}_{v}^{i}} \right] : \frac{1}{2} \dot{\mathbf{C}}_{v}^{i} + \left[ - \boldsymbol{\mathbb{B}} - \frac{\partial \psi_{0}}{\partial \boldsymbol{\mathbb{H}}} \right] \cdot \dot{\boldsymbol{\mathbb{H}}} \geq 0 . \]
At this point, it's worth noting the use of the partial derivatives \(\partial \left( \bullet \right)\). This is an important detail that will be fundamental to a certain design choice made within the tutorial. As a brief reminder of what this signifies, the partial derivative of a multi-variate function returns the derivative of that function with respect to one of those variables while holding the others constant:
\[ \frac{\partial f\left(x, y\right)}{\partial x} = \frac{d f\left(x, y\right)}{d x} \Big\vert_{y} . \]
More specific to what's encoded in the dissipation inequality (with the very general free energy density function \(\psi_{0}\), whose parameterization is yet to be formalized), if one of the input variables is a function of another, it too is held constant and the chain rule does not propagate any further, while computing the total derivative would imply judicious use of the chain rule. This can be better understood by comparing the following two statements:
\begin{align*} \frac{\partial f\left(x, y\left(x\right)\right)}{\partial x} &= \frac{d f\left(x, y\left(x\right)\right)}{d x} \Big\vert_{y} \\ \frac{d f\left(x, y\left(x\right)\right)}{d x} &= \frac{d f\left(x, y\left(x\right)\right)}{d x} \Big\vert_{y} + \frac{d f\left(x, y\left(x\right)\right)}{d y} \Big\vert_{x} \frac{d y\left(x\right)}{d x} . \end{align*}
Returning to the thermodynamics of the problem, we next exploit the arbitrariness of the quantities \(\dot{\mathbf{C}}\) and \(\dot{\boldsymbol{\mathbb{H}}}\), by application of the Coleman-Noll procedure [56], [55]. This leads to the identification of the kinetic conjugate quantities
\[ \mathbf{S}^{\text{tot}} = \mathbf{S}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) \dealcoloneq 2 \frac{\partial \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right)}{\partial \mathbf{C}} , \\ \boldsymbol{\mathbb{B}} = \boldsymbol{\mathbb{B}} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) \dealcoloneq - \frac{\partial \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}}} . \]
(Again, note the use of the partial derivatives to define the stress and magnetic induction in this generalized setting.) From what terms remain in the dissipative power (namely those related to the mechanical dissipative mechanisms), if they are assumed to be independent of one another then, for each mechanism i,
\[ \frac{\partial \psi_{0}}{\partial \mathbf{C}_{v}^{i}} : \dot{\mathbf{C}}_{v}^{i} \leq 0 . \]
This constraint must be satisfied through the appropriate choice of free energy function, as well as a carefully considered evolution law for the internal variables.
In the case that there are no dissipative mechanisms to be captured within the constitutive model (e.g., if the material to be modelled is magneto-hyperelastic) then the free energy density function \(\psi_{0} = \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)\) reduces to a stored energy density function, and the total stress and magnetic induction simplify to
\begin{align*} \mathbf{S}^{\text{tot}} = \mathbf{S}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &\dealcoloneq 2 \frac{d \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \mathbf{C}} , \\ \boldsymbol{\mathbb{B}} = \boldsymbol{\mathbb{B}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &\dealcoloneq - \frac{d \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} , \end{align*}
where the operator \(d\) denotes the total derivative operation.
For completeness, the linearization of the stress tensor and magnetic induction are captured within the fourth-order total referential elastic tangent tensor \(\mathcal{H}^{\text{tot}} \), the second-order magnetostatic tangent tensor \(\mathbb{D}\) and the third-order total referential magnetoelastic coupling tensor \(\mathfrak{P}^{\text{tot}}\). Irrespective of the parameterization of \(\mathbf{S}^{\text{tot}}\) and \(\boldsymbol{\mathbb{B}}\), these quantities may be computed by
\begin{align*} \mathcal{H}^{\text{tot}} &= 2 \frac{d \mathbf{S}^{\text{tot}}}{d \mathbf{C}} , \\ \mathbb{D} &= \frac{d \boldsymbol{\mathbb{B}}}{d \boldsymbol{\mathbb{H}}} , \\ \mathfrak{P}^{\text{tot}} &= - \frac{d \mathbf{S}^{\text{tot}}}{d \boldsymbol{\mathbb{H}}} , \\ \left[ \mathfrak{P}^{\text{tot}} \right]^{T} &= 2 \frac{d \boldsymbol{\mathbb{B}}}{d \mathbf{C}} . \end{align*}
For the case of rate-dependent materials, this expands to
\begin{align*} \mathcal{H}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) &= 4 \frac{d^{2} \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right)}{\partial \mathbf{C} \otimes d \mathbf{C}} , \\ \mathbb{D} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) &= -\frac{d^{2} \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}} \otimes d \boldsymbol{\mathbb{H}}} , \\ \mathfrak{P}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) &= - 2 \frac{d^{2} \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}} \otimes d \mathbf{C}} , \\ \left[ \mathfrak{P}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right) \right]^{T} &= - 2 \frac{d^{2} \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}^{i}, \boldsymbol{\mathbb{H}} \right)}{\partial \mathbf{C} \otimes d \boldsymbol{\mathbb{H}}} , \end{align*}
while for rate-independent materials the linearizations are
\begin{align*} \mathcal{H}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &= 4 \frac{d^{2} \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \mathbf{C} \otimes d \mathbf{C}} , \\ \mathbb{D} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &= -\frac{d^{2} \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}} \otimes d \boldsymbol{\mathbb{H}}} , \\ \mathfrak{P}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &= - 2 \frac{d^{2} \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}} \otimes d \mathbf{C}} , \\ \left[ \mathfrak{P}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) \right]^{T} &= - 2 \frac{d^{2} \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \mathbf{C} \otimes d \boldsymbol{\mathbb{H}}} . \end{align*}
The subtle difference between them is the application of a partial derivative during the calculation of the first derivatives. We'll see later how this affects the choice of AD versus SD for this specific application. For now, we'll simply introduce the two specific materials that are implemented within this tutorial.
The first material that we'll consider is one that is governed by a magneto-hyperelastic constitutive law. This material responds to both deformation as well as immersion in a magnetic field, but exhibits no time- or history-dependent behavior (such as dissipation through viscous damping or magnetic hysteresis, etc.). The stored energy density function for such a material is only parameterized in terms of the (current) field variables, but not their time derivatives or past values.
We'll choose the energy density function, which captures both the energy stored in the material due to deformation and magnetization, as well as the energy stored in the magnetic field itself, to be
\[ \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = \frac{1}{2} \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] + \lambda_{e} \ln^{2} \left(\text{det}(\mathbf{F}) \right) - \frac{1}{2} \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \]
with
\[ f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \]
and for which the variable \(d = \text{tr}(\mathbf{I})\) ( \(\mathbf{I}\) being the rank-2 identity tensor) represents the spatial dimension and \(\mathbf{F}\) is the deformation gradient tensor. To give some brief background to the various components of \(\psi_{0}\), the first two terms bear a great resemblance to the stored energy density function for a (hyperelastic) neo-Hookean material. The only difference between what's used here and the neo-Hookean material is the scaling of the elastic shear modulus by the magnetic field-sensitive saturation function \(f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)\) (see [146], equation 29). This function will, in effect, cause the material to stiffen in the presence of a strong magnetic field. As it is governed by a sigmoid-type function, the shear modulus will asymptotically converge on the specified saturation shear modulus. It can also be shown that the last term in \(\psi_{0}\) is the stored energy density function for the magnetic field (as derived from first principles), scaled by the relative permeability constant. This definition collectively implies that the material is linearly magnetized, i.e., the magnetization vector and magnetic field vector are aligned. (This is certainly not obvious with the magnetic energy stated in its current form, but when the magnetic induction and magnetization are derived from \(\psi_{0}\) and all magnetic fields are expressed in the current configuration then this correlation becomes clear.) As for the specifics of what the magnetic induction, stress tensor, and the various material tangents look like, we'll defer presenting these to the tutorial body where the full, unassisted implementation of the constitutive law is defined.
The second material that we'll formulate is one for a magneto-viscoelastic material with a single dissipative mechanism i. The free energy density function that we'll be considering is defined as
\begin{align*} \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) &= \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) \\ \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &= \frac{1}{2} \mu_{e} f_{\mu_{e}}^{ME} \left( \boldsymbol{\mathbb{H}} \right) \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] + \lambda_{e} \ln^{2} \left(\text{det}(\mathbf{F}) \right) - \frac{1}{2} \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \\ \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) &= \frac{1}{2} \mu_{v} f_{\mu_{v}}^{MVE} \left( \boldsymbol{\mathbb{H}} \right) \left[ \mathbf{C}_{v} : \left[ \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right] - d - \ln\left( \text{det}\left(\mathbf{C}_{v}\right) \right) \right] \end{align*}
with
\[ f_{\mu_{e}}^{ME} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \]
\[ f_{\mu_{v}}^{MVE} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{v}^{\infty}}{\mu_{v}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{v}^{\text{sat}}\right)^{2}} \right) \]
and the evolution law
\[ \dot{\mathbf{C}}_{v} \left( \mathbf{C} \right) = \frac{1}{\tau} \left[ \left[\left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}\right]^{-1} - \mathbf{C}_{v} \right] \]
for the internal viscous variable. We've chosen the magnetoelastic part of the energy \(\psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)\) to match that of the first material model that we explored, so this part needs no further explanation. As for the viscous part \(\psi_{0}^{MVE}\), this component of the free energy (in conjunction with the evolution law for the viscous deformation tensor) is taken from [121] (with the additional scaling by the viscous saturation function described in [146]). It is derived in a thermodynamically consistent framework that, at its core, models the movement of polymer chains on a micro-scale level.
To proceed beyond this point, we'll also need to consider the time discretization of the evolution law. Choosing the implicit, first-order backward difference scheme, we obtain
\[ \dot{\mathbf{C}}_{v} \approx \frac{\mathbf{C}_{v}^{(t)} - \mathbf{C}_{v}^{(t-1)}}{\Delta t} = \frac{1}{\tau} \left[ \left[\left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}\right]^{-1} - \mathbf{C}_{v}^{(t)} \right] \]
where the superscript \((t)\) denotes that the quantity is taken at the current timestep, and \((t-1)\) denotes quantities taken at the previous timestep (i.e., a history variable). The timestep size \(\Delta t\) is the difference between the current time and that of the previous timestep. Rearranging the terms so that all internal variable quantities at the current time are on the left hand side of the equation, we get
\[ \mathbf{C}_{v}^{(t)} = \frac{1}{1 + \frac{\Delta t}{\tau}} \left[ \mathbf{C}_{v}^{(t-1)} + \frac{\Delta t}{\tau} \left[\left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right]^{-1} \right] \]
which matches equation 54 of [121].
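For later reference, here is a minimal C++ sketch of this one-step update, assuming deal.II's SymmetricTensor class; the function and parameter names are illustrative, not those used in the program below.

```cpp
#include <deal.II/base/symmetric_tensor.h>

#include <cmath>

using namespace dealii;

// One-step update of the internal viscous variable: given C, the
// previous value C_v^(t-1), det(F), the timestep size, and the
// relaxation time tau, return C_v^(t).
template <int dim>
SymmetricTensor<2, dim>
updated_C_v(const SymmetricTensor<2, dim> &C,
            const SymmetricTensor<2, dim> &C_v_prev,
            const double                   det_F,
            const double                   delta_t,
            const double                   tau)
{
  // [det(F)^{-2/d} C]^{-1} = det(F)^{2/d} C^{-1}
  const SymmetricTensor<2, dim> C_bar_inv =
    std::pow(det_F, 2.0 / dim) * invert(C);

  return (1.0 / (1.0 + delta_t / tau)) *
         (C_v_prev + (delta_t / tau) * C_bar_inv);
}
```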
Now that we have shown all of these formulas for the thermodynamics and theory governing magneto-mechanics and constitutive models, let us outline what the program will do with all of this. We wish to do something meaningful with the materials laws that we've formulated, and so it makes sense to subject them to some mechanical and magnetic loading conditions that are, in some way, representative of some conditions that might be found either in an application or in a laboratory setting. One way to achieve that aim would be to embed these constitutive laws in a finite element model to simulate a device. In this instance, though, we'll keep things simple (we are focusing on the automatic and symbolic differentiation concepts, after all) and will find a concise way to faithfully replicate an industry-standard rheological experiment using an analytical expression for the loading conditions.
The rheological experiment that we'll reproduce, which idealizes a laboratory experiment that was used to characterize magneto-active polymers, is detailed in [146] (as well as [145], in which it is documented along with the real-world experiments). The images below provide a visual description of the problem set up.
The basic functional geometry of the parallel-plate rotational rheometer. The smooth rotor (blue) applies a torque to an experimental sample (red) of radius \(r\) and height \(H\), while an axially aligned magnetic field is generated by a magneto-rheological device. Although the time-dependent deformation profile may be varied, one common experiment would be to subject the material to a harmonic torsional deformation of constant amplitude and frequency \(\omega\).
Schematic of the kinematics of the problem, assuming no preloading or compression of the sample. A point \(\mathbf{P}\) located at azimuth \(\Theta\) is displaced to location \(\mathbf{p}\) at azimuth \(\theta = \Theta + \alpha\).
Under the assumptions that an incompressible medium is being tested, and that the deformation profile through the sample thickness is linear, then the displacement at some measurement point \(\mathbf{X}\) within the sample, expressed in radial coordinates, is
\begin{align*} r(\mathbf{X}) &= \frac{R(X_{1}, X_{2})}{\sqrt{\lambda_{3}}} , \\ \theta(\mathbf{X}) & = \Theta(X_{1}, X_{2}) + \underbrace{\tau(t) \lambda_{3} X_{3}}_{\alpha(X_{3}, t)} , \\ z(\mathbf{X}) &= \lambda_{3} X_{3} \end{align*}
where \(R(X_{1}, X_{2})\) and \(\Theta(X_{1}, X_{2})\) are the radius at – and angle of – the sampling point, \(\lambda_{3}\) is the (constant) axial deformation, and \(\tau(t) = \frac{A}{RH} \sin\left(\omega t\right)\) is the time-dependent torsion angle per unit length, prescribed using a sinusoidally repeating oscillation of fixed amplitude \(A\). The magnetic field is aligned axially, i.e., in the \(X_{3}\) direction.
This summarizes everything that we need to fully characterize the idealized loading at any point within the rheological sample. We'll set up the problem in such a way that we "pick" a representative point within this sample, and subject it to a harmonic shear deformation at a constant axial deformation (by default, a compressive load) and a constant, axially applied magnetic field. We will record the stress and magnetic induction at this point, and will output that data to file for post-processing. Although it's not necessary for this particular problem, we will also be computing the tangents. Even though they are not directly used in this particular piece of work, these second derivatives are needed to embed the constitutive law within a finite element model (one possible extension to this work). We'll therefore take the opportunity to check our hand calculations for correctness using the assisted differentiation frameworks.
In addition to the already mentioned Automatic and symbolic differentiation module, the following are a few references that discuss these topics in more detail:
We start by including all the necessary deal.II header files and some C++ related ones. This first header will give us access to a data structure that will allow us to store arbitrary data within it.
Next come some core classes, including one that provides an implementation for time-stepping.
Then some headers that define some useful coordinate transformations and kinematic relationships that are often found in nonlinear elasticity.
The following two headers provide all of the functionality that we need to perform automatic differentiation, and use the symbolic computer algebra system that deal.II can utilize. The headers of all automatic differentiation and symbolic differentiation wrapper classes, and any ancillary data structures that are required, are all collected inside these unifying headers.
Including this header allows us the capability to write output to a file stream.
As per usual, the entire tutorial program is defined within its own unique namespace.
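Gathering the descriptions above, the include list and namespace opening might look like the following sketch; the exact header set is inferred from the text and is not guaranteed to be complete.

```cpp
// Storage of arbitrary user data:
#include <deal.II/algorithms/general_data_storage.h>

// Core classes, including time-stepping support:
#include <deal.II/base/discrete_time.h>
#include <deal.II/base/parameter_acceptor.h>

// Coordinate transformations and kinematic relationships for
// nonlinear elasticity:
#include <deal.II/physics/elasticity/kinematics.h>
#include <deal.II/physics/elasticity/standard_tensors.h>
#include <deal.II/physics/transformations.h>

// Unifying headers for the automatic and symbolic differentiation
// wrapper classes:
#include <deal.II/differentiation/ad.h>
#include <deal.II/differentiation/sd.h>

// Output to a file stream:
#include <fstream>

namespace Step71
{
  using namespace dealii;

  // ... the rest of the program ...
} // namespace Step71
```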
Automatic and symbolic differentiation have some magical and mystical qualities. Although their use in a project can be beneficial for a multitude of reasons, the barrier to understanding how to use these frameworks or how they can be leveraged may exceed the patience of the developer that is trying to (reliably) integrate them into their work.
Although it is the wish of the author to successfully illustrate how these tools can be integrated into workflows for finite element modelling, it might be best to first take a step back and start right from the basics. So to start off with, we'll first have a look at differentiating a "simple" mathematical function using both frameworks, so that the fundamental operations (both their sequence and function) can be firmly established and understood with minimal complication. In the second part of this tutorial we will put these fundamentals into practice and build on them further.
Accompanying the description of the algorithmic steps to use the frameworks will be a simplified view as to what they might be doing in the background. This description will be very much one designed to aid understanding, and the reader is encouraged to view the Automatic and symbolic differentiation module documentation for a far more formal description into how these tools actually work.
In order to convince the reader that these tools are indeed useful in practice, let us choose a function for which it is not too difficult to compute the analytical derivatives by hand. It's just sufficiently complicated to make you think about whether or not you truly want to go through with this exercise, and might also make you question whether you are completely sure that your calculations and implementation for its derivatives are correct. The point, of course, is that differentiation of functions is in a sense relatively formulaic and should be something computers are good at – if we could build on existing software that understands the rules, we wouldn't have to bother with doing it ourselves.
We choose the two variable trigonometric function \(f(x,y) = \cos\left(\frac{y}{x}\right)\) for this purpose. Notice that this function is templated on the number type. This is done because we can often (but not always) use special auto-differentiable and symbolic types as drop-in replacements for real or complex valued types, and these will then perform some elementary calculations, such as evaluate a function value along with its derivatives. We will exploit that property and make sure that we need only define our function once, and then it can be re-used in whichever context we wish to perform differential operations on it.
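A sketch of such a templated function, following the description above; the use of std::cos relies on the AD and SD number types providing suitable overloads:

```cpp
template <typename NumberType>
NumberType f(const NumberType &x, const NumberType &y)
{
  // The same definition serves double, auto-differentiable, and
  // symbolic number types alike.
  return std::cos(y / x);
}
```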
Rather than revealing this function's derivatives immediately, we'll forward declare functions that return them and defer their definition to later. As implied by the function names, they respectively return the derivatives \(\frac{df(x,y)}{dx}\), \(\frac{df(x,y)}{dy}\), \(\frac{d^{2}f(x,y)}{dx^{2}}\), \(\frac{d^{2}f(x,y)}{dx dy}\), \(\frac{d^{2}f(x,y)}{dy dx}\) and, lastly, \(\frac{d^{2}f(x,y)}{dy^{2}}\), as sketched below.
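In code, these forward declarations might read as follows; the function names are chosen here for illustration:

```cpp
// Analytical first derivatives of f(x,y):
double df_dx(const double x, const double y);
double df_dy(const double x, const double y);

// Analytical second derivatives of f(x,y):
double d2f_dx_dx(const double x, const double y);
double d2f_dx_dy(const double x, const double y);
double d2f_dy_dx(const double x, const double y);
double d2f_dy_dy(const double x, const double y);
```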
To begin, we'll use AD as the tool to automatically compute derivatives for us. We will evaluate the function with the arguments x and y, and expect the resulting value and all of the derivatives to match to within the given tolerance.
Our function \(f(x,y)\) is a scalar-valued function, with arguments that represent the typical input variables that one comes across in algebraic calculations or tensor calculus. For this reason, the Differentiation::AD::ScalarFunction class is the appropriate wrapper class to use to do the computations that we require. (As a point of comparison, if the function arguments represented finite element cell degrees-of-freedom, we'd want to treat them differently.) The spatial dimension of the problem is irrelevant since we have no vector- or tensor-valued arguments to accommodate, so the dim template argument is arbitrarily assigned a value of 1. The second template argument stipulates which AD framework will be used (deal.II has support for several external AD frameworks), and what the underlying number type provided by this framework is to be used. This number type influences the maximum order of the differential operation, and the underlying algorithms that are used to compute them. Given its template nature, this choice is a compile-time decision because many (but not all) of the AD libraries exploit compile-time meta-programming to implement these special number types in an efficient manner. The third template parameter states what the result type is; in our case, we're working with doubles.
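Putting that together, the type definitions might look like this sketch; the specific AD framework chosen here, a Sacado nested forward-mode type that supports two levels of differentiation, is just one possible choice:

```cpp
constexpr unsigned int dim = 1;

// An AD number type supporting both first and second derivatives;
// sacado_dfad_dfad is one such choice.
constexpr Differentiation::AD::NumberTypes ADTypeCode =
  Differentiation::AD::NumberTypes::sacado_dfad_dfad;

using ADHelper =
  Differentiation::AD::ScalarFunction<dim, ADTypeCode, double>;
```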
It is necessary that we pre-register with our ADHelper class how many arguments (what we will call "independent variables") the function \(f(x,y)\) has. Those arguments are x and y, so obviously there are two of them.
We now have sufficient information to create and initialize an instance of the helper class. We can also get the concrete number type that will be used in all subsequent calculations. This is useful, because we can write everything from here on by referencing this type, and if we ever want to change the framework used, or number type (e.g., if we need more differential operations) then we need only adjust the ADTypeCode template parameter.
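As a sketch:

```cpp
// f(x,y) has two arguments, so we have two independent variables.
const unsigned int n_independent_variables = 2;

ADHelper ad_helper(n_independent_variables);

// The concrete auto-differentiable number type used in all
// subsequent calculations.
using ADNumberType = typename ADHelper::ad_type;
```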
The next step is to register the numerical values of the independent variables with the helper class. This is done because the function and its derivatives will be evaluated for exactly these arguments. Since we register them in the order {x,y}, the variable x will be assigned component number 0, and y will be component 1 – a detail that will be used in the next few lines.
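With x and y being the double values at which we want to evaluate everything, the registration might look like:

```cpp
// Register the point {x,y} at which the function and its
// derivatives will be evaluated.
ad_helper.register_independent_variables({x, y});
```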
We now ask for the helper class to give to us the independent variables with their auto-differentiable representation. These are termed "sensitive variables", because from this point on any operations that we do with the components of independent_variables_ad are tracked and recorded by the AD framework, and will be considered when we ask for the derivatives of something that they're used to compute. What the helper returns is a vector of auto-differentiable numbers, but we can be sure that the zeroth element represents x and the first element y. Just to make completely sure that there's no ambiguity of what number type these variables are, we suffix all of the auto-differentiable variables with ad.
We can immediately pass in our sensitive representation of the independent variables to our templated function that computes \(f(x,y)\). This also returns an auto-differentiable number.
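These two steps, sketched in code:

```cpp
// Retrieve the "sensitive" representation of the independent
// variables: element 0 is x, element 1 is y.
const std::vector<ADNumberType> independent_variables_ad =
  ad_helper.get_sensitive_variables();
const ADNumberType x_ad = independent_variables_ad[0];
const ADNumberType y_ad = independent_variables_ad[1];

// Evaluate the templated function with the sensitive variables;
// the result is itself an auto-differentiable number.
const ADNumberType f_ad = f(x_ad, y_ad);
```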
So now the natural question to ask is what we have actually just computed by passing these special x_ad and y_ad variables to the function f, instead of the original double variables x and y? In other words, how is all of this related to the computation of the derivatives that we were wanting to determine? Or, more concisely: What is so special about this returned ADNumberType object that gives it the ability to magically return derivatives?
In essence, how this could be done is the following: This special number can be viewed as a data structure that stores the function value, and the prescribed number of derivatives. For a once-differentiable number expecting two arguments, it might look like this:
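(This is a conceptual sketch, not any framework's actual implementation.)

```cpp
struct ADNumberType
{
  double value;          // The value of the object.
  double derivatives[2]; // The derivatives of the object with
                         // respect to x and y.
};
```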
For our independent variable x_ad, the starting value of x_ad.value would simply be its assigned value (i.e., the real value that this variable represents). The derivative x_ad.derivatives[0] would be initialized to 1, since x is the zeroth independent variable and \(\frac{d(x)}{dx} = 1\). The derivative x_ad.derivatives[1] would be initialized to zero, since the first independent variable is y and \(\frac{d(x)}{dy} = 0\).
For the function derivatives to be meaningful, we must assume that not only is this function differentiable in an analytical sense, but that it is also differentiable at the evaluation point x,y. We can exploit both of these assumptions: when we use this number type in mathematical operations, the AD framework could overload the operations (e.g., operator+(), operator*() as well as sin(), exp(), etc.) such that the returned result has the expected value. At the same time, it would then compute the derivatives through the knowledge of exactly what function is being overloaded and rigorous application of the chain-rule. So, the sin() function (with its argument a itself being a function of the independent variables x and y) might be defined as follows:
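(Again a conceptual sketch, continuing the hypothetical ADNumberType structure from above.)

```cpp
ADNumberType sin(const ADNumberType &a)
{
  ADNumberType output;

  // The returned value is simply the sine of the input value...
  output.value = std::sin(a.value);

  // ... while the derivatives follow from the chain rule:
  // d(sin(a))/dx = cos(a) * da/dx, and similarly for y.
  output.derivatives[0] = std::cos(a.value) * a.derivatives[0];
  output.derivatives[1] = std::cos(a.value) * a.derivatives[1];

  return output;
}
```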
All of that could of course also be done for second and even higher order derivatives.
So it is now clear that with the above representation the ADNumberType is carrying around some extra data that represents the various derivatives of differentiable functions with respect to the original (sensitive) independent variables. It should therefore be noted that there is computational overhead associated with using them (as we compute extra functions when doing derivative computations) as well as memory overhead in storing these results. So the prescribed number of levels of differential operations should ideally be kept to a minimum to limit computational cost. We could, for instance, have computed the first derivatives ourselves and then have used the Differentiation::AD::VectorFunction helper class to determine the gradient of the collection of dependent functions, which would be the second derivatives of the original scalar function.
It is also worth noting that because the chain rule is indiscriminately applied and we only see the beginning and end-points of the calculation {x,y} \(\rightarrow\) f(x,y), we will only ever be able to query the total derivatives of f; the partial derivatives (a.derivatives[0] and a.derivatives[1] in the above example) are intermediate values and are hidden from us.
Okay, since we now at least have some idea as to exactly what f_ad represents and what is encoded within it, let's put all of that to some actual use. To gain access to those hidden derivative results, we register the final result with the helper class. After this point, we can no longer change the value of f_ad and have those changes reflected in the results returned by the helper class.
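Registering the final result is a single call:

```cpp
ad_helper.register_dependent_variable(f_ad);
```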
The next step is to extract the derivatives (specifically, the function gradient and Hessian). To do so we first create some temporary data structures (with the result type double) to store the derivatives (noting that all derivatives are returned at once, and not individually)...
... and we then request that the helper class compute these derivatives, and the function value itself. And that's it. We have everything that we were aiming to get.
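A sketch of these two steps, using deal.II's Vector and FullMatrix classes (from deal.II/lac/vector.h and deal.II/lac/full_matrix.h) for the gradient and Hessian:

```cpp
Vector<double>     Df(ad_helper.n_independent_variables());
FullMatrix<double> D2f(ad_helper.n_independent_variables(),
                       ad_helper.n_independent_variables());

// Compute the value, gradient, and Hessian of f(x,y) at the
// registered point.
const double computed_f = ad_helper.compute_value();
ad_helper.compute_gradient(Df);
ad_helper.compute_hessian(D2f);
```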
We can convince ourselves that the AD framework is correct by comparing it to the analytical solution. (Or, if you're like the author, you'll be doing the opposite and will rather verify that your implementation of the analytical solution is correct!)
Because we know the ordering of the independent variables, we know which component of the gradient relates to which derivative...
... and similar for the Hessian.
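For example, with tol the given tolerance and AssertThrow deal.II's assertion macro, the checks might be written like this:

```cpp
// Gradient: component 0 is d/dx, component 1 is d/dy.
AssertThrow(std::abs(Df[0] - df_dx(x, y)) < tol,
            ExcMessage("Mismatch in df/dx."));
AssertThrow(std::abs(Df[1] - df_dy(x, y)) < tol,
            ExcMessage("Mismatch in df/dy."));

// Hessian: entry (i,j) is the derivative with respect to
// components i and j.
AssertThrow(std::abs(D2f(0, 0) - d2f_dx_dx(x, y)) < tol,
            ExcMessage("Mismatch in d2f/dx dx."));
AssertThrow(std::abs(D2f(0, 1) - d2f_dx_dy(x, y)) < tol,
            ExcMessage("Mismatch in d2f/dx dy."));
AssertThrow(std::abs(D2f(1, 0) - d2f_dy_dx(x, y)) < tol,
            ExcMessage("Mismatch in d2f/dy dx."));
AssertThrow(std::abs(D2f(1, 1) - d2f_dy_dy(x, y)) < tol,
            ExcMessage("Mismatch in d2f/dy dy."));
```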
That's pretty great. There wasn't too much work involved in computing second-order derivatives of this trigonometric function.
Since we now know how much "implementation effort" it takes to have the AD framework compute those derivatives for us, let's compare that to the same computed by hand and implemented in several stand-alone functions.
Here are the two first derivatives of \(f(x,y) = \cos\left(\frac{y}{x}\right)\):
\(\frac{df(x,y)}{dx} = \frac{y}{x^2} \sin\left(\frac{y}{x}\right)\)
\(\frac{df(x,y)}{dy} = -\frac{1}{x} \sin\left(\frac{y}{x}\right)\)
And here are the four second derivatives of \(f(x,y)\):
\(\frac{d^{2}f(x,y)}{dx^{2}} = -\frac{y}{x^4} (2x \sin\left(\frac{y}{x}\right) + y \cos\left(\frac{y}{x}\right))\)
\(\frac{d^{2}f(x,y)}{dx dy} = \frac{1}{x^3} (x \sin\left(\frac{y}{x}\right) + y \cos\left(\frac{y}{x}\right))\)
\(\frac{d^{2}f(x,y)}{dy dx} = \frac{1}{x^3} (x \sin\left(\frac{y}{x}\right) + y \cos\left(\frac{y}{x}\right))\) (as expected, on the basis of Schwarz's theorem)
\(\frac{d^{2}f(x,y)}{dy^{2}} = -\frac{1}{x^2} \cos\left(\frac{y}{x}\right)\)
Hmm... there are a lot of places above where we could have introduced an error, especially when it comes to applying the chain rule. Although they're no silver bullet, at the very least these AD frameworks can serve as a verification tool to make sure that we haven't made any errors (either by calculation or by implementation) that would negatively affect our results.
The point of this example of course is that we might have chosen a relatively simple function \(f(x,y)\) for which we can hand-verify that the derivatives the AD framework computed is correct. But the AD framework didn't care that the function was simple: It could have been a much much more convoluted expression, or could have depended on more than two variables, and it would still have been able to compute the derivatives – the only difference would have been that we wouldn't have been able to come up with the derivatives any more to verify correctness of the AD framework.
We'll now repeat the same exercise using symbolic differentiation. The term "symbolic differentiation" is a little bit misleading because differentiation is just one tool that the Computer Algebra System (CAS) (i.e., the symbolic framework) provides. Nevertheless, in the context of finite element modeling and applications it is the most common use of a CAS and will therefore be the one that we'll focus on. Once more, we'll supply the argument values x and y with which to evaluate our function \(f(x,y) = \cos\left(\frac{y}{x}\right)\) and its derivatives, and a tolerance with which to test the correctness of the returned results.

The first step that we need to take is to form the symbolic variables that represent the function arguments that we wish to differentiate with respect to. Again, these will be the independent variables for our problem and as such are, in some sense, primitive variables that have no dependencies on any other variable. We create these types of (independent) variables by initializing a symbolic type Differentiation::SD::Expression, which is a wrapper to a set of classes used by the symbolic framework, with a unique identifier. On this occasion it makes sense that this identifier, a std::string, be simply "x" for the \(x\) argument, and likewise "y" for the \(y\) argument to the dependent function. Like before, we'll suffix symbolic variable names with sd so that we can clearly see which variables are symbolic (as opposed to numeric) in nature.
Using the templated function that computes \(f(x,y)\), we can pass these independent variables as arguments to the function. The returned result will be another symbolic type that represents the sequence of operations used to compute \(\cos\left(\frac{y}{x}\right)\).
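In code, creating the symbols and composing the function might look like this sketch:

```cpp
// Independent symbolic variables, identified by unique strings:
const Differentiation::SD::Expression x_sd("x");
const Differentiation::SD::Expression y_sd("y");

// A symbolic representation of cos(y/x), built by calling the same
// templated function as before:
const Differentiation::SD::Expression f_sd = f(x_sd, y_sd);
```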
At this point it is legitimate to print out the expression f_sd, and if we did so we would see f(x,y) = cos(y/x) printed to the console.
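For instance, using the stream output operator that the Expression class provides:

```cpp
std::cout << "f(x,y) = " << f_sd << std::endl;
```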
You might notice that we've constructed our symbolic function f_sd with no context as to how we might want to use it: In contrast to the AD approach shown above, what we were returned from calling f(x_sd, y_sd) is not the evaluation of the function f at some specific point, but is in fact a symbolic representation of the evaluation at a generic, as yet undetermined, point. This is one of the key points that makes symbolic frameworks (the CAS) different from automatic differentiation frameworks. Each of the variables x_sd and y_sd, and even the composite dependent function f_sd, are in some sense respectively "placeholders" for numerical values and a composition of operations. In fact, the individual components that are used to compose the function are also placeholders. The sequence of operations is encoded in a tree-like data structure (conceptually similar to an abstract syntax tree).
Once we form these data structures we can defer any operations that we might want to do with them until some later time. Each of these placeholders represents something, but we have the opportunity to define or redefine what they represent at any convenient point in time. So for this particular problem it makes sense that we somehow want to associate "x" and "y" with some numerical value (with type yet to be determined), but we could conceptually (and if it made sense) assign the ratio "y/x" a value instead of the variables "x" and "y" individually. We could also associate with "x" or "y" some other symbolic function g(a,b). Any of these operations involves manipulating the recorded tree of operations, and substituting the salient nodes on the tree (and that node's subtree) with something else. The key word here is "substitution", and indeed there are many functions in the Differentiation::SD namespace that have this word in their names.
This capability makes the framework entirely generic. In the context of finite element simulations, the types of operations that we would typically perform with our symbolic types are function composition, differentiation, substitution (partial or complete), and evaluation (i.e., conversion of the symbolic type to its numerical counterpart). But should you need it, a CAS is often capable of more than just this: It could be forming anti-derivatives (integrals) of functions, perform simplifications on the expressions that form a function (e.g., replace \((\sin a)^2 + (\cos a)^2\) by \(1\); or, more simply: if the function did an operation like 1+2, a CAS could replace it by 3), and so forth: The expression that a variable represents is obtained from how the function \(f\) is implemented, but a CAS can do with it whatever its functionality happens to be.
Specifically, to compute the symbolic representation of the first derivatives of the dependent function with respect to its individual independent variables, we use the Differentiation::SD::Expression::differentiate() function with the independent variable given as its argument. Each call will cause the CAS to go through the tree of operations that compose f_sd and differentiate each node of the expression tree with respect to the given symbolic argument.
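In code:

```cpp
const Differentiation::SD::Expression df_dx_sd = f_sd.differentiate(x_sd);
const Differentiation::SD::Expression df_dy_sd = f_sd.differentiate(y_sd);
```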
To compute the symbolic representation of the second derivatives, we simply differentiate the first derivatives with respect to the independent variables. So to compute a higher order derivative, we first need to compute the lower order derivative. (As the return type of the call to differentiate() is an expression, we could in principle execute double differentiation directly from the scalar by chaining two calls together. But this is unnecessary in this particular case, since we have the intermediate results at hand.)
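Continuing the sketch:

```cpp
const Differentiation::SD::Expression d2f_dx_dx_sd =
  df_dx_sd.differentiate(x_sd);
const Differentiation::SD::Expression d2f_dx_dy_sd =
  df_dx_sd.differentiate(y_sd);
const Differentiation::SD::Expression d2f_dy_dx_sd =
  df_dy_sd.differentiate(x_sd);
const Differentiation::SD::Expression d2f_dy_dy_sd =
  df_dy_sd.differentiate(y_sd);
```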
Printing the expressions for the first and second derivatives, as computed by the CAS, using statements like those sketched below, renders output that compares favorably to the analytical expressions for these derivatives that were presented earlier.
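```cpp
std::cout << "df/dx = " << df_dx_sd << std::endl;
std::cout << "df/dy = " << df_dy_sd << std::endl;
std::cout << "d2f/dx dx = " << d2f_dx_dx_sd << std::endl;
std::cout << "d2f/dx dy = " << d2f_dx_dy_sd << std::endl;
std::cout << "d2f/dy dx = " << d2f_dy_dx_sd << std::endl;
std::cout << "d2f/dy dy = " << d2f_dy_dy_sd << std::endl;
```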
Now that we have formed the symbolic expressions for the function and its derivatives, we want to evaluate them for the numeric values for the main function arguments x and y. To accomplish this, we construct a substitution map, which maps the symbolic values to their numerical counterparts.
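One way of constructing such a map is sketched below:

```cpp
const Differentiation::SD::types::substitution_map substitution_map =
  Differentiation::SD::make_substitution_map(
    std::pair<Differentiation::SD::Expression, double>{x_sd, x},
    std::pair<Differentiation::SD::Expression, double>{y_sd, y});
```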
The last step in the process is to convert all symbolic variables and operations into numerical values, and produce the numerical result of this operation. To do this we combine the substitution map with the symbolic variable in the step we have already mentioned above: "substitution".
Once we pass this substitution map to the CAS, it will substitute each instance of the symbolic variable (or, more generally, sub-expression) with its numerical counterpart and then propagate these results up the operation tree, simplifying each node on the tree if possible. If the tree is reduced to a single value (i.e., we have substituted all of the independent variables with their numerical counterpart) then the evaluation is complete.
Due to the strongly-typed nature of C++, we need to instruct the CAS to convert its representation of the result into an intrinsic data type (in this case a double). This is the "evaluation" step, and through the template type we define the return type of this process. Conveniently, these two steps can be done at once if we are certain that we've performed a full substitution.
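Sketched for the function value:

```cpp
const double computed_f =
  f_sd.substitute_and_evaluate<double>(substitution_map);
```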
We can do the same for the first derivatives...
... and the second derivatives. Notice that we can reuse the same substitution map for each of these operations because we wish to evaluate all of these functions for the same values of x and y. Modifying the values in the substitution map renders the result of the same symbolic expression evaluated with different values being assigned to the independent variables. We could also happily have each variable represent a real value in one pass, and a complex value in the next.
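For instance:

```cpp
const double computed_df_dx =
  df_dx_sd.substitute_and_evaluate<double>(substitution_map);
const double computed_df_dy =
  df_dy_sd.substitute_and_evaluate<double>(substitution_map);
const double computed_d2f_dy_dy =
  d2f_dy_dy_sd.substitute_and_evaluate<double>(substitution_map);
// ... and similarly for the remaining second derivatives.
```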
The function used to drive these initial examples is straightforward. We'll arbitrarily choose some values at which to evaluate the function (although knowing that x = 0 is not permissible), and then pass these values to the functions that use the AD and SD frameworks.
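A sketch of such a driver; the function names run_ad_example() and run_sd_example() are hypothetical stand-ins for the tutorial's actual driver functions, and the chosen values are arbitrary:

```cpp
void run_simple_examples()
{
  // Arbitrary evaluation point (x must be non-zero) and tolerance.
  const double x   = 1.23;
  const double y   = 0.91;
  const double tol = 1e-12;

  run_ad_example(x, y, tol); // hypothetical name
  run_sd_example(x, y, tol); // hypothetical name
}
```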
Now that we've introduced the principles behind automatic and symbolic differentiation, we'll put them into action by formulating two coupled magneto-mechanical constitutive laws: one that is rate-independent, and another that exhibits rate-dependent behavior.
As you will recall from the introduction, the material constitutive laws we will consider are far more complicated than the simple example above. This is not just because of the form of the function \(\psi_{0}\) that we will consider, but in particular because \(\psi_{0}\) doesn't just depend on two scalar variables, but instead on a whole bunch of tensors, each with several components. In some cases, these are symmetric tensors, for which only a subset of components is in fact independent, and one has to think about what it actually means to compute a derivative such as \(\frac{\partial\psi_{0}}{\partial \mathbf{C}}\) where \(\mathbf C\) is a symmetric tensor. How all of this will work will, hopefully, become clear below. It will also become clear that doing this by hand is going to be, at the very best, exceedingly tedious and, at worst, riddled with hard-to-find bugs.
We start with a description of the various material parameters that appear in the description of the energy function \(\psi_{0}\).
The ConstitutiveParameters class is used to hold these values. Values for all parameters (both constitutive and rheological) are taken from [146], and are given values that produce a constitutive response that is broadly representative of a real, laboratory-made magneto-active polymer, though the specific values used here are of no consequence to the purpose of this program of course.
The first four constitutive parameters respectively represent
The next four, which only pertain to the rate-dependent material, are parameters for
The last parameter is the relative magnetic permeability \(\mu_{r}\).
The parameters are initialized through the ParameterAcceptor framework, which is discussed in detail in step-60.
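A minimal sketch of how such a class might hook into the ParameterAcceptor framework; the section name, parameter names, and default values here are placeholders, not the tutorial's actual values:

```cpp
class ConstitutiveParameters : public ParameterAcceptor
{
public:
  ConstitutiveParameters()
    : ParameterAcceptor("Constitutive law") // placeholder section name
  {
    // Each call connects a member variable to an entry in the
    // parameter file, with the current value as default.
    add_parameter("Elastic shear modulus", mu_e);
    add_parameter("Relative magnetic permeability", mu_r);
    // ... further constitutive and rheological parameters ...
  }

  double mu_e = 1.0; // placeholder default
  double mu_r = 1.0; // placeholder default
};
```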
Since we'll be formulating two constitutive laws for the same class of materials, it makes sense to define a base class that ensures a unified interface to them.
The class declaration starts with the constructor that will accept the set of constitutive parameters that, in conjunction with the material law itself, dictate the material response.
Instead of computing and returning the kinetic variables or their linearization at will, we'll calculate and store these values within a single method. These cached results will then be returned upon request. We'll defer the precise explanation as to why we'd want to do this to a later stage. What is important for now is to see that this function accepts all of the field variables, namely the magnetic field vector \(\boldsymbol{\mathbb{H}}\) and right Cauchy-Green deformation tensor \(\mathbf{C}\), as well as the time discretizer. These, in addition to the constitutive_parameters, are all the fundamental quantities that are required to compute the material response.
The next few functions provide the interface to probe the material response due to the applied deformation and magnetic loading.
Since the class of materials can be expressed in terms of a free energy \(\psi_{0}\), we can compute that...
... as well as the two kinetic quantities:
... and the linearization of the kinetic quantities, which are:
We'll also define a method that provides a mechanism for this class instance to do any additional tasks before moving on to the next timestep. Again, the reason for doing this will become clear a little later.
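Collecting the last few paragraphs into code, a sketch of such a base class (member names chosen to mirror the discussion, and using the ConstitutiveParameters class from above; the tutorial's actual signatures may differ in detail) could read:

```cpp
#include <deal.II/base/discrete_time.h>
#include <deal.II/base/symmetric_tensor.h>
#include <deal.II/base/tensor.h>

using namespace dealii;

template <int dim>
class Coupled_Magnetomechanical_Constitutive_Law_Base
{
public:
  Coupled_Magnetomechanical_Constitutive_Law_Base(
    const ConstitutiveParameters &constitutive_parameters);

  virtual ~Coupled_Magnetomechanical_Constitutive_Law_Base() = default;

  // Compute and cache the material response at the given field state.
  virtual void update_internal_data(const SymmetricTensor<2, dim> &C,
                                    const Tensor<1, dim>          &H,
                                    const DiscreteTime            &time) = 0;

  // The free energy density...
  virtual double get_psi() const = 0;

  // ... the two kinetic quantities...
  virtual Tensor<1, dim>          get_B() const = 0; // magnetic induction
  virtual SymmetricTensor<2, dim> get_S() const = 0; // Piola-Kirchhoff stress

  // ... and their linearizations.
  virtual SymmetricTensor<2, dim> get_DD() const = 0; //  dB/dH
  virtual Tensor<3, dim>          get_PP() const = 0; // -dS/dH
  virtual SymmetricTensor<4, dim> get_HH() const = 0; // 2 dS/dC

  // Hook for bookkeeping (e.g., history updates) between time steps.
  virtual void update_end_of_timestep() {}
};
```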
In the protected
part of the class, we store a reference to an instance of the constitutive parameters that govern the material response. For convenience, we also define some functions that return various constitutive parameters (both explicitly defined, as well as calculated).
The parameters related to the elastic response of the material are, in order:
The parameters related to the viscoelastic response of the material are, in order:
The parameters related to the magnetic response of the material are, in order:
We'll also implement a function that returns the timestep size from the time discretization.
In the following, let us start by implementing the several relatively trivial member functions of the class just defined:
We'll begin by considering a non-dissipative material, namely one that is governed by a magneto-hyperelastic constitutive law that exhibits stiffening when immersed in a magnetic field. As described in the introduction, the stored energy density function for such a material might be given by
\[ \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = \frac{1}{2} \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] + \lambda_{e} \ln^{2} \left(\text{det}(\mathbf{F}) \right) - \frac{1}{2} \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \]
with
\[ f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) . \]
Now on to the class that implements this behavior. Since we expect that this class fully describes a single material, we'll mark it as "final" so that the inheritance tree terminates here. At the top of the class, we define the helper type that we will use in the AD computations for our scalar energy density function. Note that we expect it to return values of type double
. We also have to specify the number of spatial dimensions, dim
, so that the link between vector, tensor and symmetric tensor fields and the number of components that they contain may be established. The concrete ADTypeCode
used for the ADHelper class will be provided as a template argument at the point where this class is actually used.
Since the public interface to the base class is pure-virtual
, here we'll declare that this class will override all of these base class methods.
In the private
part of the class, we need to define some extractors that will help us set independent variables and later get the computed values related to the dependent variables. If this class were to be used in the context of a finite element problem, then each of these extractors is (most likely) related to the gradient of a component of the solution field (in this case, displacement and magnetic scalar potential). As you can probably infer by now, here "C" denotes the right Cauchy-Green tensor and "H" denotes the magnetic field vector.
This is an instance of the automatic differentiation helper that we'll set up to do all of the differential calculations related to the constitutive law...
... and the following three member variables will store the output from the ad_helper
. The ad_helper
returns the derivatives with respect to all field variables at once, so we'll retain the full gradient vector and Hessian matrix. From that, we'll extract the individual entries that we're actually interested in.
When setting up the field component extractors, it is completely arbitrary as to how they are ordered. But it is important that the extractors do not have overlapping indices. The total number of components of these extractors defines the number of independent variables that the ad_helper
needs to track, and with respect to which we'll be taking derivatives. The resulting data structures Dpsi
and D2psi
must also be sized accordingly. Once the ad_helper
is configured (its input argument being the total number of components of \(\mathbf{C}\) and \(\boldsymbol{\mathbb{H}}\)), we can directly interrogate it as to how many independent variables it uses.
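Putting the type definitions, extractors, and sizing together, the corresponding members and constructor initializer list might look as follows. This is a sketch that closely follows the structure just described (the overrides of the pure virtual interface are omitted here):

```cpp
#include <deal.II/differentiation/ad.h>
#include <deal.II/fe/fe_values_extractors.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>

template <int dim, Differentiation::AD::NumberTypes ADTypeCode>
class Magnetoelastic_Constitutive_Law final
  : public Coupled_Magnetomechanical_Constitutive_Law_Base<dim>
{
  using ADHelper =
    Differentiation::AD::ScalarFunction<dim, ADTypeCode, double>;
  using ADNumberType = typename ADHelper::ad_type;

  // Non-overlapping component ranges for the two field variables.
  const FEValuesExtractors::Vector             H_components;
  const FEValuesExtractors::SymmetricTensor<2> C_components;

  ADHelper ad_helper;

  double             psi;   // stored energy density
  Vector<double>     Dpsi;  // gradient w.r.t. all field components
  FullMatrix<double> D2psi; // Hessian w.r.t. all field components

public:
  Magnetoelastic_Constitutive_Law(
    const ConstitutiveParameters &constitutive_parameters)
    : Coupled_Magnetomechanical_Constitutive_Law_Base<dim>(
        constitutive_parameters)
    , H_components(0)
    , C_components(Tensor<1, dim>::n_independent_components)
    , ad_helper(Tensor<1, dim>::n_independent_components +
                SymmetricTensor<2, dim>::n_independent_components)
    , psi(0.0)
    , Dpsi(ad_helper.n_independent_variables())
    , D2psi(ad_helper.n_independent_variables(),
            ad_helper.n_independent_variables())
  {}

  // ... overrides of the pure virtual interface ...
};
```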
As stated before, due to the way that the automatic differentiation libraries work, the ad_helper will always return the derivatives of the energy density function with respect to all field variables simultaneously. For this reason, it does not make sense to compute the derivatives in the functions get_B(), get_S(), etc., because we'd be doing a lot of extra computations that are then simply discarded. So, the best way to deal with that is to have a single function call that does all of the calculations up-front, and then we extract the stored data as it's needed. That's what we'll do in the update_internal_data() method. As the material is rate-independent, we can ignore the DiscreteTime argument.
Since we reuse the ad_helper
data structure at each time step, we need to clear it of all stale information before use.
The next step is to set the values for all field components. These define the "point" around which we'll be computing the function gradients and their linearization. The extractors that we created before provide the association between the fields and the registry within the ad_helper
– they'll be used repeatedly to ensure that we have the correct interpretation of which variable corresponds to which component of H
or C
.
Now that we've done the initial setup, we can retrieve the AD counterparts of our fields. These are truly the independent variables for the energy function, and are "sensitive" to the calculations that are performed with them. Notice that the AD numbers are treated as a special number type, and can be used in many templated classes (in this example, as the scalar type for the Tensor and SymmetricTensor classes).
We can also use them in many functions that are templated on the scalar type. So, for these intermediate values that we require, we can perform tensor operations and some mathematical functions. The resulting type will also be an automatically differentiable number, which encodes the operations performed in these functions.
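Inside update_internal_data(), these two stages (registering the evaluation point, then retrieving the sensitized fields and computing some intermediates) might look like the following sketch:

```cpp
// Register the point at which derivatives will be evaluated. These are
// plain, real-valued tensors passed in by the caller.
ad_helper.register_independent_variable(H, H_components);
ad_helper.register_independent_variable(C, C_components);

// Retrieve the "sensitive" AD counterparts of the fields: from here on,
// every operation performed with them is recorded by the AD framework.
const Tensor<1, dim, ADNumberType> H_ad =
  ad_helper.get_sensitive_variables(H_components);
const SymmetricTensor<2, dim, ADNumberType> C_ad =
  ad_helper.get_sensitive_variables(C_components);

// Intermediate quantities are themselves AD numbers, and so also encode
// the operations used to construct them.
const ADNumberType det_F_ad = std::sqrt(determinant(C_ad));
const SymmetricTensor<2, dim, ADNumberType> C_inv_ad = invert(C_ad);
```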
Next we'll compute the scaling function that will cause the shear modulus to change (increase) under the influence of a magnetic field...
... and then we can define the material stored energy density function. We'll see later that this example is sufficiently complex to warrant the use of AD to, at the very least, verify an unassisted implementation.
The stored energy density function is, in fact, the dependent variable for this problem, so as a final step in the "configuration" phase, we register its definition with the ad_helper
.
Finally, we can retrieve the resulting value of the stored energy density function, as well as its gradient and Hessian with respect to the input fields, and cache them.
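A sketch of those last few steps, with the getter names for the constitutive parameters (get_mu_e() and friends) assumed from the base class description:

```cpp
// The magnetic-field-dependent scaling function for the shear modulus...
const ADNumberType f_mu_e_ad =
  1.0 + (get_mu_e_inf() / get_mu_e() - 1.0) *
          std::tanh((2.0 * H_ad * H_ad) /
                    (get_mu_e_h_sat() * get_mu_e_h_sat()));

// ... and the stored energy density, written just as the theory states it.
const ADNumberType psi_ad =
  0.5 * get_mu_e() * f_mu_e_ad *
    (trace(C_ad) - dim - 2.0 * std::log(det_F_ad)) +
  get_lambda_e() * std::log(det_F_ad) * std::log(det_F_ad) -
  0.5 * get_mu_0() * get_mu_r() * det_F_ad * (H_ad * C_inv_ad * H_ad);

// Register the energy as the single dependent variable...
ad_helper.register_dependent_variable(psi_ad);

// ... then extract and cache its value, gradient, and Hessian.
psi = ad_helper.compute_value();
ad_helper.compute_gradient(Dpsi);
ad_helper.compute_hessian(D2psi);
```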
The following few functions then allow for querying the so-stored value of \(\psi_{0}\), and to extract the desired components of the gradient vector and Hessian matrix. We again make use of the extractors to express which parts of the total gradient vector and Hessian matrix we wish to retrieve. They only return the derivatives of the energy function, so for our definitions of the kinetic variables and their linearization a few more manipulations are required to form the desired result.
Note that for coupled terms the order of the extractor arguments is especially important, as it dictates the order in which the directional derivatives are taken. So, if we'd reversed the order of the extractors in the call to extract_hessian_component()
then we'd actually have been retrieving part of \(\left[ \mathfrak{P}^{\text{tot}}
\right]^{T}\).
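For instance, hedging on the exact signatures, the magnetic induction and the coupled tangent might be recovered as follows; note the extractor order in the second function, per the comment above:

```cpp
template <int dim, Differentiation::AD::NumberTypes ADTypeCode>
Tensor<1, dim> Magnetoelastic_Constitutive_Law<dim, ADTypeCode>::get_B() const
{
  // B = - d(psi)/d(H)
  return -ad_helper.extract_gradient_component(Dpsi, H_components);
}

template <int dim, Differentiation::AD::NumberTypes ADTypeCode>
Tensor<3, dim> Magnetoelastic_Constitutive_Law<dim, ADTypeCode>::get_PP() const
{
  // P_tot = - d(S_tot)/d(H) = - 2 d2(psi)/(dC dH): the C extractor comes
  // first, the H extractor second.
  return -2.0 * ad_helper.extract_hessian_component(D2psi,
                                                    C_components,
                                                    H_components);
}
```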
The second material law that we'll consider will be one that represents a magneto-viscoelastic material with a single dissipative mechanism. We'll consider the free energy density function for such a material to be defined as
\begin{align*} \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) &= \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) \\ \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) &= \frac{1}{2} \mu_{e} f_{\mu_{e}^{ME}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] + \lambda_{e} \ln^{2} \left(\text{det}(\mathbf{F}) \right) - \frac{1}{2} \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \\ \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) &= \frac{1}{2} \mu_{v} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \mathbf{C}_{v} : \left[ \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right] - d - \ln\left( \text{det}\left(\mathbf{C}_{v}\right) \right) \right] \end{align*}
with
\[ f_{\mu_{e}}^{ME} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \]
\[ f_{\mu_{v}}^{MVE} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{v}^{\infty}}{\mu_{v}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{v}^{\text{sat}}\right)^{2}} \right), \]
in conjunction with the evolution law for the internal viscous variable
\[ \mathbf{C}_{v}^{(t)} = \frac{1}{1 + \frac{\Delta t}{\tau_{v}}} \left[ \mathbf{C}_{v}^{(t-1)} + \frac{\Delta t}{\tau_{v}} \left[\left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right]^{-1} \right] \]
that was discretized using a first-order backward difference approximation.
Again, let us see how this is implemented in a concrete class. Instead of the AD framework used in the previous class, we will now utilize the SD approach. To support this, the class constructor accepts not only the constitutive_parameters
, but also two additional variables that will be used to initialize a Differentiation::SD::BatchOptimizer. We'll give more context to this later.
As with the automatic differentiation helper, the Differentiation::SD::BatchOptimizer will return a collection of results all at once. So, in order to do that just once, we'll utilize a similar approach to before and do all of the expensive calculations within the update_internal_data() function, and cache the results for later extraction.
Since we're dealing with a rate-dependent material, we'll have to update the history variable at the appropriate time. That will be the purpose of this function.
In the private
part of the class, we will want to keep track of the internal viscous deformation, so the following two (real-valued, non-symbolic) member variables respectively hold
(We've labeled these variables "Q" so that they're easy to identify; in a sea of calculations it is not necessarily easy to distinguish Cv
or C_v
from C
.)
As we'll be using symbolic types, we'll need to define some symbolic variables to use with the framework. (They are all suffixed with "SD" to make it easy to distinguish the symbolic types or expressions from real-valued types or scalars.) This can be done once up front (potentially even as static
variables) to minimize the overhead associated with creating these variables. For the ultimate in generic programming, we can even describe the constitutive parameters symbolically, potentially allowing a single class instance to be reused with different inputs for these values too.
These are the symbolic scalars that represent the elastic, viscous, and magnetic material parameters (defined mostly in the same order as they appear in the ConstitutiveParameters class). We also store a symbolic expression, delta_t_sd, that represents the time step size:
Next we define some tensorial symbolic variables that represent the independent field variables, upon which the energy density function is parameterized:
And similarly we have the symbolic representation of the internal viscous variables (both its current value and its value at the previous timestep):
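Gathering these declarations into one place, the symbolic variables might be set up along these lines (a representative subset; the Differentiation::SD::make_*_of_symbols() helpers generate tensors whose components are uniquely named symbols):

```cpp
namespace SD = Differentiation::SD;

// Symbolic constitutive parameters and time step size (a subset shown).
const SD::Expression mu_e_sd("mu_e");
const SD::Expression mu_r_sd("mu_r");
const SD::Expression tau_v_sd("tau_v");
const SD::Expression delta_t_sd("delta_t");

// The independent field variables...
const Tensor<1, dim, SD::Expression> H_sd =
  SD::make_tensor_of_symbols<1, dim>("H");
const SymmetricTensor<2, dim, SD::Expression> C_sd =
  SD::make_symmetric_tensor_of_symbols<2, dim>("C");

// ... and the internal viscous variable, at the current and previous
// time steps.
const SymmetricTensor<2, dim, SD::Expression> Q_t_sd =
  SD::make_symmetric_tensor_of_symbols<2, dim>("Q_t");
const SymmetricTensor<2, dim, SD::Expression> Q_t1_sd =
  SD::make_symmetric_tensor_of_symbols<2, dim>("Q_t1");
```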
We should also store the definitions of the dependent expressions: Although we'll only compute them once, we require them to retrieve data from the optimizer
that is declared below. Furthermore, when serializing a material class like this one (not done as a part of this tutorial) we'd either need to serialize these expressions as well or we'd need to reconstruct them upon reloading.
The next variable is then the optimizer that is used to evaluate the dependent functions. More specifically, it provides the possibility to accelerate the evaluation of the symbolic dependent expressions. This is a vital tool, because the native evaluation of lengthy expressions (using no method of acceleration, but rather direct evaluation of the symbolic expressions) can be very slow. The Differentiation::SD::BatchOptimizer class provides a mechanism by which to transform the symbolic expression tree into another code path that, for example, shares intermediate results between the various dependent expressions (meaning that these intermediate values only get calculated once per evaluation) and/or compiles the code using a just-in-time compiler (thereby achieving near-native performance for the evaluation step).
Performing this code transformation is very computationally expensive, so we store the optimizer so that it is done just once per class instance. This also further motivates the decision to make the constitutive parameters themselves symbolic. We could then reuse a single instance of this optimizer
across several materials (with the same energy function, of course) and potentially multiple continuum points (if embedded within a finite element simulation).
As specified by the template parameter, the numerical result will be of type double
.
During the evaluation phase, we must map the symbolic variables to their real-valued counterparts. The next method will provide this functionality.
The final method of this class will configure the optimizer
.
As the resting deformation state is one at which the material is considered to be completely relaxed, the internal viscous variables are initialized with the identity tensor, i.e. \(\mathbf{C}_{v} = \mathbf{I}\). The various symbolic variables representing the constitutive parameters, time step size, and field and internal variables all get a unique identifier. The optimizer is passed the two parameters that declare which optimization (acceleration) technique should be applied, as well as which additional steps should be taken by the CAS to help improve performance during evaluation.
The substitution map simply pairs all of the following data together:
- the constitutive parameters (with values retrieved from the parameter file),
- the time step size (with its value retrieved from the time discretizer),
- the field values (with their values being prescribed by an external function that calls into this Magnetoviscoelastic_Constitutive_Law_SD instance), and
- the internal viscous deformation (with its value stored within this class instance).

Due to the "natural" use of the symbolic expressions, much of the procedure to configure the optimizer
looks very similar to that which is used to construct the automatic differentiation helper. Nevertheless, we'll detail these steps again to highlight the differences that underlie the two frameworks.
The function starts with expressions that symbolically encode the determinant of the deformation gradient (as expressed in terms of the right Cauchy-Green deformation tensor, our primary field variable), as well as the inverse of \(\mathbf{C}\) itself:
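In symbolic form these two quantities amount to something like the following (the unqualified sqrt is the SD overload for expressions, found via argument-dependent lookup):

```cpp
// det(F) = sqrt(det(C)), and the inverse of C, expressed symbolically.
const SD::Expression det_F_sd = sqrt(determinant(C_sd));
const SymmetricTensor<2, dim, SD::Expression> C_inv_sd = invert(C_sd);
```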
Next is the symbolic representation of the saturation function for the elastic part of the free energy density function, followed by the magnetoelastic contribution to the free energy density function. This all has the same structure as we'd seen previously.
In addition, we define the magneto-viscoelastic contribution to the free energy density function. The first component required to implement this is a scaling function that will cause the viscous shear modulus to change (increase) under the influence of a magnetic field (see [146], equation 29). Thereafter we can compute the dissipative component of the energy density function; its expression is stated in [146] (equation 28), which is a straightforward extension of an energy density function formulated in [121] (equation 46).
From these building blocks, we can then define the material's total free energy density function:
As it stands, to the CAS the variable Q_t_sd
appears to be independent of C_sd
. Our tensorial symbolic expression Q_t_sd
just has an identifier associated with it, and there is nothing that links it to the other tensorial symbolic expression C_sd
. So any derivatives taken with respect to C_sd
will ignore this inherent dependence which, as we can see from the evolution law, is in fact \(\mathbf{C}_{v} = \mathbf{C}_{v} \left( \mathbf{C}, t \right)\). This means that deriving any function \(f = f(\mathbf{C}, \mathbf{Q})\) with respect to \(\mathbf{C}\) will return partial derivatives \(\frac{\partial f(\mathbf{C}, \mathbf{Q})}{\partial \mathbf{C}} \Big\vert_{\mathbf{Q}}\) as opposed to the total derivative \(\frac{d f(\mathbf{C}, \mathbf{Q}(\mathbf{C}))}{d \mathbf{C}} = \frac{\partial f(\mathbf{C}, \mathbf{Q}(\mathbf{C}))}{\partial \mathbf{C}} \Big\vert_{\mathbf{Q}} + \frac{\partial f(\mathbf{C}, \mathbf{Q}(\mathbf{C}))}{\partial \mathbf{Q}} \Big\vert_{\mathbf{C}} : \frac{d \mathbf{Q}(\mathbf{C})}{d \mathbf{C}}\).
By contrast, with the current AD libraries the total derivative would always be returned. This implies that the computed kinetic variables would be incorrect for this class of material model, making AD the incorrect tool from which to derive (at the continuum point level) the constitutive law for this dissipative material from an energy density function.
It is this specific level of control that characterizes a defining difference between the SD and AD frameworks. In a few lines we'll be manipulating the expression for the internal variable Q_t_sd such that it produces the correct linearization.
such that it produces the correct linearization.
But, first, we'll compute the symbolic expressions for the kinetic variables, i.e., the magnetic induction vector and the Piola-Kirchhoff stress tensor. The code that performs the differentiation quite closely mimics the definition stated in the theory.
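A sketch of that differentiation step, with psi_sd denoting the total free energy expression assembled above:

```cpp
// B = - d(psi)/d(H) and S_tot = 2 d(psi)/d(C). These are *partial*
// derivatives: at this point the CAS still treats Q_t_sd as being
// independent of C_sd.
const Tensor<1, dim, SD::Expression> B_sd = -SD::differentiate(psi_sd, H_sd);
const SymmetricTensor<2, dim, SD::Expression> S_sd =
  2.0 * SD::differentiate(psi_sd, C_sd);
```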
Since the next step is to linearize the above, it is the appropriate time to inform the CAS of the explicit dependency of Q_t_sd
on C_sd
, i.e., state that \(\mathbf{C}_{v} = \mathbf{C}_{v} \left( \mathbf{C}, t
\right)\). This means that all future differential operations made with respect to C_sd
will take into account this dependence (i.e., compute total derivatives). In other words, we will transform some expression such that their intrinsic parameterization changes from \(f(\mathbf{C}, \mathbf{Q})\) to \(f(\mathbf{C}, \mathbf{Q}(\mathbf{C}))\).
To do this, we consider the time-discrete evolution law. From that, we have the explicit expression for the internal variable in terms of its history as well as the primary field variable. That is what is described by this expression:
Next we produce an intermediate substitution map, which will take every instance of Q_t_sd
(our identifier) found in an expression and replace it with the full expression held in Q_t_sd_explicit
.
We can then perform this substitution on the two kinetic variables and immediately differentiate the result of that substitution with respect to the field variables. (If you'd like, this could be split up into two steps with the intermediate results stored in a temporary variable.) Again, if you overlook the "complexity" generated by the substitution, these calls that linearize the kinetic variables and produce the three tangent tensors quite closely resemble what's stated in the theory.
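Sketched in code, the explicit expression for the internal variable, the intermediate substitution map, and one of the three linearizations might read (pow, invert, and the substitution/differentiation helpers all act on symbolic types here):

```cpp
// The time-discretized evolution law, written out in terms of the history
// variable and the primary field variable. Note that
// [det(F)^{-2/d} C]^{-1} = det(F)^{2/d} C^{-1}.
const SymmetricTensor<2, dim, SD::Expression> Q_t_sd_explicit =
  (1.0 / (1.0 + delta_t_sd / tau_v_sd)) *
  (Q_t1_sd + (delta_t_sd / tau_v_sd) * pow(det_F_sd, 2.0 / dim) * C_inv_sd);

// Map the (independent) symbol Q_t_sd to its explicit, C-dependent form.
const SD::types::substitution_map substitution_map_CQ =
  SD::make_substitution_map(Q_t_sd, Q_t_sd_explicit);

// Substitute, then differentiate: the result is the *total* derivative of
// the stress with respect to C, i.e., the tangent HH_sd.
const SymmetricTensor<4, dim, SD::Expression> HH_sd =
  2.0 * SD::differentiate(SD::substitute(S_sd, substitution_map_CQ), C_sd);
```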
Now we need to tell the optimizer
what entries we need to provide numerical values for in order for it to successfully perform its calculations. These essentially act as the input arguments to all dependent functions that the optimizer
must evaluate. They are, collectively, the independent variables for the problem, the history variables, the time step size and the constitutive parameters (since we've not hard encoded them in the energy density function).
So what we really want is to provide it a collection of symbols, which one could accomplish in this way:
But this is all actually already encoded as the keys of the substitution map. Doing the above would also mean that we need to manage the symbols in two places (here and when constructing the substitution map), which is annoying and a potential source of error if this material class is modified or extended. Since we're not interested in the values at this point, it is alright if the substitution map is filled with invalid data for the values associated with each key entry. So we'll simply create a fake substitution map, and extract the symbols from that. Note that any substitution map passed to the optimizer
will have to, at the very least, contain entries for these symbols.
We then inform the optimizer of what values we want calculated, which in our situation encompasses all of the dependent variables (namely the energy density function and its various derivatives).
The last step is to finalize the optimizer. With this call it will determine an equivalent code path that will evaluate all of the dependent functions at once, but with less computational cost than when evaluating the symbolic expressions directly. Note: This is an expensive call, so we want to execute it as few times as possible. We've done it in the constructor of our class, which achieves the goal of being called only once per class instance.
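Those three configuration steps, sketched (with BB_sd and PP_sd denoting the symbolic expressions for the other two tangents):

```cpp
// Declare the input symbols, reusing the keys of the substitution map...
optimizer.register_symbols(SD::Utilities::extract_symbols(substitution_map));

// ... declare the dependent expressions to be evaluated...
optimizer.register_functions(psi_sd, B_sd, S_sd, BB_sd, PP_sd, HH_sd);

// ... and perform the expensive, once-off transformation of the
// expression trees into an accelerated evaluation path.
optimizer.optimize();
```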
Since the configuration of the optimizer
was done up front, there's very little to do each time we want to compute kinetic variables or their linearization (derivatives).
To update the internal history variable, we first need to compute a few fundamental quantities, which we've seen before. We can also ask the time discretizer for the time step size that was used to iterate from the previous time step to the current one.
Now we can update the (real valued) internal viscous deformation tensor, as per the definition given by the evolution law in conjunction with the chosen time discretization scheme.
Next we pass the optimizer the numeric values that we wish the independent variables, the time step size and (implicitly, through this call) the constitutive parameters to represent.
When making this next call, the call path used to (numerically) evaluate the dependent functions is quicker than dictionary substitution.
Having called update_internal_data()
, it is then valid to extract data from the optimizer. When doing the evaluation, we need the exact symbolic expressions of the data to extracted from the optimizer. The implication of this is that we needed to store the symbolic expressions of all dependent variables for the lifetime of the optimizer (naturally, the same is implied for the input variables).
When moving forward in time, the "current" state of the internal variable instantaneously defines the state at the "previous" timestep. As such, we record the value of the history variable for use as the "past value" at the next time step.
Now that we've seen how the AD and SD frameworks can make light(er) work of defining these constitutive laws, we'll implement the equivalent classes by hand for the purpose of verification and to do some preliminary benchmarking of the frameworks versus a native implementation.
At the expense of the author's sanity, what is documented below (hopefully accurately) are the full definitions for the kinetic variables and their tangents, as well as some intermediate computations. Since the structure and design of the constitutive law classes has been outlined earlier, we'll gloss over it and simply delineate between the various stages of calculations in the update_internal_data()
method definition. It should be easy enough to link the derivative calculations (with their moderately expressive variable names) to their documented definitions that appear in the class descriptions. We will, however, take the opportunity to present two different paradigms for implementing constitutive law classes. The second will provide more flexibility than the first (thereby making it more easily extensible, in the author's opinion) at the expense of some performance.
From the stored energy that, as mentioned earlier, is defined as
\[ \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = \frac{1}{2} \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] + \lambda_{e} \ln^{2} \left(\text{det}(\mathbf{F}) \right) - \frac{1}{2} \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \]
with
\[ f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) , \\ \text{det}(\mathbf{F}) = \sqrt{\text{det}(\mathbf{C})} \]
for this magnetoelastic material, the first derivatives that correspond to the magnetic induction vector and total Piola-Kirchhoff stress tensor are
\[ \boldsymbol{\mathbb{B}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) \dealcoloneq - \frac{d \psi_{0}}{d \boldsymbol{\mathbb{H}}} = - \frac{1}{2} \mu_{e} \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] \frac{d f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} + \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \]
\begin{align} \mathbf{S}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) \dealcoloneq 2 \frac{d \psi_{0} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \mathbf{C}} &= \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \frac{d\,\text{tr}(\mathbf{C})}{d \mathbf{C}} - 2 \frac{1}{\text{det}(\mathbf{F})} \frac{d\,\text{det}(\mathbf{F})}{d \mathbf{C}} \right] + 4 \lambda_{e} \ln \left(\text{det}(\mathbf{F}) \right) \frac{1}{\text{det}(\mathbf{F})} \frac{d\,\text{det}(\mathbf{F})}{d \mathbf{C}} - \mu_{0} \mu_{r} \left[ \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \frac{d\,\text{det}(\mathbf{F})}{d \mathbf{C}} + \text{det}(\mathbf{F}) \frac{d \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \mathbf{C}} \right] \\ &= \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \mathbf{I} - \mathbf{C}^{-1} \right] + 2 \lambda_{e} \ln \left(\text{det}(\mathbf{F}) \right) \mathbf{C}^{-1} - \mu_{0} \mu_{r} \left[ \frac{1}{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \text{det}(\mathbf{F}) \mathbf{C}^{-1} - \text{det}(\mathbf{F}) \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \right] \end{align}
with
\[ \frac{d f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} = \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \text{sech}^{2} \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \left[ \frac{4} {\left(h_{e}^{\text{sat}}\right)^{2}} \boldsymbol{\mathbb{H}} \right] \]
\[ \frac{d\,\text{tr}(\mathbf{C})}{d \mathbf{C}} = \mathbf{I} \quad \text{(the second-order identity tensor)} \]
\[ \frac{d\,\text{det}(\mathbf{F})}{d \mathbf{C}} = \frac{1}{2} \text{det}(\mathbf{F}) \mathbf{C}^{-1} \]
\[ \frac{d C^{-1}_{ab}}{d C_{cd}} = - \text{sym} \left( C^{-1}_{ac} C^{-1}_{bd} \right) = -\frac{1}{2} \left[ C^{-1}_{ac} C^{-1}_{bd} + C^{-1}_{ad} C^{-1}_{bc} \right] \]
\[ \frac{d \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \mathbf{C}} = - \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \]
The use of the symmetry operator \(\text{sym} \left( \bullet \right)\) in the one derivation above helps to ensure that the resulting rank-4 tensor, which holds minor symmetries due to the symmetry of \(\mathbf{C}\), still maps rank-2 symmetric tensors to rank-2 symmetric tensors. See the SymmetricTensor class documentation and the introduction to step-44 for further explanation as to what symmetry means in the context of fourth-order tensors.
The linearizations of the kinetic variables with respect to their arguments are
\[ \mathbb{D} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = \frac{d \boldsymbol{\mathbb{B}}}{d \boldsymbol{\mathbb{H}}} = - \frac{1}{2} \mu_{e} \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] \frac{d^{2} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}} \otimes d \boldsymbol{\mathbb{H}}} + \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \mathbf{C}^{-1} \]
\begin{align} \mathfrak{P}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = - \frac{d \mathbf{S}^{\text{tot}}}{d \boldsymbol{\mathbb{H}}} &= - \mu_{e} \left[ \frac{d\,\text{tr}(\mathbf{C})}{d \mathbf{C}} - 2 \frac{1}{\text{det}(\mathbf{F})} \frac{d\,\text{det}(\mathbf{F})}{d \mathbf{C}} \right] \otimes \frac{d f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} + \mu_{0} \mu_{r} \left[ \frac{d\,\text{det}(\mathbf{F})}{d \mathbf{C}} \otimes \frac{d \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \boldsymbol{\mathbb{H}}} + \text{det}(\mathbf{F}) \frac{d^{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \mathbf{C} \otimes d \boldsymbol{\mathbb{H}}} \right] \\ &= - \mu_{e} \left[ \mathbf{I} - \mathbf{C}^{-1} \right] \otimes \frac{d f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} + \mu_{0} \mu_{r} \left[ \text{det}(\mathbf{F}) \mathbf{C}^{-1} \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] + \text{det}(\mathbf{F}) \frac{d^{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \mathbf{C} \otimes d \boldsymbol{\mathbb{H}}} \right] \end{align}
\begin{align} \mathcal{H}^{\text{tot}} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = 2 \frac{d \mathbf{S}^{\text{tot}}}{d \mathbf{C}} &= 2 \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ - \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] + 4 \lambda_{e} \left[ \mathbf{C}^{-1} \otimes \left[ \frac{1}{\text{det}(\mathbf{F})} \frac{d \, \text{det}(\mathbf{F})}{d \mathbf{C}} \right] + \ln \left(\text{det}(\mathbf{F}) \right) \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] \\ &- \mu_{0} \mu_{r} \left[ \text{det}(\mathbf{F}) \mathbf{C}^{-1} \otimes \frac{d \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \mathbf{C}} + \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \mathbf{C}^{-1} \otimes \frac{d \, \text{det}(\mathbf{F})}{d \mathbf{C}} + \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \text{det}(\mathbf{F}) \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] \\ &+ 2 \mu_{0} \mu_{r} \left[ \left[ \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \right] \otimes \frac{d \, \text{det}(\mathbf{F})}{d \mathbf{C}} - \text{det}(\mathbf{F}) \frac{d^{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}}\right]}{d \mathbf{C} \otimes d \mathbf{C}} \right] \\ &= 2 \mu_{e} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right) \left[ - \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] + 4 \lambda_{e} \left[ \frac{1}{2} \mathbf{C}^{-1} \otimes \mathbf{C}^{-1} + \ln \left(\text{det}(\mathbf{F}) \right) \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] \\ &- \mu_{0} \mu_{r} \left[ - \text{det}(\mathbf{F}) \mathbf{C}^{-1} \otimes \left[ \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \right] + \frac{1}{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \text{det}(\mathbf{F}) \mathbf{C}^{-1} \otimes \mathbf{C}^{-1} + \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \text{det}(\mathbf{F}) \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] \\ &+ 2 \mu_{0} \mu_{r} \left[ \frac{1}{2} \text{det}(\mathbf{F}) \left[ \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \right] \otimes \mathbf{C}^{-1} - \text{det}(\mathbf{F}) \frac{d^{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}}\right]}{d \mathbf{C} \otimes d \mathbf{C}} \right] \end{align}
with
\[ \frac{d^{2} f_{\mu_{e}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}} \otimes d \boldsymbol{\mathbb{H}}} = -2 \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \text{sech}^{2} \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \left[ \frac{4} {\left(h_{e}^{\text{sat}}\right)^{2}} \mathbf{I} \right] \]
\[ \frac{d \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]}{d \boldsymbol{\mathbb{H}}} = 2 \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \]
\[ \frac{d^{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}}\right]}{d \mathbf{C} \otimes d \boldsymbol{\mathbb{H}}} \Rightarrow \frac{d^{2} \left[ \mathbb{H}_{e} C^{-1}_{ef} \mathbb{H}_{f} \right]}{d C_{ab} d \mathbb{H}_{c}} = - C^{-1}_{ac} C^{-1}_{be} \mathbb{H}_{e} - C^{-1}_{ae} \mathbb{H}_{e} C^{-1}_{bc} \]
\begin{align} \frac{d^{2} \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}}\right]}{d \mathbf{C} \otimes d \mathbf{C}} &= -\frac{d \left[\left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \otimes \left[ \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right]\right]}{d \mathbf{C}} \\ \Rightarrow \frac{d^{2} \left[ \mathbb{H}_{e} C^{-1}_{ef} \mathbb{H}_{f} \right]}{d C_{ab} d C_{cd}} &= \text{sym} \left( C^{-1}_{ae} \mathbb{H}_{e} C^{-1}_{cf} \mathbb{H}_{f} C^{-1}_{bd} + C^{-1}_{ce} \mathbb{H}_{e} C^{-1}_{bf} \mathbb{H}_{f} C^{-1}_{ad} \right) \\ &= \frac{1}{2} \left[ C^{-1}_{ae} \mathbb{H}_{e} C^{-1}_{cf} \mathbb{H}_{f} C^{-1}_{bd} + C^{-1}_{ae} \mathbb{H}_{e} C^{-1}_{df} \mathbb{H}_{f} C^{-1}_{bc} + C^{-1}_{ce} \mathbb{H}_{e} C^{-1}_{bf} \mathbb{H}_{f} C^{-1}_{ad} + C^{-1}_{be} \mathbb{H}_{e} C^{-1}_{df} \mathbb{H}_{f} C^{-1}_{ac} \right] \end{align}
Well, that escalated quickly – although the definition of \(\psi_{0}\) and \(f_{\mu_e}\) might have given some hints that calculating the kinetic fields and their linearization would take some effort, there is likely a little more complexity to the final definitions than perhaps initially thought. Knowing what we now do, it's probably fair to say that we really do not want to compute first and second derivatives of these functions with respect to their arguments – regardless of how well we did in our calculus classes, or how good a programmer we may be.
In the class method definition where these are ultimately implemented, we've composed these calculations slightly differently. Some intermediate steps are also retained to give another perspective of how to systematically compute the derivatives. Additionally, some calculations are decomposed to a greater or lesser extent in order to reuse intermediate values and, hopefully, help the reader follow the derivative operations.

For this class's update method, we'll simply precompute a collection of intermediate values (for function evaluations, derivative calculations, and the like) and "manually" arrange them in the order that's required to maximize their reuse. This means that we have to manage this ourselves, and decide which values must be computed before others, all while keeping some semblance of order or structure in the code itself. It's effective, but perhaps a little tedious. It also doesn't do too much to help future extension of the class, because all of these values remain local to this single method.
Interestingly, this basic technique of precomputing intermediate expressions that are used in more than one place has a name: common subexpression elimination (CSE). It is a strategy used by Computer Algebra Systems to reduce the computational expense when they are tasked with evaluating similar expressions.
The saturation function for the magneto-elastic energy.
The first derivative of the saturation function, noting that \(\frac{d \tanh(x)}{dx} = \text{sech}^{2}(x)\).
The second derivative of the saturation function, noting that \(\frac{d \text{sech}^{2}(x)}{dx} = -2 \tanh(x) \text{sech}^{2}(x)\).
Some intermediate quantities attained directly from the field / kinematic variables.
First derivatives of the intermediate quantities.
Second derivatives of the intermediate quantities.
The stored energy density function.
The kinetic quantities.
The linearization of the kinetic quantities.
As mentioned before, the free energy density function for the magneto-viscoelastic material with one dissipative mechanism that we'll be considering is defined as
\[ \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) \]
\[ \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) = \frac{1}{2} \mu_{e} f_{\mu_{e}^{ME}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \text{tr}(\mathbf{C}) - d - 2 \ln (\text{det}(\mathbf{F})) \right] + \lambda_{e} \ln^{2} \left(\text{det}(\mathbf{F}) \right) - \frac{1}{2} \mu_{0} \mu_{r} \text{det}(\mathbf{F}) \left[ \boldsymbol{\mathbb{H}} \cdot \mathbf{C}^{-1} \cdot \boldsymbol{\mathbb{H}} \right] \]
\[ \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = \frac{1}{2} \mu_{v} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \mathbf{C}_{v} : \left[ \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right] - d - \ln\left( \text{det}\left(\mathbf{C}_{v}\right) \right) \right] \]
with
\[ f_{\mu_{e}}^{ME} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{e}^{\infty}}{\mu_{e}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{e}^{\text{sat}}\right)^{2}} \right) \]
\[ f_{\mu_{v}}^{MVE} \left( \boldsymbol{\mathbb{H}} \right) = 1 + \left[ \frac{\mu_{v}^{\infty}}{\mu_{v}} - 1 \right] \tanh \left( 2 \frac{\boldsymbol{\mathbb{H}} \cdot \boldsymbol{\mathbb{H}}} {\left(h_{v}^{\text{sat}}\right)^{2}} \right) \]
and the evolution law
\[ \dot{\mathbf{C}}_{v} \left( \mathbf{C} \right) = \frac{1}{\tau} \left[ \left[\left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}\right]^{-1} - \mathbf{C}_{v} \right] \]
that itself is parameterized in terms of \(\mathbf{C}\). By design, the magnetoelastic part of the energy \(\psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)\) is identical to that of the magnetoelastic material presented earlier. So, for the derivatives of the various contributions stemming from this part of the energy, please refer to the previous section. We'll continue to highlight the specific contributions from those terms by superscripting the salient terms with \(ME\), while contributions from the magneto-viscoelastic component are superscripted with \(MVE\). Furthermore, the magnetic saturation function \(f_{\mu_{v}}^{MVE} \left( \boldsymbol{\mathbb{H}} \right)\) for the damping term has an identical form to that of the elastic term (i.e., \(f_{\mu_{e}}^{ME} \left( \boldsymbol{\mathbb{H}} \right)\)), and so the structure of its derivatives is identical to that seen before; the only change is that the three constitutive parameters are now associated with the viscous shear modulus \(\mu_{v}\) rather than the elastic shear modulus \(\mu_{e}\).
For this magneto-viscoelastic material, the first derivatives that correspond to the magnetic induction vector and total Piola-Kirchhoff stress tensor are
\[ \boldsymbol{\mathbb{B}} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) \dealcoloneq - \frac{\partial \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}}} \Big\vert_{\mathbf{C}, \mathbf{C}_{v}} \equiv \boldsymbol{\mathbb{B}}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \boldsymbol{\mathbb{B}}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = - \frac{d \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} - \frac{\partial \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}}} \]
\[ \mathbf{S}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) \dealcoloneq 2 \frac{\partial \psi_{0} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right)}{\partial \mathbf{C}} \Big\vert_{\mathbf{C}_{v}, \boldsymbol{\mathbb{H}}} \equiv \mathbf{S}^{\text{tot}, ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \mathbf{S}^{\text{tot}, MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = 2 \frac{d \psi_{0}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right)}{d \mathbf{C}} + 2 \frac{\partial \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right)}{\partial \mathbf{C}} \]
with the viscous contributions being
\[ \boldsymbol{\mathbb{B}}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = - \frac{\partial \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}}} \Big\vert_{\mathbf{C}, \mathbf{C}_{v}} = - \frac{1}{2} \mu_{v} \left[ \mathbf{C}_{v} : \left[ \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right] - d - \ln\left( \text{det}\left(\mathbf{C}_{v}\right) \right) \right] \frac{\partial f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}}} \]
\[ \mathbf{S}^{\text{tot}, MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = 2 \frac{\partial \psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right)}{\partial \mathbf{C}} \Big\vert_{\mathbf{C}_{v}, \boldsymbol{\mathbb{H}}} = \mu_{v} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \left[ \mathbf{C}_{v} : \mathbf{C} \right] \left[ - \frac{1}{d} \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}^{-1} \right] + \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}_{v} \right] \]
and with
\[ \frac{\partial f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}}} \equiv \frac{d f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} . \]
The time-discretized evolution law,
\[ \mathbf{C}_{v}^{(t)} \left( \mathbf{C} \right) = \frac{1}{1 + \frac{\Delta t}{\tau_{v}}} \left[ \mathbf{C}_{v}^{(t-1)} + \frac{\Delta t}{\tau_{v}} \left[\left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right]^{-1} \right] \]
will also dictate how the linearization of the internal variable with respect to the field variables is composed.
Observe that in order to attain the correct expressions for the magnetic induction vector and total Piola-Kirchhoff stress tensor for this dissipative material, we must adhere strictly to the outcome of applying the Coleman-Noll procedure: we must take partial derivatives of the free energy density function with respect to the field variables. (For our non-dissipative magnetoelastic material, taking either partial or total derivatives would have had the same result, so there was no need to draw your attention to this before.) The crucial part of the operation is to freeze the internal variable \(\mathbf{C}_{v}^{(t)} \left( \mathbf{C} \right)\) while computing the derivatives of \(\psi_{0}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v} \left( \mathbf{C} \right), \boldsymbol{\mathbb{H}} \right)\) with respect to \(\mathbf{C}\) – the dependence of \(\mathbf{C}_{v}^{(t)}\) on \(\mathbf{C}\) is not to be taken into account. When deciding whether to use AD or SD to perform this task the choice is clear – only the symbolic framework provides a mechanism to do this; as was mentioned before, AD can only return total derivatives so it is unsuitable for the task.
To wrap things up, we'll present the material tangents for this rate-dependent coupled material. The linearizations of both kinetic variables with respect to their arguments are
\[ \mathbb{D} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = \frac{d \boldsymbol{\mathbb{B}}}{d \boldsymbol{\mathbb{H}}} \equiv \mathbb{D}^{ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \mathbb{D}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = \frac{d \boldsymbol{\mathbb{B}}^{ME}}{d \boldsymbol{\mathbb{H}}} + \frac{d \boldsymbol{\mathbb{B}}^{MVE}}{d \boldsymbol{\mathbb{H}}} \]
\[ \mathfrak{P}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = - \frac{d \mathbf{S}^{\text{tot}}}{d \boldsymbol{\mathbb{H}}} \equiv \mathfrak{P}^{\text{tot}, ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \mathfrak{P}^{\text{tot}, MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = - \frac{d \mathbf{S}^{\text{tot}, ME}}{d \boldsymbol{\mathbb{H}}} - \frac{d \mathbf{S}^{\text{tot}, MVE}}{d \boldsymbol{\mathbb{H}}} \]
\[ \mathcal{H}^{\text{tot}} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = 2 \frac{d \mathbf{S}^{\text{tot}}}{d \mathbf{C}} \equiv \mathcal{H}^{\text{tot}, ME} \left( \mathbf{C}, \boldsymbol{\mathbb{H}} \right) + \mathcal{H}^{\text{tot}, MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = 2 \frac{d \mathbf{S}^{\text{tot}, ME}}{d \mathbf{C}} + 2 \frac{d \mathbf{S}^{\text{tot}, MVE}}{d \mathbf{C}} \]
where the tangents for the viscous contributions are
\[ \mathbb{D}^{MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = - \frac{1}{2} \mu_{v} \left[ \mathbf{C}_{v} : \left[ \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C} \right] - d - \ln\left( \text{det}\left(\mathbf{C}_{v}\right) \right) \right] \frac{\partial^{2} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}} \otimes \partial \boldsymbol{\mathbb{H}}} \]
\[ \mathfrak{P}^{\text{tot}, MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) = - \mu_{v} \left[ \left[ \mathbf{C}_{v} : \mathbf{C} \right] \left[ - \frac{1}{d} \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}^{-1} \right] + \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}_{v} \right] \otimes \frac{d f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}}} \]
\begin{align} \mathcal{H}^{\text{tot}, MVE} \left( \mathbf{C}, \mathbf{C}_{v}, \boldsymbol{\mathbb{H}} \right) &= 2 \mu_{v} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right) \left[ - \frac{1}{d} \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}^{-1} \right] \otimes \left[ \mathbf{C}_{v} + \mathbf{C} : \frac{d \mathbf{C}_{v}}{d \mathbf{C}} \right] \\ &+ 2 \mu_{v} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right) \left[ \mathbf{C}_{v} : \mathbf{C} \right] \left[ \frac{1}{d^{2}} \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}^{-1} \otimes \mathbf{C}^{-1} - \frac{1}{d} \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] \\ &+ 2 \mu_{v} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right) \left[ -\frac{1}{d} \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \mathbf{C}_{v} \otimes \mathbf{C}^{-1} + \left[\text{det}\left(\mathbf{F}\right)\right]^{-\frac{2}{d}} \frac{d \mathbf{C}_{v}}{d \mathbf{C}} \right] \end{align}
with
\[ \frac{\partial^{2} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{\partial \boldsymbol{\mathbb{H}} \otimes \partial \boldsymbol{\mathbb{H}}} \equiv \frac{d^{2} f_{\mu_{v}^{MVE}} \left( \boldsymbol{\mathbb{H}} \right)}{d \boldsymbol{\mathbb{H}} \otimes d \boldsymbol{\mathbb{H}}} \]
and, from the evolution law,
\[ \frac{d \mathbf{C}_{v}}{d \mathbf{C}} \equiv \frac{d \mathbf{C}_{v}^{(t)}}{d \mathbf{C}} = \frac{\frac{\Delta t}{\tau_{v}} }{1 + \frac{\Delta t}{\tau_{v}}} \left[ \frac{1}{d} \left[\text{det}\left(\mathbf{F}\right)\right]^{\frac{2}{d}} \mathbf{C}^{-1} \otimes \mathbf{C}^{-1} + \left[\text{det}\left(\mathbf{F}\right)\right]^{\frac{2}{d}} \frac{d \mathbf{C}^{-1}}{d \mathbf{C}} \right] . \]
Notice that just the last term of \(\mathcal{H}^{\text{tot}, MVE}\) contains the tangent of the internal variable. Since this particular evolution law is linear, its linearization can be stated directly. For an example of a nonlinear evolution law, for which this linearization must be solved for in an iterative manner, see [111].
A data structure that is used to store all intermediate calculations. We'll see shortly precisely how this can be leveraged to make the part of the code where we actually perform calculations clean and easy (well, at least easier) to follow and maintain. But for now, we can say that it will allow us to move the parts of the code where we compute the derivatives of intermediate quantities away from where they are used.
The next two functions are used to update the state of the field and internal variables, and will be called before we perform any detailed calculations.
The remainder of the class interface is dedicated to methods that are used to compute the components required to calculate the free energy density function, and all of its derivatives:
The kinematic, or field, variables.
A generalized formulation for the saturation function, with the required constitutive parameters passed as arguments to each function.
A generalized formulation for the first derivative of the saturation function, with the required constitutive parameters passed as arguments to each function.

A generalized formulation for the second derivative of the saturation function, with the required constitutive parameters passed as arguments to each function.
Intermediate quantities attained directly from the field / kinematic variables.
First derivatives of the intermediate quantities.
Derivative of internal variable with respect to field variables. Notice that we only need this one derivative of the internal variable, as this variable is only differentiated as part of the linearization of the kinetic variables.
Second derivatives of the intermediate quantities.
Record the applied deformation state as well as the magnetic load. Thereafter, update internal (viscous) variable based on new deformation state.
Get the values for the elastic and viscous saturation function based on the current magnetic field...
... as well as their first derivatives...
... and their second derivatives.
Intermediate quantities. Note that, since we're fetching these values from a cache that has a lifetime that outlasts this function call, we can alias the result rather than copying the value from the cache.
First derivatives of intermediate values, as well as that of the internal variable with respect to the right Cauchy-Green deformation tensor.
Second derivatives of intermediate values.
Since the definitions of the linearizations become particularly lengthy, we'll decompose the free energy density function into three additive components:
To remain consistent, each of these contributions will be individually added to the variables that we want to compute in that same order.
So, first of all this is the energy density function itself:
... followed by the magnetic induction vector and Piola-Kirchhoff stress:
... and lastly the tangents due to the linearization of the kinetic variables.
Now that we're done using all of those temporary variables stored in our cache, we can clear it out to free up some memory.
The next few functions implement the generalized formulation for the saturation function, as well as its various derivatives.
A scaling function that will cause the shear modulus to change (increase) under the influence of a magnetic field.
First derivative of scaling function
For the cached calculation approach that we've adopted for this material class, the root of all calculations are the field variables, and the immutable ancillary data such as the constitutive parameters and time step size. As such, we need to enter them into the cache in a different manner to the other variables, since they are inputs that are prescribed from outside the class itself. This function simply adds them to the cache directly from the input arguments, checking that there is no equivalent data there in the first place (we expect to call the update_internal_data()
method only once per time step, or Newton iteration).
Set value for \(\boldsymbol{\mathbb{H}}\).
Set value for \(\mathbf{C}\).
After that, we can fetch them from the cache at any point in time.
With the primary variables guaranteed to be in the cache when we need them, we can now compute all intermediate values (either directly, or indirectly) from them.
If the cache does not already store the value that we're looking for, then we quickly calculate it, store it in the cache and return the value just stored in the cache. That way we can return it as a reference and avoid copying the object. The same goes for any values that a compound function might depend on. Said another way, if there is a dependency chain of calculations that come before the one that we're currently interested in doing, then we're guaranteed to resolve the dependencies before we proceed with using any of those values. Although there is a cost to fetching data from the cache, the "resolved dependency" concept might be sufficiently convenient to make it worth looking past the extra cost. If these material laws are embedded within a finite element framework, then the added cost might not even be noticeable.
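A minimal sketch of this pattern, assuming (as the hand-implemented class here does, with class and helper names assumed for illustration) that the cache is a mutable GeneralDataStorage member so that const getters may populate it on first request:

```cpp
template <int dim>
const double &Magnetoviscoelastic_Constitutive_Law<dim>::get_det_F() const
{
  const std::string name("det_F");

  // Resolve the dependency (here, the cached field variable C) and compute
  // the value on first request; thereafter this is a cheap lookup.
  if (cache.stores_object_with_name(name) == false)
    cache.add_unique_copy(name, std::sqrt(determinant(get_C())));

  return cache.get_object_with_name<double>(name);
}
```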
The RheologicalExperimentParameters
class is used to drive the numerical experiments that are to be conducted on the coupled materials that we've implemented constitutive laws for.
These are the dimensions of the rheological specimen that is to be simulated. They, effectively, define the measurement point for our virtual experiment.
The three steady-state loading parameters are respectively
Moreover, the parameters for the time-dependent rheological loading conditions are
We also declare some self-explanatory parameters related to output data generated for the experiments conducted with rate-dependent and rate-independent materials.
The next few functions compute time-related parameters for the experiment...
... while the following two prescribe the mechanical and magnetic loading at any given time...
... and this last one outputs the status of the experiment to the console.
The applied magnetic field is always aligned with the axis of rotation of the rheometer's rotor.
The applied deformation (gradient) is computed based on the geometry of the rheometer and the sample, the sampling point, and the experimental parameters. From the displacement profile documented in the introduction, the deformation gradient may be expressed in Cartesian coordinates as
\[ \mathbf{F} = \begin{bmatrix} \frac{\cos\left(\alpha\right)}{\sqrt{\lambda_{3}}} & -\frac{\sin\left(\alpha\right)}{\sqrt{\lambda_{3}}} & -\tau R \sqrt{\lambda_{3}} \sin\left(\Theta + \alpha\right) \\ \frac{\sin\left(\alpha\right)}{\sqrt{\lambda_{3}}} & \frac{\cos\left(\alpha\right)}{\sqrt{\lambda_{3}}} & -\tau R \sqrt{\lambda_{3}} \cos\left(\Theta + \alpha\right) \\ 0 & 0 & \lambda_{3} \end{bmatrix} \]
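Translated into code, and assuming that alpha, tau, lambda_3 and the sampling point coordinates (R, Theta) have already been computed from the experiment parameters (names chosen here for illustration), assembling F is then a matter of filling in these entries:

```cpp
Tensor<2, 3> F;
F[0][0] = std::cos(alpha) / std::sqrt(lambda_3);
F[0][1] = -std::sin(alpha) / std::sqrt(lambda_3);
F[0][2] = -tau * R * std::sqrt(lambda_3) * std::sin(Theta + alpha);
F[1][0] = std::sin(alpha) / std::sqrt(lambda_3);
F[1][1] = std::cos(alpha) / std::sqrt(lambda_3);
F[1][2] = -tau * R * std::sqrt(lambda_3) * std::cos(Theta + alpha);
F[2][2] = lambda_3;
```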
This is the function that will drive the numerical experiments.
We can take the hand-implemented constitutive law and compare the results that we attain with it to those that we get using AD or SD. In this way, we can verify that they produce identical results (which indicates that either both implementations have a high probability of being correct, or that they're incorrect with identical flaws present in both). Either way, it is a decent sanity check for the fully self-implemented variants, and can certainly be used as a debugging strategy when differences between the results are detected.
We'll be outputting the constitutive response of the material to file for post-processing, so here we declare a stream that will act as a buffer for this output. We'll use a simple CSV format for the results.
Using the DiscreteTime class, we iterate through each timestep using a fixed time step size.
We fetch and compute the loading to be applied to the material at this time step...
... then we update the state of the materials...
... and test for discrepancies between the two.
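Put together, the time loop might be structured as in the following sketch. Here parameters, get_H(), get_F(), update_internal_data() and check_material_class_results() are placeholders for the corresponding pieces of this program, while Physics::Elasticity::Kinematics::C() forms the right Cauchy-Green tensor \(\mathbf{C} = \mathbf{F}^{T} \mathbf{F}\) from the prescribed deformation gradient:

```cpp
#include <deal.II/base/discrete_time.h>
#include <deal.II/physics/elasticity/kinematics.h>

// March from the start to the end time in fixed increments; the end
// time and step size come from the experiment parameters.
DiscreteTime time(0.0, parameters.end_time, parameters.delta_t);
while (time.is_at_end() == false)
  {
    time.advance_time();
    const double t = time.get_current_time();

    // Fetch the prescribed loading at this time step...
    const Tensor<1, 3> H = get_H(t);
    const Tensor<2, 3> F = get_F(t);

    // ... update the internal state of both material implementations
    // from the same inputs ...
    const SymmetricTensor<2, 3> C = Physics::Elasticity::Kinematics::C(F);
    material_hand.update_internal_data(H, C, time);
    material_assisted.update_internal_data(H, C, time);

    // ... and test for discrepancies between the two (see the
    // assertions sketched earlier).
    check_material_class_results(material_hand, material_assisted);
  }
```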
The next thing that we will do is collect some results to post-process. All quantities are in the "current configuration" (rather than the "reference configuration", in which all quantities computed by the constitutive laws are framed).
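Assuming the constitutive law returns the second Piola-Kirchhoff stress \(\mathbf{S}\) and that the referential magnetic field is \(\boldsymbol{\mathbb{H}}\), one way to move these quantities into the current configuration is via the push-forward operations offered by deal.II's Physics::Transformations namespace; a sketch:

```cpp
#include <deal.II/physics/transformations.h>

// Cauchy stress: the Piola push-forward (1/det F) F.S.F^T of the
// second Piola-Kirchhoff stress.
const SymmetricTensor<2, 3> sigma =
  Physics::Transformations::Piola::push_forward(material.get_S(), F);

// Spatial magnetic field: the covariant push-forward F^{-T}.H of the
// referential magnetic field.
const Tensor<1, 3> h =
  Physics::Transformations::Covariant::push_forward(H, F);
```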
Finally, we output the strain-stress and magnetic loading history to file.
The purpose of this driver function is to read in all of the parameters from file and, based on them, create a representative instance of each constitutive law and invoke the function that conducts a rheological experiment with it.
We start the actual work by configuring and running the experiment using our rate-independent constitutive law. The automatically differentiable number type is hard-coded here, but with some clever templating it is possible to select which framework to use at run time (e.g., as selected through the parameter file). We'll simultaneously perform the experiments with the counterpart material law that was fully implemented by hand, and check what it computes against our assisted implementation.
Next we do the same for the rate-dependent constitutive law. The highest-performance option is selected by default: if SymEngine is set up to use the LLVM just-in-time compiler, then this (in conjunction with some aggressive compilation flags) produces the fastest code evaluation path of all of the available options. As a fall-back, the so-called "lambda" optimizer (which only requires a C++11 compliant compiler) will be selected. At the same time, we'll ask the CAS to perform common subexpression elimination to minimize the number of intermediate calculations used during evaluation. We'll record how long it takes to execute the "initialization" step inside the constructor for the SD implementation, as this is where the above-mentioned transformations occur.
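The selection logic just described might be sketched as follows. The DEAL_II_SYMENGINE_WITH_LLVM macro is defined whenever the SymEngine backend was built with LLVM support, while the material class name and its constructor signature are illustrative:

```cpp
namespace SD = Differentiation::SD;

// Prefer the LLVM JIT compiler when available; otherwise fall back to
// the "lambda" optimizer, which only requires a compliant C++ compiler.
const SD::OptimizerType optimizer_type =
#ifdef DEAL_II_SYMENGINE_WITH_LLVM
  SD::OptimizerType::llvm;
#else
  SD::OptimizerType::lambda;
#endif

// Request common subexpression elimination plus aggressive
// optimization of the evaluation path.
const SD::OptimizationFlags optimization_flags =
  SD::OptimizationFlags::optimize_cse |
  SD::OptimizationFlags::optimize_aggressive;

// Substituting SD::OptimizerType::lambda or SD::OptimizerType::dictionary
// above reproduces the alternative timings discussed in the results
// section below.
Magnetoviscoelastic_Constitutive_Law_SD<3> material_assisted(
  constitutive_parameters, optimizer_type, optimization_flags);
```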
The main function only calls the driver functions for the two sets of examples that are to be executed.
The first exploratory example produces the following output. It is verified that all three implementations produce identical results.
To help summarize the results from the virtual experiment itself, below are some graphs showing the shear stress, plotted against the shear strain, at a select location within the material sample. The plots show the stress-strain curves under three different magnetic loads, and for the last cycle of the (mechanical) loading profile, when the rate-dependent material reaches a repeatable ("steady-state") response. These types of graphs are often referred to as Lissajous plots. The area of the ellipse traced by the curve for viscoelastic materials provides some measure of how much energy is dissipated by the material, and its ellipticity indicates the phase shift of the viscous response with respect to the elastic response.
Lissajous plot for the magneto-elastic material.
Lissajous plot for the magneto-viscoelastic material.
It is not surprising to see that the magneto-elastic material response has an unloading curve that matches the loading curve; the material is non-dissipative, after all. But here it's clearly noticeable how the gradient of the curve increases as the applied magnetic field increases. The tangent at any point along this curve is related to the instantaneous shear modulus and, due to the way that the energy density function was defined, we expect the shear modulus to increase as the magnetic field strength increases. We observe much the same behavior for the magneto-viscoelastic material. The major axis of the ellipse traced by the loading-unloading curve has a slope that increases as a greater magnetic load is applied. At the same time, more energy is dissipated by the material as the magnetic load increases.
As for the code output, this is what is printed to the console for the part pertaining to the rheological experiment conducted with the magnetoelastic material:
And this portion of the output pertains to the experiment performed with the magneto-viscoelastic material:
The timer output is also emitted to the console, so we can compare the time taken to perform the hand- and assisted-calculations and get some idea of the overhead of using the AD and SD frameworks. Here are the timings taken from the magnetoelastic experiment using the AD framework, based on the Sacado component of the Trilinos library:
With respect to the computations performed using automatic differentiation (as a reminder, this is with two levels of differentiation using the Sacado library in conjunction with dynamic forward auto-differentiable types), we observe that the assisted computations take about \(65 \times\) longer to compute the desired quantities. This does seem like quite a lot of overhead but, as mentioned in the introduction, whether this is acceptable is entirely subjective and circumstance-dependent: Do you value computer time more than the human time needed to do the hand-computations of derivatives, verify their correctness, implement them, and verify the correctness of the implementation? If you develop a research code that will only be run for a relatively small number of experiments, you might value your own time more. If you develop a production code that will be run over and over on 10,000-core clusters for hours, your considerations might be different. In any case, the one nice feature of the AD approach is the "drop in" capability when functions and classes are templated on the scalar type. This means that minimal effort is required to start working with it.
In contrast, the timings for the magneto-viscoelastic material as implemented using just-in-time (JIT) compiled symbolic algebra indicate that, at some non-negligible cost during initialization, the calculations themselves are executed a lot more efficiently:
Since the initialization phase most likely needs to be executed only once per thread, this initially expensive phase can be offset by the repeated use of a single Differentiation::SD::BatchOptimizer instance. Even though the magneto-viscoelastic constitutive law has more terms to calculate than its magnetoelastic counterpart, it is still a whole order of magnitude faster to execute the computations of the kinetic variables and tangents. And when compared to the hand-computed variant that uses the caching scheme, the calculation time is nearly equal. So although using the symbolic framework requires a paradigm shift in terms of how one implements and manipulates symbolic expressions, it can offer good performance and a flexibility that the AD frameworks lack.
On the point of data caching, the added cost of value caching for the magneto-viscoelastic material implementation is, in fact, about a \(6\times\) increase in the time spent in update_internal_data() when compared to an implementation that uses local intermediate values in the numerical experiments conducted with this material. Here's a sample of the timing comparison extracted for the "hand calculated" variant when the caching data structure is removed:
With some minor adjustment we can quite easily test the different optimization schemes for the batch optimizer. So let's compare the computational expense associated with the LLVM
batch optimizer setting versus the alternatives. Below are the timings reported for the lambda
optimization method (retaining the use of CSE):
The primary observation here is that an order of magnitude more time is spent in the "Assisted computation" section when compared to the LLVM approach.
Last of all we'll test how dictionary
substitution, in conjunction with CSE, performs. Dictionary substitution simply does all of the evaluation within the native CAS framework itself, with no transformation of the underlying data structures taking place. Only the use of CSE, which caches intermediate results, will provide any "acceleration" in this instance. With that in mind, here are the results from this selection:
Needless to say, compared to the other two methods, these results took quite some time to produce... The dictionary
substitution method is perhaps only really viable for simple expressions or when the number of calls is sufficiently small.
Perhaps you've been convinced that these tools have some merit, and can be of immediate help or use to you. The obvious question now is which one to use. Focusing specifically on the continuum point level, where you would be using these frameworks to compute derivatives of a constitutive law in particular, we can say the following:
There are a few logical ways in which this program could be extended:
With less effort, one could think about rewriting nonlinear problem solvers, such as the one implemented in step-15, using AD or SD approaches to compute the Newton matrix. Indeed, this is done in step-72.