#include <deal.II/sundials/ida.h>
Classes
    class AdditionalData

Public Member Functions
    IDA(const AdditionalData &data = AdditionalData())
    IDA(const AdditionalData &data, const MPI_Comm mpi_comm)
    ~IDA()
    unsigned int solve_dae(VectorType &solution, VectorType &solution_dot)
    void reset(const double t, const double h, VectorType &y, VectorType &yp)

Static Public Member Functions
    static ::ExceptionBase &ExcIDAError(int arg1)

Public Attributes
    std::function<void(VectorType &)> reinit_vector
    std::function<void(const double t, const VectorType &y, const VectorType &y_dot, VectorType &res)> residual
    std::function<void(const double t, const VectorType &y, const VectorType &y_dot, const double alpha)> setup_jacobian
    std::function<void(const VectorType &rhs, VectorType &dst, const double tolerance)> solve_with_jacobian
    std::function<void(const double t, const VectorType &sol, const VectorType &sol_dot, const unsigned int step_number)> output_step
    std::function<bool(const double t, VectorType &sol, VectorType &sol_dot)> solver_should_restart
    std::function<IndexSet()> differential_components
    std::function<VectorType &()> get_local_tolerances

Private Member Functions
    void set_functions_to_trigger_an_assert()

Static Private Member Functions
    static ::ExceptionBase &ExcFunctionNotProvided(std::string arg1)

Private Attributes
    const AdditionalData data
    void *ida_mem
    SUNContext ida_ctx
    MPI_Comm mpi_communicator
    GrowingVectorMemory<VectorType> mem
    std::exception_ptr pending_exception
Interface to SUNDIALS Implicit Differential-Algebraic (IDA) solver.
The class IDA is a wrapper to the SUNDIALS Implicit Differential-Algebraic solver, which is a general-purpose solver for systems of Differential-Algebraic Equations (DAEs). Another class that can solve this set of equations is PETScWrappers::TimeStepper.
The user has to provide the implementation of the following std::functions: reinit_vector, residual, setup_jacobian, and solve_with_jacobian.
Optionally, the following functions can also be provided: output_step, solver_should_restart, differential_components, and get_local_tolerances. By default they do nothing, or are not required. If you call the constructor in a way that requires a function that has not been provided, an Assertion will be thrown.
To output steps, assign a function to output_step.
Citing from the SUNDIALS documentation:
Consider a system of Differential-Algebraic Equations written in the general form
\[ \begin{cases} F(t,y,\dot y) = 0\, , \\ y(t_0) = y_0\, , \\ \dot y (t_0) = \dot y_0\, . \end{cases} \]
where \(y,\dot y\) are vectors in \(\mathbb{R}^n\), \(t\) is often the time (but can also be a parametric quantity), and \(F:\mathbb{R}\times\mathbb{R}^n\times \mathbb{R}^n\rightarrow\mathbb{R}^n\). Such a problem is solved using Newton iteration augmented with a line search global strategy. The integration method used in IDA is the variable-order, variable-coefficient BDF (Backward Differentiation Formula), in fixed-leading-coefficient form. The method order ranges from 1 to 5, with the BDF of order \(q\) given by the multistep formula
\[ \sum_{i=0}^q \alpha_{n,i}\,y_{n-i}=h_n\,\dot y_n\, , \label{eq:bdf} \]
where \(y_n\) and \(\dot y_n\) are the computed approximations of \(y(t_n)\) and \(\dot y(t_n)\), respectively, and the step size is \(h_n=t_n-t_{n-1}\). The coefficients \(\alpha_{n,i}\) are uniquely determined by the order \(q\), and the history of the step sizes. The application of the BDF method to the DAE system results in a nonlinear algebraic system to be solved at each time step:
\[ G(y_n)\equiv F\left(t_n,y_n,\dfrac{1}{h_n}\sum_{i=0}^q \alpha_{n,i}\,y_{n-i}\right)=0\, . \]
The Newton method leads to a linear system of the form
\[ J[y_{n(m+1)}-y_{n(m)}]=-G(y_{n(m)})\, , \]
where \(y_{n(m)}\) is the \(m\)-th approximation to \(y_n\), and \(J\) is the approximation of the system Jacobian
\[ J=\dfrac{\partial G}{\partial y} = \dfrac{\partial F}{\partial y} + \alpha \dfrac{\partial F}{\partial \dot y}\, , \]
and \(\alpha = \alpha_{n,0}/h_n\). It is worth mentioning that the scalar \(\alpha\) changes whenever the step size or method order changes.
To provide a simple example, consider the following harmonic oscillator problem:
\[ \begin{split} u'' & = -k^2 u \\ u (0) & = 0 \\ u'(0) & = k \end{split} \]
We write it in terms of a first order ode:
\[ \begin{matrix} y_0' & -y_1 & = 0 \\ y_1' & + k^2 y_0 & = 0 \end{matrix} \]
That is, \(F(y', y, t) = y' + A y = 0 \) where
\[ A = \begin{pmatrix} 0 & -1 \\ k^2 &0 \end{pmatrix} \]
and \(y(0)=(0, k)\), \(y'(0) = (k, 0)\).
The exact solution is \(y_0(t) = \sin(k t)\), \(y_1(t) = y_0'(t) = k \cos(k t)\), \(y_1'(t) = -k^2 \sin(k t)\).
The Jacobian to assemble is the following: \(J = \alpha I + A\).
This is achieved by the following snippet of code:
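The snippet below is a minimal sketch rather than the original example program; it assumes a serial Vector<double> as VectorType, a dense FullMatrix for the \(2\times 2\) Jacobian, and a variable kappa standing in for the spring constant \(k\):
```cpp
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II/sundials/ida.h>

using namespace dealii;
using VectorType = Vector<double>;

int main()
{
  const double kappa = 1.0; // spring constant k (arbitrary value for the sketch)

  SUNDIALS::IDA<VectorType> time_stepper;
  FullMatrix<double>        J(2, 2);    // J = alpha I + A
  FullMatrix<double>        Jinv(2, 2); // its inverse, applied in solve_with_jacobian

  time_stepper.reinit_vector = [](VectorType &v) { v.reinit(2); };

  // Residual of F(t, y, y') = y' + A y
  time_stepper.residual = [&](const double /*t*/,
                              const VectorType &y,
                              const VectorType &y_dot,
                              VectorType       &res) {
    res[0] = y_dot[0] - y[1];
    res[1] = y_dot[1] + kappa * kappa * y[0];
  };

  // Assemble and invert J = dF/dy + alpha dF/dy' = alpha I + A
  time_stepper.setup_jacobian = [&](const double /*t*/,
                                    const VectorType & /*y*/,
                                    const VectorType & /*y_dot*/,
                                    const double alpha) {
    J(0, 0) = alpha;
    J(0, 1) = -1.;
    J(1, 0) = kappa * kappa;
    J(1, 1) = alpha;
    Jinv.invert(J);
  };

  // Apply J^{-1} to the right hand side
  time_stepper.solve_with_jacobian =
    [&](const VectorType &rhs, VectorType &dst, const double /*tolerance*/) {
      Jinv.vmult(dst, rhs);
    };

  VectorType y(2), y_dot(2);
  y[1]     = kappa; // y(0)  = (0, k)
  y_dot[0] = kappa; // y'(0) = (k, 0)
  time_stepper.solve_dae(y, y_dot);
}
```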
A more interesting example is a situation where the form \(F(y', y, t) = 0\) provides something genuinely more flexible than a typical ordinary differential equation. Specifically, consider the equation
\begin{align*} u'(t) &= av(t), \\ 0 &= v(t) - u(t). \end{align*}
One can combine the two variables into \(y(t) = [u(t), v(t)]^T\). Here, one of the two variables does not have a time derivative. In applications, this is often the case when one variable evolves in time (here, \(u(t)\)) on its own time scale, and the other one finds its value as a function of the former on a much faster time scale. In the current context, we could of course easily eliminate \(v(t)\) using the second equation, and would then just be left with the equation
\[ u'(t) = au(t) \]
which has the solution \(u(t) = u(0)e^{at}\). But this is, in general, not easily possible if the two variables are related by differential operators. In fact, this happens quite frequently in applications. Take, for example, the time-dependent Stokes equations:
\begin{align*} \frac{\partial \mathbf u(\mathbf x,t)}{\partial t} - \nu \Delta \mathbf u(\mathbf x,t) + \nabla p(\mathbf x,t) &= \mathbf f(\mathbf x,t), \\ \nabla \cdot \mathbf u(\mathbf x,t) &= 0. \end{align*}
Here, the fluid velocity \(\mathbf u(\mathbf x,t)\) evolves over time, and the pressure is always in equilibrium with the flow because the Stokes equations are derived under the assumption that the speed of sound (at which pressure perturbations propagate) is much larger than the fluid velocity. As a consequence, there is no time derivative on the pressure available in the equation, but unlike the simple model problem above, the pressure can not easily be eliminated from the system. Similar situations happen in step-21, step-31, step-32, step-43, and others, where a subset of variables is always in instantaneous equilibrium with another set of variables that evolves on a slower time scale.
Another case where we could eliminate a variable but do not want to is where that additional variable is introduced in the first place to work around some other problem. As an example, consider the time dependent version of the biharmonic problem we consider in step-47 (as well as some later ones). The equations we would then be interested in would read
\begin{align*} \frac{\partial u(\mathbf x,t)}{\partial t} + \Delta^2 u(\mathbf x,t) &= f(\mathbf x,t). \end{align*}
As discussed in step-47, the difficulty is the presence of the fourth derivatives. One way in which one can address this is by introducing an auxiliary variable \(v=\Delta u\) which would render the problem into the following one that only ever has second derivatives which we know how to deal with:
\begin{align*} \frac{\partial u(\mathbf x,t)}{\partial t} + \Delta v(\mathbf x,t) &= f(\mathbf x,t), \\ v(\mathbf x,t)-\Delta u(\mathbf x,t) &= 0. \end{align*}
Here, the introduction of the additional variable was voluntary, and could be undone, but we don't want that of course. Rather, we end up with a differential-algebraic equation because the equations do not have a time derivative for \(v\).
Rather than show how to solve the trivial (linear) case above, let us instead consider the situation where we introduce another variable \(v\) that is related to \(u\) by the nonlinear relationship \(v=u^p\), \(p\ge 1\):
\begin{align*} u'(t) &= a v(t)^{1/p}, \\ 0 &= v(t) - u(t)^p. \end{align*}
We will impose initial conditions as
\begin{align*} u(0) &= 1 \\ v(0) &= 1. \end{align*}
The problem continues to have the solution \(u(t)=e^{at}\) with the auxiliary variable satisfying \(v(t)=[e^{at}]^p\). One would implement all of this using the following little program where you have to recall that
\[ F = \begin{pmatrix}u' -a v^{1/p} \\ -u^p + v \end{pmatrix} \]
and that the Jacobian we need to provide is
\[ J(\alpha) = \dfrac{\partial F}{\partial y} + \alpha \dfrac{\partial F}{\partial \dot y} = \begin{pmatrix} \alpha & -av^{1/p-1}/p \\ -pu^{p-1} & 1 \end{pmatrix} \]
All of this can be implemented using the following code:
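The original program is not reproduced here; the following is a minimal sketch along the same lines as the harmonic-oscillator example above, assuming a serial Vector<double>, a dense \(2\times 2\) Jacobian, and arbitrary values for \(a\) and \(p\):
```cpp
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/vector.h>
#include <deal.II/sundials/ida.h>

#include <cmath>

using namespace dealii;
using VectorType = Vector<double>;

int main()
{
  const double a = 1.0, p = 1.5; // parameter values chosen for the sketch

  SUNDIALS::IDA<VectorType> time_stepper;
  FullMatrix<double>        J(2, 2), Jinv(2, 2);

  time_stepper.reinit_vector = [](VectorType &v) { v.reinit(2); };

  // F = (u' - a v^{1/p}, -u^p + v), with y = (u, v)
  time_stepper.residual = [&](const double /*t*/,
                              const VectorType &y,
                              const VectorType &y_dot,
                              VectorType       &res) {
    res[0] = y_dot[0] - a * std::pow(y[1], 1. / p);
    res[1] = -std::pow(y[0], p) + y[1];
  };

  // J(alpha) = dF/dy + alpha dF/dy'
  time_stepper.setup_jacobian = [&](const double /*t*/,
                                    const VectorType &y,
                                    const VectorType & /*y_dot*/,
                                    const double alpha) {
    J(0, 0) = alpha;
    J(0, 1) = -a * std::pow(y[1], 1. / p - 1) / p;
    J(1, 0) = -p * std::pow(y[0], p - 1);
    J(1, 1) = 1.;
    Jinv.invert(J);
  };

  time_stepper.solve_with_jacobian =
    [&](const VectorType &rhs, VectorType &dst, const double /*tolerance*/) {
      Jinv.vmult(dst, rhs);
    };

  VectorType y(2), y_dot(2);
  y[0]     = 1.;    // u(0)  = 1
  y[1]     = 1.;    // v(0)  = u(0)^p = 1
  y_dot[0] = a;     // u'(0) = a e^{a*0} = a
  y_dot[1] = p * a; // v'(0) = p u^{p-1} u' = p a
  time_stepper.solve_dae(y, y_dot);
}
```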
Note that in this code, we not only provide initial conditions for \(u\) and \(v\), but also for \(u'\) and \(v'\). We can do this here because we know what the exact solution is.
Whereas in the previous section, we were able to provide not only initial values in the form of a vector for \(y(0)\), but also for \(y'(0)\), this is not a common situation. For example, for the Stokes equations mentioned above,
\begin{align*} \frac{\partial \mathbf u(\mathbf x,t)}{\partial t} - \nu \Delta \mathbf u(\mathbf x,t) + \nabla p(\mathbf x,t) &= \mathbf f(\mathbf x,t), \\ \nabla \cdot \mathbf u(\mathbf x,t) &= 0, \end{align*}
one generally might have an initial velocity field for \(\mathbf u(\mathbf x,0)\), but typically one does not have an initial pressure field \(p(\mathbf x,0)\) nor either of these variables' time derivatives at \(t=0\).
Fortunately, they can typically be computed via the relationship \(F(t,y,\dot y) = 0\). To illustrate how this is done, let us re-use the nonlinear example from the previous section:
\begin{align*} u'(t) &= a v(t)^{1/p}, \\ 0 &= v(t) - u(t)^p. \end{align*}
If we now impose initial conditions for both variables, for example
\begin{align*} u(0) &= 1 \\ v(0) &= 1, \end{align*}
then the only change necessary is to create the time stepper via
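The snippet is not shown on this page; continuing the sketch above, it would amount to setting ic_type on the AdditionalData object before constructing the solver:
```cpp
SUNDIALS::IDA<VectorType>::AdditionalData data;
data.ic_type = SUNDIALS::IDA<VectorType>::AdditionalData::use_y_diff;

SUNDIALS::IDA<VectorType> time_stepper(data);
```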
and then we can run the program with the following at the end:
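Again as a sketch (not the original snippet): only \(y(0)\) is set, and \(\dot y(0)\) is left for IDA to compute:
```cpp
VectorType y(2), y_dot(2);
y[0] = 1.; // u(0) = 1
y[1] = 1.; // v(0) = 1
// y_dot is left at zero: with ic_type = use_y_diff, IDA computes
// a consistent y_dot(0) before time stepping starts.
time_stepper.solve_dae(y, y_dot);
```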
Here, IDA first computes \(\dot y(0)\) before starting the time stepping process.
In many applications, however, one does not even have a complete set of initial conditions – e.g., in the Stokes equations above, one generally only has initial values for the velocity, but not the pressure. IDA can also compute these, but for that it needs to know which components of the solution vector have differential equations attached to them – i.e., for which components a time derivative appears in \(F(t,y,\dot y)\). This is not difficult to do – we only have to add the following block where a lambda function returns an IndexSet that describes which variables are "differential" (included in the index set) and which are not (not included in the index set):
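The block in question is not shown on this page; continuing the sketch above, where only the first component \(u\) is differential, it could read:
```cpp
time_stepper.differential_components = []() {
  IndexSet differential(2);
  differential.add_index(0); // u has a time derivative; v is algebraic
  return differential;
};
```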
With these modifications, IDA correctly computes the solutions \(u(t)\) and \(v(t)\).
A word of caution, however: All of this solving for components of \(y(0)\) and \(\dot y(0)\) costs time and accuracy. If you can provide initial conditions, you should; if you can't, they have to be numerically approximated and will be close but not exact. In the examples above, if all initial conditions \(y(0),\dot y(0)\) are provided, IDA computes the solution \(y(10)=e^{10}\approx 22,000\) to an absolute accuracy of around \(3\cdot 10^{-5}\) (i.e., to a relative tolerance of better than \(10^{-8}\)). If you only provide \(y(0)\) correctly, the absolute error is about twice as large, around \(6\cdot 10^{-5}\). If one also omits providing the initial value for the second component of \(y(0)\) (the non-differential component \(v(0)\)), the error goes up to \(5\cdot 10^{-4}\). That's not bad, but the trend is clear. In practice, one can control the accuracy of the required solves for initial conditions by setting the appropriate flags in the AdditionalData object passed to the constructor.
SUNDIALS::IDA<VectorType>::IDA(const AdditionalData &data = AdditionalData())
Constructor. It is possible to fine-tune the SUNDIALS IDA solver by passing an AdditionalData() object that sets all of the solver parameters.
IDA is a Differential-Algebraic solver. As such, it requires initial conditions also for the first order derivatives. If you do not provide consistent initial conditions (i.e., conditions for which \(F(\dot y(0), y(0), 0) = 0\)), you can ask SUNDIALS to compute initial conditions for you by using the ic_type parameter at construction time.
You have three options: none (do not attempt to make the initial conditions consistent), use_y_diff (compute the algebraic components of \(y\) and the differential components of \(\dot y\), given the differential components of \(y\)), and use_y_dot (compute all components of \(y\), given \(\dot y\)).
By default, this class assumes that all components are differential, and that you want to solve a standard ODE. In this case, the initial component type is set to use_y_diff, so that \(\dot y\) at time t=initial_time is computed by solving the nonlinear problem \(F(\dot y, y(t_0), t_0) = 0\) in the variable \(\dot y\).
Notice that a Newton solver is used for this computation. The Newton solver parameters can be tweaked by acting on ic_alpha and ic_max_iter.
If you reset the solver at some point, you may want to select a different computation for the initial conditions after reset. Say, for example, that you have refined a grid, and after transferring the solution to the new grid, the initial conditions are no longer consistent. Then you can choose how these are made consistent via the reset_type parameter, which offers the same three options used for the initial conditions.
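For example, to ask that after each reset the algebraic components be recomputed from the differential ones, one could set (a sketch; reset_type is the AdditionalData member controlling this behavior):
```cpp
SUNDIALS::IDA<VectorType>::AdditionalData data;
data.reset_type = SUNDIALS::IDA<VectorType>::AdditionalData::use_y_diff;
```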
Parameters
    data    IDA configuration data
SUNDIALS::IDA<VectorType>::IDA(const AdditionalData &data, const MPI_Comm mpi_comm)
SUNDIALS::IDA<VectorType>::~IDA()
unsigned int SUNDIALS::IDA<VectorType>::solve_dae(VectorType &solution, VectorType &solution_dot)
void SUNDIALS::IDA<VectorType>::reset(const double t, const double h, VectorType &y, VectorType &yp)
Clear internal memory and start with clean objects. This function is called when the simulation starts and when the user returns true from a call to solver_should_restart().
By default solver_should_restart() returns false. If the user needs to implement, for example, local adaptivity in space, he or she may assign a different function to solver_should_restart() that performs all mesh changes, transfers the solution and the solution dot to the new mesh, and returns true.
During reset(), both y and yp are checked for consistency, and according to what was specified as ic_type (if t==initial_time) or reset_type (if t>initial_time), yp, y, or both are modified to obtain a consistent set of initial data.
Parameters
    [in]     t    The new starting time
    [in]     h    The new (tentative) starting time step
    [in,out] y    The new (tentative) initial solution
    [in,out] yp   The new (tentative) initial solution_dot
std::function<void(VectorType &)> SUNDIALS::IDA< VectorType >::reinit_vector |
std::function<void(const double t, const VectorType &y, const VectorType &y_dot, VectorType &res)> SUNDIALS::IDA< VectorType >::residual |
Compute residual. Return \(F(t, y, \dot y)\).
std::function<void(const double t, const VectorType &y, const VectorType &y_dot, const double alpha)> SUNDIALS::IDA< VectorType >::setup_jacobian |
Compute Jacobian. This function is called by IDA any time a Jacobian update is required. The user should compute the Jacobian (or update all the variables that allow the application of the Jacobian). This function is called by IDA once, before any call to solve_with_jacobian().
The Jacobian \(J\) should be a (possibly inexact) computation of
\[ J=\dfrac{\partial G}{\partial y} = \dfrac{\partial F}{\partial y} + \alpha \dfrac{\partial F}{\partial \dot y}. \]
If the user uses a matrix based computation of the Jacobian, then this is the right place where an assembly routine should be called to assemble both a matrix and a preconditioner for the Jacobian system. Subsequent calls (possibly more than one) to solve_with_jacobian() can assume that this function has been called at least once.
Notice that no assumption is made by this interface on what the user should do in this function. IDA only assumes that after a call to setup_jacobian() it is possible to call solve_with_jacobian() to obtain a solution \(x\) to the system \(J x = b\).
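As an illustration of the matrix-based case, a sketch could look like the following. It assumes a SparseMatrix<double> named jacobian_matrix, a hypothetical user function assemble_jacobian() that fills it with \(\partial F/\partial y + \alpha\,\partial F/\partial \dot y\), and the deal.II SparseDirectUMFPACK direct solver:
```cpp
SparseMatrix<double> jacobian_matrix;
SparseDirectUMFPACK  jacobian_factorization;

time_stepper.setup_jacobian = [&](const double      t,
                                  const VectorType &y,
                                  const VectorType &y_dot,
                                  const double      alpha) {
  // Hypothetical user routine that assembles
  //   jacobian_matrix = dF/dy + alpha dF/dy_dot
  assemble_jacobian(t, y, y_dot, alpha, jacobian_matrix);

  // Factorize once here; solve_with_jacobian() then only performs
  // back-substitutions with the stored factorization.
  jacobian_factorization.initialize(jacobian_matrix);
};
```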
std::function< void(const VectorType &rhs, VectorType &dst, const double tolerance)> SUNDIALS::IDA< VectorType >::solve_with_jacobian |
Solve the Jacobian linear system up to a specified tolerance. This function will be called by IDA (possibly several times) after setup_jacobian() has been called at least once. IDA tries to do its best to call setup_jacobian() the minimum number of times. If convergence can be achieved without updating the Jacobian, then IDA does not call setup_jacobian() again. If, on the contrary, internal IDA convergence tests fail, then IDA calls setup_jacobian() again with updated vectors and coefficients so that successive calls to solve_with_jacobian() lead to better convergence in the Newton process.
The Jacobian \(J\) should be (an approximation of) the system Jacobian
\[ J=\dfrac{\partial G}{\partial y} = \dfrac{\partial F}{\partial y} + \alpha \dfrac{\partial F}{\partial \dot y}. \]
Arguments to the function are:
Parameters
    [in]  rhs        The system right hand side to solve for.
    [out] dst        The solution of \(J^{-1}\) applied to rhs.
    [in]  tolerance  The tolerance with which to solve the linear system of equations.
A call to this function should store in dst the result of \(J^{-1}\) applied to rhs, i.e., the solution of the linear system J*dst = rhs. It is the user's responsibility to set up proper solvers and preconditioners either inside this function, or already within the setup_jacobian() function. (The latter is, for example, what the step-77 program does: all expensive operations happen in setup_jacobian(), given that that function is called far less often than the current one.)
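Alternatively, one can solve iteratively and honor the requested tolerance. A sketch using SolverGMRES, assuming jacobian_matrix and a suitable preconditioner object were prepared in setup_jacobian():
```cpp
time_stepper.solve_with_jacobian =
  [&](const VectorType &rhs, VectorType &dst, const double tolerance) {
    SolverControl           solver_control(1000, tolerance);
    SolverGMRES<VectorType> solver(solver_control);
    solver.solve(jacobian_matrix, dst, rhs, preconditioner);
  };
```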
std::function<void(const double t, const VectorType &sol, const VectorType &sol_dot, const unsigned int step_number)> SUNDIALS::IDA< VectorType >::output_step |
Process solution. This function is called by IDA at fixed time steps, every output_period seconds, and it is passed a polynomial interpolation of the solution and of its time derivative, computed using the current BDF order and the (internally stored) previously computed solution steps.
Notice that it is quite possible that internally IDA computes a time step which is much larger than the output_period step, and therefore calls this function consecutively several times by simply performing all intermediate interpolations. There is no relationship between how many times this function is called and how many time steps have actually been computed.
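A minimal monitoring callback (a sketch, assuming <iostream> is included) could be:
```cpp
time_stepper.output_step = [](const double       t,
                              const VectorType  &sol,
                              const VectorType  &sol_dot,
                              const unsigned int step_number) {
  // Print the time and the first solution component at each output time.
  std::cout << "step " << step_number << ", t = " << t
            << ", y[0] = " << sol[0] << std::endl;
};
```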
std::function<bool(const double t, VectorType &sol, VectorType &sol_dot)> SUNDIALS::IDA< VectorType >::solver_should_restart |
Evaluate whether the solver should be restarted (for example because the number of degrees of freedom has changed).
This function is supposed to perform all operations that are necessary in sol and sol_dot to make sure that the resulting vectors are consistent, and of the correct final size.
For example, one may decide that a local refinement is necessary at time t. This function should then return true, and change the dimension of both sol and sol_dot to reflect the new dimension. Since IDA does not know about the new dimension, an internal reset is necessary.
The default implementation simply returns false, i.e., no restart is performed during the evolution.
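A sketch of such a callback (the refinement criterion and the new size are placeholders for whatever the application does):
```cpp
time_stepper.solver_should_restart =
  [&](const double t, VectorType &sol, VectorType &sol_dot) -> bool {
    const bool mesh_needs_refinement = false; // placeholder criterion
    if (mesh_needs_refinement)
      {
        // ... refine the mesh, interpolate sol and sol_dot onto it,
        // then resize both vectors to the new number of unknowns ...
        const unsigned int new_size = sol.size(); // placeholder
        sol.reinit(new_size);
        sol_dot.reinit(new_size);
        return true; // request a restart so IDA is reset with the resized vectors
      }
    return false;
  };
```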
std::function<IndexSet()> SUNDIALS::IDA< VectorType >::differential_components |
Return an index set containing the differential components. Implementation of this function is optional. The default is to return a complete index set. If your equation is also algebraic (i.e., it contains algebraic constraints, or Lagrange multipliers), you should overwrite this function in order to return only the differential components of your system.
When running in parallel, every process will call this function independently, and synchronization will happen at the end of the initialization setup to communicate what components are local. Make sure you only return the locally owned (or locally relevant) components, in order to minimize communication between processes.
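For instance, in a parallel Stokes-like problem one might return only the locally owned velocity indices (a sketch; dof_handler and velocity_dofs are assumed objects, the latter e.g. obtained via DoFTools):
```cpp
time_stepper.differential_components = [&]() {
  // Intersect the set of all velocity dofs with the locally owned ones,
  // so that each process only reports its own differential components.
  return dof_handler.locally_owned_dofs() & velocity_dofs;
};
```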
std::function<VectorType &()> SUNDIALS::IDA< VectorType >::get_local_tolerances |