The 'Laplace equation coupled to an external simulation program' code gallery program

This program was contributed by David Schneider <dav.schneider@tum.de>.
It comes without any warranty or support by its authors or the authors of deal.II.

This program is part of the deal.II code gallery.

Annotated version of README.md

Laplace equation coupled to an external simulation program

Overview

preCICE allows one to couple deal.II to external simulation software such as OpenFOAM, SU2, or CalculiX. To keep the dependencies of this example minimal, we couple deal.II to an external C++ program that provides a time-varying boundary condition. The deal.II code consists mainly of the step-4 tutorial program, in which a simple Laplace problem is solved.

Coupling with preCICE is usually carried out along surfaces in order to apply a Dirichlet-Neumann coupling between two domains (volume coupling is also possible). For the sake of simplicity, we couple the external C++ program here in a unidirectional fashion to one side of our quadrilateral domain. The external program generates a parabolic boundary profile with time-varying amplitude, and the resulting boundary values are used in the Laplace solver as a Dirichlet boundary condition.
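Concretely (matching the implementation in fancy_boundary_condition.cc annotated below), the value imposed at the i-th of the N interface vertices is

\begin{align*} g_i(t) = -\left(\frac{t}{T}-\frac{1}{2}\right)\, i\, \bigl(i-(N-1)\bigr) + 2, \qquad i = 0,\dots,N-1, \end{align*}

where T is the end time. The constant offset of 2 matches the fixed Dirichlet value x^2+y^2=2 at the two corners (1,\pm 1) of the coupling boundary, so the combined boundary data is continuous.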

Time discretization

Coupled simulations mostly deal with time-dependent problems. Hence, we make the stationary Laplace problem from step-4 time dependent:

\begin{align*} \frac{\partial u}{\partial t}-\Delta u &= f \qquad\qquad & \text{in}\ \Omega, \\ u &= x^2+y^2 \qquad\qquad & \text{on}\ \partial\Omega_s, \\ u &= g(t) \qquad\qquad & \text{on}\ \partial\Omega_c. \end{align*}

with the fixed Dirichlet boundary \partial\Omega_s, the coupling boundary \partial\Omega_c, and the time-dependent coupling data g(t).

Consequently, the problem is discretized in time by a first-order backward Euler method, resulting in

\begin{align*} \frac{u^{n+1}-u^n}{\Delta t}-\Delta u^{n+1} &= f \qquad\qquad & \text{in}\ \Omega, \end{align*}

at the next time level n+1.
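In matrix form, each backward Euler step therefore amounts to solving the linear system

\begin{align*} (M + \Delta t\, A)\, U^{n+1} = M\, U^{n} + \Delta t\, F^{n+1}, \end{align*}

where M is the mass matrix, A the stiffness matrix of the Laplacian, and F^{n+1} the discretized right-hand side. This is exactly the system assembled in assemble_system() of the annotated solver below.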

Requirements

deal.II, version 9.2 or greater. Older versions might work as well, but have not been tested.

preCICE, version 3.0 or greater (the program uses the preCICE v3 Participant API). Have a look at the preCICE documentation for an installation guide.

Compiling and running

Similar to the example programs, run

cmake -DDEAL_II_DIR=/path/to/deal.II -Dprecice_DIR=/path/to/precice .

in this directory to configure the problem. The explicit specification of the library locations can be omitted if the libraries are installed globally or can be found through environment variables. You can then switch between debug and release mode by calling either

make debug

or

make release

Either command generates two executables: one for the coupled_laplace_problem and one for the fancy_boundary_condition participant.

./coupled_laplace_problem

executes the coupled_laplace_problem. In order to start the coupled simulation, execute

./fancy_boundary_condition

in the same directory from another terminal window.

preCICE and the deal.II solver create several log files during the simulation. In order to remove all result-associated files and clean up the simulation directory, execute the Allclean script.

Results

By default, the results of each time step are saved as vtk files, which can be visualized, for example, with ParaView. You should get something like the animation shown in the pictures accompanying this program.

Further reading

A complete overview of the preCICE project can be found on the preCICE webpage. The source code of preCICE is hosted on GitHub. For more coupled deal.II codes, have a look at the deal.II adapter repository. The preCICE tutorials include a fluid-structure interaction example that couples OpenFOAM (fluid dynamics) to deal.II (solid mechanics). Further background on the methodology is given in the preCICE reference paper.

Annotated version of coupled_laplace_problem.cc

  /* -----------------------------------------------------------------------------
   *
   * SPDX-License-Identifier: LGPL-2.1-or-later
   * Copyright (C) 2020 by David Schneider
   * Copyright (C) 2020 by Benjamin Uekermann
   *
   * This file is part of the deal.II code gallery.
   *
   * -----------------------------------------------------------------------------
   */
 

The included deal.II header files are the same as in the other example programs:

  #include <deal.II/base/function.h>
  #include <deal.II/base/logstream.h>
  #include <deal.II/base/quadrature_lib.h>
 
  #include <deal.II/dofs/dof_accessor.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
 
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
 
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/tria_accessor.h>
  #include <deal.II/grid/tria_iterator.h>
 
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/precondition.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/vector.h>
 
  #include <deal.II/numerics/data_out.h>
  #include <deal.II/numerics/matrix_tools.h>
  #include <deal.II/numerics/vector_tools.h>

In addition to the deal.II header files, we include the preCICE API in order to get access to preCICE-specific functionality:

  #include <precice/precice.hpp>
 
  #include <fstream>
  #include <iostream>
 
  using namespace dealii;
 

Configuration parameters

We set up a simple hard-coded struct containing all names we need for the external coupling: the name of the preCICE configuration file, the name of the simulation participant, the name of the coupling mesh, and the name of the exchanged data. The last three names also appear in the preCICE configuration file. For real application cases, these names are better handled by a parameter file.

  struct CouplingParameters
  {
  const std::string config_file = "precice-config.xml";
  const std::string participant_name = "laplace-solver";
  const std::string mesh_name = "dealii-mesh";
  const std::string read_data_name = "boundary-data";
  };
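For orientation, the sketch below shows roughly what a matching precice-config.xml could look like. This is an illustrative assumption rather than the file shipped with the program (which is authoritative), and the element names follow the preCICE v3 schema; only the participant, mesh, and data names are taken directly from the two programs.

  <?xml version="1.0"?>
  <precice-configuration>
    <!-- Scalar coupling data exchanged between the two participants -->
    <data:scalar name="boundary-data" />

    <mesh name="boundary-mesh" dimensions="2">
      <use-data name="boundary-data" />
    </mesh>
    <mesh name="dealii-mesh" dimensions="2">
      <use-data name="boundary-data" />
    </mesh>

    <participant name="boundary-participant">
      <provide-mesh name="boundary-mesh" />
      <write-data name="boundary-data" mesh="boundary-mesh" />
    </participant>
    <participant name="laplace-solver">
      <receive-mesh name="boundary-mesh" from="boundary-participant" />
      <provide-mesh name="dealii-mesh" />
      <!-- Map the written data from the boundary mesh onto the deal.II mesh -->
      <mapping:nearest-neighbor direction="read" from="boundary-mesh"
                                to="dealii-mesh" constraint="consistent" />
      <read-data name="boundary-data" mesh="dealii-mesh" />
    </participant>

    <m2n:sockets acceptor="boundary-participant" connector="laplace-solver" />

    <coupling-scheme:serial-explicit>
      <participants first="boundary-participant" second="laplace-solver" />
      <max-time value="1.0" />
      <time-window-size value="0.1" />
      <exchange data="boundary-data" mesh="boundary-mesh"
                from="boundary-participant" to="laplace-solver" />
    </coupling-scheme:serial-explicit>
  </precice-configuration>

Everything apart from those names (mapping type, m2n communication, coupling scheme, end time, and time-window size) is a plausible but unverified choice; the values 1.0 and 0.1 mirror end_time in fancy_boundary_condition.cc and solver_delta_t in coupled_laplace_problem.cc.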
 
 

The Adapter class

The Adapter class handles all functionality needed to couple the deal.II solver code to other solvers with preCICE, i.e., data structures are set up and all relevant information is passed to preCICE.

  template <int dim, typename ParameterClass>
  class Adapter
  {
  public:
  Adapter(const ParameterClass & parameters,
  const types::boundary_id dealii_boundary_interface_id);
 
  void
  initialize(const DoFHandler<dim> & dof_handler,
  std::map<types::global_dof_index, double> &boundary_data,
  const MappingQGeneric<dim> & mapping);
 
  void
  read_data(double relative_read_time,
  std::map<types::global_dof_index, double> &boundary_data);
 
  void
  advance(const double computed_timestep_length);
 

Public preCICE solver interface

  precice::Participant precice;
 

Boundary ID of the deal.II triangulation, associated with the coupling interface. The variable is defined in the constructor of this class and is intentionally public so that it can be used during grid generation and system assembly. The only thing one needs to make sure of is that this ID is unique for a particular triangulation.

  const unsigned int dealii_boundary_interface_id;
 
  private:

preCICE-related initializations. These variables are specified in and read from a parameter file, which in this simple tutorial program is the CouplingParameters struct already introduced above.

  const std::string mesh_name;
  const std::string read_data_name;
 

The node IDs are filled by preCICE during initialization and are associated with the interface vertices we pass to preCICE

  int n_interface_nodes;
 

DoF IndexSet, containing relevant coupling DoF indices at the coupling boundary

  IndexSet coupling_dofs;
 

Data containers that are passed to preCICE in the appropriate preCICE-specific format

  std::vector<int> interface_nodes_ids;
  std::vector<double> read_data_buffer;
 

The MPI rank and the total number of MPI ranks are required by preCICE when the Participant is created. Since this tutorial runs only in serial mode, we define the variables manually in this class instead of using a regular MPI interface.

  static constexpr int this_mpi_process = 0;
  static constexpr int n_mpi_processes = 1;
 

Function to transform the data obtained from preCICE into an appropriate map for Dirichlet boundary conditions

  void
  format_precice_to_dealii(
  std::map<types::global_dof_index, double> &boundary_data) const;
  };
 
 
 

In the constructor of the Adapter class, we set up the preCICE Participant. We need to tell preCICE our name as a participant of the simulation and the name of the preCICE configuration file. Both have already been specified in the CouplingParameters struct above. Thus, we pass the struct directly to the constructor and read out all relevant information. As a second argument, we need to specify the boundary ID of our triangulation that is associated with the coupling interface.

  template <int dim, typename ParameterClass>
  Adapter<dim, ParameterClass>::Adapter(
  const ParameterClass & parameters,
  const types::boundary_id deal_boundary_interface_id)
  : precice(parameters.participant_name,
  parameters.config_file,
  this_mpi_process,
  n_mpi_processes)
  , dealii_boundary_interface_id(deal_boundary_interface_id)
  , mesh_name(parameters.mesh_name)
  , read_data_name(parameters.read_data_name)
  {}
 
 
 

This function initializes preCICE (e.g., establishes communication channels and allocates memory) and passes all relevant data to preCICE. For surface coupling, the relevant data are in particular the locations of the data points at the associated interface(s). The boundary_data argument is an empty map that is filled by preCICE, i.e., with information from the other participant. Throughout the system assembly, the map can then directly be used to apply the Dirichlet boundary conditions in the linear system.

  template <int dim, typename ParameterClass>
  void
  Adapter<dim, ParameterClass>::initialize(
  const DoFHandler<dim> & dof_handler,
  std::map<types::global_dof_index, double> &boundary_data,
  const MappingQGeneric<dim> & mapping)
  {
  Assert(dim > 1, ExcNotImplemented());
  AssertDimension(dim, precice.getMeshDimensions(mesh_name));
 

Afterwards, we extract the number of interface nodes and the coupling DoFs at the coupling interface from our deal.II solver via extract_boundary_dofs()

  std::set<types::boundary_id> couplingBoundary;
  couplingBoundary.insert(dealii_boundary_interface_id);
 

The (default) ComponentMask() would be important if we dealt with a vector-valued problem, because vector-valued problems have a DoF for each component.

  coupling_dofs = DoFTools::extract_boundary_dofs(dof_handler,
  ComponentMask(),
  couplingBoundary);
 

The coupling DoFs are used to set up the boundary_data map. In the end, each DoF is associated with its respective boundary value.

  for (const auto i : coupling_dofs)
  boundary_data[i] = 0.0;
 

Since we deal with a scalar problem, the number of DoFs at the particular interface corresponds to the number of interface nodes.

  n_interface_nodes = coupling_dofs.n_elements();
 
  std::cout << "\t Number of coupling nodes: " << n_interface_nodes
  << std::endl;
 

Now, we need to tell preCICE the coordinates of the interface nodes. Hence, we set up a std::vector to pass the node positions to preCICE. Each node is specified only once.

  std::vector<double> interface_nodes_positions;
  interface_nodes_positions.reserve(dim * n_interface_nodes);
 

Set up the appropriate size of the data container needed for data exchange. Here, we deal with a scalar problem, so that only a scalar value is read/written per interface node.

  read_data_buffer.resize(n_interface_nodes);

The IDs are filled by preCICE during initialization.

  interface_nodes_ids.resize(n_interface_nodes);
 

The node location is obtained using map_dofs_to_support_points().

  std::map<types::global_dof_index, Point<dim>> support_points;
  DoFTools::map_dofs_to_support_points(mapping, dof_handler, support_points);
 

support_points now contains the coordinates of all DoFs. In the next step, the relevant coordinates are extracted using the coupling_dofs IndexSet.

  for (const auto element : coupling_dofs)
  for (int i = 0; i < dim; ++i)
  interface_nodes_positions.push_back(support_points[element][i]);
 

Now we have all information to define the coupling mesh and pass the information to preCICE.

  precice.setMeshVertices(mesh_name,
  interface_nodes_positions,
  interface_nodes_ids);
 

Then we initialize preCICE by calling the API function initialize()

  precice.initialize();
  }
 
  template <int dim, typename ParameterClass>
  void
  Adapter<dim, ParameterClass>::read_data(
  double relative_read_time,
  std::map<types::global_dof_index, double> &boundary_data)
  {

Here, we obtain data, i.e., the boundary condition, from the other participant. We already have the vertex IDs and only need to convert the obtained data into the deal.II-compatible 'boundary map', which is done in the format_precice_to_dealii function.

  precice.readData(mesh_name,
  read_data_name,
  interface_nodes_ids,
  relative_read_time,
  read_data_buffer);
 

After receiving the coupling data in read_data_buffer, we convert it to the std::map boundary_data, which is needed later to apply the Dirichlet boundary conditions

  format_precice_to_dealii(boundary_data);
  }
 
 

The function advance() is called in the main time loop after the computation in each time step. Here, preCICE exchanges the coupling data internally and computes mappings as well as acceleration methods.

  template <int dim, typename ParameterClass>
  void
  Adapter<dim, ParameterClass>::advance(const double computed_timestep_length)
  {

We specify the computed time-step length and pass it to preCICE.

  precice.advance(computed_timestep_length);
  }
 
 
 

This function takes the std::vector read_data_buffer filled by preCICE and inserts its values at the correct positions in the boundary map used throughout our deal.II solver for Dirichlet boundary conditions. The function is only used internally in the Adapter class and is not called in the solver itself. The order in which preCICE sorts the data in the read_data_buffer vector is exactly the order in which the vertex coordinates were initially passed.

  template <int dim, typename ParameterClass>
  void
  Adapter<dim, ParameterClass>::format_precice_to_dealii(
  std::map<types::global_dof_index, double> &boundary_data) const
  {

We already stored the coupling DoF indices in the boundary_data map, so that we can simply iterate over all keys in the map.

  auto dof_component = boundary_data.begin();
  for (int i = 0; i < n_interface_nodes; ++i)
  {
  AssertIndexRange(i, read_data_buffer.size());
  boundary_data[dof_component->first] = read_data_buffer[i];
  ++dof_component;
  }
  }
 
 

The solver class is essentially the same as in step-4. We only extend the stationary problem to a time-dependent problem and introduce the coupling. Comments are added at the points where the workflow differs from step-4.

  template <int dim>
  class CoupledLaplaceProblem
  {
  public:
  CoupledLaplaceProblem();
 
  void
  run();
 
  private:
  void
  make_grid();
  void
  setup_system();
  void
  assemble_system();
  void
  solve();
  void
  output_results() const;
 
  Triangulation<dim> triangulation;
  FE_Q<dim> fe;
  DoFHandler<dim> dof_handler;
  MappingQ1<dim> mapping;
 
  SparsityPattern sparsity_pattern;
  SparseMatrix<double> system_matrix;
 
  Vector<double> solution;
  Vector<double> old_solution;
  Vector<double> system_rhs;
 

We allocate all structures required for the preCICE coupling: the map is used to apply Dirichlet boundary conditions and is filled in the Adapter class with data from the other participant. The CouplingParameters struct holds the preCICE configuration as described above. The interface boundary ID is the ID associated with our coupling interface and needs to be specified when we set up the Adapter object, because we pass it directly to the constructor of this class.

  std::map<types::global_dof_index, double> boundary_data;
  CouplingParameters parameters;
  const types::boundary_id interface_boundary_id;
  Adapter<dim, CouplingParameters> adapter;
 

The time-step size delta_t is the actual time-step size used for all computations. The preCICE time-step size precice_delta_t is obtained from preCICE in order to ensure synchronization at all coupling time steps. The solver time-step size solver_delta_t is the desired time-step size of our individual solver; in more sophisticated computations, it might be determined adaptively. The time_step variable simply counts the time steps.

  double delta_t;
  double precice_delta_t;
  const double solver_delta_t = 0.1;
  unsigned int time_step = 0;
  };
 
 
 
  template <int dim>
  class RightHandSide : public Function<dim>
  {
  public:
  virtual double
  value(const Point<dim> &p, const unsigned int component = 0) const override;
  };
 
 
 
  template <int dim>
  class BoundaryValues : public Function<dim>
  {
  public:
  virtual double
  value(const Point<dim> &p, const unsigned int component = 0) const override;
  };
 
  template <int dim>
  double
  RightHandSide<dim>::value(const Point<dim> &p,
  const unsigned int /*component*/) const
  {
  double return_value = 0.0;
  for (unsigned int i = 0; i < dim; ++i)
  return_value += 4.0 * std::pow(p(i), 4.0);
 
  return return_value;
  }
 
 
  template <int dim>
  double
  BoundaryValues<dim>::value(const Point<dim> &p,
  const unsigned int /*component*/) const
  {
  return p.square();
  }
 
 
 
  template <int dim>
  CoupledLaplaceProblem<dim>::CoupledLaplaceProblem()
  : fe(1)
  , dof_handler(triangulation)
  , interface_boundary_id(1)
  , adapter(parameters, interface_boundary_id)
  {}
 
 
  template <int dim>
  void
  CoupledLaplaceProblem<dim>::make_grid()
  {
  GridGenerator::hyper_cube(triangulation, -1, 1);
  triangulation.refine_global(4);
 
  for (const auto &cell : triangulation.active_cell_iterators())
  for (const auto &face : cell->face_iterators())
  {

We choose the boundary in positive x direction for the interface coupling.

  if (face->at_boundary() && (face->center()[0] == 1))
  face->set_boundary_id(interface_boundary_id);
  }
 
  std::cout << " Number of active cells: " << triangulation.n_active_cells()
  << std::endl
  << " Total number of cells: " << triangulation.n_cells()
  << std::endl;
  }
 
 
  template <int dim>
  void
  CoupledLaplaceProblem<dim>::setup_system()
  {
  dof_handler.distribute_dofs(fe);
 
  std::cout << " Number of degrees of freedom: " << dof_handler.n_dofs()
  << std::endl;
 
  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler, dsp);
  sparsity_pattern.copy_from(dsp);
 
  system_matrix.reinit(sparsity_pattern);
 
  solution.reinit(dof_handler.n_dofs());
  old_solution.reinit(dof_handler.n_dofs());
  system_rhs.reinit(dof_handler.n_dofs());
  }
 
 
 
  template <int dim>
  void
  CoupledLaplaceProblem<dim>::assemble_system()
  {

Reset global structures

  system_rhs = 0;
  system_matrix = 0;

Update old solution values

  old_solution = solution;
 
  QGauss<dim> quadrature_formula(fe.degree + 1);
 
  RightHandSide<dim> right_hand_side;
 
  FEValues<dim> fe_values(fe,
  quadrature_formula,
  update_values | update_gradients |
  update_quadrature_points | update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
 
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

The solution values from the previous time step are stored at each quadrature point

  std::vector<double> local_values_old_solution(fe_values.n_quadrature_points);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  {
  fe_values.reinit(cell);
  cell_matrix = 0;
  cell_rhs = 0;

Get the local values from the ‘fe_values’ object

  fe_values.get_function_values(old_solution, local_values_old_solution);
 

Due to the time discretization, the system matrix additionally contains a mass matrix. The right-hand side has contributions from the old solution values.

  for (const unsigned int q_index : fe_values.quadrature_point_indices())
  for (const unsigned int i : fe_values.dof_indices())
  {
  for (const unsigned int j : fe_values.dof_indices())
  cell_matrix(i, j) +=
  ((fe_values.shape_value(i, q_index) * // phi_i(x_q)
  fe_values.shape_value(j, q_index)) + // phi_j(x_q)
  (delta_t * // delta t
  fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
  fe_values.shape_grad(j, q_index))) * // grad phi_j(x_q)
  fe_values.JxW(q_index); // dx
 
  const auto x_q = fe_values.quadrature_point(q_index);
  const auto &local_value = local_values_old_solution[q_index];
  cell_rhs(i) += ((delta_t * // delta t
  fe_values.shape_value(i, q_index) * // phi_i(x_q)
  right_hand_side.value(x_q)) + // f(x_q)
  fe_values.shape_value(i, q_index) *
  local_value) * // phi_i(x_q)*val
  fe_values.JxW(q_index); // dx
  }
 

Copy local to global

  cell->get_dof_indices(local_dof_indices);
  for (const unsigned int i : fe_values.dof_indices())
  {
  for (const unsigned int j : fe_values.dof_indices())
  system_matrix.add(local_dof_indices[i],
  local_dof_indices[j],
  cell_matrix(i, j));
 
  system_rhs(local_dof_indices[i]) += cell_rhs(i);
  }
  }
  {

At first, we apply the Dirichlet boundary condition from step-4, as usual.

  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(mapping,
  dof_handler,
  0,
  BoundaryValues<dim>(),
  boundary_values);
  MatrixTools::apply_boundary_values(boundary_values,
  system_matrix,
  solution,
  system_rhs);
  }
  {

Afterwards, we apply the coupling boundary condition. The boundary_data has already been filled by preCICE.

  MatrixTools::apply_boundary_values(boundary_data,
  system_matrix,
  solution,
  system_rhs);
  }
  }
 
 
 
  template <int dim>
  void
  CoupledLaplaceProblem<dim>::solve()
  {
  SolverControl solver_control(1000, 1e-12);
  SolverCG<Vector<double>> solver(solver_control);
  solver.solve(system_matrix, solution, system_rhs, PreconditionIdentity());
 
  std::cout << " " << solver_control.last_step()
  << " CG iterations needed to obtain convergence." << std::endl;
  }
 
 
 
  template <int dim>
  void
  CoupledLaplaceProblem<dim>::output_results() const
  {
  DataOut<dim> data_out;
 
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "solution");
 
  data_out.build_patches(mapping);
 
  std::ofstream output("solution-" + std::to_string(time_step) + ".vtk");
  data_out.write_vtk(output);
  }
 
 
 
  template <int dim>
  void
  CoupledLaplaceProblem<dim>::run()
  {
  std::cout << "Solving problem in " << dim << " space dimensions."
  << std::endl;
 
  make_grid();
  setup_system();
 

After we have set up the system, we initialize preCICE using the functionality of the Adapter.

  adapter.initialize(dof_handler, boundary_data, mapping);
 

preCICE steers the coupled simulation: isCouplingOngoing is used to synchronize the end of the simulation with the coupling partner.

  while (adapter.precice.isCouplingOngoing())
  {

The time-step number is solely used to generate unique output file names

  ++time_step;

preCICE returns the maximum admissible time-step size, which needs to be compared to our desired solver time-step size.

  precice_delta_t = adapter.precice.getMaxTimeStepSize();
  delta_t = std::min(precice_delta_t, solver_delta_t);

Next, we read data. Since we use a fully implicit backward Euler scheme, we want the data to be associated with the end of the current time step, i.e., at relative read time delta_t. The time-interpolation methods in preCICE allow reading data at any point within the time window, provided the coupling scheme makes it available.

  adapter.read_data(delta_t, boundary_data);
 

In the time loop, we assemble the coupled system and solve it as usual.

  assemble_system();
  solve();
 

After we have solved the system, we advance the coupling to the next time level. In a bidirectional coupled simulation, we would additionally pass our computed data to preCICE here.

  adapter.advance(delta_t);
 

Write an output file if the time step is complete. In the case of implicit coupling, where individual time steps are computed more than once, the function isTimeWindowComplete prevents unnecessary result writing. For this simple tutorial configuration (explicit coupling), the function always returns true.

  if (adapter.precice.isTimeWindowComplete())
  output_results();
  }
  }
 
 
 
  int
  main()
  {
  CoupledLaplaceProblem<2> laplace_problem;
  laplace_problem.run();
 
  return 0;
  }

Annotated version of fancy_boundary_condition.cc

  /* -----------------------------------------------------------------------------
   *
   * SPDX-License-Identifier: LGPL-2.1-or-later
   * Copyright (C) 2020 by David Schneider
   * Copyright (C) 2020 by Benjamin Uekermann
   *
   * This file is part of the deal.II code gallery.
   *
   * -----------------------------------------------------------------------------
   */
 

This program does not use any deal.II functionality and depends only on preCICE and the standard libraries.

  #include <precice/precice.hpp>
 
  #include <iostream>
  #include <sstream>
  #include <vector>
 

The program computes a time-varying parabolic boundary condition, which is passed to preCICE and serves as Dirichlet boundary condition for the other coupling participant.

Function to generate boundary values in each time step

  void
  define_boundary_values(std::vector<double> &boundary_data,
  const double time,
  const double end_time)
  {

Scale the current time value

  const double relative_time = time / end_time;

Define the amplitude. Values run from -0.5 to 0.5

  const double amplitude = (relative_time - 0.5);

Specify the actual data we want to pass to the other participant. Here, we choose a parabola with boundary values of 2 in order to enforce continuity with the adjacent boundaries.

  const double n_elements = boundary_data.size();
  const double right_zero = boundary_data.size() - 1;
  const double left_zero = 0;
  const double offset = 2;
  for (unsigned int i = 0; i < n_elements; ++i)
  boundary_data[i] =
  -amplitude * ((i - left_zero) * (i - right_zero)) + offset;
  }
 
 
  int
  main()
  {
  std::cout << "Boundary participant: starting... \n";
 

Configuration

  const std::string configFileName("precice-config.xml");
  const std::string solverName("boundary-participant");
  const std::string meshName("boundary-mesh");
  const std::string dataWriteName("boundary-data");
 

Adjust to MPI rank and size for parallel computation

  const int commRank = 0;
  const int commSize = 1;
 
  precice::Participant precice(solverName, configFileName, commRank, commSize);
 
  const int dimensions = precice.getMeshDimensions(meshName);
  const int numberOfVertices = 6;
 

Set up data structures

  std::vector<double> writeData(numberOfVertices);
  std::vector<int> vertexIDs(numberOfVertices);
  std::vector<double> vertices(numberOfVertices * dimensions);
 

Define a boundary mesh

  std::cout << "Boundary participant: defining boundary mesh \n";
  const double length = 2;
  const double xCoord = 1;
  const double deltaY = length / (numberOfVertices - 1);
  for (int i = 0; i < numberOfVertices; ++i)
  for (int j = 0; j < dimensions; ++j)
  {
  const unsigned int index = dimensions * i + j;

The x-coordinate is always 1, i.e., the boundary is parallel to the y-axis. The y-coordinate descends from 1 to -1.

  if (j == 0)
  vertices[index] = xCoord;
  else
  vertices[index] = 1 - deltaY * i;
  }
 

Pass the vertices to preCICE

  precice.setMeshVertices(meshName, vertices, vertexIDs);
 

Variables for the time

  const double end_time = 1;
  double time = 0;
 

Not used in the default configuration

  if (precice.requiresInitialData())
  {
  std::cout << "Boundary participant: writing initial data \n";
  define_boundary_values(writeData, time, end_time);
  precice.writeData(meshName, dataWriteName, vertexIDs, writeData);
  }
 

initialize the Participant

  precice.initialize();
 

Start time loop

  while (precice.isCouplingOngoing())
  {
  double dt = precice.getMaxTimeStepSize();
  time += dt;
 

Generate new boundary data

  define_boundary_values(writeData, time, end_time);
 
  std::cout << "Boundary participant: writing coupling data \n";
  precice.writeData(meshName, dataWriteName, vertexIDs, writeData);
 
  std::cout << "Boundary participant: advancing in time\n";
  precice.advance(dt);
  }
 
  std::cout << "Boundary participant: closing...\n";
 
  return 0;
  }