This program was contributed by Umair Hussain <husain.umair2010@gmail.com>.
It comes without any warranty or support by its authors or the authors of deal.II.
This program is part of the deal.II code gallery.
Annotated version of README.md
Crystal solidification using phase field modeling
Overview
This code solves a solidification problem based on the well-known work by Ryo Kobayashi (1993) [1]. The model couples the Allen-Cahn [2] phase field equation with the transient heat equation. Although the results below cover only the isotropic directional solidification case from that paper, the same code can be modified and used for other types of solidification problems. Let us quickly go through the governing equations and the boundary conditions solved in this problem.
Problem Definition
\begin{align*}
\tau \frac{\partial p}{\partial t} &= \nabla \cdot \left( \epsilon^2 \nabla p \right) + p(1-p)\left(p-\frac{1}{2}+m\right) \\
\frac{\partial T}{\partial t} &= \nabla^2 T + K\frac{\partial p}{\partial t}
\end{align*}
where \(m(T) = \frac{\alpha}{\pi}\tan^{-1}\left(\gamma(T_e-T)\right)\).
The problem is subject to the boundary conditions:
\begin{align*}
p(0,y,t) &= 1 \\
T(0,y,t) &= T_e - \Delta T
\end{align*}
and the initial conditions:
\begin{align*}
p(x,y,0) &= 0 \\
T(x,y,0) &= T_e - \Delta T
\end{align*}
Here, \(\Delta T\) is the degree of undercooling.
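To connect this with the accompanying sources: with the non-dimensional values used there (\(T_e =\) t_eq \(= 1.0\) in PhaseFieldSolver.cpp and an initial temperature of \(0.2\) in InitialValues.cpp), the undercooling is \(\Delta T = 1.0 - 0.2 = 0.8\).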
Dendritic Growth
Using this code, we have reproduced one of the studies from Kobayashi's work on dendritic behaviour during directional solidification. The latent heat parameter \(K\) in the heat equation determines the amount of heat released as the phase solidifies. If this value is high enough, the interface between the solid and liquid phases becomes unstable, which leads to the formation of dendrites. To trigger this growth, a random perturbation term \(a \chi p (1-p)\) is added to the dynamic term of the phase field equation.
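For reference, the phase field equation with this perturbation included reads as follows (our transcription following [1]; in assemble_system.cpp the term enters through the random number khi):
\begin{align*}
\tau \frac{\partial p}{\partial t} = \nabla \cdot \left( \epsilon^2 \nabla p \right) + p(1-p)\left(p-\frac{1}{2}+m\right) + a \chi p(1-p)
\end{align*}
Here \(\chi\) is a uniformly distributed random number in \([-0.5, 0.5]\), drawn anew at every quadrature point (see get_random_number.cpp).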
References
[1] Kobayashi, R. (1993). Modeling and numerical simulations of dendritic crystal growth. Physica D: Nonlinear Phenomena, 63(3–4), 410–423. https://doi.org/10.1016/0167-2789(93)90120-P
[2] Allen, S. M., & Cahn, J. W. (1979). A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metallurgica, 27(6), 1085–1095. https://doi.org/10.1016/0001-6160(79)90196-2
Annotated version of InitialValues.cpp
#include "PhaseFieldSolver.h"
void InitialValues::vector_value(const Point<2> &p,
                                 Vector<double> &values) const
{
  // The initial values are spatially uniform, so p is not used:
  values(0) = 0.0; // initial phase parameter (fully liquid)
  values(1) = 0.2; // initial (undercooled) temperature
}
Annotated version of PhaseFieldSolver.cpp
#include "PhaseFieldSolver.h"
PhaseFieldSolver::PhaseFieldSolver()
: mpi_communicator(MPI_COMM_WORLD)
, n_mpi_processes(Utilities::MPI::n_mpi_processes(mpi_communicator))
, this_mpi_process(Utilities::MPI::this_mpi_process(mpi_communicator))
, pcout(std::cout, (this_mpi_process == 0))
, time(0.0)
, final_time(1.)
, time_step(.0002)
, theta(0.5)       // theta of the time-stepping scheme (0.5 = Crank-Nicolson)
, epsilon(0.01)    // epsilon in the gradient term of the p equation
, tau(0.0003)      // tau multiplying dp/dt
, gamma(10.)       // gamma in m(T)
, latent_heat(1.4) // K in the heat equation
, alpha(0.9)       // alpha in m(T)
, t_eq(1.)         // equilibrium temperature T_e
, a(0.01)          // amplitude of the random perturbation
{}
Annotated version of PhaseFieldSolver.h
#ifndef KOBAYASHI_PARALLEL_PHASEFIELDSOLVER_H
#define KOBAYASHI_PARALLEL_PHASEFIELDSOLVER_H
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/utilities.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/grid/grid_in.h>
For Parallel Computation
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/distributed/solution_transfer.h>
#include <fstream>
#include <iostream>
class PhaseFieldSolver {
public:
PhaseFieldSolver();
void run();
private:
void make_grid_and_dofs();
void assemble_system();
void solve();
void output_results(const unsigned int timestep_number) const;
double compute_residual();
void applying_bc();
float get_random_number();
// Parallel communication and output
MPI_Comm mpi_communicator;
const unsigned int n_mpi_processes;
const unsigned int this_mpi_process;
ConditionalOStream pcout;
// Mesh and finite element data. The element degree is not shown in this
// annotated listing; a two-component (p, T) Q1 system is assumed here.
Triangulation<2> triangulation;
FESystem<2>      fe{FE_Q<2>(1), 2};
DoFHandler<2>    dof_handler{triangulation};
IndexSet         locally_owned_dofs;
// Parallel (PETSc) matrix and vectors, plus localized copies used during assembly
PETScWrappers::MPI::SparseMatrix jacobian_matrix;
PETScWrappers::MPI::Vector old_solution, conv_solution, solution_update, system_rhs;
Vector<double> conv_solution_np, old_solution_np;
// Time stepping and model parameters
double time;
const double final_time, time_step;
const double theta;
const double epsilon, tau, gamma, latent_heat, alpha, t_eq, a;
};
Initial values class
class InitialValues : public Function<2>
{
public:
  InitialValues()
    : Function<2>(2)
  {}
  virtual void vector_value(const Point<2>  &p,
                            Vector<double> &value) const override;
};
#endif
Annotated version of applying_bc.cpp
#include "PhaseFieldSolver.h"
void PhaseFieldSolver::applying_bc(){
FEValuesExtractors::Scalar phase_parameter(0);
FEValuesExtractors::Scalar temperature(1);
QGauss<2> quadrature_formula(fe.degree + 1);
FEValues<2> fe_values(fe,
                      quadrature_formula,
                      update_values | update_gradients | update_JxW_values);
ComponentMask p_mask = fe.component_mask(phase_parameter);
std::map<types::global_dof_index,double> boundary_values;
Prescribing p=1 at the left face (this will be maintained in the subsequent iterations when zero BC is applied in the Newton-Raphson iterations)
VectorTools::interpolate_boundary_values(dof_handler,
                                         1,
                                         Functions::ConstantFunction<2>(1., 2),
                                         boundary_values,
                                         p_mask);
To apply the boundary values only to the solution vector without the Jacobian Matrix and RHS Vector
for (auto &boundary_value : boundary_values)
    old_solution(boundary_value.first) = boundary_value.second;
old_solution.compress(VectorOperation::insert);
}
Annotated version of assemble_system.cpp
#include "PhaseFieldSolver.h"
#include <cmath>
void PhaseFieldSolver::assemble_system() {
Separating each variable as a scalar to easily call the respective shape functions
FEValuesExtractors::Scalar phase_parameter(0);
FEValuesExtractors::Scalar temperature(1);
QGauss<2> quadrature_formula(fe.degree + 1);
FEValues<2> fe_values(fe,
                      quadrature_formula,
                      update_values | update_gradients | update_JxW_values);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs(dofs_per_cell);
To copy values and gradients of the solution from previous iterations
Old Newton iteration
std::vector<Tensor<1, 2>> old_newton_solution_gradients_p(n_q_points);
std::vector<double> old_newton_solution_values_p(n_q_points);
std::vector<Tensor<1, 2>> old_newton_solution_gradients_t(n_q_points);
std::vector<double> old_newton_solution_values_t(n_q_points);
Old time step iteration
std::vector<Tensor<1, 2>> old_time_solution_gradients_p(n_q_points);
std::vector<double> old_time_solution_values_p(n_q_points);
std::vector<Tensor<1, 2>> old_time_solution_gradients_t(n_q_points);
std::vector<double> old_time_solution_values_t(n_q_points);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
jacobian_matrix = 0.0;
system_rhs = 0.0;
for (const auto &cell : dof_handler.active_cell_iterators()){
if (cell->subdomain_id() == this_mpi_process) {
cell_matrix = 0;
cell_rhs = 0;
fe_values.reinit(cell);
Copying old solution values
fe_values[phase_parameter].get_function_values(conv_solution_np,old_newton_solution_values_p);
fe_values[phase_parameter].get_function_gradients(conv_solution_np,old_newton_solution_gradients_p);
fe_values[temperature].get_function_values(conv_solution_np,old_newton_solution_values_t);
fe_values[temperature].get_function_gradients(conv_solution_np,old_newton_solution_gradients_t);
fe_values[phase_parameter].get_function_values(old_solution_np,old_time_solution_values_p);
fe_values[phase_parameter].get_function_gradients(old_solution_np,old_time_solution_gradients_p);
fe_values[temperature].get_function_values(old_solution_np,old_time_solution_values_t);
fe_values[temperature].get_function_gradients(old_solution_np,old_time_solution_gradients_t);
for (unsigned int q = 0; q < n_q_points; ++q){
double khi = get_random_number();
Old solution values
double p_on = old_newton_solution_values_p[q];
auto grad_p_on = old_newton_solution_gradients_p[q];
double p_ot = old_time_solution_values_p[q];
auto grad_p_ot = old_time_solution_gradients_p[q];
double t_on = old_newton_solution_values_t[q];
auto grad_t_on = old_newton_solution_gradients_t[q];
double t_ot = old_time_solution_values_t[q];
auto grad_t_ot = old_time_solution_gradients_t[q];
for (unsigned int i = 0; i < dofs_per_cell; ++i){
Shape Functions
double psi_i = fe_values[phase_parameter].value(i,q);
auto grad_psi_i = fe_values[phase_parameter].gradient(i,q);
double phi_i = fe_values[temperature].value(i,q);
auto grad_phi_i = fe_values[temperature].gradient(i,q);
for (unsigned int j = 0; j < dofs_per_cell; ++j){
Shape Functions
double psi_j = fe_values[phase_parameter].value(j,q);
auto grad_psi_j = fe_values[phase_parameter].gradient(j,q);
double phi_j = fe_values[temperature].value(j,q);
auto grad_phi_j = fe_values[temperature].gradient(j,q);
double mp = psi_i*(tau*psi_j);
double kp = grad_psi_i*(
std::pow(epsilon,2)*grad_psi_j);
double m = (alpha/M_PI)*
std::atan(gamma*(t_eq - t_on));
double t1 = (1-p_on)*(p_on-0.5+m);
double t2 = -(p_on)*(p_on-0.5+m);
double t3 = (p_on)*(1-p_on);
double nl_p = psi_i*((t1+t2+t3)*psi_j);
Adding random noise at the interface
nl_p -= a*khi*psi_i*((1.0 - 2*(p_on))*psi_j);
double f1_p= mp + time_step*theta*kp - time_step*theta*nl_p;
double t4 = (p_on)*(1-p_on)*(-(alpha*gamma/(M_PI*(1+
std::pow((gamma*(t_eq-t_on)),2)))));
double nl_t = psi_i*(t4*phi_j);
double f1_t = -time_step*theta*nl_t;
double mpt = phi_i*(latent_heat*psi_j);
double f2_p = -mpt;
double mt = phi_i*(phi_j);
double kt = grad_phi_i*(grad_phi_j);
double f2_t = mt + time_step*theta*kt;
Assembling Jacobian matrix
cell_matrix(i,j) += (f1_p + f1_t + f2_p + f2_t)*fe_values.JxW(q);
}
Finding f1 and f2 at the previous iteration for the RHS vector
double mp_n = psi_i*(tau*p_on);
double kp_n = grad_psi_i*(
std::pow(epsilon,2)*grad_p_on);
double m_n = (alpha/M_PI)*
std::atan(gamma*(t_eq-t_on));
double nl_n = psi_i*((p_on)*(1-p_on)*(p_on-0.5+m_n));
double mp_t = psi_i*(tau*p_ot);
double kp_t = grad_psi_i*(tau*grad_p_ot);
double m_t = (alpha/M_PI)*
std::atan(gamma*(t_eq-t_ot));
double nl_t = psi_i*(p_ot)*(1-p_ot)*(p_ot-0.5+m_t);
Adding random noise at the interface
nl_n -= psi_i*(a*khi*(p_on)*(1-p_on));
nl_t -= psi_i*(a*khi*(p_ot)*(1-p_ot));
double f1n = mp_n + time_step*theta*kp_n - time_step*theta*nl_n - mp_t + time_step*(1-theta)*kp_t - time_step*(1-theta)*nl_t;
double mt_n = phi_i*(t_on);
double kt_n = grad_phi_i*(grad_t_on);
double mpt_n = phi_i*(latent_heat*p_on);
double mt_t = phi_i*(t_ot);
double kt_t = grad_phi_i*(grad_t_ot);
double mpt_t = phi_i*(latent_heat*p_ot);
double f2n = mt_n + time_step*theta*kt_n - mpt_n - mt_t + time_step*(1-theta)*kt_t + mpt_t;
Assembling RHS vector
cell_rhs(i) -= (f1n + f2n)*fe_values.JxW(q);
}
}
cell->get_dof_indices(local_dof_indices);
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
jacobian_matrix.add(local_dof_indices[i],
                    local_dof_indices[j],
                    cell_matrix(i, j));
system_rhs(local_dof_indices[i]) += cell_rhs(i);
}
}
}
Applying zero BC
jacobian_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
                                         1,
                                         Functions::ZeroFunction<2>(2),
                                         boundary_values);
MatrixTools::apply_boundary_values(boundary_values,
                                   jacobian_matrix,
                                   solution_update,
                                   system_rhs, false);
}
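As a compact summary of what this routine builds (our reading of the code, written in the notation of the README): in every time step, the Newton-Raphson loop in run.cpp repeatedly solves
\begin{align*}
J\left(u^{(k)}\right)\,\delta u = -R\left(u^{(k)}\right), \qquad u^{(k+1)} = u^{(k)} + \delta u,
\end{align*}
where \(u=(p,T)\) is the solution at the new time level, the terms f1_p, f1_t, f2_p and f2_t assembled above form the Jacobian \(J\), and f1n and f2n form the residual \(R\) (assembled into system_rhs with a minus sign) of the \(\theta\)-weighted time discretisation, with \(\theta=0.5\), i.e. Crank-Nicolson, in this program. The update \(\delta u\) is computed in solve.cpp and added to conv_solution.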
Annotated version of get_random_number.cpp
#include "PhaseFieldSolver.h"
#include <random>
float PhaseFieldSolver::get_random_number()
{
static std::default_random_engine e;
static std::uniform_real_distribution<> dis(-0.5, 0.5);
return dis(e);
}
Annotated version of grid_dof.cpp
#include "PhaseFieldSolver.h"
void PhaseFieldSolver::make_grid_and_dofs() {
Reading mesh
GridIn<2> gridin;
gridin.attach_triangulation(triangulation);
std::ifstream f("mesh/Kobayashi_mesh100x400.msh");
gridin.read_msh(f);
Partitioning the mesh among the processes and distributing the degrees of freedom
GridTools::partition_triangulation(n_mpi_processes, triangulation);
dof_handler.distribute_dofs(fe);
DoFRenumbering::subdomain_wise(dof_handler);
const std::vector<IndexSet> locally_owned_dofs_per_proc =
  DoFTools::locally_owned_dofs_per_subdomain(dof_handler);
locally_owned_dofs = locally_owned_dofs_per_proc[this_mpi_process];
Setting up the sparsity pattern and initialising the parallel matrix and vectors
DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dsp);
jacobian_matrix.reinit(locally_owned_dofs,
locally_owned_dofs,
dsp,
mpi_communicator);
old_solution.reinit(locally_owned_dofs, mpi_communicator);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);
conv_solution.reinit(locally_owned_dofs, mpi_communicator);
solution_update.reinit(locally_owned_dofs, mpi_communicator);
conv_solution_np.reinit(dof_handler.n_dofs());
old_solution_np.reinit(dof_handler.n_dofs());
}
Annotated version of main.cpp
#include <iostream>
#include "PhaseFieldSolver.h"
int main(int argc, char **argv) {
// Initialise MPI (and PETSc); the solver communicates over MPI_COMM_WORLD
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
PhaseFieldSolver phasefieldsolver;
phasefieldsolver.run();
return 0;
}
Annotated version of output_results.cpp
#include "PhaseFieldSolver.h"
void PhaseFieldSolver::output_results(const unsigned int timestep_number) const {
Localising the parallel solution vector so that it can be written by a single process
const Vector<double> localized_solution(old_solution);
Using only one process to output the result
if (this_mpi_process == 0)
{
DataOut<2> data_out;
data_out.attach_dof_handler(dof_handler);
std::vector<std::string> solution_names;
solution_names.emplace_back ("p");
solution_names.emplace_back ("T");
data_out.add_data_vector(localized_solution, solution_names);
const std::string filename =
  "solution-" + Utilities::int_to_string(timestep_number, 3) + ".vtk"; // filename pattern assumed; the original string is not shown in this listing
DataOutBase::VtkFlags vtk_flags;
vtk_flags.compression_level =
  DataOutBase::VtkFlags::ZlibCompressionLevel::best_speed;
data_out.set_flags(vtk_flags);
std::ofstream output(filename);
data_out.build_patches();
data_out.write_vtk(output);
}
}
Annotated version of run.cpp
#include "PhaseFieldSolver.h"
#include <time.h>
void PhaseFieldSolver::run() {
make_grid_and_dofs();
pcout << "Processors used: " << n_mpi_processes << std::endl;
pcout << " Number of degrees of freedom: " << dof_handler.n_dofs()
<< " (by partition:";
for (unsigned int p = 0; p < n_mpi_processes; ++p)
pcout << (p == 0 ? ' ' : '+')
<< (
DoFTools::count_dofs_with_subdomain_association(dof_handler,
p));
pcout << ")" << std::endl;
Initialise the solution
InitialValues initial_value;
VectorTools::interpolate(dof_handler,
                         initial_value,
                         old_solution);
VectorTools::interpolate(dof_handler,
                         initial_value,
                         conv_solution);
Applying Boundary Conditions at t=0
applying_bc();
Plotting initial solution
output_results(0);
Time steps begin here:
unsigned int timestep_number = 1;
for (; time <= final_time; time += time_step, ++timestep_number) {
pcout << "Time step " << timestep_number << " at t=" << time+time_step
<< std::endl;
conv_solution = old_solution;
Newton-Raphson iterations begin here:
for (unsigned int it = 1; it <= 100; ++it) {
pcout << "Newton iteration number:" << it << std::endl;
if (it == 100) {
pcout << "Convergence Failure!!!!!!!!!!!!!!!" << std::endl;
std::exit(0);
}
Saving parallel vectors as non-parallel ones
conv_solution_np = conv_solution;
old_solution_np = old_solution;
Initialise the delta solution as zero
solution_update = 0;
Assemble Jacobian and Residual
assemble_system();
Solving to get delta solution
solve();
Checking for convergence
double residual_norm = system_rhs.l2_norm();
pcout << "Nothing wrong till here!!!!!!" << std::endl;
pcout << "the residual is:" << residual_norm << std::endl;
if (residual_norm <= (1e-4)) {
pcout << "Solution Converged!" << std::endl;
break;
}
}
Transfer the converged solution to the old_solution vector to plot output
old_solution = conv_solution;
Output the solution only at specific time steps (every 10th step here)
if (timestep_number%10 == 0)
output_results(timestep_number);
}
}
Annotated version of solve.cpp
#include "PhaseFieldSolver.h"
void PhaseFieldSolver::solve(){
Using a direct parallel solver
SolverControl cn;
// The concrete solver class is not shown in this listing; the parallel direct
// solver provided by the PETSc wrappers (MUMPS) is assumed here.
PETScWrappers::SparseDirectMUMPS A_direct(cn, mpi_communicator);
A_direct.solve(jacobian_matrix, solution_update, system_rhs);
Updating the solution by adding the delta solution
conv_solution.add(1, solution_update);
}