<a name="ThecodeMinimalSurfaceProblemcodeclassimplementation"></a>
<h3>The <code>MinimalSurfaceProblem</code> class implementation</h3>
<a name="MinimalSurfaceProblemMinimalSurfaceProblem"></a>
<h4>MinimalSurfaceProblem::MinimalSurfaceProblem</h4>

The constructor and destructor of the class are the same as in the first
few tutorial programs:

  template <int dim>
  MinimalSurfaceProblem<dim>::MinimalSurfaceProblem()
    : dof_handler(triangulation)
    , fe(2)
  {}

  template <int dim>
  MinimalSurfaceProblem<dim>::~MinimalSurfaceProblem()
  {
    dof_handler.clear();
  }
<a name="MinimalSurfaceProblemsetup_system"></a>
<h4>MinimalSurfaceProblem::setup_system</h4>

As always in the setup-system function, we set up the variables of the
finite element method. There are some differences to @ref step_6 "step-6",
because there we start solving the PDE from scratch in every refinement cycle
whereas here we need to take the solution from the previous mesh onto the
current mesh. Consequently, we can't just reset solution vectors. The
argument passed to this function thus indicates whether we should distribute
degrees of freedom (plus compute constraints) and set the solution vector
to zero, or whether this has happened elsewhere already (specifically, in
<code>refine_mesh()</code>).
  template <int dim>
  void MinimalSurfaceProblem<dim>::setup_system(const bool initial_step)
  {
    if (initial_step)
      {
        dof_handler.distribute_dofs(fe);
        present_solution.reinit(dof_handler.n_dofs());

        hanging_node_constraints.clear();
        DoFTools::make_hanging_node_constraints(dof_handler,
                                                hanging_node_constraints);
        hanging_node_constraints.close();
      }
The remaining parts of the function are the same as in @ref step_6 "step-6".
    newton_update.reinit(dof_handler.n_dofs());
    system_rhs.reinit(dof_handler.n_dofs());

    DynamicSparsityPattern dsp(dof_handler.n_dofs());
    DoFTools::make_sparsity_pattern(dof_handler, dsp);

    hanging_node_constraints.condense(dsp);

    sparsity_pattern.copy_from(dsp);
    system_matrix.reinit(sparsity_pattern);
  }
<a name="MinimalSurfaceProblemassemble_system"></a>
<h4>MinimalSurfaceProblem::assemble_system</h4>

This function does the same as in the previous tutorials except that now,
of course, the matrix and right hand side functions depend on the
previous iteration's solution. As discussed in the introduction, we need
to use zero boundary values for the Newton updates; we compute them at
the end of this function.

The top of the function contains the usual boilerplate code, setting up
the objects that allow us to evaluate shape functions at quadrature
points and temporary storage locations for the local matrices and
vectors, as well as for the gradients of the previous solution at the
quadrature points. We then start the loop over all cells:
  template <int dim>
  void MinimalSurfaceProblem<dim>::assemble_system()
  {
    const QGauss<dim> quadrature_formula(fe.degree + 1);

    system_matrix = 0;
    system_rhs    = 0;

    FEValues<dim> fe_values(fe,
                            quadrature_formula,
                            update_gradients | update_quadrature_points |
                              update_JxW_values);

    const unsigned int dofs_per_cell = fe.dofs_per_cell;
    const unsigned int n_q_points    = quadrature_formula.size();

    FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
    Vector<double>     cell_rhs(dofs_per_cell);

    std::vector<Tensor<1, dim>> old_solution_gradients(n_q_points);

    std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

    for (const auto &cell : dof_handler.active_cell_iterators())
      {
        cell_matrix = 0;
        cell_rhs    = 0;

        fe_values.reinit(cell);
For the assembly of the linear system, we have to obtain the values
of the previous solution's gradients at the quadrature
points. There is a standard way of doing this: the
FEValues::get_function_gradients function takes a vector that
represents a finite element field defined on a DoFHandler, and
evaluates the gradients of this field at the quadrature points of the
cell with which the FEValues object has last been reinitialized.
The values of the gradients at all quadrature points are then written
into the second argument:
        fe_values.get_function_gradients(present_solution,
                                         old_solution_gradients);
With this, we can then do the integration loop over all quadrature
points and shape functions. Having just computed the gradients of
the old solution in the quadrature points, we are able to compute
the coefficients @f$a_{n}@f$ in these points. The assembly of the
system itself then looks similar to what we always do with the
exception of the nonlinear terms, as does copying the results from
the local objects into the global ones.
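Spelled out, the local contributions computed in this loop are
@f{align*}{
  A^n_{ij} &= \int_\Omega \left( a_n \,\nabla\varphi_i\cdot\nabla\varphi_j
              - a_n^3 \left(\nabla\varphi_i\cdot\nabla u^n\right)
                      \left(\nabla\varphi_j\cdot\nabla u^n\right)
              \right) \, \mathrm{d}x,
  \\
  b^n_i &= -\int_\Omega a_n \,\nabla\varphi_i\cdot\nabla u^n \, \mathrm{d}x,
@f}
with @f$a_n = 1/\sqrt{1+|\nabla u^n|^2}@f$ evaluated at each quadrature
point. The comments in the loop below match these terms one by one: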
        for (unsigned int q = 0; q < n_q_points; ++q)
          {
            const double coeff =
              1.0 / std::sqrt(1 + old_solution_gradients[q] *
                                    old_solution_gradients[q]);

            for (unsigned int i = 0; i < dofs_per_cell; ++i)
              {
                for (unsigned int j = 0; j < dofs_per_cell; ++j)
                  cell_matrix(i, j) +=
                    (((fe_values.shape_grad(i, q)      // ((\nabla \phi_i
                       * coeff                         //   * a_n
                       * fe_values.shape_grad(j, q))   //   * \nabla \phi_j)
                      -                                //  -
                      (fe_values.shape_grad(i, q)      //  (\nabla \phi_i
                       * coeff * coeff * coeff         //   * a_n^3
                       * (fe_values.shape_grad(j, q)   //   * (\nabla \phi_j
                          * old_solution_gradients[q]) //      * \nabla u_n)
                       * old_solution_gradients[q]))   //   * \nabla u_n)))
                     * fe_values.JxW(q));              // * dx

                cell_rhs(i) -=
                  (fe_values.shape_grad(i, q)  // \nabla \phi_i
                   * coeff                     // * a_n
                   * old_solution_gradients[q] // * \nabla u_n
                   * fe_values.JxW(q));        // * dx
              }
          }
        cell->get_dof_indices(local_dof_indices);
        for (unsigned int i = 0; i < dofs_per_cell; ++i)
          {
            for (unsigned int j = 0; j < dofs_per_cell; ++j)
              system_matrix.add(local_dof_indices[i],
                                local_dof_indices[j],
                                cell_matrix(i, j));

            system_rhs(local_dof_indices[i]) += cell_rhs(i);
          }
      }
Finally, we remove hanging nodes from the system and apply zero
boundary values to the linear system that defines the Newton updates
@f$\delta u^n@f$:

    hanging_node_constraints.condense(system_matrix);
    hanging_node_constraints.condense(system_rhs);

    std::map<types::global_dof_index, double> boundary_values;
    VectorTools::interpolate_boundary_values(dof_handler,
                                             0,
                                             Functions::ZeroFunction<dim>(),
                                             boundary_values);
    MatrixTools::apply_boundary_values(boundary_values,
                                       system_matrix,
                                       newton_update,
                                       system_rhs);
  }
<a name="MinimalSurfaceProblemsolve"></a>
<h4>MinimalSurfaceProblem::solve</h4>

The solve function is the same as always. At the end of the solution
process we update the current solution by setting
@f$u^{n+1}=u^n+\alpha^n\;\delta u^n@f$.
  template <int dim>
  void MinimalSurfaceProblem<dim>::solve()
  {
    SolverControl solver_control(system_rhs.size(),
                                 system_rhs.l2_norm() * 1e-6);
    SolverCG<Vector<double>> solver(solver_control);

    PreconditionSSOR<SparseMatrix<double>> preconditioner;
    preconditioner.initialize(system_matrix, 1.2);

    solver.solve(system_matrix, newton_update, system_rhs, preconditioner);

    hanging_node_constraints.distribute(newton_update);

    const double alpha = determine_step_length();
    present_solution.add(alpha, newton_update);
  }
<a name="MinimalSurfaceProblemrefine_mesh"></a>
<h4>MinimalSurfaceProblem::refine_mesh</h4>

The first part of this function is the same as in @ref step_6 "step-6"...
However, after refining the mesh we have to transfer the old solution to the
new one, which we do with the help of the SolutionTransfer class. The process
is slightly convoluted, so let us describe it in detail:
  template <int dim>
  void MinimalSurfaceProblem<dim>::refine_mesh()
  {
    Vector<float> estimated_error_per_cell(triangulation.n_active_cells());

    KellyErrorEstimator<dim>::estimate(
      dof_handler,
      QGauss<dim - 1>(fe.degree + 1),
      std::map<types::boundary_id, const Function<dim> *>(),
      present_solution,
      estimated_error_per_cell);

    GridRefinement::refine_and_coarsen_fixed_number(triangulation,
                                                    estimated_error_per_cell,
                                                    0.3,
                                                    0.03);
Then we need an additional step: if, for example, you flag a cell that
is once more refined than its neighbor, and that neighbor is not
flagged for refinement, we would end up with a jump of two refinement
levels across a cell interface. To avoid these situations, the library
will silently also have to refine the neighbor cell once. It does so by
calling the Triangulation::prepare_coarsening_and_refinement function
before actually doing the refinement and coarsening. This function
flags a set of additional cells for refinement or coarsening, to
enforce rules like the one-hanging-node rule. The cells that are
flagged for refinement and coarsening after calling this function are
exactly the ones that will actually be refined or coarsened. Usually,
you don't have to do this by hand
(Triangulation::execute_coarsening_and_refinement does this for you).
However, the SolutionTransfer class used below needs to know the
final set of cells that will be coarsened or refined in order to store
the data from the old mesh and transfer it to the new one. Thus, we
call the function by hand:

    triangulation.prepare_coarsening_and_refinement();

With this out of the way, we initialize a SolutionTransfer object with
the present DoFHandler and attach the solution vector to it, followed
by doing the actual refinement and distribution of degrees of freedom
on the new mesh:

    SolutionTransfer<dim> solution_transfer(dof_handler);
    solution_transfer.prepare_for_coarsening_and_refinement(present_solution);

    triangulation.execute_coarsening_and_refinement();

    dof_handler.distribute_dofs(fe);
Finally, we retrieve the old solution interpolated to the new
mesh. Since the SolutionTransfer function does not actually store the
values of the old solution, but rather indices, we need to preserve the
old solution vector until we have gotten the new interpolated
values. Thus, we have the new values written into a temporary vector,
and only afterwards write them into the solution vector object. Once we
have this solution we have to make sure that the @f$u^n@f$ we now have
actually has the correct boundary values. As explained at the end of
the introduction, this is not automatically the case even if the
solution before refinement had the correct boundary values, and so we
have to explicitly make sure that it now has:

    Vector<double> tmp(dof_handler.n_dofs());
    solution_transfer.interpolate(present_solution, tmp);
    present_solution = tmp;

    set_boundary_values();
On the new mesh, there are different hanging nodes, which we have to
compute again. To ensure there are no hanging nodes of the old mesh in
the object, it's first cleared. To be on the safe side, we then also
make sure that the current solution's vector entries satisfy the
hanging node constraints (see the discussion in the documentation of
the SolutionTransfer class for why this is necessary):

    hanging_node_constraints.clear();

    DoFTools::make_hanging_node_constraints(dof_handler,
                                            hanging_node_constraints);
    hanging_node_constraints.close();

    hanging_node_constraints.distribute(present_solution);
We end the function by updating all the remaining data structures,
indicating to <code>setup_system()</code> that this is not the first
go-around and that it needs to preserve the content of the solution
vector:

    setup_system(false);
  }
<a name="MinimalSurfaceProblemset_boundary_values"></a>
<h4>MinimalSurfaceProblem::set_boundary_values</h4>

The next function ensures that the solution vector's entries respect the
boundary values for our problem. Having refined the mesh (or just
started computations), there might be new nodal points on the
boundary. These have values that are simply interpolated from the
previous mesh (or are just zero), instead of the correct boundary
values. This is fixed up by explicitly setting all boundary nodes to
the right value:
  template <int dim>
  void MinimalSurfaceProblem<dim>::set_boundary_values()
  {
    std::map<types::global_dof_index, double> boundary_values;
    VectorTools::interpolate_boundary_values(dof_handler,
                                             0,
                                             BoundaryValues<dim>(),
                                             boundary_values);
    for (auto &boundary_value : boundary_values)
      present_solution(boundary_value.first) = boundary_value.second;
  }
<a name="MinimalSurfaceProblemcompute_residual"></a>
<h4>MinimalSurfaceProblem::compute_residual</h4>

In order to monitor convergence, we need a way to compute the norm of the
(discrete) residual, i.e., the norm of the vector
@f$\left<F(u^n),\varphi_i\right>@f$ with @f$F(u)=-\nabla \cdot \left(
\frac{1}{\sqrt{1+|\nabla u|^{2}}}\nabla u \right)@f$ as discussed in the
introduction. It turns out that (although we don't use this feature in
the current version of the program) one needs to compute the residual
@f$\left<F(u^n+\alpha^n\;\delta u^n),\varphi_i\right>@f$ when determining
optimal step lengths, and so this is what we implement here: the function
takes the step length @f$\alpha^n@f$ as an argument. The original
functionality is of course obtained by passing a zero as argument.
In the function below, we first set up a vector for the residual, and
then a vector for the evaluation point @f$u^n+\alpha^n\;\delta u^n@f$. This
is followed by the same boilerplate code we use for all integration
operations:

  template <int dim>
  double MinimalSurfaceProblem<dim>::compute_residual(const double alpha) const
  {
    Vector<double> residual(dof_handler.n_dofs());

    Vector<double> evaluation_point(dof_handler.n_dofs());
    evaluation_point = present_solution;
    evaluation_point.add(alpha, newton_update);

    const QGauss<dim> quadrature_formula(fe.degree + 1);
    FEValues<dim>     fe_values(fe,
                                quadrature_formula,
                                update_gradients | update_quadrature_points |
                                  update_JxW_values);

    const unsigned int dofs_per_cell = fe.dofs_per_cell;
    const unsigned int n_q_points    = quadrature_formula.size();

    Vector<double>              cell_residual(dofs_per_cell);
    std::vector<Tensor<1, dim>> gradients(n_q_points);

    std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

    for (const auto &cell : dof_handler.active_cell_iterators())
      {
        cell_residual = 0;
        fe_values.reinit(cell);
The actual computation is much as in
<code>assemble_system()</code>. We first evaluate the gradients of
@f$u^n+\alpha^n\,\delta u^n@f$ at the quadrature points, then compute
the coefficient @f$a_n@f$, and then plug it all into the formula for
the residual:

        fe_values.get_function_gradients(evaluation_point, gradients);

        for (unsigned int q = 0; q < n_q_points; ++q)
          {
            const double coeff =
              1 / std::sqrt(1 + gradients[q] * gradients[q]);

            for (unsigned int i = 0; i < dofs_per_cell; ++i)
              cell_residual(i) -= (fe_values.shape_grad(i, q) // \nabla \phi_i
                                   * coeff                    // * a_n
                                   * gradients[q]             // * \nabla u_n
                                   * fe_values.JxW(q));       // * dx
          }
        cell->get_dof_indices(local_dof_indices);
        for (unsigned int i = 0; i < dofs_per_cell; ++i)
          residual(local_dof_indices[i]) += cell_residual(i);
      }
At the end of this function we also have to deal with the hanging node
constraints and with the issue of boundary values. With regard to the
latter, we have to set to zero the elements of the residual vector for
all entries that correspond to degrees of freedom that sit at the
boundary. The reason is that because the value of the solution there is
fixed, they are of course no "real" degrees of freedom and so, strictly
speaking, we shouldn't have assembled entries in the residual vector
for them. However, as we always do, we want to do exactly the same
thing on every cell and so we did not want to deal with the question
of whether a particular degree of freedom sits at the boundary in the
integration above. Rather, we will simply set these entries to zero
after the fact. To this end, we first need to determine which degrees
of freedom do in fact belong to the boundary and then loop over all of
those and set the residual entry to zero. This happens in the following
lines which we have already seen used in @ref step_11 "step-11":
    hanging_node_constraints.condense(residual);

    std::vector<bool> boundary_dofs(dof_handler.n_dofs());
    DoFTools::extract_boundary_dofs(dof_handler,
                                    ComponentMask(),
                                    boundary_dofs);

    for (unsigned int i = 0; i < dof_handler.n_dofs(); ++i)
      if (boundary_dofs[i] == true)
        residual(i) = 0;
At the end of the function, we return the norm of the residual:

    return residual.l2_norm();
  }
<a name="MinimalSurfaceProblemdetermine_step_length"></a>
<h4>MinimalSurfaceProblem::determine_step_length</h4>

As discussed in the introduction, Newton's method frequently does not
converge if we always take full steps, i.e., compute @f$u^{n+1}=u^n+\delta
u^n@f$. Rather, one needs a damping parameter (step length) @f$\alpha^n@f$ and
sets @f$u^{n+1}=u^n+\alpha^n\delta u^n@f$. This function is the one called
to compute @f$\alpha^n@f$.

Here, we simply always return 0.1. This is of course a sub-optimal
choice: ideally, what one wants is that the step size goes to one as we
get closer to the solution, so that we get to enjoy the rapid quadratic
convergence of Newton's method. We will discuss better strategies below
in the results section.
  template <int dim>
  double MinimalSurfaceProblem<dim>::determine_step_length() const
  {
    return 0.1;
  }
<a name="MinimalSurfaceProblemrun"></a>
<h4>MinimalSurfaceProblem::run</h4>

In the run function, we build the first grid and then have the top-level
logic for the Newton iteration. The function has two variables, one that
indicates whether this is the first time we solve for a Newton update and
one that indicates the refinement level of the mesh:
  template <int dim>
  void MinimalSurfaceProblem<dim>::run()
  {
    unsigned int refinement = 0;
    bool         first_step = true;

As described in the introduction, the domain is the unit disk around
the origin, created in the same way as shown in @ref step_6 "step-6". The
mesh is globally refined twice followed later on by several adaptive
cycles:

    GridGenerator::hyper_ball(triangulation);
    triangulation.refine_global(2);
The Newton iteration starts next. During the first step we do not have
information about the residual prior to this step and so we continue
the Newton iteration until we have reached at least one iteration and
until the residual is less than @f$10^{-3}@f$.

At the beginning of the loop, we do a bit of setup work. In the first
go around, we compute the solution on the twice globally refined mesh
after setting up the basic data structures and ensuring that the first
Newton iterate already has the correct boundary values. In all
following mesh refinement loops, the mesh will be refined adaptively.
    double previous_res = 0;
    while (first_step || (previous_res > 1e-3))
      {
        if (first_step == true)
          {
            std::cout << "******** Initial mesh "
                      << " ********" << std::endl;

            setup_system(true);
            set_boundary_values();

            first_step = false;
          }
        else
          {
            ++refinement;
            std::cout << "******** Refined mesh " << refinement << " ********"
                      << std::endl;

            refine_mesh();
          }
On every mesh we do exactly five Newton steps. We print the initial
residual here and then start the iterations on this mesh.

In every Newton step the system matrix and the right hand side have
to be computed first, after which we store the norm of the right
hand side as the residual to check against when deciding whether to
stop the iterations. We then solve the linear system (the function
also updates @f$u^{n+1}=u^n+\alpha^n\;\delta u^n@f$) and output the
residual at the end of this Newton step:
        std::cout << "  Initial residual: " << compute_residual(0)
                  << std::endl;

        for (unsigned int inner_iteration = 0; inner_iteration < 5;
             ++inner_iteration)
          {
            assemble_system();
            previous_res = system_rhs.l2_norm();

            solve();

            std::cout << "  Residual: " << compute_residual(0) << std::endl;
          }
Every fifth iteration, i.e., just before we refine the mesh again,
we output the solution as well as the Newton update. This happens
as in all programs before:

        DataOut<dim> data_out;

        data_out.attach_dof_handler(dof_handler);
        data_out.add_data_vector(present_solution, "solution");
        data_out.add_data_vector(newton_update, "update");
        data_out.build_patches();

        const std::string filename =
          "solution-" + Utilities::int_to_string(refinement, 2) + ".vtu";
        std::ofstream output(filename);

        DataOutBase::VtkFlags vtk_flags;
        vtk_flags.compression_level =
          DataOutBase::VtkFlags::ZlibCompressionLevel::best_speed;
        data_out.set_flags(vtk_flags);
        data_out.write_vtu(output);
      }
  }
<a name="Themainfunction"></a>
<h4>The main function</h4>

Finally the main function. This follows the scheme of all other main
functions:

int main()
{
  try
    {
      using namespace Step15;

      MinimalSurfaceProblem<2> laplace_problem_2d;
      laplace_problem_2d.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;

      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
<a name="Results"></a><h1>Results</h1>

The output of the program looks as follows:
@code
******** Initial mesh ********
  Initial residual: 1.53143
  ...
******** Refined mesh 1 ********
  Initial residual: 0.865774
  ...
******** Refined mesh 2 ********
  Initial residual: 0.425581
  ...
@endcode
Obviously, the scheme converges, if not very fast. We will come back to
strategies for accelerating the method below.

One can visualize the solution after each set of five Newton
iterations, i.e., on each of the meshes on which we approximate the
solution. This yields the following set of images:
<div class="twocolumn" style="width: 80%">
  <div>
    <img src="https://www.dealii.org/images/steps/developer/step_15_solution_1.png"
         alt="Solution after zero cycles with contour lines." width="230" height="273">
  </div>
  <div>
    <img src="https://www.dealii.org/images/steps/developer/step_15_solution_2.png"
         alt="Solution after one cycle with contour lines." width="230" height="273">
  </div>
  <div>
    <img src="https://www.dealii.org/images/steps/developer/step_15_solution_3.png"
         alt="Solution after two cycles with contour lines." width="230" height="273">
  </div>
  <div>
    <img src="https://www.dealii.org/images/steps/developer/step_15_solution_4.png"
         alt="Solution after three cycles with contour lines." width="230" height="273">
  </div>
</div>
It is clearly visible that the solution minimizes the surface
after each refinement. The solution converges to a picture one
would imagine a soap bubble to be that is located inside a wire loop
that is bent like the boundary. Also, it is visible how the boundary
is smoothed out after each refinement. On the coarse mesh,
the boundary doesn't look like a sine, whereas it does the
finer the mesh gets.

The mesh is mostly refined near the boundary, where the solution
increases or decreases strongly, whereas it is coarsened on
the inside of the domain, where nothing interesting happens,
because there isn't much change in the solution. The ninth
solution and mesh are shown here:
<div class="onecolumn" style="width: 60%">
  <div>
    <img src="https://www.dealii.org/images/steps/developer/step_15_solution_9.png"
         alt="Grid and solution of the ninth cycle with contour lines." width="507" height="507">
  </div>
</div>
<a name="extensions"></a>
<a name="Possibilitiesforextensions"></a><h3>Possibilities for extensions</h3>

The program shows the basic structure of a solver for a nonlinear, stationary
problem. However, it does not converge particularly fast, for good reasons:

- The program always takes a step size of 0.1. This precludes the rapid,
  quadratic convergence for which Newton's method is typically chosen.
- It does not connect the nonlinear iteration with the mesh refinement
  iteration.

Obviously, a better program would have to address these two points.
We will discuss them in the following.
<a name="Steplengthcontrol"></a><h4> Step length control </h4>

Newton's method has two well known properties:
- It does not converge from arbitrarily chosen starting points. Rather, a
  starting point has to be close enough to the solution to guarantee
  convergence. However, we can enlarge the area from which Newton's method
  converges by damping the iteration using a <i>step length</i>
  @f$0<\alpha^n\le 1@f$.
- It exhibits rapid convergence of quadratic order if (i) the step length is
  chosen as @f$\alpha^n=1@f$, and (ii) it does in fact converge with this
  choice of step length.

A consequence of these two observations is that a successful strategy is to
choose @f$\alpha^n<1@f$ for the initial iterations until the iterate has come
close enough to allow for convergence with full step length, at which point we
want to switch to @f$\alpha^n=1@f$. The question is how to choose @f$\alpha^n@f$ in an
automatic fashion that satisfies these criteria.
We do not want to review the literature on this topic here, but only briefly
mention that there are two fundamental approaches to the problem: backtracking
line search and trust region methods. The former is more widely used for
partial differential equations and essentially does the following:
- Compute a search direction
- See if the resulting residual of @f$u^n + \alpha^n\;\delta u^n@f$ with
  @f$\alpha^n=1@f$ is "substantially smaller" than that of @f$u^n@f$ alone.
- If so, then take @f$\alpha^n=1@f$.
- If not, try whether the residual is "substantially smaller" with
  @f$\alpha^n=2/3@f$.
- If so, then take @f$\alpha^n=2/3@f$.
- If not, try whether the residual is "substantially smaller" with
  @f$\alpha^n=(2/3)^2@f$.
- Etc.

One can of course choose other factors @f$r, r^2, \ldots@f$ than the @f$2/3,
(2/3)^2, \ldots@f$ chosen above, for @f$0<r<1@f$. It is obvious where the term
"backtracking" comes from: we try a long step, but if that doesn't work we try
a shorter step, and ever shorter step, etc. The function
<code>determine_step_length()</code> is written the way it is to support
exactly this kind of use case.
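To make this concrete, here is a minimal sketch (not part of the original
program) of what a backtracking variant of
<code>determine_step_length()</code> might look like. It reuses
<code>compute_residual()</code>, which already evaluates
@f$\|F(u^n+\alpha\,\delta u^n)\|@f$, and for simplicity replaces the
acceptance conditions discussed below by a plain decrease test:

@code
template <int dim>
double MinimalSurfaceProblem<dim>::determine_step_length() const
{
  // Residual norm at the current iterate u^n, i.e., at alpha = 0:
  const double old_residual = compute_residual(0);

  // Try alpha = 1, 2/3, (2/3)^2, ... and accept the first step length
  // that reduces the residual. This simple decrease test is a crude
  // stand-in for a proper Armijo/Wolfe acceptance condition.
  double alpha = 1.0;
  for (unsigned int k = 0; k < 10; ++k)
    {
      if (compute_residual(alpha) < old_residual)
        return alpha;
      alpha *= 2.0 / 3.0;
    }

  // If no tried step length reduced the residual, fall back to the
  // fixed damping the program currently uses:
  return 0.1;
}
@endcode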
Whether we accept a particular step length @f$\alpha^n@f$ depends on how we define
"substantially smaller". There are a number of ways to do so, but without
going into detail let us just mention that the most common ones are to use the
Wolfe and Armijo-Goldstein conditions. For these, one can show the following:
- There is always a step length @f$\alpha^n@f$ for which the conditions are
  satisfied, i.e., the iteration never gets stuck as long as the problem is
  convex.
- If we are close enough to the solution, then the conditions allow for
  @f$\alpha^n=1@f$, thereby enabling quadratic convergence.
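For reference, the Armijo-Goldstein rule in its standard textbook form,
stated for a minimization problem @f$\min_u f(u)@f$ with search direction
@f$\delta u^n@f$, accepts a step length @f$\alpha^n@f$ if
@f{align*}{
  f(u^n + \alpha^n\,\delta u^n) \le f(u^n)
    + c\,\alpha^n\,\nabla f(u^n)\cdot\delta u^n
@f}
for a fixed constant @f$0<c<1@f$ (a common choice is @f$c=10^{-4}@f$). The
Wolfe conditions add a second, curvature-type inequality that rules out
excessively short steps.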
We will not dwell on this here any further but leave the implementation of
such algorithms as an exercise. We note, however, that when implemented
correctly then it is a common observation that most reasonably nonlinear
problems can be solved in anywhere between 5 and 15 Newton iterations to
engineering accuracy — substantially fewer than we need with the current
version of the program.

More details on globalization methods including backtracking can be found,
for example, in Griva, Nash, Sofer: Linear and Nonlinear Optimization (2009).
<a name="Integratingmeshrefinementandnonlinearandlinearsolvers"></a><h4> Integrating mesh refinement and nonlinear and linear solvers </h4>

We currently do exactly 5 iterations on each mesh. But is this optimal? One
could ask the following questions:
- Maybe it is worthwhile doing more iterations on the initial meshes since
  there, computations are cheap.
- On the other hand, we do not want to do too many iterations on every mesh:
  yes, we could drive the residual to zero on every mesh, but that would only
  mean that the nonlinear iteration error is far smaller than the
  discretization error.
- Should we solve the linear systems in each Newton step with higher or
  lower accuracy?

Ultimately, what this boils down to is that we somehow need to couple the
discretization error on the current mesh with the nonlinear residual we want
to achieve with the Newton iterations on a given mesh, and to the linear
iteration we want to achieve with the CG method within each Newton
iteration. One possible first step in this direction is sketched below.
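To illustrate what such a coupling could look like, a purely illustrative
first step would replace the fixed five inner iterations in
<code>run()</code> by a loop that stops once the nonlinear residual has
dropped by some factor on the current mesh; the factor @f$0.1@f$ below is
an arbitrary choice, not a tuned value:

@code
// Sketch: iterate on the current mesh until the nonlinear residual
// has been reduced by a fixed factor, instead of doing exactly five
// Newton steps.
const double initial_residual = compute_residual(0);

double residual = initial_residual;
while (residual > 0.1 * initial_residual)
  {
    assemble_system();
    previous_res = system_rhs.l2_norm();
    solve();

    residual = compute_residual(0);
    std::cout << "  Residual: " << residual << std::endl;
  }
@endcode

A more principled criterion would compare the nonlinear residual against
an estimate of the discretization error on the current mesh, and
similarly relax the tolerance of the CG solver in early Newton steps.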
How to do this is, again, not entirely trivial, and we again leave it as a
future exercise.
<a name="PlainProg"></a>
<h1> The plain program</h1>
@include "step-15.cc"