Numerical solution of nonlinear equations
Several of the equations derived in Similarity solutions (e.g., Eq. (45)) are nonlinear, and no analytical solutions are known. Very few techniques are available for solving nonlinear equations analytically, so for real fluid flows we usually have to rely on numerical methods. The Reynolds Averaged Navier-Stokes equations, which will be discussed thoroughly in chapter 6 of White [Whi06], are basically the Navier-Stokes equations with a nonlinear viscosity coefficient. In this section we consider two of the most important techniques for iteratively solving nonlinear equations: Picard and Newton iterations.
The general procedure for obtaining solutions to nonlinear equations is to guess an initial solution and then successively compute new, and hopefully better, approximations to the solution. This is illustrated nicely by Newton’s method (also called the Newton-Raphson method). Consider a nonlinear function \(f(x)\) of one variable \(x\) (e.g., \(f(x)=x^2-1\) or \(f(x)=x \sin(x)-1\)), where one is interested in finding the roots \(x\) such that \(f(x)=0\). Newton’s method boils down to making an initial guess \(x_0\), which in general does not satisfy the equation (i.e., \(f(x_0) \neq 0\)), and from this initial guess successively computing better approximations to the root:
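\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots
\]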
That is, compute \(x_1\) from \(x_0\) and check whether the solution is close enough to the root. If not, compute \(x_2\) using \(x_1\) as the new starting point. Repeat until \(f(x_n)\approx 0\).
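As a minimal illustration, the procedure can be sketched in plain Python for \(f(x)=x\sin(x)-1\) from above (the starting guess and tolerance here are arbitrary choices):

from math import sin, cos

def f(x):
    return x*sin(x) - 1

def dfdx(x):
    # analytical derivative of f
    return sin(x) + x*cos(x)

x = 1.0                        # initial guess x_0
for n in range(100):
    x = x - f(x)/dfdx(x)       # Newton update
    if abs(f(x)) < 1e-12:      # stop when f(x_n) is close enough to zero
        break
print(n + 1, x)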
When PDEs are solved numerically, discretization leads to a system of many equations in many unknowns. Newton’s method extends easily to systems of equations as well. Consider two equations with two unknowns, written compactly as
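\[
F(\overline{x}) = 0,
\]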
where \(\overline{x} = (x, y)^T\). The vector \(F\) has two components (the two equations) and there are two unknowns. We can compute the derivative of \(F\) with respect to \(\overline{x}\)
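\[
J = \frac{\partial F}{\partial \overline{x}} =
\begin{pmatrix}
\dfrac{\partial F_1}{\partial x} & \dfrac{\partial F_1}{\partial y} \\[2mm]
\dfrac{\partial F_2}{\partial x} & \dfrac{\partial F_2}{\partial y}
\end{pmatrix}.
\]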
The derivative is often called the Jacobian. Newton’s method for these two equations (or any system of equations) is simply
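\[
\overline{x}^{k+1} = \overline{x}^{k} - J^{-1}\left(\overline{x}^{k}\right) F\left(\overline{x}^{k}\right),
\]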
or with index notation for equation \(i\)
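\[
x_i^{k+1} = x_i^{k} - \left(J^{k}\right)^{-1}_{ij} F_j^{k},
\]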
where the \(k\) index signals that both \(J\) and \(F\) are computed using the solution at iteration step \(k\). There is no summation on repeated \(k\)’s.
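To make this concrete, here is a minimal NumPy sketch of Newton’s method for a hypothetical system of two equations, the circle \(x^2+y^2=1\) and the parabola \(y=x^2\) (the system and starting guess are chosen purely for illustration):

import numpy as np

def F(x):
    # hypothetical system: x^2 + y^2 - 1 = 0 and y - x^2 = 0
    return np.array([x[0]**2 + x[1]**2 - 1,
                     x[1] - x[0]**2])

def J(x):
    # Jacobian J[i, j] = dF_i / dx_j
    return np.array([[ 2*x[0], 2*x[1]],
                     [-2*x[0], 1.0]])

x = np.array([1.0, 1.0])                 # initial guess
for k in range(100):
    dx = np.linalg.solve(J(x), -F(x))    # solve J dx = -F rather than inverting J
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(k + 1, x)

In practice one always solves the linear system \(J\,\delta x = -F\) for the update, as above, rather than forming \(J^{-1}\) explicitly.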
A discrete finite element solution of the function \(u\) can be written as
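\[
u = \sum_{i=0}^{N} u_i v_i,
\]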
where \(v_i\) are the testfunctions. The solution has \(N+1\) unknowns \(u_i\), or “degrees of freedom”, which for a linear Lagrange element are the values at the vertices of the mesh. When a PDE containing both trial- and testfunctions is assembled in FEniCS, a set of \(N+1\) linear equations arises for these unknowns. We will now see how nonlinear equations can be handled by FEniCS.
Nonlinear Poisson equation
Consider the nonlinear Poisson equation
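\[
-\nabla \cdot \left( q(u)\, \nabla u \right) = f(u),
\]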
where \(q\) and \(f\) may be nonlinear functions of \(u\). Using a testfunction \(v\) and neglecting the boundary term, the variational form reads
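\[
F(u; v) = \int_{\Omega} q(u)\, \nabla u \cdot \nabla v \,dx - \int_{\Omega} f(u)\, v \,dx = 0.
\]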
This is a nonlinear variational form and we need to solve it iteratively.
Newton’s method
Consider first Newton’s method for solving the nonlinear Poisson equation. We use the notation \(u^k\) for the known solution at iteration step \(k\) and \(u\) for the unknown trialfunction. For Newton’s method we write the variational form solely in terms of known functions
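\[
F(u^k; v) = \int_{\Omega} q(u^k)\, \nabla u^k \cdot \nabla v \,dx - \int_{\Omega} f(u^k)\, v \,dx.
\]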
\(F(u^k;v)\) is a linear form in that it does not contain the unknown \(u\), only the testfunction \(v\). It is, however, a nonlinear form in terms of the known Function \(u^k\). The Jacobian of \(F(u^k;v)\) is computed in FEniCS as
u_ = Function(V)           # known solution at iteration step k
u = TrialFunction(V)       # unknown
J = derivative(F, u_, u)   # differentiate F with respect to u_ in the direction of u
A complete implementation that solves the nonlinear Poisson equation on the unit interval \(x \in [0, 1]\) is shown below, where the use of this J should be clear:
from dolfin import *

mesh = UnitIntervalMesh(10)
V = FunctionSpace(mesh, 'CG', 1)
u = TrialFunction(V)
v = TestFunction(V)

# Dirichlet boundary conditions u(0) = 0 and u(1) = 1
bc0 = DirichletBC(V, Constant(0), "std::abs(x[0]) < 1e-10")
bc1 = DirichletBC(V, Constant(1), "std::abs(x[0]-1) < 1e-10")

def q(u):
    return 1 + u**4

# Initial guess that satisfies the boundary conditions
u_ = interpolate(Expression("x[0]", degree=1), V)
F = inner(grad(v), q(u_)*grad(u_))*dx
# solve(F == 0, u_, [bc0, bc1])   # compact alternative to the loop below

J = derivative(F, u_, u)
du = Function(V)
error = 1; k = 0
while k < 100 and error > 1e-12:
    A = assemble(J)
    b = assemble(-F)
    [bc.apply(A, b, u_.vector()) for bc in [bc0, bc1]]
    solve(A, du.vector(), b)            # solve J du = -F
    u_.vector().axpy(1., du.vector())   # u^{k+1} = u^k + du
    error = norm(du.vector())
    k += 1
    print("Iteration {} Error = {}".format(k, error))
Iteration 1 Error = 0.20207598526678652
Iteration 2 Error = 0.006786044472018965
Iteration 3 Error = 1.626130021884984e-05
Iteration 4 Error = 1.1905550866645789e-10
Iteration 5 Error = 8.698631493053902e-17
Note that everything below the commented-out line solve(F == 0, u_, [bc0, bc1]) can actually be replaced by this very compact call. The part from J = derivative(F, u_, u) onwards is included here simply to illustrate how Newton’s method works in practice.
Picard iterations
For Picard iterations we have to linearize the variational form in Eq. (71) ourselves. Let us first linearize such that the coefficient is known:
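\[
F(u; v) = \int_{\Omega} q(u^k)\, \nabla u \cdot \nabla v \,dx - \int_{\Omega} f\, v \,dx = 0.
\]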
The first term in \(F(u;v)\) is bilinear in that it contains both the trialfunction \(u\) and the testfunction \(v\). The equation is thus linear in the unknown \(u\), which is necessary for FEniCS to accept it. Note that the function \(f\) could be linearized in \(u\) as well. For example, if \(f(u) = u^2\), then we can linearize it as \(f(u) \approx u \cdot u^k\). Picard iterations are implemented like Newton’s method up to the point where the form is created. The remaining part of the Picard code reads:
u_ = interpolate(Expression("x[0]", degree=1), V)
F = inner(grad(v), q(u_)*grad(u))*dx   # coefficient q(u_) is known, u is the unknown

k = 0
u_1 = Function(V)   # solution at previous iteration
error = 1
while k < 100 and error > 1e-12:
    solve(F == Constant(0)*v*dx, u_, [bc0, bc1])
    error = errornorm(u_, u_1)   # change between two consecutive iterations
    u_1.assign(u_)
    k += 1
    print("Iteration {} Error = {}".format(k, error))
Iteration 1 Error = 0.6227202551171986
Iteration 2 Error = 0.009279110924024897
Iteration 3 Error = 0.0009571365383617912
Iteration 4 Error = 7.170672306756823e-05
Iteration 5 Error = 1.6196475252925717e-05
Iteration 6 Error = 1.6051762319950847e-06
Iteration 7 Error = 1.5867106937166712e-07
Iteration 8 Error = 3.2011735458409474e-08
Iteration 9 Error = 2.6184233711820267e-09
Iteration 10 Error = 3.617146938878996e-10
Iteration 11 Error = 6.066104630273271e-11
Iteration 12 Error = 4.281972850229132e-12
Iteration 13 Error = 8.043617961021526e-13
The Function u_1 holds the previous solution, and u_1.assign(u_) copies all values from the newly computed u_ to u_1.
Note that Newton’s method requires 5 iterations to converge, whereas Picard requires 13. This is quite typical. Newton’s method is known to be very efficient when it actually finds the solution, but it often diverges if the initial guess is not close enough to the final solution. Picard is known to approach the solution more slowly, but in return it is usually more robust, in that it finds the solution from a broader range of initial conditions.
- Whi06
Frank M. White. Viscous Fluid Flow. McGraw-Hill, third edition, 2006.