# Exercises

## Problem 5.1: Determine if equations are nonlinear or not

Classify each term in the following equations as linear or nonlinear. Assume that \(u\), \(\boldsymbol{u}\), and \(p\) are unknown functions and that all other symbols are known quantities.

- \(mu^{\prime\prime} + \beta |u^{\prime}|u^{\prime} + cu = F(t)\)
- \(u_t = {\alpha} u_{xx}\)
- \(u_{tt} = c^2\nabla^2 u\)
- \(u_t = \nabla\cdot({\alpha}(u)\nabla u) + f(x,y)\)
- \(u_t + f(u)_x = 0\)
- \(\boldsymbol{u}_t + \boldsymbol{u}\cdot\nabla \boldsymbol{u} = -\nabla p + r\nabla^2\boldsymbol{u}\), \(\nabla\cdot\boldsymbol{u} = 0\) (\(\boldsymbol{u}\) is a vector field)
- \(u^{\prime} = f(u,t)\)
- \(\nabla^2 u = \lambda e^u\)

Filename: `nonlinear_vs_linear`.

## Problem 5.2: Derive and investigate a generalized logistic model

The logistic model for population growth is derived by assuming a nonlinear growth rate,

\[ u^{\prime} = a(u)u, \]

and the logistic model arises from the simplest possible choice of \(a(u)\): \(a(u)=\varrho(1 - u/M)\), where \(M\) is the maximum value of \(u\) that the environment can sustain, and \(\varrho\) is the growth rate under unlimited access to resources (as in the beginning when \(u\) is small). The idea is that \(a(u)\sim\varrho\) when \(u\) is small and that \(a(u)\rightarrow 0\) as \(u\rightarrow M\).

An \(a(u)\) that generalizes the linear choice is the polynomial form

\[ a(u) = \varrho\left(1 - \left(\frac{u}{M}\right)^p\right), \]

where \(p>0\) is some real number.

**a)**
Formulate a Forward Euler, Backward Euler, and a Crank-Nicolson
scheme for (601).

**Hint.**
Use a geometric mean approximation in the Crank-Nicolson scheme:
\([a(u)u]^{n+1/2}\approx a(u^n)u^{n+1}\).

**b)**
Formulate Picard and Newton iteration for the Backward Euler scheme in a).

**c)**
Implement the numerical solution methods from a) and b).
Use logistic.py to compare the case
\(p=1\) with the choice (602).

**d)**
Implement unit tests that check the asymptotic limit of the solutions:
\(u\rightarrow M\) as \(t\rightarrow\infty\).

**Hint.**
You need to experiment to find what “infinite time” is
(increases substantially with \(p\)) and what the
appropriate tolerance is for testing the asymptotic limit.

**e)**
Perform experiments with Newton and Picard iteration for
the model (602).
See how sensitive
the number of iterations is to \(\Delta t\) and \(p\).
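A minimal sketch of parts a)–c) for the model \(u^{\prime} = \varrho(1-(u/M)^p)u\): Forward Euler applied directly, and Backward Euler with Picard iteration on the growth-rate factor. The function name `logistic_p` and all parameter defaults are illustrative, not taken from `logistic.py`:

```python
import numpy as np

def logistic_p(u0, dt, T, rho=1.0, M=1.0, p=2.0, scheme='FE', tol=1e-10):
    """Solve u' = rho*(1 - (u/M)**p)*u by Forward or Backward Euler."""
    N = int(round(T/dt))
    u = np.zeros(N+1)
    u[0] = u0
    a = lambda v: rho*(1 - (v/M)**p)      # nonlinear growth rate a(u)
    for n in range(N):
        if scheme == 'FE':
            u[n+1] = u[n] + dt*a(u[n])*u[n]
        else:                             # Backward Euler with Picard iteration
            u_ = u[n]                     # start iterates at the previous level
            for _ in range(100):
                u_new = u[n]/(1 - dt*a(u_))
                if abs(u_new - u_) < tol:
                    break
                u_ = u_new
            u[n+1] = u_new
    return u
```

For part d), a unit test can check that the final value lies within a tolerance of \(M\) for a sufficiently long simulation time.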

Filename: `logistic_p`.

## Problem 5.3: Experience the behavior of Newton’s method

The program Newton_demo.py illustrates graphically each step in Newton’s method and is run like

```
Terminal> python Newton_demo.py f dfdx x0 xmin xmax
```

Use this program to investigate potential problems with Newton’s method when solving \(e^{-0.5x^2}\cos (\pi x)=0\). Try the starting points \(x_0=0.8\) and \(x_0=0.85\) and watch the different behavior. Just run

```
Terminal> python Newton_demo.py '0.2 + exp(-0.5*x**2)*cos(pi*x)' \
'-x*exp(-0.5*x**2)*cos(pi*x) - pi*exp(-0.5*x**2)*sin(pi*x)' \
0.85 -3 3
```

and repeat with 0.85 replaced by 0.8.
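A bare-bones Newton iteration (independent of the plotting in `Newton_demo.py`) can reproduce the experiment; the function and derivative below correspond to the strings in the command above. The `newton` helper is a sketch, with a guard against the nearly flat regions of \(f\) where the derivative approaches zero:

```python
from math import exp, cos, sin, pi

def newton(f, dfdx, x0, tol=1e-10, maxit=100):
    """Plain Newton iteration: returns (approximate root, iterations used)."""
    x = x0
    for it in range(1, maxit + 1):
        fx = f(x)
        if abs(fx) < tol:
            return x, it
        d = dfdx(x)
        if abs(d) < 1e-14:           # nearly flat f: the Newton step blows up
            return x, it
        x = x - fx/d
    return x, maxit

# the function and derivative from the command above
f    = lambda x: 0.2 + exp(-0.5*x**2)*cos(pi*x)
dfdx = lambda x: -x*exp(-0.5*x**2)*cos(pi*x) - pi*exp(-0.5*x**2)*sin(pi*x)

root1, it1 = newton(f, dfdx, 0.85)
root2, it2 = newton(f, dfdx, 0.80)
```

Comparing the returned values and iteration counts for \(x_0=0.85\) versus \(x_0=0.8\) shows how sensitive Newton’s method is to the starting point for this \(f\).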

## Exercise 5.4: Compute the Jacobian of a \(2\times 2\) system

Write up the system (537)-(538) in the form \(F(u)=0\), \(F=(F_0,F_1)\), \(u=(u_0,u_1)\), and compute the Jacobian \(J_{i,j}=\partial F_i/\partial u_j\).

## Problem 5.5: Solve nonlinear equations arising from a vibration ODE

Consider a nonlinear vibration problem

\[ mu^{\prime\prime} + bu^{\prime}|u^{\prime}| + s(u) = F(t), \]

where \(m>0\) is a constant, \(b\geq 0\) is a constant, \(s(u)\) a possibly nonlinear function of \(u\), and \(F(t)\) is a prescribed function. Such models arise from Newton’s second law of motion in mechanical vibration problems where \(s(u)\) is a spring or restoring force, \(mu^{\prime\prime}\) is mass times acceleration, and \(bu^{\prime}|u^{\prime}|\) models water or air drag.

**a)**
Rewrite the equation for \(u\) as a system of two first-order ODEs, and
discretize this system by a Crank-Nicolson (centered difference)
method. With \(v=u^\prime\), we get a nonlinear term
\(v^{n+\frac{1}{2}}|v^{n+\frac{1}{2}}|\). Use a geometric
average for \(v^{n+\frac{1}{2}}\).

**b)**
Formulate a Picard iteration method to solve the system of nonlinear
algebraic equations.
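The scheme from a) and the Picard iteration from b) can be sketched as follows for the model \(mu^{\prime\prime} + bu^{\prime}|u^{\prime}| + s(u) = F(t)\). The geometric average replaces \(v^{n+\frac{1}{2}}|v^{n+\frac{1}{2}}|\) by \(|v^n|v^{n+1}\), which makes the velocity update linear, while Picard iteration handles the nonlinearity in \(s(u)\) (all parameter values are illustrative):

```python
import numpy as np

def vib_CN_picard(T_end, dt, m=1.0, b=0.2, s=lambda u: u,
                  F=lambda t: 0.0, u0=1.0, v0=0.0, tol=1e-12):
    """Crank-Nicolson for u' = v, m*v' + b*v|v| + s(u) = F(t), with the
    geometric average v^{n+1/2}|v^{n+1/2}| ~= |v^n| v^{n+1} and Picard
    iteration on s(u)."""
    N = int(round(T_end/dt))
    u = np.zeros(N+1); v = np.zeros(N+1)
    u[0], v[0] = u0, v0
    for n in range(N):
        t_mid = (n + 0.5)*dt
        u_ = u[n]                           # Picard start value
        for _ in range(100):
            # arithmetic mean of s(u) at t_{n+1/2}, latest iterate for u^{n+1}
            s_mid = 0.5*(s(u[n]) + s(u_))
            v_new = (m*v[n]/dt - s_mid + F(t_mid))/(m/dt + b*abs(v[n]))
            u_new = u[n] + 0.5*dt*(v[n] + v_new)
            if abs(u_new - u_) < tol:
                break
            u_ = u_new
        u[n+1], v[n+1] = u_new, v_new
    return u, v
```

With a linear spring \(s(u)=u\) and \(b>0\), the quadratic drag removes energy, so the computed amplitude decays in time.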

**c)**
Explain how to apply Newton’s method to solve the nonlinear equations
at each time level. Derive expressions for the Jacobian and the
right-hand side in each Newton iteration.

Filename: `nonlin_vib`.

## Exercise 5.6: Find the truncation error of arithmetic mean of products

In the section Crank-Nicolson discretization we introduce alternative arithmetic means of a product. Say the product is \(P(t)Q(t)\) evaluated at \(t=t_{n+\frac{1}{2}}\). The exact value is

\[ [PQ]^{n+\frac{1}{2}} = P(t_{n+\frac{1}{2}})Q(t_{n+\frac{1}{2}}). \]

There are two obvious candidates for evaluating \([PQ]^{n+\frac{1}{2}}\) as a mean of values of \(P\) and \(Q\) at \(t_n\) and \(t_{n+1}\). Either we can take the arithmetic mean of each factor \(P\) and \(Q\),

\[ [PQ]^{n+\frac{1}{2}} \approx \frac{1}{2}(P^n + P^{n+1})\,\frac{1}{2}(Q^n + Q^{n+1}), \]

or we can take the arithmetic mean of the product \(PQ\):

\[ [PQ]^{n+\frac{1}{2}} \approx \frac{1}{2}(P^n Q^n + P^{n+1} Q^{n+1}). \]

The arithmetic average of \(P\) around \(t_{n+\frac{1}{2}}\) has an error of \({\mathcal{O}(\Delta t^2)}\):

\[ \frac{1}{2}(P^n + P^{n+1}) = P(t_{n+\frac{1}{2}}) + {\mathcal{O}(\Delta t^2)}. \]

A fundamental question is whether (604) and (605) have different orders of accuracy in \(\Delta t = t_{n+1}-t_n\). To investigate this question, expand quantities at \(t_{n+1}\) and \(t_n\) in Taylor series around \(t_{n+\frac{1}{2}}\), and subtract the true value \([PQ]^{n+\frac{1}{2}}\) from the approximations (604) and (605) to see what the order of the error terms are.

**Hint.**
You may use `sympy` to carry out the tedious calculations.
A general Taylor series expansion of \(P(t+\frac{1}{2}\Delta t)\) around \(t\),
involving just a general function \(P(t)\), can be
created as follows:

```
>>> from sympy import *
>>> t, dt = symbols('t dt')
>>> P = symbols('P', cls=Function)
>>> P(t).series(t, 0, 4)
P(0) + t*Subs(Derivative(P(_x), _x), (_x,), (0,)) +
t**2*Subs(Derivative(P(_x), _x, _x), (_x,), (0,))/2 +
t**3*Subs(Derivative(P(_x), _x, _x, _x), (_x,), (0,))/6 + O(t**4)
>>> P_p = P(t).series(t, 0, 4).subs(t, dt/2)
>>> P_p
P(0) + dt*Subs(Derivative(P(_x), _x), (_x,), (0,))/2 +
dt**2*Subs(Derivative(P(_x), _x, _x), (_x,), (0,))/8 +
dt**3*Subs(Derivative(P(_x), _x, _x, _x), (_x,), (0,))/48 + O(dt**4)
```

The error of the arithmetic mean, \(\frac{1}{2}(P(-\frac{1}{2}\Delta t) + P(\frac{1}{2}\Delta t))\) for \(t=0\) is then

```
>>> P_m = P(t).series(t, 0, 4).subs(t, -dt/2)
>>> mean = Rational(1,2)*(P_m + P_p)
>>> error = simplify(expand(mean) - P(0))
>>> error
dt**2*Subs(Derivative(P(_x), _x, _x), (_x,), (0,))/8 + O(dt**4)
```

Use these examples to investigate the error of
(604) and
(605) for \(n=0\). (Choosing \(n=0\)
is necessary to avoid making the expressions too complicated for `sympy`,
but there is of course no loss of generality in using \(n=0\) rather
than an arbitrary \(n\) - the main point is the product and addition
of Taylor series.)
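Building on the snippets above, the errors of the two means for \(n=0\) (so that \(t_{n+\frac{1}{2}}=0\)) can be computed directly; truncating the Taylor series after the \(\Delta t^2\) term is sufficient to identify the leading error:

```python
from sympy import symbols, Function, Rational, expand

t, dt = symbols('t dt')
P, Q = symbols('P Q', cls=Function)

def taylor_at(f, s):
    # Taylor expansion of f around t=0, truncated after the t**2 term,
    # then evaluated at t=s
    return f(t).series(t, 0, 3).removeO().subs(t, s)

P_m, P_p = taylor_at(P, -dt/2), taylor_at(P, dt/2)
Q_m, Q_p = taylor_at(Q, -dt/2), taylor_at(Q, dt/2)

true_value = P(0)*Q(0)                       # exact [PQ] at the midpoint
mean_of_factors = Rational(1, 2)*(P_m + P_p)*Rational(1, 2)*(Q_m + Q_p)
mean_of_product = Rational(1, 2)*(P_m*Q_m + P_p*Q_p)

err_factors = expand(mean_of_factors - true_value)
err_product = expand(mean_of_product - true_value)
```

Inspecting the two error expressions shows that the \(\Delta t^0\) and \(\Delta t^1\) terms vanish in both cases, so both means are second-order accurate; they differ only in the coefficient of \(\Delta t^2\).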

Filename: `product_arith_mean`.

## Problem 5.7: Newton’s method for linear problems

Suppose we have a linear system \(F(u) = Au - b = 0\). Apply Newton’s method
to this system, and show that the method converges in one iteration.
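A quick numerical check of this fact (the matrix and start vector are arbitrary illustrative choices):

```python
import numpy as np

# For F(u) = A u - b, the Jacobian is J = A itself, so one Newton step
# from any start vector u0 gives
#   u1 = u0 - A^{-1}(A u0 - b) = A^{-1} b,
# i.e. the exact solution, independently of u0.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
u0 = np.array([17.0, -4.0])          # arbitrary start vector

F = lambda u: A @ u - b
u1 = u0 - np.linalg.solve(A, F(u0))  # one Newton iteration: residual vanishes
```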
Filename: `Newton_linear`.

## Problem 5.8: Discretize a 1D problem with a nonlinear coefficient

We consider the problem

Discretize (606) by a centered
finite difference method on a uniform mesh.
Filename: `nonlin_1D_coeff_discretize`.

## Problem 5.9: Linearize a 1D problem with a nonlinear coefficient

We have a two-point boundary value problem

**a)**
Construct a Picard iteration method for (607)
without discretizing in space.

**b)**
Apply Newton’s method to (607)
without discretizing in space.

**c)**
Discretize (607) by a centered finite
difference scheme. Construct a Picard method for the resulting
system of nonlinear algebraic equations.

**d)**
Discretize (607) by a centered finite
difference scheme. Define the system of nonlinear algebraic equations,
calculate the Jacobian, and set up Newton’s method for solving the system.

Filename: `nonlin_1D_coeff_linearize`.

## Problem 5.10: Finite differences for the 1D Bratu problem

We address the so-called Bratu problem

\[ u^{\prime\prime} + \lambda e^u = 0,\quad x\in (0,1),\quad u(0)=u(1)=0, \]

where \(\lambda\) is a given parameter and \(u\) is a function of \(x\). This is a widely used model problem for studying numerical methods for nonlinear differential equations. The problem (608) has an exact solution

\[ {u_{\small\mbox{e}}}(x) = -2\ln\left(\frac{\cosh((x-\frac{1}{2})\theta/2)}{\cosh(\theta/4)}\right), \]

where \(\theta\) solves

\[ \theta = \sqrt{2\lambda}\cosh(\theta/4). \]

There are two solutions of (608) for \(0<\lambda <\lambda_c\) and no solution for \(\lambda >\lambda_c\). For \(\lambda = \lambda_c\) there is one unique solution. The critical value \(\lambda_c\) solves

\[ 1 = \frac{1}{4}\sqrt{2\lambda_c}\,\sinh(\theta(\lambda_c)/4). \]

A numerical value is \(\lambda_c = 3.513830719\).

**a)**
Discretize (608) by a
centered finite difference method.

**b)**
Set up the nonlinear equations \(F_i(u_0,u_1,\ldots,u_{N_x})=0\)
from a). Calculate the associated Jacobian.

**c)**
Implement a solver that can compute \(u(x)\) using Newton’s method.
Plot the error as a function of \(x\) in each iteration.
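A compact Newton solver for parts a)–c) might look as follows; with the start guess \(u=0\) the iteration converges to the lower solution branch (the mesh size and \(\lambda\) are illustrative choices, and the plotting of part c) is left out):

```python
import numpy as np

def bratu_newton(lmbda=1.0, Nx=50, tol=1e-10, maxit=50):
    """Solve u'' + lmbda*exp(u) = 0, u(0)=u(1)=0, by centered
    differences and Newton's method (start guess u=0: lower branch)."""
    h = 1.0/Nx
    u = np.zeros(Nx+1)                 # includes the boundary points
    for it in range(maxit):
        # residual F_i at the interior points i=1,...,Nx-1
        F = (u[:-2] - 2*u[1:-1] + u[2:])/h**2 + lmbda*np.exp(u[1:-1])
        if np.max(np.abs(F)) < tol:
            break
        # tridiagonal Jacobian dF_i/du_j
        n = Nx - 1
        J = np.zeros((n, n))
        np.fill_diagonal(J, -2.0/h**2 + lmbda*np.exp(u[1:-1]))
        idx = np.arange(n - 1)
        J[idx, idx+1] = 1.0/h**2
        J[idx+1, idx] = 1.0/h**2
        u[1:-1] -= np.linalg.solve(J, F)
    return np.linspace(0, 1, Nx+1), u, it

x, u, it = bratu_newton()
```

Since the problem and the discretization are symmetric about \(x=\frac{1}{2}\), the computed solution should be symmetric too, which makes a convenient sanity check.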

**d)**
Investigate whether Newton’s method gives second-order convergence
by computing
\(|| {u_{\small\mbox{e}}} - u||/||{u_{\small\mbox{e}}} - u^{-}||^2\)
in each iteration, where \(u\) is the solution in the current iteration and
\(u^{-}\) is the solution in the previous iteration.

Filename: `nonlin_1D_Bratu_fd`.

## Problem 5.11: Discretize a nonlinear 1D heat conduction PDE by finite differences

We address the 1D heat conduction PDE

\[ \varrho c(T) T_t = (k(T)T_x)_x, \]

for \(x\in [0,L]\), where \(\varrho\) is the density of the solid material, \(c(T)\) is the heat capacity, \(T\) is the temperature, and \(k(T)\) is the heat conduction coefficient. The initial condition is \(T(x,0)=I(x)\), and the ends are subject to a cooling law:

\[ -k(T)\frac{\partial T}{\partial n} = h(T)(T - T_s), \]

where \(h(T)\) is a heat transfer coefficient and \(T_s\) is the given surrounding temperature.

**a)**
Discretize this PDE in time using either a
Backward Euler or Crank-Nicolson scheme.

**b)**
Formulate a Picard iteration method for the time-discrete problem
(i.e., an iteration method before discretizing in space).

**c)**
Formulate a Newton method for the time-discrete problem in b).

**d)**
Discretize the PDE by a finite difference method in space.
Derive the matrix and right-hand side of a Picard iteration method applied
to the space-time discretized PDE.
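A sketch of the Picard approach in d), using Backward Euler in time; for brevity the cooling-law boundary condition is replaced by homogeneous Dirichlet conditions \(T=0\) at both ends, and \(\varrho\) is constant (the function name and all parameter defaults are illustrative):

```python
import numpy as np

def heat_BE_picard(I, c, k, L=1.0, varrho=1.0, Nx=20, dt=0.01, T_end=0.1,
                   tol=1e-10, maxit=100):
    """Backward Euler in time, centered differences in space, and Picard
    iteration on the coefficients c(T) and k(T).  For brevity, Dirichlet
    conditions T=0 replace the cooling law at both ends."""
    x = np.linspace(0, L, Nx+1)
    h = x[1] - x[0]
    T = I(x).astype(float)
    T[0] = T[-1] = 0.0
    for _ in range(int(round(T_end/dt))):
        T_ = T.copy()                          # Picard start: previous level
        for _it in range(maxit):
            km = 0.5*(k(T_[:-1]) + k(T_[1:]))  # k evaluated at the midpoints
            A = np.zeros((Nx-1, Nx-1))
            rhs = np.zeros(Nx-1)
            for i in range(1, Nx):             # interior unknowns T_1..T_{Nx-1}
                ci = varrho*c(T_[i])/dt
                A[i-1, i-1] = ci + (km[i-1] + km[i])/h**2
                if i > 1:
                    A[i-1, i-2] = -km[i-1]/h**2
                if i < Nx-1:
                    A[i-1, i] = -km[i]/h**2
                rhs[i-1] = ci*T[i]
            T_new = np.concatenate(([0.0], np.linalg.solve(A, rhs), [0.0]))
            if np.max(np.abs(T_new - T_)) < tol:
                break
            T_ = T_new
        T = T_new
    return x, T
```

With constant \(c\) and \(k\) the PDE is linear, the Picard iteration converges in two passes, and the scheme reduces to the ordinary Backward Euler method for the heat equation, which is a useful verification case.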

**e)**
Derive the matrix and right-hand side of a Newton method applied
to the discretized PDE in d).

Filename: `nonlin_1D_heat_FD`.

## Problem 5.12: Differentiate a highly nonlinear term

The operator \(\nabla\cdot({\alpha}(u)\nabla u)\) with \({\alpha}(u) = |\nabla u|^q\) appears in several physical problems, especially flow of non-Newtonian fluids. The expression \(|\nabla u|\) is defined as the Euclidean norm of a vector: \(|\nabla u|^2 = \nabla u \cdot \nabla u\). In a Newton method one has to carry out the differentiation \(\partial{\alpha}(u)/\partial c_j\), for \(u=\sum_kc_k{\psi}_k\). Show that

\[ \frac{\partial {\alpha}(u)}{\partial c_j} = q|\nabla u|^{q-2}\nabla u\cdot\nabla {\psi}_j. \]
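In 1D, where \(|\nabla u| = |u^{\prime}|\), the differentiation can be checked symbolically by writing \(|u^{\prime}|^q\) as \((u^{\prime\,2})^{q/2}\); the claimed result then reads \(\partial\alpha/\partial c_j = q|u^{\prime}|^{q-2}u^{\prime}\psi_j^{\prime}\). The two basis functions below are arbitrary illustrative choices:

```python
import sympy as sp

x, c0, c1, q = sp.symbols('x c0 c1 q')
psi0, psi1 = sp.sin(x), sp.cos(x)        # arbitrary basis functions
u = c0*psi0 + c1*psi1
up = sp.diff(u, x)

alpha = (up**2)**(q/2)                    # |u'|^q written via (u'^2)^(q/2)
lhs = sp.diff(alpha, c1)
rhs = q*(up**2)**(q/2 - 1)*up*sp.diff(psi1, x)
assert sp.simplify(lhs - rhs) == 0        # matches q|u'|^(q-2) u' psi_j'
```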

Filename: `nonlin_differentiate`.

## Exercise 5.13: Crank-Nicolson for a nonlinear 3D diffusion equation

Redo the section Finite difference discretization when a Crank-Nicolson scheme is used to discretize the equations in time and the problem is formulated for three spatial dimensions.

**Hint.**
Express the Jacobian as \(J_{i,j,k,r,s,t} = \partial F_{i,j,k}/\partial u_{r,s,t}\) and observe, as in the 2D case, that \(J_{i,j,k,r,s,t}\) is very sparse:
\(J_{i,j,k,r,s,t}\neq 0\) only for \(r=i\pm 1\), \(s=j\pm 1\), and \(t=k\pm 1\)
as well as \(r=i\), \(s=j\), and \(t=k\).

Filename: `nonlin_heat_FD_CN_2D`.

## Problem 5.14: Find the sparsity of the Jacobian

Consider a typical nonlinear Laplace term like \(\nabla\cdot{\alpha}(u)\nabla u\) discretized by centered finite differences. Explain why the Jacobian corresponding to this term has the same sparsity pattern as the matrix associated with the corresponding linear term \({\alpha}\nabla^2 u\).

**Hint.**
Set up the unknowns that enter the difference equation at a
point \((i,j)\) in 2D or \((i,j,k)\) in 3D, and identify the
nonzero entries of the Jacobian that can arise from such a type
of difference equation.

Filename: `nonlin_sparsity_Jacobian`.

## Problem 5.15: Investigate a 1D problem with a continuation method

Flow of a pseudo-plastic power-law fluid between two flat plates can be modeled by

\[ \left(\mu_0\left|u^{\prime}\right|^{n-1}u^{\prime}\right)^{\prime} = -\beta,\quad u^{\prime}(0)=0,\quad u(H)=0, \]

where \(\beta>0\) and \(\mu_0>0\) are constants, and \(n\) is a real parameter. A target value of \(n\) may be \(n=0.2\).

**a)**
Formulate a Picard iteration method directly for the differential
equation problem.

**b)**
Perform a finite difference discretization of the problem in
each Picard iteration. Implement a solver that can compute \(u\)
on a mesh. Verify that the solver gives an exact solution for \(n=1\)
on a uniform mesh regardless of the cell size.

**c)**
Given a sequence of decreasing \(n\) values, solve the problem for each
\(n\) using the solution for the previous \(n\) as initial guess for
the Picard iteration. This is called a continuation method.
Experiment with \(n=(1,0.6,0.2)\) and \(n=(1,0.9,0.8,\ldots,0.2)\)
and make a table of the number of Picard iterations versus \(n\).
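A sketch of parts b) and c), assuming the model \((\mu_0|u^{\prime}|^{n-1}u^{\prime})^{\prime} = -\beta\) with \(u^{\prime}(0)=0\) and \(u(H)=0\). The effective viscosity is evaluated at cell midpoints from the previous iterate, with \(|u^{\prime}|\) regularized by a small \(\epsilon\) to avoid division by zero where the velocity profile is flat; the helper name and all parameter values are illustrative:

```python
import numpy as np

def picard_power_law(n, u_init, beta=1.0, mu0=1.0, H=1.0,
                     eps=1e-8, tol=1e-8, maxit=500):
    """Picard iteration for (mu0*|u'|**(n-1)*u')' = -beta,
    u'(0)=0, u(H)=0, on a uniform mesh (|u'| regularized by eps)."""
    N = len(u_init) - 1
    h = H/N
    u_ = u_init.copy()
    for it in range(1, maxit + 1):
        dudx = (u_[1:] - u_[:-1])/h                    # gradient at midpoints
        mu = mu0*np.maximum(np.abs(dudx), eps)**(n - 1)
        A = np.zeros((N, N))
        rhs = np.full(N, -beta)
        A[0, 0], A[0, 1] = -2*mu[0]/h**2, 2*mu[0]/h**2  # symmetry at x=0
        for i in range(1, N):                           # interior equations
            A[i, i-1] = mu[i-1]/h**2
            A[i, i] = -(mu[i-1] + mu[i])/h**2
            if i < N - 1:
                A[i, i+1] = mu[i]/h**2
        u = np.concatenate((np.linalg.solve(A, rhs), [0.0]))
        if np.max(np.abs(u - u_)) < tol:
            return u, it
        u_ = u
    return u_, maxit

# Continuation: use the solution for one n as start guess for the next
N = 50
u = np.zeros(N + 1)
for n in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2):
    u, it = picard_power_law(n, u)
```

For \(n=1\) the viscosity is constant and the scheme reproduces the exact parabolic profile \(u=\beta(H^2-x^2)/(2\mu_0)\) regardless of the mesh; for small \(n\), expect the iteration counts to grow, which is the motivation for the continuation strategy.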

**d)**
Derive a Newton method at the differential equation level and
discretize the resulting linear equations in each Newton iteration
with the finite difference method.

**e)**
Investigate whether Newton’s method has better convergence properties than
Picard iteration, with both methods combined with a continuation method.