$$ \newcommand{\half}{\frac{1}{2}} \newcommand{\halfi}{{1/2}} \newcommand{\tp}{\thinspace .} \newcommand{\uex}{{u_{\small\mbox{e}}}} \newcommand{\uexd}[1]{{u_{\small\mbox{e}, #1}}} \newcommand{\vex}{{v_{\small\mbox{e}}}} \newcommand{\Aex}{{A_{\small\mbox{e}}}} \newcommand{\E}[1]{\hbox{E}\lbrack #1 \rbrack} \newcommand{\Var}[1]{\hbox{Var}\lbrack #1 \rbrack} \newcommand{\xpoint}{\boldsymbol{x}} \newcommand{\normalvec}{\boldsymbol{n}} \newcommand{\Oof}[1]{\mathcal{O}(#1)} \newcommand{\x}{\boldsymbol{x}} \renewcommand{\u}{\boldsymbol{u}} \renewcommand{\v}{\boldsymbol{v}} \newcommand{\acc}{\boldsymbol{a}} \newcommand{\rpos}{\boldsymbol{r}} \newcommand{\e}{\boldsymbol{e}} \newcommand{\f}{\boldsymbol{f}} \newcommand{\F}{\boldsymbol{F}} \newcommand{\stress}{\boldsymbol{\sigma}} \newcommand{\I}{\boldsymbol{I}} \newcommand{\T}{\boldsymbol{T}} \newcommand{\q}{\boldsymbol{q}} \newcommand{\g}{\boldsymbol{g}} \newcommand{\dfc}{\alpha} % diffusion coefficient \newcommand{\ii}{\boldsymbol{i}} \newcommand{\jj}{\boldsymbol{j}} \newcommand{\kk}{\boldsymbol{k}} \newcommand{\ir}{\boldsymbol{i}_r} \newcommand{\ith}{\boldsymbol{i}_{\theta}} \newcommand{\Ix}{\mathcal{I}_x} \newcommand{\Iy}{\mathcal{I}_y} \newcommand{\Iz}{\mathcal{I}_z} \newcommand{\It}{\mathcal{I}_t} \newcommand{\setb}[1]{#1^0} % set begin \newcommand{\sete}[1]{#1^{-1}} % set end \newcommand{\setl}[1]{#1^-} \newcommand{\setr}[1]{#1^+} \newcommand{\seti}[1]{#1^i} \newcommand{\stepzero}{*} \newcommand{\stephalf}{***} \newcommand{\stepone}{**} \newcommand{\baspsi}{\psi} \newcommand{\dx}{\, \mathrm{d}x} \newcommand{\ds}{\, \mathrm{d}s} \newcommand{\Real}{\mathbb{R}} \newcommand{\Integer}{\mathbb{Z}} $$

 

 

 

Systems of nonlinear algebraic equations

Implicit time discretization methods for a system of ODEs, or a PDE, lead to systems of nonlinear algebraic equations, written compactly as $$ F(u) = 0,$$ where \( u \) is a vector of unknowns \( u=(u_0,\ldots,u_N) \), and \( F \) is a vector function: \( F=(F_0,\ldots,F_N) \). The system at the end of the section Systems of ODEs fits this notation with \( N=2 \), \( F_0(u) \) given by the left-hand side of (5.18), while \( F_1(u) \) is the left-hand side of (5.19).

Sometimes the equation system has a special structure because of the underlying problem, e.g., $$ A(u)u = b(u),$$ with \( A(u) \) as an \( (N+1)\times (N+1) \) matrix function of \( u \) and \( b \) as a vector function: \( b=(b_0,\ldots,b_N) \).

We shall next explain how Picard iteration and Newton's method can be applied to systems like \( F(u)=0 \) and \( A(u)u=b(u) \). The exposition has a focus on ideas and practical computations. More theoretical considerations, including quite general results on convergence properties of these methods, can be found in Kelley [18].

Picard iteration

We cannot apply Picard iteration to nonlinear equations unless there is some special structure. For the commonly arising case \( A(u)u=b(u) \) we can linearize the product \( A(u)u \) to \( A(u^{-})u \) and \( b(u) \) as \( b(u^{-}) \). That is, we use the most recently computed approximation in \( A \) and \( b \) to arrive at a linear system for \( u \): $$ A(u^{-})u = b(u^{-})\tp$$ A relaxed iteration takes the form $$ A(u^{-})u^* = b(u^{-}),\quad u = \omega u^* + (1-\omega)u^{-}\tp$$ In other words, we solve a system of nonlinear algebraic equations as a sequence of linear systems.

Algorithm for relaxed Picard iteration. Given \( A(u)u=b(u) \) and an initial guess \( u^{-} \), iterate until convergence:
  1. solve \( A(u^{-})u^* = b(u^{-}) \) with respect to \( u^* \)
  2. \( u = \omega u^* + (1-\omega) u^{-} \)
  3. \( u^{-}\ \leftarrow\ u \)

"Until convergence" means that the iteration is stopped when the change in the unknown, \( ||u - u^{-}|| \), or the residual, \( ||A(u)u-b(u)|| \), is sufficiently small; see the section Stopping criteria for more details.
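The three-step algorithm above can be sketched in Python with NumPy. This is a minimal illustration, not code from the text: the function name `picard` and the small \( 2\times 2 \) test problem at the end are made up for demonstration.

```python
import numpy as np

def picard(A, b, u0, omega=1.0, tol=1e-10, max_iter=100):
    """Relaxed Picard iteration for A(u)u = b(u).

    A(u) returns an (N+1)x(N+1) matrix, b(u) a vector of length N+1.
    Returns the approximate solution and the number of iterations used.
    """
    u_ = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        u_star = np.linalg.solve(A(u_), b(u_))  # step 1: linear solve
        u = omega*u_star + (1 - omega)*u_       # step 2: relaxation
        if np.linalg.norm(u - u_) < tol:        # change in the unknown
            return u, k + 1
        u_ = u                                  # step 3: update u^-
    return u_, max_iter

# Hypothetical test problem: (1+u0)u0 = 2, (2+u1)u1 = 3, solution u = (1, 1)
A = lambda u: np.array([[1 + u[0], 0.0], [0.0, 2 + u[1]]])
b = lambda u: np.array([2.0, 3.0])
u, k = picard(A, b, u0=[0.0, 0.0])
```

Since each component here satisfies a contractive fixed-point relation, the iteration converges linearly to \( u=(1,1) \).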

Newton's method

The natural starting point for Newton's method is the general nonlinear vector equation \( F(u)=0 \). As for a scalar equation, the idea is to approximate \( F \) around a known value \( u^{-} \) by a linear function \( \hat F \), calculated from the first two terms of a Taylor expansion of \( F \). In the multi-variate case these two terms become $$ F(u^{-}) + J(u^{-}) \cdot (u - u^{-}),$$ where \( J \) is the Jacobian of \( F \), defined by $$ J_{i,j} = \frac{\partial F_i}{\partial u_j}\tp$$ So, the original nonlinear system is approximated by $$ \hat F(u) = F(u^{-}) + J(u^{-}) \cdot (u - u^{-})=0,$$ which is linear in \( u \) and can be solved in a two-step procedure: first solve \( J\delta u = -F(u^{-}) \) with respect to the vector \( \delta u \) and then update \( u = u^{-} + \delta u \). A relaxation parameter can easily be incorporated: $$ u = \omega(u^{-} +\delta u) + (1-\omega)u^{-} = u^{-} + \omega\delta u\tp $$

Algorithm for Newton's method. Given \( F(u)=0 \) and an initial guess \( u^{-} \), iterate until convergence:
  1. solve \( J\delta u = -F(u^{-}) \) with respect to \( \delta u \)
  2. \( u = u^{-} + \omega\delta u \)
  3. \( u^{-}\ \leftarrow\ u \)
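The Newton algorithm can be sketched in the same style (again a minimal illustration; the function name `newton_system` and the test problem are made up, with a hand-coded Jacobian):

```python
import numpy as np

def newton_system(F, J, u0, omega=1.0, tol=1e-10, max_iter=50):
    """Newton's method for F(u) = 0 with Jacobian J(u) and relaxation omega."""
    u_ = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        delta = np.linalg.solve(J(u_), -F(u_))  # step 1: J delta u = -F(u^-)
        u = u_ + omega*delta                    # step 2: relaxed update
        if np.linalg.norm(delta) < tol:
            return u, k + 1
        u_ = u                                  # step 3: update u^-
    return u_, max_iter

# Hypothetical test: F = (u0^2 + u0 - 2, u1^2 + 2*u1 - 3), with root u = (1, 1)
F = lambda u: np.array([u[0]**2 + u[0] - 2, u[1]**2 + 2*u[1] - 3])
J = lambda u: np.array([[2*u[0] + 1, 0.0], [0.0, 2*u[1] + 2]])
u, k = newton_system(F, J, u0=[2.0, 2.0])
```

With \( \omega=1 \) and a good start value the convergence is quadratic, so only a handful of iterations are needed.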

For the special system with structure \( A(u)u=b(u) \), $$ F_i = \sum_k A_{i,k}(u)u_k - b_i(u),$$ one gets $$ \begin{equation} J_{i,j} = \sum_k \frac{\partial A_{i,k}}{\partial u_j}u_k + A_{i,j} - \frac{\partial b_i}{\partial u_j}\tp \tag{5.20} \end{equation} $$ We realize that the Jacobian needed in Newton's method consists of \( A(u^{-}) \) as in the Picard iteration plus two additional terms arising from the differentiation. Using the notation \( A^{\prime}(u) \) for \( \partial A/\partial u \) (a quantity with three indices: \( \partial A_{i,k}/\partial u_j \)), and \( b^{\prime}(u) \) for \( \partial b/\partial u \) (a quantity with two indices: \( \partial b_i/\partial u_j \)), we can write the linear system to be solved as $$ (A + A^{\prime}u - b^{\prime})\delta u = -Au + b,$$ or $$ (A(u^{-}) + A^{\prime}(u^{-})u^{-} - b^{\prime}(u^{-}))\delta u = -A(u^{-})u^{-} + b(u^{-})\tp$$ Rearranging the terms demonstrates the difference from the system solved in each Picard iteration: $$ \underbrace{A(u^{-})(u^{-}+\delta u) - b(u^{-})}_{\hbox{Picard system}} +\, \gamma (A^{\prime}(u^{-})u^{-} - b^{\prime}(u^{-}))\delta u = 0\tp$$ Here we have inserted a parameter \( \gamma \) such that \( \gamma=0 \) gives the Picard system and \( \gamma=1 \) gives the Newton system. Such a parameter can be handy in software to easily switch between the methods.

Combined algorithm for Picard and Newton iteration. Given \( A(u) \), \( b(u) \), and an initial guess \( u^{-} \), iterate until convergence:
  1. solve \( (A(u^{-}) + \gamma(A^{\prime}(u^{-})u^{-} - b^{\prime}(u^{-})))\delta u = -A(u^{-})u^{-} + b(u^{-}) \) with respect to \( \delta u \)
  2. \( u = u^{-} + \omega\delta u \)
  3. \( u^{-}\ \leftarrow\ u \)
\( \gamma =1 \) gives a Newton method while \( \gamma =0 \) corresponds to Picard iteration.
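The combined algorithm can be sketched as a single Python function with \( \gamma \) as a parameter (a minimal illustration; the function name `picard_newton` and the \( 2\times 2 \) test problem are made up, and `A_prime_u(u)` is assumed to return the matrix with entries \( \sum_k (\partial A_{i,k}/\partial u_j)u_k \)):

```python
import numpy as np

def picard_newton(A, b, A_prime_u, b_prime, u0, gamma=1.0, omega=1.0,
                  tol=1e-10, max_iter=100):
    """Combined Picard (gamma=0) / Newton (gamma=1) iteration for A(u)u = b(u)."""
    u_ = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        # Coefficient matrix: A + gamma*(A'u - b')
        M = A(u_) + gamma*(A_prime_u(u_) - b_prime(u_))
        delta = np.linalg.solve(M, b(u_) - A(u_) @ u_)  # step 1
        u = u_ + omega*delta                            # step 2
        if np.linalg.norm(delta) < tol:
            return u, k + 1
        u_ = u                                          # step 3
    return u_, max_iter

# Hypothetical test problem: (1+u0)u0 = 2, (2+u1)u1 = 3, solution u = (1, 1)
A = lambda u: np.array([[1 + u[0], 0.0], [0.0, 2 + u[1]]])
b = lambda u: np.array([2.0, 3.0])
A_prime_u = lambda u: np.array([[u[0], 0.0], [0.0, u[1]]])
b_prime = lambda u: np.zeros((2, 2))
u_newton, _ = picard_newton(A, b, A_prime_u, b_prime, [0.0, 0.0], gamma=1.0)
u_picard, _ = picard_newton(A, b, A_prime_u, b_prime, [0.0, 0.0], gamma=0.0)
```

Both choices of \( \gamma \) converge to the same solution; they differ only in the number of iterations required.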

Stopping criteria

Let \( ||\cdot|| \) be the standard Euclidean vector norm. Four termination criteria are much in use:
  1. absolute change in solution: \( ||u - u^{-}|| \leq \epsilon_u \)
  2. relative change in solution: \( ||u - u^{-}|| \leq \epsilon_u ||u_0|| \), where \( u_0 \) denotes the start value of \( u \) in the iteration
  3. absolute residual: \( ||F(u)|| \leq \epsilon_r \)
  4. relative residual: \( ||F(u)|| \leq \epsilon_r ||F(u_0)|| \)

To prevent divergent iterations from running forever, one terminates the iterations when the current number of iterations \( k \) exceeds a maximum value \( k_{\max} \).

The relative criteria are the most widely used since they are not sensitive to the characteristic size of \( u \). Nevertheless, the relative criteria can be misleading when the initial start value for the iteration is very close to the solution, since an unnecessary reduction in the error measure is then enforced. In such cases the absolute criteria work better. It is common to combine the absolute and relative measures of the size of the residual, as in $$ \begin{equation} ||F(u)|| \leq \epsilon_{rr} ||F(u_0)|| + \epsilon_{ra}, \tag{5.21} \end{equation} $$ where \( \epsilon_{rr} \) is the tolerance in the relative criterion and \( \epsilon_{ra} \) is the tolerance in the absolute criterion. With a very good initial guess for the iteration (typically the solution of a differential equation at the previous time level), the term \( ||F(u_0)|| \) is small and \( \epsilon_{ra} \) is the dominating tolerance. Otherwise, \( \epsilon_{rr} ||F(u_0)|| \gg \epsilon_{ra} \) and the relative criterion dominates.

With the change in solution as criterion we can formulate a combined absolute and relative measure of the change in the solution: $$ \begin{equation} ||\delta u|| \leq \epsilon_{ur} ||u_0|| + \epsilon_{ua}, \tag{5.22} \end{equation} $$

The ultimate termination criterion, combining the residual and the change in solution with a test on the maximum number of iterations, can be expressed as $$ \begin{equation} ||F(u)|| \leq \epsilon_{rr} ||F(u_0)|| + \epsilon_{ra} \quad\hbox{or}\quad ||\delta u|| \leq \epsilon_{ur} ||u_0|| + \epsilon_{ua} \quad\hbox{or}\quad k>k_{\max}\tp \tag{5.23} \end{equation} $$
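Criterion (5.23) is straightforward to express as a small Python function (a sketch; the function and parameter names are illustrative, and the default tolerances are arbitrary choices):

```python
import numpy as np

def converged(F_u, F_u0, du, u0, k, k_max=100,
              eps_rr=1e-6, eps_ra=1e-8, eps_ur=1e-6, eps_ua=1e-8):
    """Combined stopping criterion (5.23): stop when the residual test or
    the change-in-solution test passes, or when k exceeds k_max."""
    residual_ok = np.linalg.norm(F_u) <= eps_rr*np.linalg.norm(F_u0) + eps_ra
    change_ok = np.linalg.norm(du) <= eps_ur*np.linalg.norm(u0) + eps_ua
    return residual_ok or change_ok or k > k_max
```

In a solver loop one would call `converged(F(u), F(u_start), u - u_, u_start, k)` once per iteration.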

Example: A nonlinear ODE model from epidemiology

The simplest model for the spreading of a disease, such as the flu, takes the form of a \( 2\times 2 \) ODE system $$ \begin{align} S^{\prime} &= -\beta SI, \tag{5.24}\\ I^{\prime} &= \beta SI - \nu I, \tag{5.25} \end{align} $$ where \( S(t) \) is the number of people who can get ill (susceptibles) and \( I(t) \) is the number of people who are ill (infected). The constants \( \beta >0 \) and \( \nu >0 \) must be given along with initial conditions \( S(0) \) and \( I(0) \).

Implicit time discretization

A Crank-Nicolson scheme leads to a \( 2\times 2 \) system of nonlinear algebraic equations in the unknowns \( S^{n+1} \) and \( I^{n+1} \): $$ \begin{align} \frac{S^{n+1}-S^n}{\Delta t} &= -\beta [SI]^{n+\half} \approx -\frac{\beta}{2}(S^nI^n + S^{n+1}I^{n+1}), \tag{5.26}\\ \frac{I^{n+1}-I^n}{\Delta t} &= \beta [SI]^{n+\half} - \nu I^{n+\half} \approx \frac{\beta}{2}(S^nI^n + S^{n+1}I^{n+1}) - \frac{\nu}{2}(I^n + I^{n+1})\tp \tag{5.27} \end{align} $$ Introducing \( S \) for \( S^{n+1} \), \( S^{(1)} \) for \( S^n \), \( I \) for \( I^{n+1} \), \( I^{(1)} \) for \( I^n \), we can rewrite the system as $$ \begin{align} F_S(S,I) &= S - S^{(1)} + \half\Delta t\beta(S^{(1)}I^{(1)} + SI) = 0, \tag{5.28}\\ F_I(S,I) &= I - I^{(1)} - \half\Delta t\beta(S^{(1)}I^{(1)} + SI) + \half\Delta t\nu(I^{(1)} + I) =0\tp \tag{5.29} \end{align} $$

A Picard iteration

We assume that we have approximations \( S^{-} \) and \( I^{-} \) to \( S \) and \( I \), respectively. A way of linearizing the only nonlinear term \( SI \) is to write \( I^{-}S \) in the \( F_S=0 \) equation and \( S^{-}I \) in the \( F_I=0 \) equation, which also decouples the equations. Solving the resulting linear equations with respect to the unknowns \( S \) and \( I \) gives $$ \begin{align*} S &= \frac{S^{(1)} - \half\Delta t\beta S^{(1)}I^{(1)}} {1 + \half\Delta t\beta I^{-}}, \\ I &= \frac{I^{(1)} + \half\Delta t\beta S^{(1)}I^{(1)} - \half\Delta t\nu I^{(1)}} {1 - \half\Delta t\beta S^{-} + \half\Delta t \nu}\tp \end{align*} $$ Before a new iteration, we must update \( S^{-}\ \leftarrow\ S \) and \( I^{-}\ \leftarrow\ I \).
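The update formulas can be wrapped in a small Python function that advances the solution one time step (a sketch; the function name `picard_step` and the concrete parameter values in the call are made up for illustration):

```python
def picard_step(S1, I1, beta, nu, dt, tol=1e-12, max_iter=100):
    """One Crank-Nicolson step for (5.24)-(5.25), solved by Picard iteration.

    S1, I1 are the values at the previous time level (S^(1), I^(1)).
    """
    S_, I_ = S1, I1  # start the iteration at the previous time level
    for _ in range(max_iter):
        S = (S1 - 0.5*dt*beta*S1*I1) / (1 + 0.5*dt*beta*I_)
        I = (I1 + 0.5*dt*beta*S1*I1 - 0.5*dt*nu*I1) / \
            (1 - 0.5*dt*beta*S_ + 0.5*dt*nu)
        if abs(S - S_) + abs(I - I_) < tol:  # change in the unknowns
            break
        S_, I_ = S, I                        # update S^-, I^-
    return S, I

# Illustrative parameter values (not from the text)
S, I = picard_step(S1=1500.0, I1=1.0, beta=0.0005, nu=0.1, dt=0.5)
```

At convergence the returned \( S \) and \( I \) satisfy \( F_S(S,I)=F_I(S,I)=0 \) in (5.28)-(5.29) to within the tolerance.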

Newton's method

The nonlinear system (5.28)-(5.29) can be written as \( F(u)=0 \) with \( F=(F_S,F_I) \) and \( u=(S,I) \). The Jacobian becomes $$ J = \left(\begin{array}{cc} \frac{\partial}{\partial S} F_S & \frac{\partial}{\partial I}F_S\\ \frac{\partial}{\partial S} F_I & \frac{\partial}{\partial I}F_I \end{array}\right) = \left(\begin{array}{cc} 1 + \half\Delta t\beta I & \half\Delta t\beta S\\ - \half\Delta t\beta I & 1 - \half\Delta t\beta S + \half\Delta t\nu \end{array}\right) \tp $$ The Newton system \( J(u^{-})\delta u = -F(u^{-}) \) to be solved in each iteration is then $$ \begin{align*} & \left(\begin{array}{cc} 1 + \half\Delta t\beta I^{-} & \half\Delta t\beta S^{-}\\ - \half\Delta t\beta I^{-} & 1 - \half\Delta t\beta S^{-} + \half\Delta t\nu \end{array}\right) \left(\begin{array}{c} \delta S\\ \delta I \end{array}\right) =\\ & \qquad\qquad -\left(\begin{array}{c} S^{-} - S^{(1)} + \half\Delta t\beta(S^{(1)}I^{(1)} + S^{-}I^{-})\\ I^{-} - I^{(1)} - \half\Delta t\beta(S^{(1)}I^{(1)} + S^{-}I^{-}) + \half\Delta t\nu(I^{(1)} + I^{-}) \end{array}\right) \end{align*} $$
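The Newton system above translates directly into code (a sketch with NumPy; the function name `newton_step` and the parameter values in the call are made up for illustration):

```python
import numpy as np

def newton_step(S1, I1, beta, nu, dt, tol=1e-12, max_iter=20):
    """One Crank-Nicolson step for (5.24)-(5.25), solved by Newton's method."""
    h = 0.5*dt  # shorthand for (1/2)*Delta t
    S_, I_ = S1, I1  # start the iteration at the previous time level
    for _ in range(max_iter):
        F = np.array([
            S_ - S1 + h*beta*(S1*I1 + S_*I_),                      # F_S
            I_ - I1 - h*beta*(S1*I1 + S_*I_) + h*nu*(I1 + I_)])    # F_I
        J = np.array([
            [1 + h*beta*I_,  h*beta*S_],
            [-h*beta*I_,     1 - h*beta*S_ + h*nu]])
        dS, dI = np.linalg.solve(J, -F)  # J(u^-) delta u = -F(u^-)
        S_, I_ = S_ + dS, I_ + dI
        if abs(dS) + abs(dI) < tol:
            break
    return S_, I_

# Illustrative parameter values (not from the text)
S, I = newton_step(S1=1500.0, I1=1.0, beta=0.0005, nu=0.1, dt=0.5)
```

Because the Jacobian is exact, a few iterations suffice for tolerances near machine precision.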

Remark. For this particular system of ODEs, explicit time integration methods work very well. Even a Forward Euler scheme is fine, but (as often experienced more generally) the fourth-order Runge-Kutta method offers an excellent balance between high accuracy, high efficiency, and simplicity.