$$ \newcommand{\half}{\frac{1}{2}} \newcommand{\halfi}{{1/2}} \newcommand{\tp}{\thinspace .} \newcommand{\uex}{{u_{\small\mbox{e}}}} \newcommand{\uexd}[1]{{u_{\small\mbox{e}, #1}}} \newcommand{\vex}{{v_{\small\mbox{e}}}} \newcommand{\Aex}{{A_{\small\mbox{e}}}} \newcommand{\E}[1]{\hbox{E}\lbrack #1 \rbrack} \newcommand{\Var}[1]{\hbox{Var}\lbrack #1 \rbrack} \newcommand{\xpoint}{\boldsymbol{x}} \newcommand{\normalvec}{\boldsymbol{n}} \newcommand{\Oof}[1]{\mathcal{O}(#1)} \newcommand{\x}{\boldsymbol{x}} \renewcommand{\u}{\boldsymbol{u}} \renewcommand{\v}{\boldsymbol{v}} \newcommand{\acc}{\boldsymbol{a}} \newcommand{\rpos}{\boldsymbol{r}} \newcommand{\e}{\boldsymbol{e}} \newcommand{\f}{\boldsymbol{f}} \newcommand{\F}{\boldsymbol{F}} \newcommand{\stress}{\boldsymbol{\sigma}} \newcommand{\I}{\boldsymbol{I}} \newcommand{\T}{\boldsymbol{T}} \newcommand{\q}{\boldsymbol{q}} \newcommand{\g}{\boldsymbol{g}} \newcommand{\dfc}{\alpha} % diffusion coefficient \newcommand{\ii}{\boldsymbol{i}} \newcommand{\jj}{\boldsymbol{j}} \newcommand{\kk}{\boldsymbol{k}} \newcommand{\ir}{\boldsymbol{i}_r} \newcommand{\ith}{\boldsymbol{i}_{\theta}} \newcommand{\Ix}{\mathcal{I}_x} \newcommand{\Iy}{\mathcal{I}_y} \newcommand{\Iz}{\mathcal{I}_z} \newcommand{\It}{\mathcal{I}_t} \newcommand{\setb}[1]{#1^0} % set begin \newcommand{\sete}[1]{#1^{-1}} % set end \newcommand{\setl}[1]{#1^-} \newcommand{\setr}[1]{#1^+} \newcommand{\seti}[1]{#1^i} \newcommand{\stepzero}{*} \newcommand{\stephalf}{***} \newcommand{\stepone}{**} \newcommand{\baspsi}{\psi} \newcommand{\dx}{\, \mathrm{d}x} \newcommand{\ds}{\, \mathrm{d}s} \newcommand{\Real}{\mathbb{R}} \newcommand{\Integer}{\mathbb{Z}} $$

Appendix: Truncation error analysis

Truncation error analysis provides a widely applicable framework for analyzing the accuracy of finite difference schemes. This type of analysis can also be used for finite element and finite volume methods if the discrete equations are written in finite difference form. The result of the analysis is an asymptotic estimate of the error in the scheme of the form \( Ch^r \), where \( h \) is a discretization parameter (\( \Delta t \), \( \Delta x \), etc.), \( r \) is a number, known as the convergence rate, and \( C \) is a constant, typically dependent on the derivatives of the exact solution.

Knowing \( r \) gives an understanding of the accuracy of the scheme. Perhaps even more importantly, a powerful verification method for computer codes is to check that the convergence rates observed empirically in experiments coincide with the theoretical value of \( r \) found from truncation error analysis.

The analysis can be carried out by hand, by symbolic software, and also numerically. All three methods will be illustrated. By examining the symbolic expressions of the truncation error we can add correction terms to the differential equations in order to increase the numerical accuracy.

In general, the term truncation error refers to the discrepancy that arises from performing a finite number of steps to approximate a process with infinitely many steps. The term is used in a number of contexts, including truncation of infinite series, finite precision arithmetic, finite differences, and differential equations. We shall be concerned with computing truncation errors arising in finite difference formulas and in finite difference discretizations of differential equations.

Overview of truncation error analysis

Abstract problem setting

Consider an abstract differential equation $$ \mathcal{L}(u)=0,$$ where \( \mathcal{L}(u) \) is some formula involving the unknown \( u \) and its derivatives. One example is \( \mathcal{L}(u)=u'(t)+a(t)u(t)-b(t) \), where \( a \) and \( b \) are constants or functions of time. We can discretize the differential equation and obtain a corresponding discrete model, here written as $$ \mathcal{L}_{\Delta}(u) =0\tp$$ The solution \( u \) of this equation is the numerical solution. To distinguish the numerical solution from the exact solution of the differential equation problem, we denote the latter by \( \uex \) and write the differential equation and its discrete counterpart as $$ \begin{align*} \mathcal{L}(\uex)&=0,\\ \mathcal{L}_\Delta (u)&=0\tp \end{align*} $$ Initial and/or boundary conditions can usually be left out of the truncation error analysis and are omitted in the following.
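
As a concrete illustration, discretizing the sample equation \( \mathcal{L}(u)=u'+a(t)u-b(t) \) with the Forward Euler scheme gives the discrete operator $$ \mathcal{L}_\Delta(u) = \frac{u^{n+1}-u^n}{\Delta t} + a(t_n)u^n - b(t_n),$$ to be satisfied at each mesh point \( t_n \).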

The numerical solution \( u \) is, in a finite difference method, computed at a collection of mesh points. The discrete equations represented by the abstract equation \( \mathcal{L}_\Delta (u)=0 \) are usually algebraic equations involving \( u \) at some neighboring mesh points.

Error measures

A key issue is how accurate the numerical solution is. The ultimate way of addressing this issue would be to compute the error \( \uex - u \) at the mesh points. This is usually extremely demanding. In very simplified problem settings we may, however, manage to derive formulas for the numerical solution \( u \), and therefore closed form expressions for the error \( \uex - u \). Such special cases can provide considerable insight regarding accuracy and stability, but the results are established for special problems.

The error \( \uex -u \) can be computed empirically in special cases where we know \( \uex \). Such cases can be constructed by the method of manufactured solutions, where we choose some exact solution \( \uex = v \) and fit a source term \( f \) in the governing differential equation \( \mathcal{L}(\uex)=f \) such that \( \uex=v \) is a solution (i.e., \( f=\mathcal{L}(v) \)). Assuming an error model of the form \( Ch^r \), where \( h \) is the discretization parameter, such as \( \Delta t \) or \( \Delta x \), one can estimate the convergence rate \( r \). This is a widely applicable procedure, but the validity of the results is, strictly speaking, tied to the chosen test problems.
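
For instance, if errors \( E_i \) are measured on a sequence of meshes with discretization parameters \( h_i \), the error model \( E=Ch^r \) applied to two consecutive experiments gives the estimate \( r_i = \ln(E_i/E_{i-1})/\ln(h_i/h_{i-1}) \). A minimal sketch of this procedure in Python (the function name and the synthetic data are just for illustration):

import math

def convergence_rates(h, E):
    """Estimate r in the error model E = C*h**r from pairs (h[i], E[i]):
    r_i = ln(E[i]/E[i-1]) / ln(h[i]/h[i-1])."""
    return [math.log(E[i]/E[i-1])/math.log(h[i]/h[i-1])
            for i in range(1, len(h))]

# Synthetic errors behaving like 0.5*h**2 (a second-order scheme)
h = [0.1, 0.05, 0.025, 0.0125]
E = [0.5*hi**2 for hi in h]
print(convergence_rates(h, E))   # rates approach 2

For a well-behaved scheme the computed rates \( r_i \) should approach the theoretical value of \( r \) as the mesh is refined.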

Another error measure arises by asking to what extent the exact solution \( \uex \) fits the discrete equations. Clearly, \( \uex \) is in general not a solution of \( \mathcal{L}_\Delta(u)=0 \), but we can define the residual $$ R = \mathcal{L}_\Delta(\uex),$$ and investigate how close \( R \) is to zero. A small \( R \) means intuitively that the discrete equations are close to the differential equation, and then we are tempted to think that \( u^n \) must also be close to \( \uex(t_n) \).

The residual \( R \) is known as the truncation error of the finite difference scheme \( \mathcal{L}_\Delta(u)=0 \). It turns out that the truncation error is relatively straightforward to compute by hand or with symbolic software, without specializing the differential equation and the discrete model to a particular case. The resulting \( R \) is found as a power series in the discretization parameters, and the leading-order terms in the series provide an asymptotic measure of the accuracy of the numerical solution method (as the discretization parameters tend to zero). An advantage of truncation error analysis, compared to empirical estimation of convergence rates or detailed analysis of a special problem with a closed-form expression for the numerical solution, is that it reveals the accuracy of the various building blocks in the numerical method and how each building block impacts the overall accuracy. The analysis can therefore be used to detect building blocks with lower accuracy than the others.

Knowing the truncation error or other error measures is important for verification of programs by empirically establishing convergence rates. The forthcoming text will provide many examples of how to compute truncation errors for finite difference discretizations of ODEs and PDEs.

Truncation errors in finite difference formulas

The accuracy of a finite difference formula is a fundamental issue when discretizing differential equations. We shall first go through a particular example in detail and thereafter list the truncation error in the most common finite difference approximation formulas.

Example: The backward difference for \( u'(t) \)

Consider a backward finite difference approximation of the first-order derivative \( u' \): $$ \begin{equation} \lbrack D_t^- u\rbrack^n = \frac{u^{n} - u^{n-1}}{\Delta t} \approx u'(t_n) \tag{7.1} \tp \end{equation} $$ Here, \( u^n \) means the value of some function \( u(t) \) at a point \( t_n \), and \( [D_t^-u]^n \) is the discrete derivative of \( u(t) \) at \( t=t_n \). The discrete derivative computed by a finite difference is, in general, not exactly equal to the derivative \( u'(t_n) \). The error in the approximation is $$ \begin{equation} R^n = [D^-_tu]^n - u'(t_n)\tp \tag{7.2} \end{equation} $$

The common way of calculating \( R^n \) is to

  1. expand \( u(t) \) in a Taylor series around the point where the derivative is evaluated, here \( t_n \),
  2. insert this Taylor series in (7.2), and
  3. collect terms that cancel and simplify the expression.
The result is an expression for \( R^n \) in terms of a power series in \( \Delta t \). The error \( R^n \) is commonly referred to as the truncation error of the finite difference formula.

The Taylor series formula often found in calculus books takes the form $$ f(x+h) = \sum_{i=0}^\infty \frac{1}{i!}\frac{d^if}{dx^i}(x)h^i\tp $$ In our application, we expand the Taylor series around the point where the finite difference formula approximates the derivative. The Taylor series of \( u^n \) at \( t_n \) is simply \( u(t_n) \), while the Taylor series of \( u^{n-1} \) at \( t_n \) must employ the general formula, $$ \begin{align*} u(t_{n-1}) = u(t_n-\Delta t) &= \sum_{i=0}^\infty \frac{1}{i!}\frac{d^iu}{dt^i}(t_n)(-\Delta t)^i\\ & = u(t_n) - u'(t_n)\Delta t + {\half}u''(t_n)\Delta t^2 + \Oof{\Delta t^3}, \end{align*} $$ where \( \Oof{\Delta t^3} \) means a power series in \( \Delta t \) whose lowest power is \( \Delta t^3 \). We assume that \( \Delta t \) is small such that \( \Delta t^p \gg \Delta t^q \) if \( p \) is smaller than \( q \). The details of higher-order terms in \( \Delta t \) are therefore not of much interest. Inserting the Taylor series above in the right-hand side of (7.2) gives rise to some algebra: $$ \begin{align*} [D_t^-u]^n - u'(t_n) &= \frac{u(t_n) - u(t_{n-1})}{\Delta t} - u'(t_n)\\ &= \frac{u(t_n) - (u(t_n) - u'(t_n)\Delta t + {\half}u''(t_n)\Delta t^2 + \Oof{\Delta t^3} )}{\Delta t} - u'(t_n)\\ &= -{\half}u''(t_n)\Delta t + \Oof{\Delta t^2}, \end{align*} $$ which is, according to (7.2), the truncation error: $$ \begin{equation} R^n = - {\half}u''(t_n)\Delta t + \Oof{\Delta t^2} \tp \tag{7.3} \end{equation} $$ The dominating term for small \( \Delta t \) is \( -{\half}u''(t_n)\Delta t \), which is proportional to \( \Delta t \), and we say that the truncation error is of first order in \( \Delta t \).
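
The same expansion can be carried out by symbolic software. A small sketch with sympy (using only standard sympy functionality; the variable names are arbitrary) that reproduces (7.3):

import sympy as sym

t, dt = sym.symbols('t dt')
u = sym.Function('u')

# Residual of the backward difference: R = [D_t^- u]^n - u'(t_n)
R = (u(t) - u(t - dt))/dt - sym.diff(u(t), t)
R = sym.series(R, dt, 0, 3).removeO().doit()
print(sym.simplify(R))   # leading term: -dt*u''(t)/2, cf. (7.3)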

Example: The forward difference for \( u'(t) \)

We can analyze the approximation error in the forward difference $$ u'(t_n) \approx [D_t^+ u]^n = \frac{u^{n+1}-u^n}{\Delta t},$$ by writing $$ R^n = [D_t^+ u]^n - u'(t_n),$$ and expanding \( u^{n+1} \) in a Taylor series around \( t_n \), $$ u(t_{n+1}) = u(t_n) + u'(t_n)\Delta t + {\half}u''(t_n)\Delta t^2 + \Oof{\Delta t^3} \tp $$ The result becomes $$ R^n = {\half}u''(t_n)\Delta t + \Oof{\Delta t^2},$$ showing that the forward difference is also of first order.

Example: The central difference for \( u'(t) \)

For the central difference approximation, $$ u'(t_n)\approx [ D_tu]^n, \quad [D_tu]^n = \frac{u^{n+\half} - u^{n-\half}}{\Delta t}, $$ we write $$ R^n = [ D_tu]^n - u'(t_n),$$ and expand \( u(t_{n+\half}) \) and \( u(t_{n-\half}) \) in Taylor series around the point \( t_n \) where the derivative is evaluated. We have $$ \begin{align*} u(t_{n+\half}) = &u(t_n) + u'(t_n)\half\Delta t + {\half}u''(t_n)(\half\Delta t)^2 + \\ & \frac{1}{6}u'''(t_n) (\half\Delta t)^3 + \frac{1}{24}u''''(t_n) (\half\Delta t)^4 + \\ & \frac{1}{120}u'''''(t_n) (\half\Delta t)^5 + \Oof{\Delta t^6},\\ u(t_{n-\half}) = &u(t_n) - u'(t_n)\half\Delta t + {\half}u''(t_n)(\half\Delta t)^2 - \\ & \frac{1}{6}u'''(t_n) (\half\Delta t)^3 + \frac{1}{24}u''''(t_n) (\half\Delta t)^4 - \\ & \frac{1}{120}u'''''(t_n) (\half\Delta t)^5 + \Oof{\Delta t^6} \tp \end{align*} $$ Now, $$ u(t_{n+\half}) - u(t_{n-\half}) = u'(t_n)\Delta t + \frac{1}{24}u'''(t_n) \Delta t^3 + \frac{1}{1920}u'''''(t_n) \Delta t^5 + \Oof{\Delta t^7} \tp $$ By collecting terms in \( [D_t u]^n - u'(t_n) \) we find the truncation error to be $$ \begin{equation} R^n = \frac{1}{24}u'''(t_n)\Delta t^2 + \Oof{\Delta t^4}, \tag{7.4} \end{equation} $$ with only even powers of \( \Delta t \). Since \( R^n\sim \Delta t^2 \), we say the centered difference is of second order in \( \Delta t \).
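
A corresponding sympy check of the forward and central differences (a sketch along the same lines as the one for the backward difference):

import sympy as sym

t, dt = sym.symbols('t dt')
u = sym.Function('u')

approximations = {
    'forward': (u(t + dt) - u(t))/dt,
    'central': (u(t + dt/2) - u(t - dt/2))/dt,
}
for name, D in approximations.items():
    R = sym.series(D - sym.diff(u(t), t), dt, 0, 4).removeO().doit()
    print(name, sym.simplify(R))
# forward: leading term dt*u''(t)/2
# central: leading term dt**2*u'''(t)/24, cf. (7.4)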

Overview of leading-order error terms in finite difference formulas

Here we list the leading-order terms of the truncation errors associated with several common finite difference formulas for the first and second derivatives. $$ \begin{align} \lbrack D_tu \rbrack^n &= \frac{u^{n+\half} - u^{n-\half}}{\Delta t} = u'(t_n) + R^n \tag{7.5},\\ R^n &= \frac{1}{24}u'''(t_n)\Delta t^2 + \Oof{\Delta t^4} \tag{7.6}\\ \lbrack D_{2t}u \rbrack^n &= \frac{u^{n+1} - u^{n-1}}{2\Delta t} = u'(t_n) + R^n \tag{7.7},\\ R^n &= \frac{1}{6}u'''(t_n)\Delta t^2 + \Oof{\Delta t^4} \tag{7.8}\\ \lbrack D_t^-u \rbrack^n &= \frac{u^{n} - u^{n-1}}{\Delta t} = u'(t_n) + R^n \tag{7.9},\\ R^n &= -{\half}u''(t_n)\Delta t + \Oof{\Delta t^2} \tag{7.10}\\ \lbrack D_t^+u \rbrack^n &= \frac{u^{n+1} - u^{n}}{\Delta t} = u'(t_n) + R^n \tag{7.11},\\ R^n &= {\half}u''(t_n)\Delta t + \Oof{\Delta t^2} \tag{7.12}\\ [\bar D_tu]^{n+\theta} &= \frac{u^{n+1} - u^{n}}{\Delta t} = u'(t_{n+\theta}) + R^{n+\theta} \tag{7.13},\\ R^{n+\theta} &= \half(1-2\theta)u''(t_{n+\theta})\Delta t - \frac{1}{6}((1 - \theta)^3 - \theta^3)u'''(t_{n+\theta})\Delta t^2 + \Oof{\Delta t^3} \tag{7.14}\\ \lbrack D_t^{2-}u \rbrack^n &= \frac{3u^{n} - 4u^{n-1} + u^{n-2}}{2\Delta t} = u'(t_n) + R^n \tag{7.15},\\ R^n &= -\frac{1}{3}u'''(t_n)\Delta t^2 + \Oof{\Delta t^3} \tag{7.16}\\ \lbrack D_tD_t u \rbrack^n &= \frac{u^{n+1} - 2u^{n} + u^{n-1}}{\Delta t^2} = u''(t_n) + R^n \tag{7.17},\\ R^n &= \frac{1}{12}u''''(t_n)\Delta t^2 + \Oof{\Delta t^4} \tag{7.18} \end{align} $$
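
Any of these entries can be verified in the same way. A sketch checking (7.15)-(7.16) and (7.17)-(7.18) with sympy:

import sympy as sym

t, dt = sym.symbols('t dt')
u = sym.Function('u')

# Two-step backward difference [D_t^{2-}u]^n, cf. (7.15)-(7.16)
D2m = (3*u(t) - 4*u(t - dt) + u(t - 2*dt))/(2*dt)
R = sym.series(D2m - sym.diff(u(t), t), dt, 0, 3).removeO().doit()
print(sym.simplify(R))            # leading term: -dt**2*u'''(t)/3

# Second-derivative approximation [D_tD_t u]^n, cf. (7.17)-(7.18)
DtDt = (u(t + dt) - 2*u(t) + u(t - dt))/dt**2
R = sym.series(DtDt - sym.diff(u(t), t, 2), dt, 0, 4).removeO().doit()
print(sym.simplify(R))            # leading term: dt**2*u''''(t)/12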

It will also be convenient to have the truncation errors for various means or averages. The weighted arithmetic mean leads to $$ \begin{align} [\overline{u}^{t,\theta}]^{n+\theta} & = \theta u^{n+1} + (1-\theta)u^n = u(t_{n+\theta}) + R^{n+\theta}, \tag{7.19}\\ R^{n+\theta} &= {\half}u''(t_{n+\theta})\Delta t^2\theta (1-\theta) + \Oof{\Delta t^3} \tp \tag{7.20} \end{align} $$ The standard arithmetic mean follows from this formula when \( \theta=\half \). Expressed at point \( t_n \) we get $$ \begin{align} [\overline{u}^{t}]^{n} &= \half(u^{n-\half} + u^{n+\half}) = u(t_n) + R^{n}, \tag{7.21}\\ R^{n} &= \frac{1}{8}u''(t_{n})\Delta t^2 + \frac{1}{384}u''''(t_n)\Delta t^4 + \Oof{\Delta t^6}\tp \tag{7.22} \end{align} $$

The geometric mean also has an error \( \Oof{\Delta t^2} \): $$ \begin{align} [\overline{u^2}^{t,g}]^{n} &= u^{n-\half}u^{n+\half} = (u^n)^2 + R^n, \tag{7.23}\\ R^n &= - \frac{1}{4}u'(t_n)^2\Delta t^2 + \frac{1}{4}u(t_n)u''(t_n)\Delta t^2 + \Oof{\Delta t^4} \tp \tag{7.24} \end{align} $$ The harmonic mean is also second-order accurate: $$ \begin{align} [\overline{u}^{t,h}]^{n} &= \frac{2}{\frac{1}{u^{n-\half}} + \frac{1}{u^{n+\half}}} = u^n + R^{n}, \tag{7.25}\\ R^n &= - \frac{u'(t_n)^2}{4u(t_n)}\Delta t^2 + \frac{1}{8}u''(t_n)\Delta t^2 \tp \tag{7.26} \end{align} $$
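
The formulas for the means can be verified in the same fashion. A sympy sketch for the arithmetic mean (7.21)-(7.22) and the geometric mean (7.23)-(7.24):

import sympy as sym

t, dt = sym.symbols('t dt')
u = sym.Function('u')

# Arithmetic mean of u^{n-1/2} and u^{n+1/2} minus u^n, cf. (7.21)-(7.22)
am = (u(t - dt/2) + u(t + dt/2))/2 - u(t)
print(sym.simplify(sym.series(am, dt, 0, 3).removeO().doit()))
# leading term: dt**2*u''(t)/8

# Geometric mean minus (u^n)^2, cf. (7.23)-(7.24)
gm = u(t - dt/2)*u(t + dt/2) - u(t)**2
print(sym.simplify(sym.series(gm, dt, 0, 3).removeO().doit()))
# leading terms: dt**2*(u(t)*u''(t) - u'(t)**2)/4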

Software for computing truncation errors

We can use sympy to aid calculations with Taylor series. The derivatives can be defined as symbols, say D3f for the 3rd derivative of some function \( f \). A truncated Taylor series can then be written as f + D1f*h + D2f*h**2/2. The following class takes some symbol f for the function in question and makes a list of symbols for the derivatives. The __call__ method computes the symbolic form of the series truncated at num_terms terms.

import sympy as sym

class TaylorSeries:
    """Class for symbolic Taylor series."""
    def __init__(self, f, num_terms=4):
        self.f = f
        self.N = num_terms
        # Introduce symbols for the derivatives
        self.df = [f]
        for i in range(1, self.N+1):
            self.df.append(sym.Symbol('D%d%s' % (i, f.name)))

    def __call__(self, h):
        """Return the truncated Taylor series at x+h."""
        terms = self.f
        for i in range(1, self.N+1):
            terms += sym.Rational(1, sym.factorial(i))*self.df[i]*h**i
        return terms

We may, for example, use this class to compute the truncation error of the Forward Euler finite difference formula:

>>> from truncation_errors import TaylorSeries
>>> from sympy import *
>>> u, dt = symbols('u dt')
>>> u_Taylor = TaylorSeries(u, 4)
>>> u_Taylor(dt)
D1u*dt + D2u*dt**2/2 + D3u*dt**3/6 + D4u*dt**4/24 + u
>>> FE = (u_Taylor(dt) - u)/dt
>>> FE
(D1u*dt + D2u*dt**2/2 + D3u*dt**3/6 + D4u*dt**4/24)/dt
>>> simplify(FE)
D1u + D2u*dt/2 + D3u*dt**2/6 + D4u*dt**3/24

The truncation error consists of the terms after the first one (\( u' \)).
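
If we want the truncation error itself as a symbolic expression, we can subtract the symbol for the exact derivative (D1u is the symbol created by TaylorSeries for \( u' \)). A possible continuation of the session above (the ordering of the printed terms may vary):

>>> R = simplify(FE) - Symbol('D1u')
>>> R
D2u*dt/2 + D3u*dt**2/6 + D4u*dt**3/24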

The module file trunc/truncation_errors.py contains another class DiffOp with symbolic expressions for most of the truncation errors listed in the previous section. For example:

>>> from truncation_errors import DiffOp
>>> from sympy import *
>>> u = Symbol('u')
>>> diffop = DiffOp(u, independent_variable='t')
>>> diffop['geometric_mean']
-D1u**2*dt**2/4 - D1u*D3u*dt**4/48 + D2u**2*dt**4/64 + ...
>>> diffop['Dtm']
D1u + D2u*dt/2 + D3u*dt**2/6 + D4u*dt**3/24
>>> diffop.operator_names()
['geometric_mean', 'harmonic_mean', 'Dtm', 'D2t', 'DtDt',
 'weighted_arithmetic_mean', 'Dtp', 'Dt']

Indexing of diffop uses names that correspond to the operators: Dtp for \( D^+_t \), Dtm for \( D_t^- \), Dt for \( D_t \), D2t for \( D_{2t} \), and DtDt for \( D_tD_t \).
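
The implementation of DiffOp is not reproduced here, but its basic idea can be sketched with the TaylorSeries class: apply each difference operator to the Taylor-expanded \( u \) and store the resulting series under the operator names. The code below is a simplified, hypothetical sketch for a few of the operators, not the actual code in trunc/truncation_errors.py:

import sympy as sym
from truncation_errors import TaylorSeries   # the class shown above

u, dt = sym.symbols('u dt')
u_ = TaylorSeries(u, 4)

# Difference operators applied to the Taylor-expanded u (simplified sketch)
operators = {
    'Dtp':  (u_(dt) - u)/dt,                 # forward difference
    'Dtm':  (u - u_(-dt))/dt,                # backward difference
    'Dt':   (u_(dt/2) - u_(-dt/2))/dt,       # central difference
    'DtDt': (u_(dt) - 2*u + u_(-dt))/dt**2,  # second-derivative approximation
}
for name, expr in operators.items():
    print(name, sym.simplify(expr))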