This chapter is taken from the book A Primer on Scientific Programming with Python by H. P. Langtangen, 5th edition, Springer, 2016.
This section takes class ForwardEuler
from the section Class implementation as a starting point for creating more
flexible software, where the user can switch between problems and numerical
methods with very little coding. The developer of the tool must also
be able to include a new numerical method with a minimum of coding.
These requirements can be met by utilizing object-oriented
programming.
Numerical methods for ODEs compute approximations \( u_k \) to the exact solution \( u \) at discrete time levels \( t_k \), \( k=1,2,3,\ldots \). Some of the simplest, but also most widely used methods for ODEs are listed below.
The Forward Euler method has the formula $$ \begin{equation} u_{k+1} = u_k + \Delta t\, f(u_k, t_k),\quad \Delta t = t_{k+1}-t_k\tp \tag{34} \end{equation} $$ The Leapfrog method (also called the Midpoint method) involves three time levels and is written as $$ \begin{equation} u_{k+1} = u_{k-1} + 2\Delta t f(u_k, t_k), \quad 2\Delta t = t_{k+1}-t_{k-1} \tag{35} \end{equation} $$ for \( k=1,2,\ldots \). The computation of \( u_1 \) requires \( u_{-1} \), which is unknown, so for the first step we must use another method, for instance, (34). Heun's method is a two-step procedure, $$ \begin{align} u_* &= u_k + \Delta t\,f(u_k, t_k), \tag{36}\\ u_{k+1} &= u_k + \frac{1}{2}\Delta t f(u_k, t_k) + \frac{1}{2}\Delta t f(u_*, t_{k+1}), \tag{37} \end{align} $$ with \( \Delta t = t_{k+1}-t_k \). A closely related technique is the 2nd-order Runge-Kutta method, commonly written as $$ \begin{equation} u_{k+1} = u_k + K_2 \tag{38} \end{equation} $$ where $$ \begin{align} K_1 &= \Delta t\,f(u_k, t_k), \tag{39}\\ K_2 &= \Delta t\,f(u_k + \frac{1}{2}K_1, t_k + \frac{1}{2}\Delta t), \tag{40} \end{align} $$ with \( \Delta t = t_{k+1}-t_k \).
The perhaps most famous and most widely used method for solving ODEs is the 4th-order Runge-Kutta method: $$ \begin{equation} u_{k+1} = u_k + \frac{1}{6}\left( K_1 + 2K_2 + 2K_3 + K_4\right), \tag{41} \end{equation} $$ where $$ \begin{align} K_1 &= \Delta t\,f(u_k, t_k), \tag{42}\\ K_2 &= \Delta t\,f(u_k + \frac{1}{2}K_1, t_k + \frac{1}{2}\Delta t), \tag{43}\\ K_3 &= \Delta t\,f(u_k + \frac{1}{2}K_2, t_k + \frac{1}{2}\Delta t), \tag{44}\\ K_4 &= \Delta t\,f(u_k + K_3, t_k + \Delta t), \tag{45} \end{align} $$ and \( \Delta t = t_{k+1}-t_k \). Another common technique is the 3rd-order Adams-Bashforth method: $$ \begin{equation} u_{k+1} = u_k + \frac{\Delta t}{12}\left( 23f(u_{k}, t_k) -16 f(u_{k-1}, t_{k-1}) + 5f(u_{k-2}, t_{k-2})\right), \tag{46} \end{equation} $$ with \( \Delta t \) constant. To start the scheme, one can apply a 2nd-order Runge-Kutta method or Heun's method to compute \( u_{1} \) and \( u_2 \) before (46) is applied for \( k\geq 2 \). A more complicated solution procedure is the Midpoint method with iterations: $$ \begin{align} v_q &= u_k + \frac{1}{2}\Delta t\left( f(v_{q-1}, t_{k+1}) + f(u_k,t_k)\right), \tag{47}\\ & \quad q=1,\ldots,N,\ v_0=u_k\nonumber\\ u_{k+1} &= v_N\tp \tag{48} \end{align} $$ At each time level, one runs the formula (47) \( N \) times, and the value \( v_N \) becomes \( u_{k+1} \). Setting \( N=1 \) recovers the Forward Euler scheme if \( f \) is independent of \( t \), while \( N=2 \) corresponds to Heun's method. We can either fix the value of \( N \), or we can repeat (47) until the change in \( v_q \) is small, that is, until \( |v_q-v_{q-1}| < \epsilon \), where \( \epsilon \) is a small value.
Finally, we mention the Backward Euler method: $$ \begin{equation} u_{k+1} = u_k + \Delta t\, f(u_{k+1}, t_{k+1}), \quad\Delta t = t_{k+1}-t_k\tp \tag{49} \end{equation} $$ If \( f(u,t) \) is nonlinear in \( u \), (49) constitutes a nonlinear equation in \( u_{k+1} \), which must be solved by some method for nonlinear equations, say Newton's method.
All the methods listed above are valid both for scalar ODEs and for systems of ODEs. In the system case, the quantities \( u \), \( u_k \), \( u_{k+1} \), \( f \), \( K_1 \), \( K_2 \), etc., are vectors.
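For example, for the system \( u_0^{\prime}=u_1 \), \( u_1^{\prime}=-u_0 \) (which we meet again later in this section), the right-hand side function can simply return a list; a small sketch of the convention used throughout:

def f(u, t):
    # u is a 2-vector (array); return the right-hand side
    # [du0/dt, du1/dt] as a list
    return [u[1], -u[0]]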
The section Class implementation presents a class ForwardEuler
for implementing the Forward Euler scheme (34), both
for scalar ODEs and for systems. Only the advance
method should need to be changed in order to implement other numerical methods.
Copying the ForwardEuler
class and editing just the advance
method
is considered bad programming practice, because we get two copies of the
general parts of class ForwardEuler.
As we implement more schemes,
we end up with a lot of copies of the same code. Correcting an error
or improving the code in this general part therefore requires identical
edits in several almost identical classes.
A good programming practice is to collect all the common code in a
superclass. Subclasses can implement the advance
method, but share
the constructor, the set_initial_condition
method, and the solve
method with the superclass.
We introduce class ODESolver
as the superclass of various numerical
methods for solving ODEs. Class ODESolver
should provide all
functionality that is common to all numerical methods for ODEs:

* hold the solution in an array u
* hold the corresponding time points in an array t
* hold information about the right-hand side function f(u, t)
* hold the current time step number in k
* hold the initial condition U0
* compute the solution of the ODE problem

As in class ForwardEuler
in the section Class implementation, we implement
the last point as two methods: solve
for performing the time loop
and advance
for advancing the solution one time step. The latter
method is empty in the superclass since the method is to be
implemented by various subclasses for various specific numerical schemes.
A first version of class ODESolver
follows directly from class
ForwardEuler
in the section Class implementation, but letting
advance
be an empty method. However, there is one more extension
which will be handy in some problems, namely a possibility for the
user to terminate the simulation if the solution has certain
properties. Throwing a ball yields an example: the simulation should
be stopped when the ball hits the ground, instead of simulating an
artificial movement down into the ground until the final time \( T \) is
reached. To implement the requested feature, the user can provide a
function terminate(u, t, step_no),
which returns True
if the time
loop is to be terminated. The arguments are the solution array u,
the
corresponding time points t,
and the current time step number
step_no.
For example, if we want to solve an ODE until the solution
is (close enough to) zero, we can supply the function
def terminate(u, t, step_no):
    eps = 1.0E-6                  # small number
    return abs(u[step_no]) < eps  # close enough to zero?
The terminate
function is an optional argument to the solve
method.
By default, a function that always returns False
is used.
The suggested code for the superclass ODESolver
takes the following
form:
import numpy as np

class ODESolver(object):
    def __init__(self, f):
        self.f = lambda u, t: np.asarray(f(u, t), float)

    def advance(self):
        """Advance solution one time step."""
        raise NotImplementedError

    def set_initial_condition(self, U0):
        if isinstance(U0, (float,int)):  # scalar ODE
            self.neq = 1
            U0 = float(U0)
        else:                            # system of ODEs
            U0 = np.asarray(U0)
            self.neq = U0.size
        self.U0 = U0

    def solve(self, time_points, terminate=None):
        if terminate is None:
            terminate = lambda u, t, step_no: False

        self.t = np.asarray(time_points)
        n = self.t.size
        if self.neq == 1:  # scalar ODEs
            self.u = np.zeros(n)
        else:              # systems of ODEs
            self.u = np.zeros((n,self.neq))

        # Assume that self.t[0] corresponds to self.U0
        self.u[0] = self.U0

        # Time loop
        for k in range(n-1):
            self.k = k
            self.u[k+1] = self.advance()
            if terminate(self.u, self.t, self.k+1):
                break  # terminate loop over k
        return self.u[:k+2], self.t[:k+2]
Note that we return just the parts of self.u
and self.t
that have been filled with values (the rest are zeroes): all
elements up to the one with index k+1
are computed before
terminate
may return True
. The corresponding slice
of the array is then :k+2
since the upper limit is not included
in the slice. If terminate
never returns True,
the loop runs all the way to k = n-2, and :k+2
is then the entire array.
Subclasses implement specific numerical formulas for
numerical solution of ODEs in the advance
method.
The Forward Euler scheme (34) is
implemented by defining the subclass name and copying the
advance
method from the ForwardEuler
class in the section Class implementation:
class ForwardEuler(ODESolver):
    def advance(self):
        u, f, k, t = self.u, self.f, self.k, self.t
        dt = t[k+1] - t[k]
        u_new = u[k] + dt*f(u[k], t[k])
        return u_new
The local variables u, f, k, and t bring the numerical formula
close to the mathematical notation and let us avoid the self
prefix.
When we extract
data attributes to local variables with short names, we should only use
these local variables for reading values, not setting values.
For example, if we do a k += 1
to update the time step
counter, that increased value is not reflected in self.k
(which is the "official" counter). On the other hand, changing an
array in-place, say u[k+1] = ...,
is reflected in self.u.
Extracting data attributes
into local variables is done to get the code closer to the
mathematics, but has the danger of introducing bugs that might be
hard to track down.
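A tiny illustration of the pitfall (class Demo is made up for this purpose only and is not part of ODESolver.py):

import numpy as np

class Demo(object):
    def __init__(self):
        self.k = 0
        self.u = np.zeros(3)

    def advance(self):
        u, k = self.u, self.k  # extract attributes as local names
        k += 1                 # rebinds the local name k only
        u[1] = 42              # in-place change, visible in self.u
        print self.k           # still 0
        print self.u           # [  0.  42.   0.]

Demo().advance()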
Below is an implementation of the 4th-order Runge-Kutta method (41):
class RungeKutta4(ODESolver):
    def advance(self):
        u, f, k, t = self.u, self.f, self.k, self.t
        dt = t[k+1] - t[k]
        dt2 = dt/2.0
        K1 = dt*f(u[k], t[k])
        K2 = dt*f(u[k] + 0.5*K1, t[k] + dt2)
        K3 = dt*f(u[k] + 0.5*K2, t[k] + dt2)
        K4 = dt*f(u[k] + K3, t[k] + dt)
        u_new = u[k] + (1/6.0)*(K1 + 2*K2 + 2*K3 + K4)
        return u_new
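Other explicit schemes follow exactly the same pattern. For instance, Heun's method (36)-(37) could be implemented as follows (a sketch of a possible subclass, with the class name chosen here for illustration):

class Heun(ODESolver):
    def advance(self):
        u, f, k, t = self.u, self.f, self.k, self.t
        dt = t[k+1] - t[k]
        u_star = u[k] + dt*f(u[k], t[k])           # predictor (36)
        u_new = u[k] + 0.5*dt*(f(u[k], t[k]) +
                               f(u_star, t[k+1]))  # corrector (37)
        return u_new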
It is left as exercises to implement other numerical methods in the
ODESolver
class hierarchy. However, the Backward Euler method
(49) requires a much more
advanced implementation than the other methods, so that particular
method deserves its own section.
The Backward Euler scheme (49) leads in general to a nonlinear equation at a new time level, while all the other schemes listed in the section Numerical methods have a simple formula for the new \( u_{k+1} \) value. The nonlinear equation reads $$ \begin{equation*} u_{k+1} = u_k + \Delta t\, f(u_{k+1}, t_{k+1})\tp \end{equation*} $$ For simplicity we assume that the ODE is scalar so the unknown \( u_{k+1} \) is a scalar. It might be easier to see that the equation for \( u_{k+1} \) is nonlinear if we rearrange the equation to $$ \begin{equation} F(w) \equiv w - \Delta t f(w,t_{k+1}) - u_k = 0, \tag{50} \end{equation} $$ where \( w=u_{k+1} \). If now \( f(u,t) \) is a nonlinear function of \( u \), \( F(w) \) will also be a nonlinear function of \( w \).
To solve \( F(w)=0 \) we can use the Bisection method, Newton's method, or the Secant method. Here we apply Newton's method and the implementation given in src/diffeq/Newton.py. A disadvantage with Newton's method is that we need the derivative of \( F \) with respect to \( w \), which requires the derivative \( \partial f(w,t)/\partial w \). A quick solution is to use a numerical derivative.
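For reference, a Newton function compatible with the calls below might look like the following sketch (the actual Newton.py in the book's software may differ in details, e.g., it also offers a store argument for recording the iteration history):

def Newton(f, x, dfdx, epsilon=1.0E-7, N=100):
    """Solve f(x)=0 by Newton's method; return (x, no of iterations, f(x))."""
    n = 0
    f_value = f(x)
    while abs(f_value) > epsilon and n <= N:
        x = x - f_value/float(dfdx(x))  # Newton update
        n += 1
        f_value = f(x)
    return x, n, f_value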
We make a subclass BackwardEuler
. As we need to solve
\( F(w)=0 \) at every time step, we also need to implement
the \( F(w) \) function. This is conveniently done in a local function inside
the advance
method:
def advance(self):
    u, f, k, t = self.u, self.f, self.k, self.t
    dt = t[k+1] - t[k]

    def F(w):
        return w - dt*f(w, t[k+1]) - u[k]

    dFdw = Derivative(F)
    w_start = u[k] + dt*f(u[k], t[k])  # Forward Euler step
    u_new, n, F_value = Newton(F, w_start, dFdw, N=30)
    if n >= 30:
        print "Newton's method failed to converge at t=%g "\
              "(%d iterations)" % (t[k+1], n)
    return u_new
The local variables in the advance
function, such as dt
and u
, act as "global" variables
for the F
function. Hence, when F
is sent away to
the Newton
function, F
remembers the values of
dt,
f,
t,
and u
(!).
The derivative
\( dF/dw \) is computed numerically in our advance
function
by a class Derivative,
because we now want to
use a more accurate, centered formula:
class Derivative(object):
    def __init__(self, f, h=1E-5):
        self.f = f
        self.h = float(h)

    def __call__(self, x):
        f, h = self.f, self.h
        return (f(x+h) - f(x-h))/(2*h)
This code is included in the ODESolver.py file.
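A quick check of the class (hypothetical snippet, not part of ODESolver.py):

from math import sin, pi
dFdw = Derivative(sin)
print dFdw(pi/4)  # approx cos(pi/4) = 0.7071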
The next step is to call Newton's method. For this purpose we need to
import the Newton
function from the Newton
module.
The Newton.py
file must then reside in the same directory as
ODESolver.py
, or Newton.py
must be in one of the directories
listed in the sys.path
list or the PYTHONPATH
environment
variable.
Having the Newton(f, x_start, dfdx, N)
function from
the section on Newton's method accessible in our
ODESolver.py
file, we can make a call and
supply our F
function as the argument f,
a start value
for the iteration, here called w_start,
as the argument x_start,
and the derivative dFdw
for the argument dfdx.
We rely on default values for the epsilon
and store
arguments, while the maximum number of iterations is set to
N=30
. A message is printed if it happens that the number of
iterations exceeds that value, because then the method is not considered
to have converged (at least not quickly enough),
and we have consequently not been able to compute the next \( u_{k+1} \) value reliably.
The starting value for Newton's method must be chosen. As we expect
the solution to not change much from one time level to the next,
\( u_k \) could be a good initial guess. However, we can do better by
using a simple Forward Euler step \( u_k + \Delta t f(u_k,t_k) \), which is
exactly what we do in the advance
function above.
Since Newton's method always has the danger of converging slowly, it can
be interesting to store the number of iterations at each time level
as a data attribute in the BackwardEuler
class. We can easily insert
extra statements for this purpose:
def advance(self):
    ...
    u_new, n, F_value = Newton(F, w_start, dFdw, N=30)
    if k == 0:
        self.Newton_iter = []
    self.Newton_iter.append(n)
    ...
Note the need for creating an empty list (at the first call of advance
)
before we can append elements.
There is now one important question to ask: will the advance
method
work for systems of ODEs? In that case, \( F(w) \) is a vector of
functions. The implementation of F
will work when w
is a vector,
because all the quantities involved in the formula are arrays or
scalar variables. The dFdw
instance will compute a numerical
derivative of each component of the vector function dFdw.f
(which is
simply our F
function). The call to the Newton
function is more
critical: It turns out that this function, as the algorithm behind it,
works for scalar equations only. Newton's method can quite easily be
extended to a system of nonlinear equations, but we do not consider
that topic here. Instead we equip class BackwardEuler
with a
constructor that makes a sample call to the f
object and checks that the returned
value is a float
and not an array:
class BackwardEuler(ODESolver):
    def __init__(self, f):
        ODESolver.__init__(self, f)
        # Make a sample call to check that f is a scalar function:
        try:
            u = np.array([1]); t = 1
            value = f(u, t)
        except IndexError:  # index out of bounds for u
            raise ValueError('f(u,t) must return float/int')
Observe that we must explicitly call the superclass constructor
and pass on the argument f
to achieve the right
storage and treatment of this argument.
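With these pieces in place, class BackwardEuler is used like any other solver in the hierarchy. A minimal usage sketch, assuming all the classes above reside in ODESolver.py:

import numpy as np
import ODESolver

solver = ODESolver.BackwardEuler(lambda u, t: -u)
solver.set_initial_condition(1.0)
u, t = solver.solve(np.linspace(0, 3, 31))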
Understanding class BackwardEuler
implies a good
understanding of classes in general; a good understanding of numerical
methods for ODEs, for numerical differentiation, and
for finding roots of functions; and a good understanding of how to
combine different code segments from different parts of the document.
Therefore, if you have digested class BackwardEuler
, you have
all reasons to believe that you have digested the key topics of
this document.
The fundamental problem with testing approximate numerical methods is that we do not normally know what the output from the computer should be. In some special cases, however, we can find an exact solution of the discretized problem that the computer program solves. This exact solution should be reproduced to machine precision by the program. It turns out that most numerical methods for ordinary differential equations are able to exactly reproduce a linear solution. That is, if the solution of the differential equation is \( u=at+b \), the numerical method will produce the same solution: \( u_k=ak\Delta t + b \). We can use this knowledge to make a test function for verifying our implementations.
Let \( u=at+b \) be the solution of the test problem. A corresponding ODE is obviously \( u^\prime =a \), with \( u(0)=b \). A more demanding ODE arises from adding a term that is zero, e.g., \( (u - (at + b))^5 \). We therefore aim to solve $$ u^\prime = a + (u - (at + b))^5,\quad u(0)=b\tp$$
Our test function loops over registered solvers in the ODESolver
hierarchy, solves the test problem, and checks that the maximum
deviation between the computed solution and the exact linear solution
is within a tolerance:
registered_solver_classes = [
    ForwardEuler, RungeKutta4, BackwardEuler]

def test_exact_numerical_solution():
    a = 0.2; b = 3

    def f(u, t):
        return a + (u - u_exact(t))**5

    def u_exact(t):
        """Exact u(t) corresponding to f above."""
        return a*t + b

    U0 = u_exact(0)
    T = 8
    n = 10
    tol = 1E-15
    t_points = np.linspace(0, T, n)
    for solver_class in registered_solver_classes:
        solver = solver_class(f)
        solver.set_initial_condition(U0)
        u, t = solver.solve(t_points)
        u_e = u_exact(t)
        max_error = np.abs(u_e - u).max()
        msg = '%s failed with max_error=%g' % \
              (solver.__class__.__name__, max_error)
        assert max_error < tol, msg
Note how we can make a loop over class types (because the class is
an ordinary object in Python). New subclasses can add
their class type to the registered_solver_classes
list and the
test function will include such new classes in the test as well.
A more general testing technique is based on knowing how the error in a numerical method varies with the discretization parameter, here \( \Delta t \). Say we know that a particular method has an error that decays as \( \Delta t^2 \). In a problem where the exact analytical solution is known, we can run the numerical method for several values of \( \Delta t \) and compute the corresponding numerical error in each case. If the computed errors decay like \( \Delta t^2 \), it brings quite strong evidence for a correct implementation. Such tests are called convergence tests and constitute the most general tool we have for verifying implementations of numerical algorithms. Exercise 37: Study convergence of numerical methods for ODEs gives an introduction to the topic.
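A small sketch of such a convergence test for the problem \( u^{\prime}=-u \), \( u(0)=1 \), with exact solution \( e^{-t} \), might read as follows (this function is not part of the book's software; the observed rates should approach 1 for the Forward Euler method and 4 for the 4th-order Runge-Kutta method):

def convergence_rates(solver_class, T=3, num_refinements=6):
    """Estimate observed convergence rates for u'=-u, u(0)=1."""
    from math import log
    errors = [];  dts = []
    n = 10
    for _ in range(num_refinements):
        solver = solver_class(lambda u, t: -u)
        solver.set_initial_condition(1.0)
        u, t = solver.solve(np.linspace(0, T, n+1))
        errors.append(np.abs(u - np.exp(-t)).max())
        dts.append(T/float(n))
        n *= 2  # halve the time step
    # Observed rate between two consecutive refinements
    return [log(errors[i-1]/errors[i])/log(dts[i-1]/dts[i])
            for i in range(1, num_refinements)]

print convergence_rates(ForwardEuler)  # should approach [1, 1, ...]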
Let us apply the classes in the ODESolver
hierarchy to see how they
solve the perhaps simplest of all ODEs: \( u^{\prime}=-u \), with initial
condition \( u(0)=1 \). The exact solution is \( u(t)=e^{-t} \), which decays
exponentially with time. Application of class ForwardEuler
to solve
this problem requires writing the following code:
import ODESolver
from scitools.std import *   # plot, figure, linspace, exp, ...

def f(u, t):
    return -u

solver = ODESolver.ForwardEuler(f)
solver.set_initial_condition(1.0)
t_points = linspace(0, 3, 31)
u, t = solver.solve(t_points)
plot(t, u)
We can run various values of \( \Delta t \) to see the effect on the accuracy:
# Test various dt values and plot
figure()
legends = []
T = 3
for dt in 2.0, 1.0, 0.5, 0.1:
    n = int(round(T/dt))
    solver = ODESolver.ForwardEuler(f)
    solver.set_initial_condition(1)
    u, t = solver.solve(linspace(0, T, n+1))
    plot(t, u)
    legends.append('dt=%g' % dt)
    hold('on')
plot(t, exp(-t), 'bo')
legends.append('exact')
legend(legends)
Figure 4 shows alarming results. With \( \Delta t=2 \) we get a completely wrong solution that becomes negative and then increases. The value \( \Delta t=1 \) gives a peculiar solution: \( u_k=0 \) for \( k\geq 1 \)! Qualitatively correct behavior appears with \( \Delta t=0.5 \), and the results get quantitatively better as we decrease \( \Delta t \). The solution corresponding to \( \Delta t=0.1 \) looks good from the graph.
Such strange results would normally make us suspect programming errors in our implementation. Fortunately, we did some verification of the implementations in the section Verification, so the problems we observe in the experiments are most likely rooted in the numerical method and not in the implementation.
We can in fact easily explain what we observe in Figure 4. For the equation in question, the Forward Euler method computes $$ \begin{align*} u_1 &= u_0 - \Delta t u_0 = (1-\Delta t)u_0,\\ u_2 &= u_1 - \Delta t u_1 = (1-\Delta t)u_1 = (1-\Delta t)^2u_0,\\ &\vdots\\ u_k &= (1-\Delta t)^ku_0\tp \end{align*} $$ With \( \Delta t=1 \) we simply get \( u_k=0 \) for \( k\geq 1 \). For \( \Delta t > 1 \), \( 1-\Delta t < 0 \), and \( (1-\Delta t)^k \) means raising a negative value to an integer power, which results in \( u_k>0 \) for even \( k \) and \( u_k < 0 \) for odd \( k \). Moreover, \( |u_k| \) decreases with \( k \) when \( \Delta t < 2 \), but grows when \( \Delta t > 2 \). Such an oscillating solution is of course qualitatively wrong when the exact solution is \( e^{-t} \) and monotonically decaying. The conclusion is that the Forward Euler method gives meaningless results for \( \Delta t \geq 1 \) in the present example.
A particular strength of the ODESolver
hierarchy of classes is that
we can trivially switch from one method to another. For example, we
may demonstrate how superior the 4-th order Runge-Kutta method is for
this equation: just replace ForwardEuler
by RungeKutta4
in the
previous code segment and re-run the program. It turns out that the
4-th order Runge-Kutta method gives a monotonically decaying numerical
solution for all the tested \( \Delta t \) values. In particular, the
solutions corresponding to \( \Delta t =0.5 \) and \( \Delta t =0.1 \) are
visually very close to the exact solution. The conclusion is that the
4-th order Runge-Kutta method is a safer and more accurate method.
Let us compare the two numerical methods in the case where \( \Delta t=0.5 \):
# Test ForwardEuler vs RungeKutta4
figure()
legends = []
T = 3
dt = 0.5
n = int(round(T/dt))
t_points = linspace(0, T, n+1)
for solver_class in ODESolver.RungeKutta4, ODESolver.ForwardEuler:
    solver = solver_class(f)
    solver.set_initial_condition(1)
    u, t = solver.solve(t_points)
    plot(t, u)
    legends.append('%s' % solver_class.__name__)
    hold('on')
plot(t, exp(-t), 'bo')
legends.append('exact')
legend(legends)
Figure 5 illustrates the differences in accuracy between the two methods. The complete program can be found in the file app1_decay.py.
The logistic ODE (5)
is copied here for convenience:
$$
\begin{equation*} u^{\prime}(t)=\alpha u(t)\left( 1-\frac{u(t)}{R}\right),\quad u(0)=U_0\tp\end{equation*}
$$
The right-hand side contains the parameters \( \alpha \) and \( R \).
We know that \( u\rightarrow R \) as \( t\rightarrow\infty \), so at some point
\( \hat t \) in time we have approached the asymptotic value \( u=R \) within
a sufficiently small tolerance and should stop the simulation.
This can be done by providing a function as the
terminate
argument in the solve
method.
Let us, as in the section Logistic growth via a class-based approach, implement the problem-dependent data in a class. This time we store all user-given physical data in the class:
import ODESolver
from scitools.std import plot, figure, savefig, title, show
#from matplotlib.pyplot import plot, figure, savefig, title, show
import numpy as np
class Problem(object):
    def __init__(self, alpha, R, U0, T):
        """
        alpha, R: parameters in the ODE.
        U0: initial condition.
        T: max length of time interval for integration;
        asymptotic value R must be reached within 1%
        accuracy for some t <= T.
        """
        self.alpha, self.R, self.U0, self.T = alpha, R, U0, T

    def __call__(self, u, t):
        """Return f(u,t) for logistic ODE."""
        return self.alpha*u*(1 - u/self.R)

    def terminate(self, u, t, step_no):
        """Return True when asymptotic value R is reached."""
        tol = self.R*0.01
        return abs(u[step_no] - self.R) < tol

    def __str__(self):
        """Pretty print of physical parameters."""
        return 'alpha=%g, R=%g, U0=%g' % \
               (self.alpha, self.R, self.U0)
Note that the tolerance used in the terminate
method is made
dependent on the size of \( R \): \( |u-R|/R < 0.01 \). For example, if
\( R=1000 \) we say the asymptotic value is reached when \( u\geq
990 \). Smaller tolerances will just lead to a solution curve,
large parts of which show the boring behavior \( u\approx R \).
The solution is obtained the usual way by a few lines of code:

problem = Problem(alpha=0.1, R=500, U0=2, T=130)  # example values
solver = ODESolver.RungeKutta4(problem)
solver.set_initial_condition(problem.U0)
dt = 1.0
n = int(round(problem.T/dt))
t_points = np.linspace(0, problem.T, n+1)
u, t = solver.solve(t_points, problem.terminate)
Let us pack these statements into a class Solver
, which has
two methods: solve
and plot
, and add some documentation
and flexibility. The code may look like
class Solver(object):
    def __init__(self, problem, dt,
                 method=ODESolver.ForwardEuler):
        """
        problem: instance of class Problem.
        dt: time step.
        method: class in ODESolver hierarchy.
        """
        self.problem, self.dt = problem, dt
        self.method = method

    def solve(self):
        solver = self.method(self.problem)
        solver.set_initial_condition(self.problem.U0)
        n = int(round(self.problem.T/self.dt))
        t_points = np.linspace(0, self.problem.T, n+1)
        self.u, self.t = solver.solve(t_points,
                                      self.problem.terminate)

        # The solution terminated if the limiting value was reached
        if solver.k+1 == n:  # no termination - we reached final T
            self.plot()
            raise ValueError(
                'termination criterion not reached, '\
                'give T > %g' % self.problem.T)

    def plot(self):
        filename = 'logistic_' + str(self.problem) + '.pdf'
        plot(self.t, self.u)
        title(str(self.problem) + ', dt=%g' % self.dt)
        savefig(filename)
        show()
Problem-dependent data related to the numerical quality of the solution,
such as the time step here, go to the Solver
class.
That is, class Problem
contains the physics and class Solver
the numerics of the problem under investigation.
If the last computed time step, solver.k+1
,
equals the last possible index, n
, problem.terminate
never returned True
, which means that the asymptotic limit
was not reached. This is treated as an erroneous condition.
To guide the user, we launch a plot before raising the exception
with an instructive message. The complete code is found in the
file app2_logistic.py.
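A typical use of the two classes reads (a sketch with the parameter values applied later in this section):

problem = Problem(alpha=0.1, R=500, U0=2, T=130)
solver = Solver(problem, dt=1.0, method=ODESolver.RungeKutta4)
solver.solve()
solver.plot()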
Choosing an appropriate \( \Delta t \) is not always so easy. The impact of \( \Delta t \) can sometimes be dramatic, as demonstrated for the Forward Euler method in the section Example: Exponential decay. We could automate the process of finding a suitable \( \Delta t \): start with a large \( \Delta t \), and keep halving \( \Delta t \) until the difference between two solutions corresponding to two consecutive \( \Delta t \) values is small enough.
Say solver
is a class Solver
instance computed with
time step \( \Delta t \) and solver2
is the instance corresponding
to a computation with \( \Delta t/2 \). Calculating the difference between
solver.u
and solver2.u
is not trivial as one of
the arrays has approximately twice as many elements as the other, and
the last element in both arrays does not necessarily correspond to
the same time value since the time stepping and the terminate
function may lead to slightly different termination times.
A solution to
these two problems is to turn each of the arrays solver.u
and
solver2.u
into continuous functions, as explained
in the section From discrete to continuous solution, and then evaluate the
difference at some selected time points up to the smallest value of
solver.t[-1]
and solver2.t[-1]
.
The code becomes

from scitools.std import wrap2callable

# Make continuous functions u(t) and u2(t)
u  = wrap2callable((solver. t, solver. u))
u2 = wrap2callable((solver2.t, solver2.u))

# Sample the difference in n points in [0, t_end]
n = 13
t_end = min(solver2.t[-1], solver.t[-1])
t = np.linspace(0, t_end, n)
u_diff = np.abs(u(t) - u2(t)).max()
The next step is to introduce a loop where we halve
the time step in each iteration and solve the logistic ODE with the
new time step and compute u_diff
as shown above.
A complete function takes the form
def find_dt(problem, method=ODESolver.ForwardEuler,
            tol=0.01, dt_min=1E-6):
    """
    Return a "solved" class Solver instance where the
    difference in the solution and one with a double
    time step is less than tol.

    problem: class Problem instance.
    method: class in ODESolver hierarchy.
    tol: tolerance (chosen relative to problem.R).
    dt_min: minimum allowed time step.
    """
    dt = problem.T/10  # start with 10 intervals
    solver = Solver(problem, dt, method)
    solver.solve()
    from scitools.std import wrap2callable

    good_approximation = False
    while not good_approximation:
        dt = dt/2.0
        if dt < dt_min:
            raise ValueError('dt=%g < %g - abort' % (dt, dt_min))
        solver2 = Solver(problem, dt, method)
        solver2.solve()

        # Make continuous functions u(t) and u2(t)
        u  = wrap2callable((solver. t, solver. u))
        u2 = wrap2callable((solver2.t, solver2.u))

        # Sample the difference in n points in [0, t_end]
        n = 13
        t_end = min(solver2.t[-1], solver.t[-1])
        t = np.linspace(0, t_end, n)
        u_diff = np.abs(u(t) - u2(t)).max()
        print u_diff, dt, tol
        if u_diff < tol:
            good_approximation = True
        else:
            solver = solver2
    return solver2
Setting the tolerance tol
must be done with a view to the
typical size of \( u \), i.e., the size of \( R \). With \( R=100 \) and
tol=1
, the Forward Euler method meets the tolerance for
\( \Delta t =0.25 \). Switching to the 4-th order Runge-Kutta method
makes \( \Delta t = 1.625 \) sufficient to meet the tolerance.
Note that although the latter method can use a significantly larger
time step, it also involves four times as many evaluations of the
right-hand side function at each time step.
Finally, we show how to make a class that behaves as class Solver
,
but with automatic computation of the time step. If we do not provide
a dt
parameter to the constructor, the find_dt
function just
presented is used to compute dt
and the solution, otherwise we use
the standard Solver.solve
code. This new class is conveniently
realized as a subclass of Solver
where we override the constructor
and the solve
method. The plot
method can be inherited as is. The
code becomes
class AutoSolver(Solver):
    def __init__(self, problem, dt=None,
                 method=ODESolver.ForwardEuler,
                 tol=0.01, dt_min=1E-6):
        Solver.__init__(self, problem, dt, method)
        if dt is None:
            solver = find_dt(self.problem, method,
                             tol, dt_min)
            self.dt = solver.dt
            self.u, self.t = solver.u, solver.t

    def solve(self, method=ODESolver.ForwardEuler):
        if hasattr(self, 'u'):
            # Solution was computed by find_dt in constructor
            pass
        else:
            Solver.solve(self)
The call hasattr(self, 'u')
returns True
if u
is a data attribute in object self
. Here this is used as an indicator
that the solution was computed in the constructor by the
find_dt
function. A typical use is
problem = Problem(alpha=0.1, R=500, U0=2, T=130)
solver = AutoSolver(problem, tol=1)
solver.solve(method=ODESolver.RungeKutta4)
solver.plot()
The carrying capacity of the environment, \( R \), may vary with time, e.g., due to seasonal changes. Can we extend the previous code so that \( R \) can be specified either as a constant or as a function of time?
This is in fact easy if we in the implementation of the right-hand side
function assume that \( R \) is a function of time. If \( R \) is given as
a constant in the constructor of class Problem
, we just
wrap it as a function of time:
if isinstance(R, (float,int)):  # number?
    self.R = lambda t: R
elif callable(R):
    self.R = R
else:
    raise TypeError(
        'R is %s, has to be number or function' % type(R))
The terminate
method is also affected as we need to
base the tolerance on the \( R \) value at the present time level.
Also the __str__
method must be changed since it is not
meaningful to print a function self.R
.
That is, all methods in the generalized problem class, here
called Problem2
, must be
altered. We have not chosen to make Problem2
a subclass of
Problem
, even though the interface is the same and the
two classes are closely related. While Problem
is
clearly a special case of Problem2
, as a constant \( R \) is a
special case of a function \( R(t) \), the opposite is not
true.
Class Problem2
becomes
class Problem2(object):
    def __init__(self, alpha, R, U0, T):
        """
        alpha, R: parameters in the ODE.
        U0: initial condition.
        T: max length of time interval for integration;
        asymptotic value R must be reached within 1%
        accuracy for some t <= T.
        """
        self.alpha, self.U0, self.T = alpha, U0, T

        if isinstance(R, (float,int)):  # number?
            self.R = lambda t: R
        elif callable(R):
            self.R = R
        else:
            raise TypeError(
                'R is %s, has to be number or function' % type(R))

    def __call__(self, u, t):
        """Return f(u,t) for logistic ODE."""
        return self.alpha*u*(1 - u/self.R(t))

    def terminate(self, u, t, step_no):
        """Return True when asymptotic value R is reached."""
        tol = self.R(t[step_no])*0.01
        return abs(u[step_no] - self.R(t[step_no])) < tol

    def __str__(self):
        return 'alpha=%g, U0=%g' % (self.alpha, self.U0)
We can compute the case where \( R=500 \) for \( t < 60 \), after which \( R \) is reduced to 100 because of an environmental crisis (see Figure 6):
problem = Problem2(alpha=0.1, U0=2, T=130,
                   R=lambda t: 500 if t < 60 else 100)
solver = AutoSolver(problem, tol=1)
solver.solve(method=ODESolver.RungeKutta4)
solver.plot()
Note the use of a lambda function
to save some typing when specifying R
.
The corresponding graph is made of two parts, basically
exponential growth until the environment changes and then
exponential reduction until \( u \) approaches the new \( R \) value and the
change in \( u \) becomes small.
Our final version of the problem class is equipped with functionality
for reading data from the command line in addition to setting data
explicitly in the program. We use the argparse
module.
The idea now is
to have a constructor that just sets default values. Then we have
a method for defining the command-line arguments and a method
for transforming the argparse
information to the
attributes alpha
, U0
, R
, and T
.
The R
attribute is supposed to be a function, and we use
the StringFunction
tool to turn strings from the command line
into a Python function of time t.
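For example (a small sketch of how StringFunction behaves; the string is the same type of expression we shall accept on the command line below):

from scitools.StringFunction import StringFunction

R = StringFunction('500 if t < 60 else 300', independent_variable='t')
print R(10), R(100)   # 500 300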
The code of our new problem class is listed next.
from scitools.StringFunction import StringFunction

class Problem3(Problem):
    def __init__(self):
        # Set default parameter values
        self.alpha = 1.
        self.R = StringFunction('1.0', independent_variable='t')
        self.U0 = 0.01
        self.T = 4.

    def define_command_line_arguments(self, parser):
        """Add arguments to parser (argparse.ArgumentParser)."""

        def evalcmlarg(text):
            return eval(text)

        def toStringFunction(text):
            return StringFunction(text, independent_variable='t')

        parser.add_argument(
            '--alpha', dest='alpha', type=evalcmlarg,
            default=self.alpha,
            help='initial growth rate in logistic model')
        parser.add_argument(
            '--R', dest='R', type=toStringFunction, default=self.R,
            help='carrying capacity of the environment')
        parser.add_argument(
            '--U0', dest='U0', type=evalcmlarg, default=self.U0,
            help='initial condition')
        parser.add_argument(
            '--T', dest='T', type=evalcmlarg, default=self.T,
            help='integration in time interval [0,T]')
        return parser

    def set(self, **kwargs):
        """
        Set parameters as keyword arguments alpha, R, U0, or T,
        or as args (object returned by parser.parse_args()).
        """
        for prm in ('alpha', 'U0', 'R', 'T'):
            if prm in kwargs:
                setattr(self, prm, kwargs[prm])
        if 'args' in kwargs:
            args = kwargs['args']
            for prm in ('alpha', 'U0', 'R', 'T'):
                if hasattr(args, prm):
                    setattr(self, prm, getattr(args, prm))
                else:
                    print 'Really strange', dir(args)

    def __call__(self, u, t):
        """Return f(u,t) for logistic ODE."""
        return self.alpha*u*(1 - u/self.R(t))

    def terminate(self, u, t, step_no):
        """Return True when asymptotic value R is reached."""
        tol = self.R(t[step_no])*0.01
        return abs(u[step_no] - self.R(t[step_no])) < tol

    def __str__(self):
        s = 'alpha=%g, U0=%g' % (self.alpha, self.U0)
        if isinstance(self.R, StringFunction):
            s += ', R=%s' % str(self.R)
        return s
The calls to parser.add_argument
are straightforward, but notice
that we allow strings for \( \alpha \), \( U_0 \), and \( T \) to be interpreted
by eval
. The string for \( R \) is interpreted as a formula by
StringFunction
. The set
method is flexible: it accepts any set of
keyword arguments, and first checks if these are the names of the
problem parameters; thereafter, if args
is given, the
parameters are taken from the command line. The rest of the class is
very similar to earlier versions.
The typical use of class Problem3
is shown below. First we
set parameters directly:
problem = Problem3()
problem.set(alpha=0.1, U0=2, T=130,
            R=lambda t: 500 if t < 60 else 100)
solver = AutoSolver(problem, tol=1)
solver.solve(method=ODESolver.RungeKutta4)
solver.plot()
Then we rely on reading parameters from the command line:
problem = Problem3()
import argparse
parser = argparse.ArgumentParser(
    description='Logistic ODE model')
parser = problem.define_command_line_arguments(parser)
# Try --alpha 0.11 --T 130 --U0 2 --R '500 if t < 60 else 300'
args = parser.parse_args()
problem.set(args=args)
solver = AutoSolver(problem, tol=1)
solver.solve(method=ODESolver.RungeKutta4)
solver.plot()
The last example using a problem class integrated with the command line is the most flexible way of implementing ODE models.
The motion of a box attached to a spring can be modeled by two first-order differential equations, as listed in (33), here repeated with \( F(t) =mw''(t) \), where \( w(t) \) is the forced movement of the end of the spring. $$ \begin{align*} {du^{(0)}\over dt} &= u^{(1)},\\ {du^{(1)}\over dt} &= w''(t) + g - m^{-1}\beta u^{(1)} - m^{-1}ku^{(0)}\tp \end{align*} $$
The code related to this example is found in
app3_osc.py.
Because our right-hand side
\( f \) contains several parameters, we implement it as a class
with the parameters as data attributes and a __call__
method for returning the 2-vector \( f \). We assume that the user of
the class supplies the \( w(t) \) function, so it is natural to
compute \( w''(t) \) by the centered finite difference formula
\( w''(t)\approx (w(t+h) - 2w(t) + w(t-h))/h^2 \) for a small \( h \).
class OscSystem(object):
    def __init__(self, m, beta, k, g, w):
        self.m, self.beta, self.k, self.g, self.w = \
            float(m), float(beta), float(k), float(g), w

    def __call__(self, u, t):
        u0, u1 = u
        m, beta, k, g, w = \
            self.m, self.beta, self.k, self.g, self.w
        # Use a finite difference for w''(t)
        h = 1E-5
        ddw = (w(t+h) - 2*w(t) + w(t-h))/(h**2)
        f = [u1, ddw + g - beta/m*u1 - k/m*u0]
        return f
A simple test case arises if we set \( m=k=1 \) and \( \beta = g = w=0 \): $$ \begin{align*} {du^{(0)}\over dt} &= u^{(1)},\\ {du^{(1)}\over dt} &= -u^{(0)}\tp \end{align*} $$ Suppose that \( u^{(0)}(0)=1 \) and \( u^{(1)}(0)=0 \). An exact solution is then $$ \begin{equation*} u^{(0)}(t)=\cos t,\quad u^{(1)}(t)=-\sin t\tp\end{equation*} $$ We can use this case to check how the Forward Euler method compares with the 4-th order Runge-Kutta method:
import ODESolver
from scitools.std import *
#from matplotlib.pyplot import *

legends = []
f = OscSystem(1.0, 0.0, 1.0, 0.0, lambda t: 0)
u_init = [1, 0]    # initial condition
nperiods = 3.5     # no of oscillation periods
T = 2*pi*nperiods
for solver_class in ODESolver.ForwardEuler, ODESolver.RungeKutta4:
    if solver_class == ODESolver.ForwardEuler:
        npoints_per_period = 200
    elif solver_class == ODESolver.RungeKutta4:
        npoints_per_period = 20
    n = int(npoints_per_period*nperiods)
    t_points = linspace(0, T, n+1)
    solver = solver_class(f)
    solver.set_initial_condition(u_init)
    u, t = solver.solve(t_points)

    # u is an array of [u0,u1] pairs for each time level,
    # get the u0 values from u for plotting
    u0_values = u[:, 0]
    u1_values = u[:, 1]
    u0_exact = cos(t)
    u1_exact = -sin(t)
    figure()
    alg = solver_class.__name__  # (class) name of algorithm
    plot(t, u0_values, 'r-',
         t, u0_exact, 'b-')
    legend(['numerical', 'exact'])
    title('Oscillating system; position - %s' % alg)
    savefig('tmp_oscsystem_pos_%s.pdf' % alg)
    figure()
    plot(t, u1_values, 'r-',
         t, u1_exact, 'b-')
    legend(['numerical', 'exact'])
    title('Oscillating system; velocity - %s' % alg)
    savefig('tmp_oscsystem_vel_%s.pdf' % alg)
show()
For this particular application it turns out that the 4-th order Runge-Kutta method is very accurate, even with few (20) time steps per oscillation. Unfortunately, the Forward Euler method leads to a solution with increasing amplitude in time. Figure 7 shows a comparison between the two methods. Note that the Forward Euler method uses 10 times as many time steps as the 4-th order Runge-Kutta method and is still much less accurate. A much smaller time step is needed to limit the growth of the Forward Euler scheme for oscillating systems.
The two-dimensional motion of a ball or projectile, neglecting air resistance, is governed by the following two second-order differential equations: $$ \begin{align} {d^2 x\over dt^2} = 0, \tag{51}\\ {d^2 y\over dt^2} = -g, \tag{52} \end{align} $$ where \( (x,y) \) is the position of the ball (\( x \) is a horizontal measure and \( y \) is a vertical measure), and \( g \) is the acceleration of gravity. To use numerical methods for first-order equations, we must rewrite the system of two second-order equations as a system of four first-order equations. This is done by introducing two new unknowns, the velocities \( v_x = dx/dt \) and \( v_y=dy/dt \). We then have the first-order system of ODEs $$ \begin{align} \frac{dx}{dt} &= v_x, \tag{53}\\ {dv_x\over dt} &= 0, \tag{54}\\ {dy\over dt} &= v_y, \tag{55}\\ {dv_y\over dt} &= -g\tp \tag{56} \end{align} $$ The initial conditions are $$ \begin{align} x(0)&= 0, \tag{57}\\ v_x(0)&= v_0\cos\theta, \tag{58}\\ y(0)&= y_0, \tag{59}\\ v_y(0)&= v_0\sin\theta, \tag{60} \end{align} $$ where \( v_0 \) is the initial magnitude of the velocity of the ball. The initial velocity has a direction that makes the angle \( \theta \) with the horizontal.
The code related to this example is found in app4_ball.py. A function returning the right-hand side of our ODE system reads
def f(u, t):
    x, vx, y, vy = u
    g = 9.81
    return [vx, 0, vy, -g]
It makes sense to solve the ODE system as long as the ball is above
the ground, i.e., as long as \( y\geq 0 \). We must therefore supply
a terminate
function as explained in the section The ODESolver class hierarchy:
def terminate(u, t, step_no):
    y = u[:,2]  # all the y coordinates
    return y[step_no] < 0
Observe that all the \( y \) values are given by u[:,2]
and we
want to test the value at the current step, which becomes u[step_no,2]
.
The main program for solving the ODEs can be set up as
v0 = 5
theta = 80*pi/180
U0 = [0, v0*cos(theta), 0, v0*sin(theta)]
T = 1.2; dt = 0.01; n = int(round(T/dt))
solver = ODESolver.ForwardEuler(f)
solver.set_initial_condition(U0)
def terminate(u, t, step_no):
return False if u[step_no,2] >= 0 else True
u, t = solver.solve(linspace(0, T, n+1), terminate)
Now, u[:,0]
represents all the \( x(t) \) values,
u[:,1]
all the \( v_x(t) \) values,
u[:,2]
all the \( y(t) \) values, and
u[:,3]
all the \( v_y(t) \) values.
To plot the trajectory, \( y \) versus \( x \), we write
x = u[:,0]
y = u[:,2]
plot(x, y)
Figure 8 shows a comparison of the numerical and the exact solution in this simple test problem. Note that even if we are just interested in \( y \) as a function of \( x \), we first need to solve the complete ODE system for \( x(t) \), \( v_x(t) \), \( y(t) \), and \( v_y(t) \).
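For reference, the exact trajectory used in such a comparison follows from integrating (53)-(56): eliminating \( t=x/(v_0\cos\theta) \) gives \( y = y_0 + x\tan\theta - gx^2/(2v_0^2\cos^2\theta) \). A sketch, reusing the variables from the program above:

def exact_y(x, y0=0):
    g = 9.81
    return y0 + x*tan(theta) - g*x**2/(2*v0**2*cos(theta)**2)

plot(x, y, x, exact_y(x))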
The real strength of the numerical approach is the ease with which we can add air resistance and lift to the system of ODEs. Insight in physics is necessary to derive what the additional terms are, but implementing the terms is trivial in our program above (do Exercise 39: Add the effect of air resistance on a ball).
The ODESolver
hierarchy is a simplified prototype version of
a more professional Python package for solving ODEs called Odespy.
This package features a range of simple and sophisticated methods
for solving scalar ODEs and systems of ODEs. Some of the solvers
are implemented in Python, while others call up well-known
ODE software in Fortran. Like the ODESolver
hierarchy,
Odespy offers a unified interface to the different numerical
methods, which means that the user can specify the ODE problem
as a function f(u,t)
and send this function to all solvers.
This feature makes it easy to switch between solvers to test
a wide collection of numerical methods for a problem.
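As an illustration, typical Odespy usage may look as follows, mirroring the interface of the ODESolver hierarchy (a sketch; consult the Odespy documentation for the precise API):

import numpy as np
import odespy

solver = odespy.RungeKutta4(lambda u, t: -u)
solver.set_initial_condition(1.0)
u, t = solver.solve(np.linspace(0, 3, 31))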
Odespy can be downloaded from http://hplgit.github.com/odespy.
It is installed by the usual python setup.py install
command.