This chapter is taken from the book A Primer on Scientific Programming with Python by H. P. Langtangen, 5th edition, Springer, 2016.
Functions are useful (i) when a set of commands is to be executed several times, or (ii) to partition the program into smaller pieces to get a better overview. Function arguments are local variables inside the function whose values are set when calling the function. Remember that when you write the function, the values of the arguments are not known. Here is an example of a function for polynomials of 2nd degree:
# function definition:
def quadratic_polynomial(x, a, b, c):
    value = a*x*x + b*x + c
    derivative = 2*a*x + b
    return value, derivative
# function call:
x = 1
p, dp = quadratic_polynomial(x, 2, 0.5, 1)
p, dp = quadratic_polynomial(x=x, a=-4, b=0.5, c=0)
The sequence of the arguments is important, unless all arguments are given as name=value.
Functions may have no arguments and/or no return value(s):
def print_date():
    """Print the current date in the format 'Jan 07, 2007'."""
    import time
    print time.strftime("%b %d, %Y")
# call:
print_date()
A common error is to forget the parentheses: print_date is the function object itself, while print_date() is a call to the function.
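The difference is easy to see in an interactive session (a small illustration; the memory address and the date in the output will of course vary):
>>> print_date                      # the function object itself
<function print_date at 0x...>
>>> print_date()                    # a call: the function body is executed
Jan 07, 2007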
Function arguments with default values are called keyword arguments, and they help to document the meaning of arguments in function calls. They also make it possible to specify just a subset of the arguments in function calls.
from math import exp, sin, pi

def f(x, A=1, a=1, w=pi):
    return A*exp(-a*x)*sin(w*x)
f1 = f(0)
x2 = 0.1
f2 = f(x2, w=2*pi)
f3 = f(x2, w=4*pi, A=10, a=0.1)
f4 = f(w=4*pi, A=10, a=0.1, x=x2)
The sequence of the keyword arguments can be arbitrary, and the keyword arguments that are not listed in the call get their default values according to the function definition. The non-keyword arguments are called positional arguments, which is x in this example. Positional arguments must be listed before the keyword arguments. However, a positional argument can also appear as name=value in the call (see the last line above), and this syntax allows any positional argument to be listed anywhere in the call.
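Breaking the rule that positional arguments must come first gives a syntax error. For instance, with the f function above, the following call is rejected by Python:
f2 = f(A=10, x2)    # illegal: positional argument after keyword argument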
The if-elif-else tests are used to branch the flow of statements. That is, different sets of statements are executed depending on whether some conditions are True or False.
def f(x):
    if x < 0:
        value = -1
    elif x >= 0 and x <= 1:
        value = x
    else:
        value = 1
    return value
Assigning a variable one value if a condition is True and another value if it is False is compactly done with an inline if test:
sign = -1 if a < 0 else 1
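The inline if test is just a compact way of writing an ordinary four-line if-else block; the statement above is equivalent to
if a < 0:
    sign = -1
else:
    sign = 1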
The important computer science terms in this document are if tests with if, elif, and else (branching), and the None object.
An integral
$$
\begin{equation*}
\int_a^b f(x)dx
\end{equation*}
$$
can be approximated by the so-called Simpson's rule:
$$
\begin{equation}
\frac{b-a}{3n}\left(
f(a) + f(b) + 4\sum_{i=1}^{n/2} f(a + (2i-1)h) +
2\sum_{i=1}^{n/2-1} f(a+2ih)\right) .
\tag{6}
\end{equation}
$$
Here, \( h=(b-a)/n \) and \( n \) must be an even integer. The problem is to make a function Simpson(f, a, b, n=500) that returns the value of the right-hand side of (6). To verify the implementation, one can make use of the fact that Simpson's rule is exact for all polynomials \( f(x) \) of degree \( \leq 2 \). Apply the Simpson function to the integral \( \frac{3}{2}\int_0^\pi\sin^3x\,dx \), which has the exact value 2, and investigate how the approximation error varies with \( n \).
The evaluation of the formula (6) in a program is straightforward if we know how to implement summation (\( \sum \)) and how to call \( f \). A Python recipe for calculating sums is given in the section Computing sums. Basically, \( \sum_{i=M}^N q(i) \), for some expression \( q(i) \) involving \( i \), is coded with the aid of a for loop over \( i \) and an accumulation variable s for building up the sum, one term at a time:
s = 0
for i in range(M, N+1):
    s += q(i)
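For instance, \( \sum_{i=1}^{5} i^2 = 55 \) is computed by
s = 0
for i in range(1, 6):
    s += i**2      # s ends up as 1 + 4 + 9 + 16 + 25 = 55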
The Simpson function can then be coded as
def Simpson(f, a, b, n=500):
    h = (b - a)/float(n)
    sum1 = 0
    for i in range(1, n/2 + 1):
        sum1 += f(a + (2*i-1)*h)
    sum2 = 0
    for i in range(1, n/2):
        sum2 += f(a + 2*i*h)
    integral = (b-a)/(3.0*n)*(f(a) + f(b) + 4*sum1 + 2*sum2)
    return integral
Note that Simpson can integrate any Python function f of one variable. Specifically, we can implement
$$
\begin{equation*} h(x) = \frac{3}{2}\sin^3 x \end{equation*}
$$
in a Python function
def h(x):
    return (3./2)*sin(x)**3
and call Simpson to compute \( \int_0^\pi h(x)dx \) for various choices of \( n \), as requested:
from math import sin, pi

def application():
    print 'Integral of 1.5*sin^3 from 0 to pi:'
    for n in 2, 6, 12, 100, 500:
        approx = Simpson(h, 0, pi, n)
        print 'n=%3d, approx=%18.15f, error=%9.2E' % \
              (n, approx, 2-approx)
(We have put the statements inside a function, here called application, mainly to group them, and not because application will be called several times or with different arguments.) Calling application() leads to the output
Integral of 1.5*sin^3 from 0 to pi:
n=  2, approx= 3.141592653589793, error=-1.14E+00
n=  6, approx= 1.989171700583579, error= 1.08E-02
n= 12, approx= 1.999489233010781, error= 5.11E-04
n=100, approx= 1.999999902476350, error= 9.75E-08
n=500, approx= 1.999999999844138, error= 1.56E-10
We clearly see that the approximation improves as \( n \) increases. However, every computation gives an answer that deviates from the exact value 2. From this test alone we cannot know whether the errors above are due to the approximation only, or whether there are additional programming mistakes.
A much better way of verifying the implementation is therefore to look for test cases where the numerical approximation formula is exact, such that we know exactly what the result of the function should be. Since it is stated that the formula is exact for polynomials up to second degree, we just test the Simpson function on an "arbitrary" parabola, say
$$
\begin{equation*} \int_{3/2}^2 \left(3x^2 - 7x + 2.5\right)dx . \end{equation*}
$$
This integral equals \( G(2)-G(3/2) \), where \( G(x)=x^3 - 3.5x^2 + 2.5x \).
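The exact value is therefore easy to compute, e.g., in an interactive session:
>>> G = lambda x: x**3 - 3.5*x**2 + 2.5*x
>>> G(2.0) - G(1.5)
-0.25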
A fundamental problem arises if we compare the computed integral value with the exact result using ==, because rounding errors may lead to small differences and hence a false equality test. Consider the simple problem
>>> 0.1 + 0.2 == 0.3
False
>>> 0.1 + 0.2
0.30000000000000004
We see that 0.1 + 0.2 leads to a small error in the 17th decimal place. We must therefore make comparisons with tolerances:
>>> tol = 1E-14
>>> abs(0.3 - (0.1 + 0.2)) < tol
True
In this particular example,
>>> abs(0.3 - (0.1 + 0.2))
5.551115123125783e-17
so a tolerance of \( 10^{-16} \) would work, but in algorithms with many more arithmetic operations, rounding errors may accumulate so \( 10^{-15} \) or \( 10^{-14} \) are more appropriate tolerances.
The following implementation of a test function applies a tolerance in the test for equality:
def g(x):
    return 3*x**2 - 7*x + 2.5

def G(x):
    return x**3 - 3.5*x**2 + 2.5*x

def test_Simpson():
    a = 1.5
    b = 2.0
    n = 8
    exact = G(b) - G(a)
    approx = Simpson(g, a, b, n)
    success = abs(exact - approx) < 1E-14
    if not success:
        print 'Error: wrong integral of quadratic function'
The g and G functions are only of interest inside the test_Simpson function. Many think the code becomes easier to read and understand if g and G are moved inside test_Simpson, which is indeed possible in Python:
def test_Simpson():
    def g(x):
        # test function for exact integration by Simpson's rule
        return 3*x**2 - 7*x + 2.5

    def G(x):
        # integral of g(x)
        return x**3 - 3.5*x**2 + 2.5*x

    a = 1.5
    b = 2.0
    n = 8
    exact = G(b) - G(a)
    approx = Simpson(g, a, b, n)
    success = abs(exact - approx) < 1E-14
    if not success:
        print 'Error: cannot integrate a quadratic function exactly'
We shall make it a habit to write functions like test_Simpson for verifying implementations.
When testing software for correctness, it is considered good practice to break the software into units and test the behavior of each unit. This is called unit testing. In scientific computing, one unit is often a numerical algorithm, like Simpson's method. The testing of a unit is performed with a dedicated test function.
We may take the test function shown above one step further and adopt the conventions in the pytest and nose testing frameworks for Python code. These conventions say that the test function should
- have a name starting with test_;
- let a boolean variable, say success, be True if the test passes and False if it fails;
- construct a message about what went wrong, stored in some string, say msg;
- formulate the test as assert success, msg, which will abort the program and write out msg if success is False.

If we adopt these conventions, the pytest and nose frameworks can automatically run all our test_*() functions and report how many of the tests failed.
Our pytest/nose-compatible test function then looks as follows:
def test_Simpson():
    """Test exact integration of quadratic polynomials."""
    a = 1.5
    b = 2.0
    n = 8
    g = lambda x: 3*x**2 - 7*x + 2.5        # test integrand
    G = lambda x: x**3 - 3.5*x**2 + 2.5*x   # integral of g
    exact = G(b) - G(a)
    approx = Simpson(g, a, b, n)
    success = abs(exact - approx) < 1E-14
    msg = 'Simpson: %g, exact: %g' % (approx, exact)
    assert success, msg
Here we have also made the test function more compact by utilizing lambda functions for g and G (see the section Lambda functions).
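With these conventions in place, the test function can be run automatically. If the functions above are collected in the file Simpson.py, a typical command is
py.test Simpson.py
or, if nose is preferred,
nosetests Simpson.py
Either framework then reports how many of the test functions passed and how many failed.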
Another improvement is to increase the robustness of the function. That is, to check that the input data, i.e., the arguments, are acceptable. Here we may test if \( b>a \) and if \( n \) is an even integer. For the latter test, we make use of the mod function: mod(\( n \), \( d \)) gives the remainder when \( n \) is divided by \( d \) (both \( n \) and \( d \) are integers). Mathematically, if \( p \) is the largest integer such that \( pd\leq n \), then mod(\( n \), \( d \)) is \( n-pd \). For example, mod(3, 2) is 1, mod(3, 1) is 0, mod(3, 3) is 0, and mod(18, 8) is 2. The point is that \( n \) divided by \( d \) is an integer when mod(\( n \), \( d \)) is zero. In Python, the percentage sign is used for the mod function:
>>> 18 % 8
2
To test if n is an even integer, we check whether division by 2 leaves no remainder: n % 2 == 0.
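For example,
>>> n = 8
>>> n % 2 == 0     # n is even
True
>>> n = 7
>>> n % 2 == 0     # n is odd
False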
The improved Simpson function with validity tests on the provided arguments, as well as a doc string (see the section Doc strings), can look like this:
def Simpson(f, a, b, n=500):
    """
    Return the approximation of the integral of f
    from a to b using Simpson's rule with n intervals.
    """
    if a > b:
        print 'Error: a=%g > b=%g' % (a, b)
        return None

    # check that n is even:
    if n % 2 != 0:
        print 'Error: n=%d is not an even integer!' % n
        n = n + 1  # make n even

    h = (b - a)/float(n)

    sum1 = 0
    for i in range(1, n/2 + 1):
        sum1 += f(a + (2*i-1)*h)

    sum2 = 0
    for i in range(1, n/2):
        sum2 += f(a + 2*i*h)

    integral = (b-a)/(3.0*n)*(f(a) + f(b) + 4*sum1 + 2*sum2)
    return integral
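A small interactive sketch of how the validity tests behave (using the quadratic g from the test above): swapping the integration limits triggers an error message and a None return value, while an odd n is increased to the next even number, after a warning, before the integral is computed:
>>> g = lambda x: 3*x**2 - 7*x + 2.5
>>> I = Simpson(g, 2.0, 1.5)
Error: a=2 > b=1.5
>>> print I
None
>>> I = Simpson(g, 1.5, 2.0, n=7)    # n is adjusted to 8 internally
Error: n=7 is not an even integer!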
The complete code is found in the file Simpson.py.
A very good exercise is to simulate the program flow by hand, starting with the call to the application function. The Online Python Tutor or a debugger (see the document Debugging in Python [1]) are convenient tools for checking that your thinking is correct.