Since \( s(t) \) is a function in mathematics, it is convenient to have it as a function in Python too:
def s(t):
    return v0*t + 0.5*a*t**2

v0 = 0.2
a = 4
value = s(3)  # Call the function
Note that v0 and a are global variables, which must be defined before calling the s(t) function; otherwise, v0 and a become undefined variables in the expression inside the function.
Instead of having v0 and a as global variables, we may let them be function arguments:
def s(t, v0, a):
    return v0*t + 0.5*a*t**2

value = s(3, 0.2, 4)         # Call the function

# More readable call
value = s(t=3, v0=0.2, a=4)
Function arguments may also be declared to have a default value, allowing us to drop the argument in the call if the default value is appropriate. Such arguments are called keyword arguments or default arguments. Here is an example:
def s(t, v0=1, a=1):
    return v0*t + 0.5*a*t**2

value = s(3, 0.2, 4)         # specify new v0 and a
value = s(3)                 # rely on v0=1 and a=1
value = s(3, a=2)            # rely on v0=1
value = s(3, v0=2)           # rely on a=1
value = s(t=3, v0=2, a=2)    # specify everything
value = s(a=2, t=3, v0=2)    # any sequence allowed
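One rule to keep in mind: positional arguments must always precede keyword arguments in a call. A quick illustration, continuing with the s function above:

s(3, a=2)      # OK: positional t before keyword a
# s(t=3, 0.2)  # SyntaxError: positional argument follows keyword argument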
Applying the function s(t, v0, a) to an array t can be done in two ways:
# Scalar code: work with one element at a time
for i in range(len(t_values)):
    s_values[i] = s(t_values[i], v0, a)

# Vectorized code: apply s to the entire array
s_values = s(t_values, v0, a)
For the last line to work, the function s must contain statements that work correctly when t is an array argument. The advantage of vectorized code is that we avoid a loop in Python. Instead, we carry out mathematical operations on entire arrays, e.g., v0*t and a*t**2. Technically, such (binary) operations are executed as loops in very fast (compiled) C code.
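To make the contrast concrete, here is a small self-contained sketch (the variable names are illustrative) that evaluates s both ways and verifies that the results agree:

import numpy as np

def s(t, v0, a):
    return v0*t + 0.5*a*t**2

v0, a = 0.2, 4
t_values = np.linspace(0, 2, 11)

# Scalar code: one element at a time
s_loop = np.zeros(len(t_values))
for i in range(len(t_values)):
    s_loop[i] = s(t_values[i], v0, a)

# Vectorized code: the whole array at once
s_vec = s(t_values, v0, a)

print(np.allclose(s_loop, s_vec))  # True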
A function can return more than one value, say \( s(t) \) and the velocity \( s'(t)=v_0 + at \):
def movement(t, v0, a):
    s = v0*t + 0.5*a*t**2
    v = v0 + a*t
    return s, v

s_value, v_value = movement(t=0.2, v0=2, a=4)
When \( n \) values are returned, we list \( n \) variables on the left-hand side in the call. Although we write return s, v, actually only one object is returned: the s and v values are packed together in a tuple object (which is very similar to a list).
>>> def f(x):
... return x+1, x+2, x+3
...
>>> r = f(3) # Store all three return values in one object r
>>> print(r)
(4, 5, 6)
>>> type(r) # What type of object is r?
<class 'tuple'>
>>> print(r[1])
5
Tuples are constant lists: you can index them as lists, but you cannot change the contents (append or del is illegal).
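For instance, trying to assign to a tuple element raises an error (continuing the session above):

>>> r[0] = 10
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment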
The formula (1) arises from the basic differential equations in kinematics: $$ \begin{align} v = \frac{ds}{dt},\quad s(0)=s_0, \tag{2}\\ a = \frac{dv}{dt},\quad v(0)=v_0\thinspace . \tag{3} \end{align} $$ Given any acceleration \( a(t) \), we can solve for \( s(t) \) through integration. First, we integrate to find \( v(t) \): $$ \int_0^t a(t)dt = \int_0^t \frac{dv}{dt} dt,$$ which gives $$ v(t) = v_0 + \int_0^t a(t)dt\thinspace . $$ Then we integrate again over \( [0,t] \) to find \( s(t) \): $$ \begin{equation} s(t) = s_0 + v_0t + \int_0^t\left( \int_0^t a(t)dt \right) dt\thinspace . \tag{4} \end{equation} $$
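For constant acceleration \( a(t)=a_0 \), the two integrations can be checked symbolically, e.g., with the sympy package (a minimal sketch, assuming sympy is available):

import sympy as sp

t, tau, a0, v0, s0 = sp.symbols('t tau a0 v0 s0')

# v(t) = v0 + integral of a0 from 0 to t
v = v0 + sp.integrate(a0, (tau, 0, t))
# s(t) = s0 + integral of v from 0 to t
s = s0 + sp.integrate(v.subs(t, tau), (tau, 0, t))
print(sp.simplify(s))   # s0 + v0*t + a0*t**2/2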
Suppose we have some constant acceleration \( a_0 \) in \( [0,t_1] \) and no acceleration thereafter. We find $$ \begin{equation} s(t) = \left\lbrace\begin{array}{ll} s_0 + v_0 t + \frac{1}{2}a_0 t^2,& t\leq t_1\\ s_0 + v_0t_1 + \frac{1}{2}a_0 t_1^2 + a_0t_1(t-t_1),& t> t_1 \end{array}\right. \tag{5} \end{equation} $$
To implement a function like (5), we need to branch into one type of code (formula) if \( t\leq t_1 \) and another type of code (formula) if \( t>t_1 \). This is called branching and the if test is the primary construction to use.
An if test has the structure
if condition:
    <statements when condition is True>
else:
    <statements when condition is False>
Here, condition is a boolean expression that evaluates to True or False. For the piecewise defined function (5) we would use an if test in the implementation:
if t <= t1:
    s = v0*t + 0.5*a0*t**2
else:
    s = v0*t + 0.5*a0*t1**2 + a0*t1*(t-t1)
The else part can be omitted when not needed. Several branches are also possible:
if condition1:
    <statements when condition1 is True>
elif condition2:
    <statements when condition1 is False and condition2 is True>
elif condition3:
    <statements when condition1 and condition2 are False
     and condition3 is True>
else:
    <statements when condition1/2/3 all are False>
A Python function implementing the mathematical function (5) reads
def s_func(t, v0, a0, t1):
    if t <= t1:
        s = v0*t + 0.5*a0*t**2
    else:
        s = v0*t + 0.5*a0*t1**2 + a0*t1*(t-t1)
    return s
To plot this \( s(t) \), we need to compute points on the curve.
Trying to call s_func with an array t as argument will not work because of the if test. In general, functions with if tests will not automatically work with arrays, because a test like if t <= t1 amounts to an if applied to a boolean array (t <= t1 becomes a boolean array, not just a boolean). One solution is to compute the function values one by one in a loop, such that s_func is always called with a scalar value for t.
Appropriate code is
import numpy as np

n = 201   # No of t values for plotting
t1 = 1.5
t = np.linspace(0, 2, n+1)
s = np.zeros(n+1)
for i in range(len(t)):
    s[i] = s_func(t=t[i], v0=0.2, a0=20, t1=t1)
Relevant statements for plotting are now

import matplotlib.pyplot as plt

plt.plot(t, s, 'b-')
plt.plot([t1, t1], [0, s_func(t=t1, v0=0.2, a0=20, t1=t1)], 'r--')
plt.xlabel('t')
plt.ylabel('s')
plt.savefig('myplot.png')
plt.show()
To vectorize the computation, i.e., avoid the loop where s_func is called for each t[i] element, we must completely rewrite the if test. There are two methods: the numpy.where function and array indexing.
The where function: a vectorized if-else test can be coded as
s = np.where(condition, s1, s2)
Here, condition is an array of boolean values, and s[i] = s1[i] if condition[i] is True, and otherwise s[i] = s2[i].
Our example then becomes
s = np.where(t <= t1,
             v0*t + 0.5*a0*t**2,
             v0*t + 0.5*a0*t1**2 + a0*t1*(t-t1))
Note that t <= t1 with array t and scalar t1 results in a boolean array b where b[i] = t[i] <= t1.
The second method, array indexing, works as follows. It is possible to index a subset of the elements in an array s using a boolean array b: s[b]. This construction picks out all the elements s[i] where b[i] is True. On the right-hand side we can then assign some array expression expr of the same length as s[b]:
s[b] = (expr)[b]
Our example can utilize this technique with b as t <= t1 and t > t1:
s = np.zeros_like(t) # Make s as zeros, same size & type as t
s[t <= t1] = (v0*t + 0.5*a0*t**2)[t <= t1]
s[t > t1] = (v0*t + 0.5*a0*t1**2 + a0*t1*(t-t1))[t > t1]
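If this computation is needed repeatedly, the vectorized expression may be wrapped in a function. The version below is a possible vectorized counterpart of s_func based on np.where (a sketch, assuming numpy is imported as np):

def s_func_vec(t, v0, a0, t1):
    # Vectorized version of s_func: t may be a numpy array
    return np.where(t <= t1,
                    v0*t + 0.5*a0*t**2,
                    v0*t + 0.5*a0*t1**2 + a0*t1*(t-t1))

s = s_func_vec(t=np.linspace(0, 2, 201), v0=0.2, a0=20, t1=1.5)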
Arrays are usually large, and most array libraries have carefully designed functionality for avoiding unnecessary copying of large amounts of data. If you do a simple assignment,
a = np.linspace(1, 5, 5)
b = a
b becomes just a view of a: the variables b and a point to the same data. In other languages one may say that b is a pointer or reference to the array. This means that changing a changes b, and vice versa:
>>> a
array([ 1., 2., 3., 4., 5.])
>>> b[0] = 5 # changes a[0] to 5
>>> a
array([ 5., 2., 3., 4., 5.])
>>> a[1] = 9 # changes b[1] to 9
>>> b
array([ 5., 9., 3., 4., 5.])
Similarly, b[1:-1] gives a view of a subset of a, with no copying of data. If we want an array that can be changed without affecting a, a copy of the data is required:
>>> c = a.copy() # copy all elements to new array c
>>> c[0] = 6 # a is not changed
>>> a
array([ 1., 2., 3., 4., 5.])
>>> c
array([ 6., 2., 3., 4., 5.])
>>> b
array([ 5., 2., 3., 4., 5.])
Note that b remains unchanged by the change in c, since the b and c variables now refer to different data. Copying of the elements of a sub-array is also done by the copy() method: b = a[1:-1].copy().
a = np.array([-1, 4.5, 8, 9])
b = a[1:-2]         # view: changes in a are reflected in b
b = a[1:-2].copy()  # copy: changes in a are not reflected in b
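Whether two arrays actually share data can be checked with np.shares_memory (a small sketch, continuing with the array a above):

print(np.shares_memory(a, a[1:-2]))         # True: slicing gives a view
print(np.shares_memory(a, a[1:-2].copy()))  # False: copy() gives new data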
Solving linear systems of the form \( Ax=b \) with a matrix \( A \) and vectors \( x \) and \( b \) is a frequently encountered task. A sample \( 2\times 2 \) system can be coded as
import numpy as np
A = np.array([[1., 2],
              [4, 2]])
b = np.array([1., -2])
x = np.linalg.solve(A, b)
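A quick way to verify the computed solution is to insert it back into the system and compare with b:

print(np.allclose(A.dot(x), b))  # True if x solves Ax=b (within round-off)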
Many scientific computing problems involve large matrices with special sparse structures. Significant memory and computing time can be saved by utilizing sparse matrix data structures and associated algorithms.
Python has the sparse matrix package scipy.sparse, which can be used to construct various types of sparse matrices. The particular function scipy.sparse.spdiags can construct a sparse matrix from a few vectors representing the diagonals in the matrix. All the diagonals must have the same length as the dimension of the sparse matrix, so some elements of the diagonals are not used: the first \( k \) elements of the \( k \) super-diagonal are not used, and the last \( k \) elements of the \( -k \) sub-diagonal are not used.
Here is an example of constructing a tridiagonal matrix:
>>> import numpy as np
>>> N = 6
>>> diagonals = np.zeros((3, N))   # 3 diagonals
>>> diagonals[0,:] = np.linspace(-1, -N, N)
>>> diagonals[1,:] = -2
>>> diagonals[2,:] = np.linspace(1, N, N)
>>> import scipy.sparse
>>> A = scipy.sparse.spdiags(diagonals, [-1,0,1], N, N, format='csc')
>>> print(A.toarray())   # look at corresponding dense matrix
[[-2. 2. 0. 0. 0. 0.]
[-1. -2. 3. 0. 0. 0.]
[ 0. -2. -2. 4. 0. 0.]
[ 0. 0. -3. -2. 5. 0.]
[ 0. 0. 0. -4. -2. 6.]
[ 0. 0. 0. 0. -5. -2.]]
Solving a linear system with a tridiagonal coefficient matrix requires a special function with an algorithm that takes advantage of the matrix structure. The next example chooses a solution x, computes the corresponding right-hand side \( b=Ax \) using the sparse matrix-vector product associated with the sparse matrix \( A \), and then solves \( Ax=b \):
>>> x = np.linspace(-1, 1, N) # choose solution
>>> b = A.dot(x) # sparse matrix vector product
>>> import scipy.sparse.linalg
>>> x = scipy.sparse.linalg.spsolve(A, b)
>>> print(x)
[-1. -0.6 -0.2 0.2 0.6 1. ]
Although we can easily check that x is correctly computed, we can do another check by converting the linear system to its dense counterpart:
>>> A_d = A.toarray() # corresponding dense matrix
>>> b = np.dot(A_d, x) # standard matrix vector product
>>> x = np.linalg.solve(A_d, b) # standard Ax=b algorithm
>>> print(x)
[-1. -0.6 -0.2 0.2 0.6 1. ]
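As a final sanity check, the computed solution can be compared with the chosen exact solution (a small addition to the session):

>>> np.allclose(x, np.linspace(-1, 1, N))
True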