
# Rate of convergence

With the methods above, we noticed that the number of iterations or function calls could differ quite substantially. The number of iterations needed to find a solution is closely related to the rate of convergence, i.e., how fast the error decreases as we approach the root. More precisely, we introduce the error in iteration $$n$$ as $$e_n=|x-x_n|$$, and define the convergence rate $$q$$ through $$$$e_{n+1} = Ce_n^q, \tag{6.5}$$$$ where $$C$$ is a constant. The exponent $$q$$ measures how fast the error is reduced from one iteration to the next. The larger $$q$$ is, the faster the error goes to zero, and the fewer iterations we need to meet the stopping criterion $$|f(x)| < \epsilon$$.
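To get a feel for the effect of $$q$$, we can simply iterate the error model (6.5) for different exponents. The following sketch (with the assumed, arbitrary choices $$C=0.5$$ and $$e_0=0.1$$) compares linear convergence ($$q=1$$) with quadratic convergence ($$q=2$$):

```python
# Sketch: iterate the error model e_{n+1} = C*e_n**q for different q,
# assuming (arbitrarily) C = 0.5 and an initial error e_0 = 0.1.

def error_sequence(q, C=0.5, e0=0.1, n=5):
    """Return [e_0, e_1, ..., e_n] generated by e_{n+1} = C*e_n**q."""
    errors = [e0]
    for _ in range(n):
        errors.append(C*errors[-1]**q)
    return errors

linear = error_sequence(q=1)      # the error is halved each iteration
quadratic = error_sequence(q=2)   # the error is (roughly) squared each iteration
print(linear)
print(quadratic)
```

After only five iterations, the linearly convergent sequence has reduced the error by a factor $$2^5=32$$, while the quadratically convergent one is already at the order of $$10^{-42}$$.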

A single $$q$$ in (6.5) is defined in the limit $$n\rightarrow\infty$$. For finite $$n$$, and especially small $$n$$, $$q$$ will vary with $$n$$. To estimate $$q$$, we can compute all the errors $$e_n$$ and set up (6.5) for three consecutive iterations $$n-1$$, $$n$$, and $$n+1$$: \begin{align*} e_{n} &= Ce_{n-1}^q,\\ e_{n+1} &= Ce_n^q\thinspace . \end{align*} Dividing the second equation by the first and solving with respect to $$q$$ gives $$q = \frac{\ln (e_{n+1}/e_n)}{\ln(e_n/e_{n-1})}\thinspace .$$ Since this $$q$$ will vary somewhat with $$n$$, we call it $$q_n$$. As $$n$$ grows, we expect $$q_n$$ to approach a limit ($$q_n\rightarrow q$$). To compute all the $$q_n$$ values, we need all the $$x_n$$ approximations. However, our previous implementations of Newton's method, the secant method, and the bisection method returned just the final approximation.

Therefore, we have extended the implementations in the module file nonlinear_solvers.py such that the user can choose whether the final value or the whole history of solutions is to be returned. Each of the extended implementations now takes an extra parameter return_x_list. This boolean parameter is set to True if the function should return all the root approximations, or to False if only the final approximation is wanted. As an example, let us take a closer look at Newton:

import sys

def Newton(f, dfdx, x, eps, return_x_list=False):
    f_value = f(x)
    iteration_counter = 0
    if return_x_list:
        x_list = []

    while abs(f_value) > eps and iteration_counter < 100:
        try:
            x = x - float(f_value)/dfdx(x)
        except ZeroDivisionError:
            print("Error! - derivative zero for x = ", x)
            sys.exit(1)     # Abort with error

        f_value = f(x)
        iteration_counter += 1
        if return_x_list:
            x_list.append(x)

    # Here, either a solution is found, or too many iterations
    if abs(f_value) > eps:
        iteration_counter = -1  # i.e., lack of convergence

    if return_x_list:
        return x_list, iteration_counter
    else:
        return x, iteration_counter

The function is found in the file nonlinear_solvers.py.

We can now make a call

x, iter = Newton(f, dfdx, x=1000, eps=1e-6, return_x_list=True)

and get a list x returned. With knowledge of the exact solution $$x$$ of $$f(x)=0$$ we can compute all the errors $$e_n$$ and all the associated $$q_n$$ values with the compact function

from math import log

def rate(x, x_exact):
    e = [abs(x_ - x_exact) for x_ in x]
    q = [log(e[n+1]/e[n])/log(e[n]/e[n-1])
         for n in range(1, len(e)-1)]
    return q
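As a quick sanity check of this formula, we can feed it a synthetic sequence of approximations whose errors square in each iteration; the estimated rates should then all be (close to) 2. The sequence below is hypothetical, not produced by any of the solvers:

```python
from math import log

# Hypothetical approximations to x_exact = 0 whose errors obey
# e_{n+1} = e_n**2, i.e., pure quadratic convergence (q = 2, C = 1)
x_exact = 0.0
e = [0.1]
for _ in range(4):
    e.append(e[-1]**2)
x = [x_exact + e_ for e_ in e]     # x_n = x_exact + e_n

# The same formula as used in rate()
q = [log(e[n+1]/e[n])/log(e[n]/e[n-1]) for n in range(1, len(e)-1)]
print(['%.2f' % q_ for q_ in q])   # ['2.00', '2.00', '2.00']
```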

The error model (6.5) works well for Newton's method and the secant method. For the bisection method, however, it works well in the beginning, but not when the solution is approached.

We can compute the rates $$q_n$$ and print them nicely,

def print_rates(method, x, x_exact):
    q = ['%.2f' % q_ for q_ in rate(x, x_exact)]
    print(method + ':')
    for q_ in q:
        print(q_, end=' ')
    print()

The result for print_rates('Newton', x, 3) is

Newton:
1.01 1.02 1.03 1.07 1.14 1.27 1.51 1.80 1.97 2.00

indicating that $$q=2$$ is the rate for Newton's method. A similar computation using the secant method gives the rates

secant:
1.26 0.93 1.05 1.01 1.04 1.05 1.08 1.13 1.20 1.30 1.43
1.54 1.60 1.62 1.62

Here it seems that $$q\approx 1.6$$ is the limit.
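To reproduce such a table of rates, the pieces above can be combined into one script. The sketch below assumes the test problem $$f(x)=x^2-9$$ with exact root $$x=3$$ (consistent with the exact solution 3 used in the call print_rates('Newton', x, 3)); the Newton and rate functions are repeated in condensed form so the snippet runs on its own:

```python
from math import log

def Newton(f, dfdx, x, eps, return_x_list=False):
    # Condensed version of the Newton function shown earlier
    f_value = f(x)
    iteration_counter = 0
    x_list = []
    while abs(f_value) > eps and iteration_counter < 100:
        x = x - float(f_value)/dfdx(x)
        f_value = f(x)
        iteration_counter += 1
        x_list.append(x)
    if abs(f_value) > eps:
        iteration_counter = -1  # lack of convergence
    if return_x_list:
        return x_list, iteration_counter
    return x, iteration_counter

def rate(x, x_exact):
    e = [abs(x_ - x_exact) for x_ in x]
    return [log(e[n+1]/e[n])/log(e[n]/e[n-1])
            for n in range(1, len(e)-1)]

# Hypothetical test problem: f(x) = x^2 - 9 with exact root x = 3
f = lambda x: x**2 - 9
dfdx = lambda x: 2*x
x, n_iter = Newton(f, dfdx, x=1000, eps=1e-6, return_x_list=True)
q = rate(x, 3)
print(' '.join('%.2f' % q_ for q_ in q))
```

The printed rates start near 1 (while Newton's method roughly halves the large initial guess) and settle at 2 as the iterates enter the quadratic convergence regime near the root.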

Remark. If, in the bisection method, we take the length of the current interval containing the solution as the error $$e_n$$, then (6.5) works perfectly: $$e_{n+1}=\frac{1}{2}e_n$$, i.e., $$q=1$$ and $$C=\frac{1}{2}$$. However, if $$e_n$$ is the true error $$|x-x_n|$$, it is easily seen from a sketch that this error can oscillate between the current interval length and a potentially very small value as we approach the exact solution. The corresponding rates $$q_n$$ fluctuate widely and are of no interest.
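The first claim is easy to verify numerically: measuring the error by the interval length, every estimated rate equals 1. The sketch below assumes a hypothetical initial bracketing interval of length 1000 (the value of the initial interval does not matter, since only ratios of errors enter the formula):

```python
from math import log

# Interval lengths in the bisection method: each step halves the
# bracketing interval, so e_{n+1} = e_n/2 regardless of f.
# Hypothetical initial interval length: 1000.
e = [1000.0/2**n for n in range(10)]

# Estimated rates q_n from these "errors": all equal to 1
q = [log(e[n+1]/e[n])/log(e[n]/e[n-1]) for n in range(1, len(e)-1)]
print(q)
```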