$$ \newcommand{\Oof}[1]{\mathcal{O}(#1)} \newcommand{\F}{\boldsymbol{F}} \newcommand{\J}{\boldsymbol{J}} \newcommand{\x}{\boldsymbol{x}} \renewcommand{\c}{\boldsymbol{c}} $$

Rate of convergence

With the methods above, we noticed that the number of iterations or function calls could differ quite substantially. The number of iterations needed to find a solution is closely related to the rate of convergence, which is how fast the error decreases as we approach the root. More precisely, we introduce the error in iteration \( n \) as \( e_n=|x-x_n| \), and define the convergence rate \( q \) by $$ \begin{equation} e_{n+1} = Ce_n^q, \tag{6.5} \end{equation} $$ where \( C \) is a constant. The exponent \( q \) measures how fast the error is reduced from one iteration to the next. The larger \( q \) is, the faster the error goes to zero, and the fewer iterations we need to meet the stopping criterion \( |f(x)| < \epsilon \).
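To get a feel for what different \( q \) values mean, take an initial error \( e_0 = 10^{-2} \) in (6.5): $$ \begin{align*} q=1,\ C=\tfrac{1}{2}:&\quad 10^{-2},\ 5\cdot 10^{-3},\ 2.5\cdot 10^{-3},\ 1.25\cdot 10^{-3},\ \ldots\\ q=2,\ C=1:&\quad 10^{-2},\ 10^{-4},\ 10^{-8},\ 10^{-16},\ \ldots \end{align*} $$ With \( q=1 \) the number of correct digits grows by a fixed amount per iteration, while with \( q=2 \) it roughly doubles per iteration.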

A single \( q \) in (6.5) is defined in the limit \( n\rightarrow\infty \). For finite \( n \), and especially for small \( n \), \( q \) will vary with \( n \). To estimate \( q \), we can compute all the errors \( e_n \) and set up (6.5) for three consecutive iterations \( n-1 \), \( n \), and \( n+1 \): $$ \begin{align*} e_{n} &= Ce_{n-1}^q,\\ e_{n+1} &= Ce_n^q\thinspace . \end{align*} $$ Dividing the second equation by the first, so that \( C \) cancels, and solving with respect to \( q \) gives $$ q = \frac{\ln (e_{n+1}/e_n)}{\ln(e_n/e_{n-1})}\thinspace .$$ Since this \( q \) will vary somewhat with \( n \), we call it \( q_n \). As \( n \) grows, we expect \( q_n \) to approach a limit (\( q_n\rightarrow q \)). To compute all the \( q_n \) values, we need all the \( x_n \) approximations. However, our previous implementations of Newton's method, the secant method, and the bisection method returned just the final approximation.
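Before applying this formula to our solvers, we can verify it on synthetic data. The following small script (our own illustration, not one of the solver files) generates errors that obey (6.5) exactly with \( q=2 \) and \( C=1 \), and confirms that the estimates \( q_n \) recover \( q \):

% Verify the q_n formula on errors satisfying e_{n+1} = C*e_n^q exactly
e = zeros(6, 1);
e(1) = 0.1;                 % assumed initial error e_0
for n = 1:5
    e(n+1) = e(n)^2;        % e_{n+1} = C*e_n^q with C = 1, q = 2
end
for n = 2:5
    q_n = log(e(n+1)/e(n))/log(e(n)/e(n-1));
    fprintf('q_%d = %.2f\n', n, q_n);    % prints 2.00 every time
end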

Therefore, we have extended those previous implementations so that the user can choose whether the final value or the whole history of approximations is returned. The extended implementations are named Newton_solver, secant_solver, and bisection_solver. Compared to the previous implementations, each of these takes an extra parameter return_x_list. This parameter is a boolean, set to true if the function is supposed to return all the root approximations, or false if the function should return only the final approximation. As an example, let us take a closer look at Newton_solver:

function [sol, no_it] = Newton_solver(f, dfdx, x, eps, return_x_list)
    % Solve f(x) = 0 by Newton's method, starting from the initial
    % guess x. If return_x_list is true, sol holds all the computed
    % approximations; otherwise sol is the final approximation only.
    f_value = f(x);
    iteration_counter = 0;
    if return_x_list
        x_list = [];
    end
    while abs(f_value) > eps && iteration_counter < 100
        dfdx_value = dfdx(x);
        if dfdx_value == 0
            % In MATLAB, division by zero gives Inf rather than an
            % error, so we must test the derivative explicitly
            fprintf('Error! - derivative zero for x = %g\n', x)
            break
        end
        x = x - f_value/dfdx_value;
        f_value = f(x);
        iteration_counter = iteration_counter + 1;
        if return_x_list
            x_list = [x_list x];
        end
    end
    % Here, either a solution is found, or too many iterations
    if abs(f_value) > eps
        iteration_counter = -1;   % i.e., lack of convergence
    end

    no_it = iteration_counter;
    if return_x_list
        sol = x_list;
    else
        sol = x;
    end
end

The function is found in the file Newton_solver.m.
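To try the function out, we need a concrete equation. In line with the exact root 3 used below, we assume the test problem \( f(x) = x^2 - 9 \) (this particular choice is our assumption for the illustration):

f = @(x) x^2 - 9;      % assumed test problem with exact root x = 3
dfdx = @(x) 2*x;       % its derivative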

We can now make a call

[x, iter] = Newton_solver(f, dfdx, 1000, 1e-6, true);

and get an array x returned. With knowledge of the exact solution \( x \) of \( f(x)=0 \), we can compute all the errors \( e_n \) and associated \( q_n \) values with the compact function

function q = rate(x, x_exact)
    % Compute the convergence rates q_n from the solution history x
    % and the exact solution x_exact
    e = abs(x - x_exact);
    q = zeros(length(e)-2, 1);
    for n = 2:(length(e)-1)
        q(n-1) = log(e(n+1)/e(n))/log(e(n)/e(n-1));
    end
end
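As a side remark, the loop can also be avoided. The following sketch (a variant of our own, with the hypothetical name rate_vectorized) computes the same \( q_n \) values in vectorized form, using the fact that \( q_n \) is a ratio of consecutive differences of \( \ln e_n \):

function q = rate_vectorized(x, x_exact)
    % Same values as rate (up to row/column orientation): with
    % d(n) = log(e(n+1)) - log(e(n)), we have q_n = d(n)/d(n-1)
    e = abs(x - x_exact);
    d = diff(log(e));
    q = d(2:end) ./ d(1:end-1);
end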

The error model (6.5) works well for Newton's method and the secant method. For the bisection method, however, it works well in the beginning, but not when the solution is approached.

We can compute the rates \( q_n \) and print them nicely:

function print_rates(method, x, x_exact)
    q = rate(x, x_exact);
    fprintf('%s:\n', method)
    for i = 1:length(q)
        fprintf('%.2f ', q(i));
    end
    fprintf('\n')
end
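Putting the pieces together, a complete session could look as follows (the definitions of f and dfdx repeat the assumed test problem \( x^2 - 9 = 0 \) from above):

f = @(x) x^2 - 9;      % assumed test problem, exact root 3
dfdx = @(x) 2*x;
[x, iter] = Newton_solver(f, dfdx, 1000, 1e-6, true);
print_rates('Newton', x, 3)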

The result for print_rates('Newton', x, 3) is

Newton:
1.01 1.02 1.03 1.07 1.14 1.27 1.51 1.80 1.97 2.00

indicating that \( q=2 \) is the rate for Newton's method. A similar computation using the secant method gives the rates

secant:
1.26 0.93 1.05 1.01 1.04 1.05 1.08 1.13 1.20 1.30 1.43
1.54 1.60 1.62 1.62

Here it seems that \( q\approx 1.6 \) is the limit, in agreement with the theoretical convergence rate of the secant method, which is the golden ratio \( (1+\sqrt{5})/2\approx 1.618 \).

Remark. If, in the bisection method, we take the length of the current interval containing the solution as the error \( e_n \), then (6.5) works perfectly, since \( e_{n+1}=\frac{1}{2}e_n \), i.e., \( q=1 \) and \( C=\frac{1}{2} \). However, if \( e_n \) is the true error \( |x-x_n| \), it is easily seen from a sketch that this error can oscillate between the current interval length and a potentially very small value as we approach the exact solution. The corresponding rates \( q_n \) fluctuate widely and are of no interest.
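A quick numerical check of the first claim (our own sketch): with interval lengths \( e_n = e_0/2^n \), every rate estimate equals 1.

e = 1 ./ 2.^(0:10);              % interval lengths, halved in each iteration
d = diff(log(e));                % each entry equals -log(2)
q = d(2:end) ./ d(1:end-1)       % all entries equal 1, i.e., q = 1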