Discretization of 1D stationary nonlinear differential equations

The section Linearization at the differential equation level presented methods for linearizing time-discrete PDEs directly prior to discretization in space. We can alternatively carry out the discretization in space of the time-discrete nonlinear PDE problem and get a system of nonlinear algebraic equations, which can be solved by Picard iteration or Newton’s method as presented in the section Systems of nonlinear algebraic equations. This latter approach will now be described in detail.

We shall work with the 1D problem

\[\tag{47} -({\alpha}(u)u^{\prime})^{\prime} + au = f(u),\quad x\in (0,L), \quad {\alpha}(u(0))u^{\prime}(0) = C,\ u(L)=D {\thinspace .}\]

The problem (47) arises from the stationary limit of a diffusion equation,

\[\tag{48} \frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left( \alpha(u)\frac{\partial u}{\partial x}\right) - au + f(u),\]

as \(t\rightarrow\infty\) and \(\partial u/\partial t\rightarrow 0\). Alternatively, the problem (47) arises at each time level from implicit time discretization of (48). For example, a Backward Euler scheme for (48) leads to

\[\tag{49} \frac{u^{n}-u^{n-1}}{\Delta t} = \frac{d}{dx}\left( \alpha(u^n)\frac{du^n}{dx}\right) - au^n + f(u^n){\thinspace .}\]

Introducing \(u(x)\) for \(u^n(x)\) and \(u^{(1)}\) for \(u^{n-1}\), and defining \(f(u)\) in (47) to be \(f(u)\) in (49) plus \(u^{(1)}/\Delta t\), gives (47) with \(a\) replaced by \(a + 1/\Delta t\).
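Written out, the rearrangement of (49) into stationary form reads

\[-\frac{d}{dx}\left(\alpha(u^n)\frac{du^n}{dx}\right) + \left(a + \frac{1}{\Delta t}\right)u^n = f(u^n) + \frac{u^{n-1}}{\Delta t},\]

which has the structure of (47) with \(u^n\) as the unknown, \(a+1/\Delta t\) as the coefficient in front of \(u\), and \(f(u)+u^{(1)}/\Delta t\) as the right-hand side function.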

Finite difference discretization

Since the technical steps in finite difference discretization in space are much simpler than those in the finite element method, we start with finite differences to illustrate the concept of handling this nonlinear problem and to minimize the spatial discretization details.

The nonlinearity in the differential equation (47) poses no more difficulty than a variable coefficient, as in the term \(({\alpha}(x)u^{\prime})^{\prime}\). We can therefore use a standard finite difference approach to discretizing the Laplace term with a variable coefficient:

\[[-D_x{\alpha} D_x u +au = f]_i{\thinspace .}\]

Writing this out for a uniform mesh with points \(x_i=i\Delta x\), \(i=0,\ldots,N_x\), leads to

\[\tag{50} -\frac{1}{\Delta x^2} \left({\alpha}_{i+\frac{1}{2}}(u_{i+1}-u_i) - {\alpha}_{i-\frac{1}{2}}(u_{i}-u_{i-1})\right) + au_i = f(u_i){\thinspace .}\]

This equation is valid at all the mesh points \(i=0,1,\ldots,N_x-1\). At \(i=N_x\) we have the Dirichlet condition \(u_i=D\). The only difference from the case with \(({\alpha}(x)u^{\prime})^{\prime}\) and \(f(x)\) is that now \({\alpha}\) and \(f\) are functions of \(u\) and not only of \(x\): \(({\alpha}(u(x))u^{\prime})^{\prime}\) and \(f(u(x))\).

The quantity \({\alpha}_{i+\frac{1}{2}}\), evaluated between two mesh points, needs a comment. Since \({\alpha}\) depends on \(u\) and \(u\) is only known at the mesh points, we need to express \({\alpha}_{i+\frac{1}{2}}\) in terms of \(u_i\) and \(u_{i+1}\). For this purpose we use an arithmetic mean, although a harmonic mean is also common in this context if \({\alpha}\) features large jumps. There are two choices of arithmetic means:

\[\begin{split}{\alpha}_{i+\frac{1}{2}} &\approx {\alpha}(\frac{1}{2}(u_i + u_{i+1})) = [{\alpha}(\overline{u}^x)]^{i+\frac{1}{2}},\\ {\alpha}_{i+\frac{1}{2}} &\approx \frac{1}{2}({\alpha}(u_i) + {\alpha}(u_{i+1})) = [\overline{{\alpha}(u)}^x]^{i+\frac{1}{2}}{\thinspace .}\end{split}\]
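The distinction matters when \({\alpha}\) is nonlinear: the two means generally give different values. A tiny numerical illustration, with \({\alpha}(u)=u^2\) chosen purely as an example (not taken from the text):

```python
# Compare the two arithmetic means for alpha(u) = u**2 on a pair of
# neighboring nodal values (illustrative choice of alpha and values).
alpha = lambda u: u**2
ui, uip1 = 1.0, 2.0

mean_of_u = alpha(0.5*(ui + uip1))             # alpha of the mean: alpha(1.5) = 2.25
mean_of_alpha = 0.5*(alpha(ui) + alpha(uip1))  # mean of alpha values: (1 + 4)/2 = 2.5
print(mean_of_u, mean_of_alpha)
```

For a linear \({\alpha}\) the two coincide; the difference above is \(O((u_{i+1}-u_i)^2)\), so both choices retain second-order accuracy.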

Equation (50) with the latter approximation then looks like

\[-\frac{1}{2\Delta x^2} \left(({\alpha}(u_i)+{\alpha}(u_{i+1}))(u_{i+1}-u_i) - ({\alpha}(u_{i-1})+{\alpha}(u_{i}))(u_{i}-u_{i-1})\right)\nonumber\]
\[\tag{51} \qquad\qquad + au_i = f(u_i),\]

or written more compactly,

\[[-D_x\overline{{\alpha}}^x D_x u +au = f]_i{\thinspace .}\]

At mesh point \(i=0\) we have the boundary condition \({\alpha}(u)u^{\prime}=C\), which is discretized by

\[[{\alpha}(u)D_{2x}u = C]_0,\]

meaning

\[\tag{52} {\alpha}(u_0)\frac{u_{1} - u_{-1}}{2\Delta x} = C{\thinspace .}\]

The fictitious value \(u_{-1}\) can be eliminated with the aid of the discretized boundary condition (52). Formally, (51) should be solved with respect to \(u_{i-1}\) and that value (for \(i=0\)) should be inserted in (52), but it is algebraically much easier to do it the other way around: solve (52) with respect to \(u_{-1}\) and insert this expression in (51) for \(i=0\). Alternatively, one can use a ghost cell \([-\Delta x,0]\) and update the \(u_{-1}\) value in the ghost cell according to (52) after every Picard or Newton iteration. Such an approach means that we use a known \(u_{-1}\) value in (51) from the previous iteration.

Solution of algebraic equations

The structure of the equation system

The nonlinear algebraic equations (51) are of the form \(A(u)u = b(u)\) with

\[\begin{split}A_{i,i} &= \frac{1}{2\Delta x^2}({\alpha}(u_{i-1}) + 2{\alpha}(u_{i}) +{\alpha}(u_{i+1})) + a,\\ A_{i,i-1} &= -\frac{1}{2\Delta x^2}({\alpha}(u_{i-1}) + {\alpha}(u_{i})),\\ A_{i,i+1} &= -\frac{1}{2\Delta x^2}({\alpha}(u_{i}) + {\alpha}(u_{i+1})),\\ b_i &= f(u_i){\thinspace .}\end{split}\]

The matrix \(A(u)\) is tridiagonal: \(A_{i,j}=0\) for \(j > i+1\) and \(j < i-1\).

The above expressions are valid for internal mesh points \(1\leq i\leq N_x-1\). For \(i=0\) we need to express \(u_{i-1}=u_{-1}\) in terms of \(u_1\) using (52):

\[\tag{53} u_{-1} = u_1 -\frac{2\Delta x}{{\alpha}(u_0)}C{\thinspace .}\]

This value enters \(A_{0,0}\) through \({\alpha}(u_{-1})\). In addition, eliminating \(u_{-1}\) from the equation at \(i=0\) merges the \(u_{-1}\) term into the coefficient of \(u_1\) and adds a \(C\)-dependent contribution to \(b_0\), while \(A_{i,i-1}\) does not enter the system when \(i=0\). The details are worked out for a small mesh below.

Regarding the last equation, its form depends on whether we include the Dirichlet condition \(u(L)=D\), meaning \(u_{N_x}=D\), in the nonlinear algebraic equation system or not. Suppose we choose \((u_0,u_1,\ldots,u_{N_x-1})\) as unknowns, later referred to as systems without Dirichlet conditions. The last equation corresponds to \(i=N_x-1\). It involves the boundary value \(u_{N_x}\), which is substituted by \(D\). If the unknown vector includes the boundary value, \((u_0,u_1,\ldots,u_{N_x})\), later referred to as system including Dirichlet conditions, the equation for \(i=N_x-1\) just involves the unknown \(u_{N_x}\), and the final equation becomes \(u_{N_x}=D\), corresponding to \(A_{i,i}=1\) and \(b_i=D\) for \(i=N_x\).

Picard iteration

The obvious Picard iteration scheme is to use previously computed values of \(u_i\) in \(A(u)\) and \(b(u)\), as described in more detail in the section Systems of nonlinear algebraic equations. With the notation \(u^{-}\) for the most recently computed value of \(u\), we have the system \(F(u)\approx \hat F(u) = A(u^{-})u - b(u^{-})\), with \(F=(F_0,F_1,\ldots,F_m)\), \(u=(u_0,u_1,\ldots,u_m)\). The index \(m\) is \(N_x\) if the system includes the Dirichlet condition as a separate equation and \(N_x-1\) otherwise. The matrix \(A(u^{-})\) is tridiagonal, so the solution procedure is to fill a tridiagonal matrix data structure and the right-hand side vector with the right numbers and call a Gaussian elimination routine for tridiagonal linear systems.
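A minimal sketch of this procedure in Python may look as follows. The function name, the use of `scipy.linalg.solve_banded` for the tridiagonal solves, the initial guess, and the stopping criterion are our own choices, not prescribed by the text:

```python
import numpy as np
from scipy.linalg import solve_banded

def picard(alpha, f, a, C, D, L=1.0, Nx=20, tol=1e-10, max_iter=100):
    """Picard iteration for -(alpha(u)u')' + a*u = f(u) with
    alpha(u(0))u'(0) = C and u(L) = D on a uniform mesh.
    The Dirichlet condition is included as a separate equation.
    alpha must accept arrays and scalars; f is called pointwise."""
    dx = L / Nx
    u = np.full(Nx + 1, D, dtype=float)          # initial guess
    for _ in range(max_iter):
        u_ = u.copy()                            # most recently computed u
        al = alpha(u_)
        A = np.zeros((3, Nx + 1))                # banded (1,1) storage
        b = np.zeros(Nx + 1)
        for i in range(1, Nx):                   # interior equations
            A[1, i] = (al[i-1] + 2*al[i] + al[i+1])/(2*dx**2) + a
            A[2, i-1] = -(al[i-1] + al[i])/(2*dx**2)   # entry (i, i-1)
            A[0, i+1] = -(al[i] + al[i+1])/(2*dx**2)   # entry (i, i+1)
            b[i] = f(u_[i])
        # i = 0: eliminate the fictitious u_{-1} via (53)
        al_m1 = alpha(u_[1] - 2*dx*C/al[0])
        A[1, 0] = (al_m1 + 2*al[0] + al[1])/(2*dx**2) + a
        A[0, 1] = -(al_m1 + 2*al[0] + al[1])/(2*dx**2)
        b[0] = f(u_[0]) - (al_m1 + al[0])*C/(al[0]*dx)
        # i = Nx: Dirichlet condition as a separate equation
        A[1, Nx] = 1.0
        b[Nx] = D
        u = solve_banded((1, 1), A, b)
        if np.max(np.abs(u - u_)) < tol:
            break
    return np.linspace(0, L, Nx + 1), u
```

For a linear test case, \({\alpha}=1\) and \(a=f=0\), the scheme reproduces the exact solution \(u=Cx + (D-CL)\) after one iteration, which is a convenient correctness check.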

Mesh with two cells

To understand the details, it helps to write out all the mathematics in a specific case with a small mesh, say just two cells (\(N_x=2\)). We use \(u^{-}_i\) for the \(i\)-th component in \(u^{-}\).

The starting point is the basic expressions for the nonlinear equations at mesh points \(i=0\) and \(i=1\):

\[\tag{54} A_{0,-1}u_{-1} + A_{0,0}u_0 + A_{0,1}u_1 = b_0,\]
\[\tag{55} A_{1,0}u_{0} + A_{1,1}u_1 + A_{1,2}u_2 = b_1{\thinspace .}\]

Equation (54) written out reads

\[\begin{split}\frac{1}{2\Delta x^2}(& -({\alpha}(u_{-1}) + {\alpha}(u_{0}))u_{-1}\, +\\ & ({\alpha}(u_{-1}) + 2{\alpha}(u_{0}) + {\alpha}(u_{1}))u_0\, -\\ & ({\alpha}(u_{0}) + {\alpha}(u_{1}))u_1) + au_0 =f(u_0){\thinspace .}\end{split}\]

We must then replace \(u_{-1}\) by (53). With Picard iteration we get

\[\begin{split}\frac{1}{2\Delta x^2}(& -({\alpha}(u^-_{-1}) + 2{\alpha}(u^-_{0}) + {\alpha}(u^-_{1}))u_1\, +\\ &({\alpha}(u^-_{-1}) + 2{\alpha}(u^-_{0}) + {\alpha}(u^-_{1}))u_0) + au_0\\ &=f(u^-_0) - \frac{1}{{\alpha}(u^-_0)\Delta x}({\alpha}(u^-_{-1}) + {\alpha}(u^-_{0}))C,\end{split}\]

where

\[u^-_{-1} = u_1^- -\frac{2\Delta x}{{\alpha}(u^-_0)}C{\thinspace .}\]

Equation (55) contains the unknown \(u_2\) for which we have a Dirichlet condition. In case we omit the condition as a separate equation, (55) with Picard iteration becomes

\[\begin{split}\frac{1}{2\Delta x^2}(&-({\alpha}(u^-_{0}) + {\alpha}(u^-_{1}))u_{0}\, + \\ &({\alpha}(u^-_{0}) + 2{\alpha}(u^-_{1}) + {\alpha}(u^-_{2}))u_1\, -\\ &({\alpha}(u^-_{1}) + {\alpha}(u^-_{2}))u_2) + au_1 =f(u^-_1){\thinspace .}\end{split}\]

We must now move the \(u_2\) term to the right-hand side and replace all occurrences of \(u_2\) by \(D\):

\[\begin{split}\frac{1}{2\Delta x^2}(&-({\alpha}(u^-_{0}) + {\alpha}(u^-_{1}))u_{0}\, +\\ & ({\alpha}(u^-_{0}) + 2{\alpha}(u^-_{1}) + {\alpha}(D))u_1) + au_1\\ &=f(u^-_1) + \frac{1}{2\Delta x^2}({\alpha}(u^-_{1}) + {\alpha}(D))D{\thinspace .}\end{split}\]

The two equations can be written as a \(2\times 2\) system:

\[\begin{split}\left(\begin{array}{cc} B_{0,0}& B_{0,1}\\ B_{1,0} & B_{1,1} \end{array}\right) \left(\begin{array}{c} u_0\\ u_1 \end{array}\right) = \left(\begin{array}{c} d_0\\ d_1 \end{array}\right),\end{split}\]

where

\[\tag{56} B_{0,0} =\frac{1}{2\Delta x^2}({\alpha}(u^-_{-1}) + 2{\alpha}(u^-_{0}) + {\alpha}(u^-_{1})) + a,\]
\[\tag{57} B_{0,1} = -\frac{1}{2\Delta x^2}({\alpha}(u^-_{-1}) + 2{\alpha}(u^-_{0}) + {\alpha}(u^-_{1})),\]
\[\tag{58} B_{1,0} = -\frac{1}{2\Delta x^2}({\alpha}(u^-_{0}) + {\alpha}(u^-_{1})),\]
\[\tag{59} B_{1,1} = \frac{1}{2\Delta x^2}({\alpha}(u^-_{0}) + 2{\alpha}(u^-_{1}) + {\alpha}(D)) + a,\]
\[\tag{60} d_0 = f(u^-_0) - \frac{1}{{\alpha}(u^-_0)\Delta x}({\alpha}(u^-_{-1}) + {\alpha}(u^-_{0}))C,\]
\[\tag{61} d_1 = f(u^-_1) + \frac{1}{2\Delta x^2}({\alpha}(u^-_{1}) + {\alpha}(D))D{\thinspace .}\]
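The coefficients above can be verified symbolically. The following sketch (using sympy; the symbol names are our own) eliminates \(u_{-1}\) from the equation at \(i=0\) and extracts \(B_{0,0}\), \(B_{0,1}\), and \(d_0\):

```python
import sympy as sp

u_m1, u0, u1, dx, C, a, f0 = sp.symbols('u_m1 u0 u1 dx C a f0')
al_m1, al0, al1 = sp.symbols('alpha_m1 alpha0 alpha1')  # alpha(u^-) values

# Equation (54) at i = 0, with Picard-frozen coefficients, minus the rhs f0
lhs = (-(al_m1 + al0)*u_m1 + (al_m1 + 2*al0 + al1)*u0
       - (al0 + al1)*u1)/(2*dx**2) + a*u0
eq = sp.expand(lhs.subs(u_m1, u1 - 2*dx*C/al0) - f0)  # eliminate u_{-1} by (53)

B00 = eq.coeff(u0)
B01 = eq.coeff(u1)
d0 = -eq.subs({u0: 0, u1: 0})   # constant part, moved to the right-hand side

# The results agree with (56), (57), and (60):
assert sp.simplify(B00 - ((al_m1 + 2*al0 + al1)/(2*dx**2) + a)) == 0
assert sp.simplify(B01 + (al_m1 + 2*al0 + al1)/(2*dx**2)) == 0
assert sp.simplify(d0 - (f0 - (al_m1 + al0)*C/(al0*dx))) == 0
```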

The system with the Dirichlet condition becomes

\[\begin{split}\left(\begin{array}{ccc} B_{0,0}& B_{0,1} & 0\\ B_{1,0} & B_{1,1} & B_{1,2}\\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{c} u_0\\ u_1\\ u_2 \end{array}\right) = \left(\begin{array}{c} d_0\\ d_1\\ D \end{array}\right),\end{split}\]

with

\[\tag{62} B_{1,1} = \frac{1}{2\Delta x^2}({\alpha}(u^-_{0}) + 2{\alpha}(u^-_{1}) + {\alpha}(u^-_{2})) + a,\]
\[\tag{63} B_{1,2} = - \frac{1}{2\Delta x^2}({\alpha}(u^-_{1}) + {\alpha}(u^-_{2})),\]
\[\tag{64} d_1 = f(u^-_1){\thinspace .}\]

Other entries are as in the \(2\times 2\) system.

Newton’s method

The Jacobian must be derived in order to use Newton’s method. Here it means that we need to differentiate \(F(u)=A(u)u - b(u)\) with respect to the unknown parameters \(u_0,u_1,\ldots,u_m\) (\(m=N_x\) or \(m=N_x-1\), depending on whether the Dirichlet condition is included in the nonlinear system \(F(u)=0\) or not). Nonlinear equation number \(i\) has the structure

\[F_i = A_{i,i-1}(u_{i-1},u_i)u_{i-1} + A_{i,i}(u_{i-1},u_i,u_{i+1})u_i + A_{i,i+1}(u_i, u_{i+1})u_{i+1} - b_i(u_i){\thinspace .}\]

Computing the Jacobian requires careful differentiation. For example,

\[\begin{split}\frac{\partial}{\partial u_i}(A_{i,i}(u_{i-1},u_i,u_{i+1})u_i) &= \frac{\partial A_{i,i}}{\partial u_i}u_i + A_{i,i} \frac{\partial u_i}{\partial u_i}\\ &= \frac{\partial}{\partial u_i}( \frac{1}{2\Delta x^2}({\alpha}(u_{i-1}) + 2{\alpha}(u_{i}) +{\alpha}(u_{i+1})) + a)u_i +\\ &\quad\frac{1}{2\Delta x^2}({\alpha}(u_{i-1}) + 2{\alpha}(u_{i}) +{\alpha}(u_{i+1})) + a\\ &= \frac{1}{2\Delta x^2}(2{\alpha}^\prime (u_i)u_i +{\alpha}(u_{i-1}) + 2{\alpha}(u_{i}) +{\alpha}(u_{i+1})) + a{\thinspace .}\end{split}\]

The complete Jacobian becomes

\[\begin{split}J_{i,i} &= \frac{\partial F_i}{\partial u_i} = \frac{\partial A_{i,i-1}}{\partial u_i}u_{i-1} + \frac{\partial A_{i,i}}{\partial u_i}u_i + A_{i,i} + \frac{\partial A_{i,i+1}}{\partial u_i}u_{i+1} - \frac{\partial b_i}{\partial u_{i}}\\ &= \frac{1}{2\Delta x^2}( -{\alpha}^{\prime}(u_i)u_{i-1} +2{\alpha}^{\prime}(u_i)u_{i} +{\alpha}(u_{i-1}) + 2{\alpha}(u_i) + {\alpha}(u_{i+1})) +\\ &\quad a -\frac{1}{2\Delta x^2}{\alpha}^{\prime}(u_{i})u_{i+1} - b^{\prime}(u_i),\\ J_{i,i-1} &= \frac{\partial F_i}{\partial u_{i-1}} = \frac{\partial A_{i,i-1}}{\partial u_{i-1}}u_{i-1} + A_{i,i-1} + \frac{\partial A_{i,i}}{\partial u_{i-1}}u_i - \frac{\partial b_i}{\partial u_{i-1}}\\ &= \frac{1}{2\Delta x^2}( -{\alpha}^{\prime}(u_{i-1})u_{i-1} - ({\alpha}(u_{i-1}) + {\alpha}(u_i)) + {\alpha}^{\prime}(u_{i-1})u_i),\\ J_{i,i+1} &= \frac{\partial F_i}{\partial u_{i+1}} = \frac{\partial A_{i,i+1}}{\partial u_{i+1}}u_{i+1} + A_{i,i+1} + \frac{\partial A_{i,i}}{\partial u_{i+1}}u_i - \frac{\partial b_i}{\partial u_{i+1}}\\ &=\frac{1}{2\Delta x^2}( -{\alpha}^{\prime}(u_{i+1})u_{i+1} - ({\alpha}(u_{i}) + {\alpha}(u_{i+1})) + {\alpha}^{\prime}(u_{i+1})u_i) {\thinspace .}\end{split}\]

The explicit expression for nonlinear equation number \(i\), \(F_i(u_0,u_1,\ldots)\), arises from moving the \(f(u_i)\) term in (51) to the left-hand side:

\[F_i = -\frac{1}{2\Delta x^2} \left(({\alpha}(u_i)+{\alpha}(u_{i+1}))(u_{i+1}-u_i) - ({\alpha}(u_{i-1})+{\alpha}(u_{i}))(u_{i}-u_{i-1})\right)\nonumber\]
\[\tag{65} \qquad\qquad + au_i - f(u_i) = 0{\thinspace .}\]
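The Jacobian entries can be checked by differentiating \(F_i\) from (65) symbolically. A sketch with sympy follows (the names are our own; recall that \(b_i=f(u_i)\), so \(b^{\prime}=f^{\prime}\)):

```python
import sympy as sp

u_im1, u_i, u_ip1, dx, a = sp.symbols('u_im1 u_i u_ip1 dx a')
alpha, f = sp.Function('alpha'), sp.Function('f')
dalpha = lambda v: sp.diff(alpha(v), v)   # alpha'(v), derivative w.r.t. u

# F_i from (65)
F = (-((alpha(u_i) + alpha(u_ip1))*(u_ip1 - u_i)
       - (alpha(u_im1) + alpha(u_i))*(u_i - u_im1))/(2*dx**2)
     + a*u_i - f(u_i))

J_ii, J_iim1, J_iip1 = (sp.diff(F, v) for v in (u_i, u_im1, u_ip1))

# Expressions for the tridiagonal Jacobian entries
expected_ii = ((-dalpha(u_i)*u_im1 + 2*dalpha(u_i)*u_i - dalpha(u_i)*u_ip1
                + alpha(u_im1) + 2*alpha(u_i) + alpha(u_ip1))/(2*dx**2)
               + a - sp.diff(f(u_i), u_i))
expected_iim1 = (-dalpha(u_im1)*u_im1 - (alpha(u_im1) + alpha(u_i))
                 + dalpha(u_im1)*u_i)/(2*dx**2)
expected_iip1 = (-dalpha(u_ip1)*u_ip1 - (alpha(u_i) + alpha(u_ip1))
                 + dalpha(u_ip1)*u_i)/(2*dx**2)

assert sp.simplify(J_ii - expected_ii) == 0
assert sp.simplify(J_iim1 - expected_iim1) == 0
assert sp.simplify(J_iip1 - expected_iip1) == 0
```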

At the boundary point \(i=0\), \(u_{-1}\) must be replaced using the formula (53). When the Dirichlet condition at \(i=N_x\) is not a part of the equation system, the last equation \(F_m=0\) for \(m=N_x-1\) involves the boundary value \(u_{N_x}\), which must be replaced by \(D\). If \(u_{N_x}\) is treated as an unknown in the system, the last equation \(F_m=0\) has \(m=N_x\) and reads

\[F_{N_x}(u_0,\ldots,u_{N_x}) = u_{N_x} - D = 0{\thinspace .}\]

Similar replacement of \(u_{-1}\) and \(u_{N_x}\) must be done in the Jacobian for the first and last row. When \(u_{N_x}\) is included as an unknown, the last row in the Jacobian must help implement the condition \(\delta u_{N_x}=0\), since we assume that \(u\) contains the right Dirichlet value at the beginning of the iteration (\(u_{N_x}=D\)), and then the Newton update should be zero for \(i=N_x\), i.e., \(\delta u_{N_x}=0\). This demand also forces the corresponding right-hand side entry in the Newton system to be zero for \(i=N_x\).

We have seen, and can see from the present example, that the linear system in Newton’s method contains all the terms present in the system that arises in the Picard iteration method. The extra terms in Newton’s method can be multiplied by a factor such that it is easy to program one linear system and set this factor to 0 or 1 to generate the Picard or Newton system.
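A sketch of this idea follows, for the simpler case with Dirichlet conditions \(u(0)=D_0\) and \(u(L)=D\) at both ends so the ghost-point algebra is avoided (the solver, its names, and the test problem are our own choices). A factor `gamma` multiplies all Jacobian terms containing \({\alpha}^{\prime}\) or \(f^{\prime}\), so `gamma=1` gives Newton's method and `gamma=0` reduces to Picard iteration:

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_nonlinear(alpha, dalpha, f, df, a, D0, D, L=1.0, Nx=50,
                    gamma=1.0, tol=1e-12, max_iter=100):
    """Solve -(alpha(u)u')' + a*u = f(u), u(0)=D0, u(L)=D.
    alpha and dalpha must accept arrays; f and df are called pointwise."""
    dx = L / Nx
    u = np.linspace(D0, D, Nx + 1)    # initial guess obeying the BCs
    for _ in range(max_iter):
        al, dal = alpha(u), dalpha(u)
        F = np.zeros(Nx + 1)
        J = np.zeros((3, Nx + 1))     # banded (1,1) storage for solve_banded
        for i in range(1, Nx):
            F[i] = (-((al[i] + al[i+1])*(u[i+1] - u[i])
                      - (al[i-1] + al[i])*(u[i] - u[i-1]))/(2*dx**2)
                    + a*u[i] - f(u[i]))
            J[1, i] = ((al[i-1] + 2*al[i] + al[i+1])/(2*dx**2) + a
                       + gamma*((-dal[i]*u[i-1] + 2*dal[i]*u[i]
                                 - dal[i]*u[i+1])/(2*dx**2) - df(u[i])))
            J[2, i-1] = (-(al[i-1] + al[i])/(2*dx**2)
                         + gamma*dal[i-1]*(u[i] - u[i-1])/(2*dx**2))
            J[0, i+1] = (-(al[i] + al[i+1])/(2*dx**2)
                         + gamma*dal[i+1]*(u[i] - u[i+1])/(2*dx**2))
        # Dirichlet conditions as separate equations: correction is zero there
        F[0] = u[0] - D0; J[1, 0] = 1.0
        F[Nx] = u[Nx] - D; J[1, Nx] = 1.0
        du = solve_banded((1, 1), J, -F)
        u = u + du
        if np.max(np.abs(du)) < tol:
            break
    return np.linspace(0, L, Nx + 1), u
```

With `gamma=0`, solving \(A(u^{-})\delta u = b(u^{-}) - A(u^{-})u^{-}\) and updating \(u = u^{-}+\delta u\) is algebraically identical to the Picard step \(u = A(u^{-})^{-1}b(u^{-})\), so both methods can share one assembly routine.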

Galerkin-type discretization

For a Galerkin-type discretization, which may be developed into a finite element method, we first need to derive the variational problem. Let \(V\) be an appropriate function space with basis functions \(\left\{ {{\psi}}_i \right\}_{i\in{\mathcal{I}_s}}\). Because of the Dirichlet condition at \(x=L\) we require \({\psi}_i(L)=0\), \(i\in{\mathcal{I}_s}\). The approximate solution is written as \(u = D + \sum_{j\in{\mathcal{I}_s}}c_j{\psi}_j\), where the term \(D\) can be viewed as a boundary function needed to implement the Dirichlet condition \(u(L)=D\).

Using Galerkin’s method, we multiply the differential equation by any \(v\in V\) and integrate terms with second-order derivatives by parts:

\[\int_0^L {\alpha}(u)u^{\prime}v^{\prime}{\, \mathrm{d}x} + \int_0^L auv{\, \mathrm{d}x} = \int_0^L f(u)v{\, \mathrm{d}x} + [{\alpha}(u)u^{\prime}v]_0^L,\quad \forall v\in V{\thinspace .}\]

The Neumann condition at the boundary \(x=0\) is inserted in the boundary term:

\[[{\alpha}(u)u^{\prime}v]_0^L = {\alpha}(u(L))u^{\prime}(L)v(L) - {\alpha}(u(0))u^{\prime}(0)v(0) = 0 - Cv(0)=-Cv(0){\thinspace .}\]

(Recall that since \({\psi}_i(L)=0\), any linear combination \(v\) of the basis functions also vanishes at \(x=L\): \(v(L)=0\).) The variational problem is then: find \(u\in V\) such that

\[\tag{66} \int_0^L {\alpha}(u)u^{\prime}v^{\prime}{\, \mathrm{d}x} + \int_0^L auv{\, \mathrm{d}x} = \int_0^L f(u)v{\, \mathrm{d}x} - Cv(0),\quad \forall v\in V{\thinspace .}\]

To derive the algebraic equations, we note that \(\forall v\in V\) is equivalent with \(v={\psi}_i\) for \(i\in{\mathcal{I}_s}\). Setting \(u=D+\sum_jc_j{\psi}_j\) and sorting terms results in the linear system

\[\sum_{j\in{\mathcal{I}_s}}\left( \int_0^L \left({\alpha}(D+\sum_{k\in{\mathcal{I}_s}}c_k{\psi}_k) {\psi}_j^{\prime}{\psi}_i^{\prime} + a{\psi}_i{\psi}_j\right){\, \mathrm{d}x}\right)c_j \nonumber\]
\[\tag{67} \qquad = \int_0^L f(D+\sum_{k\in{\mathcal{I}_s}}c_k{\psi}_k){\psi}_i{\, \mathrm{d}x} - C{\psi}_i(0),\quad i\in{\mathcal{I}_s}{\thinspace .}\]

Fundamental integration problem

Methods that use the Galerkin or weighted residual method face a fundamental difficulty in nonlinear problems: how can we integrate terms like \(\int_0^L {\alpha}(\sum_{k}c_k{\psi}_k){\psi}_i^{\prime}{\psi}_j^{\prime}{\, \mathrm{d}x}\) and \(\int_0^L f(\sum_{k}c_k{\psi}_k){\psi}_i{\, \mathrm{d}x}\) when we do not know the \(c_k\) coefficients in the argument of the \({\alpha}\) and \(f\) functions? We can resort to numerical integration, provided an approximate \(\sum_kc_k{\psi}_k\) can be used for the argument \(u\) in \(f\) and \({\alpha}\). This is the approach used in computer programs.
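A sketch of such numerical integration for the residual (69), using a hypothetical global cosine basis \({\psi}_i(x)=\cos((i+\frac{1}{2})\pi x/L)\) (which fulfills \({\psi}_i(L)=0\)) and the midpoint rule; all names and these particular choices are our own:

```python
import numpy as np

def residual(c, alpha, f, a, C, D, L=1.0, nq=400):
    """Evaluate F_i = int(alpha(u)u'psi_i' + (a*u - f(u))psi_i)dx + C*psi_i(0)
    by midpoint-rule quadrature, with u = D + sum_j c_j psi_j and
    psi_i(x) = cos((i + 1/2)*pi*x/L)."""
    N = len(c)
    xq = (np.arange(nq) + 0.5)*L/nq          # midpoint quadrature points
    w = L/nq                                 # quadrature weight
    k = (np.arange(N) + 0.5)*np.pi/L
    psi = np.cos(np.outer(k, xq))            # psi_i at quadrature points
    dpsi = -k[:, None]*np.sin(np.outer(k, xq))
    u = D + c @ psi                          # u at quadrature points
    du = c @ dpsi
    F = w*(alpha(u)*du*dpsi + (a*u - f(u))*psi).sum(axis=1)
    return F + C*np.cos(k*0.0)               # psi_i(0) = 1 for this basis
```

Given coefficients \(c_k\) from the previous iteration, the same routine can feed both a Picard update and a Newton residual; only the quadrature accuracy limits the result.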

However, if we want to look more mathematically into the structure of the algebraic equations generated by the finite element method in nonlinear problems, and compare such equations with those arising in the finite difference method, we need techniques that enable integration of expressions like \(\int_0^L f(\sum_{k}c_k{\psi}_k){\psi}_i{\, \mathrm{d}x}\) by hand. Two such techniques will be shown: the group finite element and numerical integration based on the nodes only. Both techniques are approximate, but they allow us to see the difference equations in the finite element method. The details are worked out in the appendix Symbolic nonlinear finite element equations. Some readers will prefer to dive into these symbolic calculations to gain more understanding of nonlinear finite element equations, while others will prefer to continue with computational algorithms (in the next two sections) rather than analysis.

Picard iteration defined from the variational form

Consider the problem (47) with the corresponding variational form (66). Our aim is to define a Picard iteration based on this variational form without any attempt to compute integrals symbolically as in the previous three sections. The idea in Picard iteration is to use a previously computed \(u\) value in the nonlinear functions \({\alpha}(u)\) and \(f(u)\). Let \(u^{-}\) be the available approximation to \(u\) from the previous Picard iteration. The linearized variational form for Picard iteration is then

\[\tag{68} \int_0^L ({\alpha}(u^{-})u^{\prime}v^{\prime} + auv){\, \mathrm{d}x} = \int_0^L f(u^{-})v{\, \mathrm{d}x} - Cv(0),\quad \forall v\in V{\thinspace .}\]

This is a linear problem \(a(u,v)=L(v)\) with bilinear and linear forms

\[a(u,v) = \int_0^L ({\alpha}(u^{-})u^{\prime}v^{\prime} + auv){\, \mathrm{d}x},\quad L(v) = \int_0^L f(u^{-})v{\, \mathrm{d}x} - Cv(0){\thinspace .}\]

Make sure to distinguish the coefficient \(a\) in the term \(auv\), which stems from the differential equation, from the \(a\) in the abstract bilinear form notation \(a(\cdot,\cdot)\).

The linear system associated with (68) is computed in the standard way. Technically, we are back to solving \(-({\alpha}(x)u^{\prime})^{\prime} + au=f(x)\). The unknown \(u\) is sought in the form \(u = B(x) + \sum_{j\in{\mathcal{I}_s}}c_j{\psi}_j\), with \(B(x)=D\) and \({\psi}_i = {\varphi}_{\nu(i)}\), \(\nu(i)=i\), and \({\mathcal{I}_s} = \{0,1,\ldots,N=N_n-2\}\).

Newton’s method defined from the variational form

Application of Newton’s method to the nonlinear variational form (66) arising from the problem (47) requires identification of the nonlinear algebraic equations \(F_i=0\). Although we originally denoted the unknowns in nonlinear algebraic equations by \(u_0,\ldots,u_N\), it is in the present context most natural to have the unknowns as \(c_0,\ldots,c_N\) and write

\[F_i(c_0,\ldots,c_N)=0, \quad i\in{\mathcal{I}_s},\]

and define the Jacobian as \(J_{i,j}=\partial F_i/\partial c_j\) for \(i,j\in{\mathcal{I}_s}\).

The specific form of the equations \(F_i=0\) follows from the variational form

\[\int_0^L ({\alpha}(u)u^{\prime}v^{\prime} + auv){\, \mathrm{d}x} = \int_0^L f(u)v{\, \mathrm{d}x} - Cv(0),\quad \forall v\in V,\]

by choosing \(v={\psi}_i\), \(i\in{\mathcal{I}_s}\), and setting \(u=\sum_{j\in{\mathcal{I}_s}}c_j{\psi}_j\), maybe with a boundary function to incorporate Dirichlet conditions.

With \(v={\psi}_i\) we get

\[\tag{69} F_i = \int_0^L ({\alpha}(u)u^{\prime}{\psi}_i^{\prime} + au{\psi}_i - f(u){\psi}_i){\, \mathrm{d}x} + C{\psi}_i(0)=0,\quad i\in{\mathcal{I}_s}{\thinspace .}\]

In the differentiations leading to the Jacobian we will frequently use the results

\[\frac{\partial u}{\partial c_j} = \frac{\partial}{\partial c_j} \sum_kc_k{\psi}_k = {\psi}_j,\quad \frac{\partial u^{\prime}}{\partial c_j} = \frac{\partial}{\partial c_j} \sum_kc_k{\psi}_k^{\prime} = {\psi}_j^{\prime}{\thinspace .}\]

The derivation of the Jacobian of (69) goes as

\[J_{i,j} = \frac{\partial F_i}{\partial c_j} = \int_0^L \frac{\partial}{\partial c_j} ({\alpha}(u)u^{\prime}{\psi}_i^{\prime} + au{\psi}_i - f(u){\psi}_i){\, \mathrm{d}x}\nonumber\]
\[= \int_0^L (({\alpha}^{\prime}(u)\frac{\partial u}{\partial c_j}u^{\prime} + {\alpha}(u)\frac{\partial u^{\prime}}{\partial c_j}){\psi}_i^{\prime} + a\frac{\partial u}{\partial c_j}{\psi}_i - f^{\prime}(u)\frac{\partial u}{\partial c_j}{\psi}_i){\, \mathrm{d}x}\nonumber\]
\[= \int_0^L (({\alpha}^{\prime}(u){\psi}_ju^{\prime} + {\alpha}(u){\psi}_j^{\prime}){\psi}_i^{\prime} + a{\psi}_j{\psi}_i - f^{\prime}(u){\psi}_j{\psi}_i){\, \mathrm{d}x}\nonumber\]
\[\tag{70} = \int_0^L ({\alpha}^{\prime}(u)u^{\prime}{\psi}_i^{\prime}{\psi}_j + {\alpha}(u){\psi}_i^{\prime}{\psi}_j^{\prime} + (a - f^{\prime}(u)){\psi}_i{\psi}_j){\, \mathrm{d}x}{\thinspace .}\]

One must be careful about the prime symbol as differentiation

In \({\alpha}^{\prime}\) the derivative is with respect to the independent variable in the \({\alpha}\) function, and that is \(u\), so

\[\alpha^{\prime} = \frac{d\alpha}{du},\]

while in \(u^{\prime}\) the differentiation is with respect to \(x\), so

\[u^{\prime}=\frac{du}{dx}{\thinspace .}\]

Similarly, \(f\) is a function of \(u\), so \(f^{\prime}\) means \(df/du\).

When calculating the residual vector \(F_i\) and the Jacobian matrix \(J_{i,j}\) in the linear system \(\sum_j J_{i,j}\delta u_j = -F_i\) to be solved in each Newton iteration, one must use a previously computed \(u\), denoted by \(u^{-}\), for the symbol \(u\) in (69) and (70). With this notation we have

\[\tag{71} F_i = \int_0^L\left( {\alpha}(u^{-})u^{-\prime}{\psi}_i^{\prime} + (au^{-}-f(u^{-})){\psi}_i\right){\, \mathrm{d}x} + C{\psi}_i(0),\quad i\in{\mathcal{I}_s},\]
\[\tag{72} J_{i,j} = \int_0^L ({\alpha}^{\prime}(u^{-})u^{-\prime}{\psi}_i^{\prime}{\psi}_j + {\alpha}(u^{-}){\psi}_i^{\prime}{\psi}_j^{\prime} + (a - f^{\prime}(u^{-})){\psi}_i{\psi}_j){\, \mathrm{d}x}, \quad i,j\in{\mathcal{I}_s}{\thinspace .}\]

These expressions can be used for any basis \(\left\{ {{\psi}}_i \right\}_{i\in{\mathcal{I}_s}}\). Choosing finite element functions for \({\psi}_i\), one will normally want to compute the integral contribution cell by cell, working in a reference cell. To this end, we restrict the integration to one cell and transform the cell to \([-1,1]\). The most recently computed approximation \(u^{-}\) to \(u\) becomes \(\tilde u^{-}=\sum_t\tilde u^{-}_t{\tilde{\varphi}}_t(X)\) over the reference element, where \(\tilde u^{-}_t\) is the value of \(u^{-}\) at the global node (or degree of freedom) \(q(e,t)\) corresponding to the local node (or degree of freedom) \(t\). The formulas (71) and (72) then change to

\[\tag{73} \tilde F_r^{(e)} = \int_{-1}^1\left( {\alpha}(\tilde u^{-})\tilde u^{-\prime}{\tilde{\varphi}}_r^{\prime} + (a \tilde u^{-}-f(\tilde u^{-})){\tilde{\varphi}}_r\right)\det J{\, \mathrm{d}X} + C{\tilde{\varphi}}_r(0),\]
\[\tag{74} \tilde J_{r,s}^{(e)} = \int_{-1}^1 ({\alpha}^{\prime}(\tilde u^{-})\tilde u^{-\prime}{\tilde{\varphi}}_r^{\prime}{\tilde{\varphi}}_s + {\alpha}(\tilde u^{-}){\tilde{\varphi}}_r^{\prime}{\tilde{\varphi}}_s^{\prime} + (a - f^{\prime}(\tilde u^{-})){\tilde{\varphi}}_r{\tilde{\varphi}}_s)\det J{\, \mathrm{d}X},\]

where \(r,s\in{I_d}\) run over the local degrees of freedom. The term \(C{\tilde{\varphi}}_r(0)\) contributes only for the element containing the boundary point \(x=0\).
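For P1 elements, the element-level formulas (73)-(74) can be sketched as below, with two-point Gauss quadrature and \(\det J = \Delta x/2\). The boundary term \(C{\tilde{\varphi}}_r(0)\) is left out and must be added separately for the element containing \(x=0\); the function and variable names are our own choices:

```python
import numpy as np

def element_newton(u_loc, alpha, dalpha, f, df, a, dx):
    """Element residual (73) and Jacobian (74) for a P1 element of length dx,
    given the local values u_loc of u^- at the element's two nodes."""
    Xq = np.array([-1.0, 1.0]) / np.sqrt(3.0)    # two-point Gauss rule
    detJ = dx / 2.0
    F_e = np.zeros(2)
    J_e = np.zeros((2, 2))
    for X in Xq:                                 # both Gauss weights equal 1
        phi = np.array([(1 - X)/2, (1 + X)/2])   # P1 basis on [-1, 1]
        dphi = np.array([-0.5, 0.5]) / detJ      # d(phi)/dx via the mapping
        u = u_loc @ phi                          # u^- at the quadrature point
        du = u_loc @ dphi
        F_e += detJ*(alpha(u)*du*dphi + (a*u - f(u))*phi)
        J_e += detJ*(dalpha(u)*du*np.outer(dphi, phi)
                     + alpha(u)*np.outer(dphi, dphi)
                     + (a - df(u))*np.outer(phi, phi))
    return F_e, J_e
```

For \({\alpha}=1\), \(a=f=0\), and \(\Delta x=1\), the routine reproduces the familiar P1 stiffness matrix \(\begin{pmatrix}1&-1\\-1&1\end{pmatrix}\), which is a quick sanity check.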

Many finite element programs require the user to provide \(F_i\) and \(J_{i,j}\). Some programs, like FEniCS, are capable of automatically deriving \(J_{i,j}\) if \(F_i\) is specified.

Dirichlet conditions

Incorporation of the Dirichlet values by assembling contributions from all degrees of freedom and then modifying the linear system can obviously be applied to Picard iteration, as that method involves a standard linear system. In the Newton system, however, the unknown is a correction \(\delta u\) to the solution. Dirichlet conditions are implemented by inserting them in the initial guess \(u^{-}\) for the Newton iteration and implementing \(\delta u_i =0\) for all known degrees of freedom. The manipulation of the linear system follows exactly the algorithm for linear problems, the only difference being that the known values are zero.