$$ \newcommand{\uex}{{u_{\small\mbox{e}}}} \newcommand{\uexd}[1]{{u_{\small\mbox{e}, #1}}} \newcommand{\vex}{{v_{\small\mbox{e}}}} \newcommand{\vexd}[1]{{v_{\small\mbox{e}, #1}}} \newcommand{\Aex}{{A_{\small\mbox{e}}}} \newcommand{\half}{\frac{1}{2}} \newcommand{\halfi}{{1/2}} \newcommand{\tp}{\thinspace .} \newcommand{\Ddt}[1]{\frac{D #1}{dt}} \newcommand{\E}[1]{\hbox{E}\lbrack #1 \rbrack} \newcommand{\Var}[1]{\hbox{Var}\lbrack #1 \rbrack} \newcommand{\Std}[1]{\hbox{Std}\lbrack #1 \rbrack} \newcommand{\xpoint}{\boldsymbol{x}} \newcommand{\normalvec}{\boldsymbol{n}} \newcommand{\Oof}[1]{\mathcal{O}(#1)} \newcommand{\x}{\boldsymbol{x}} \newcommand{\X}{\boldsymbol{X}} \renewcommand{\u}{\boldsymbol{u}} \renewcommand{\v}{\boldsymbol{v}} \newcommand{\w}{\boldsymbol{w}} \newcommand{\V}{\boldsymbol{V}} \newcommand{\e}{\boldsymbol{e}} \newcommand{\f}{\boldsymbol{f}} \newcommand{\F}{\boldsymbol{F}} \newcommand{\stress}{\boldsymbol{\sigma}} \newcommand{\strain}{\boldsymbol{\varepsilon}} \newcommand{\stressc}{{\sigma}} \newcommand{\strainc}{{\varepsilon}} \newcommand{\I}{\boldsymbol{I}} \newcommand{\T}{\boldsymbol{T}} \newcommand{\dfc}{\alpha} % diffusion coefficient \newcommand{\ii}{\boldsymbol{i}} \newcommand{\jj}{\boldsymbol{j}} \newcommand{\kk}{\boldsymbol{k}} \newcommand{\ir}{\boldsymbol{i}_r} \newcommand{\ith}{\boldsymbol{i}_{\theta}} \newcommand{\iz}{\boldsymbol{i}_z} \newcommand{\Ix}{\mathcal{I}_x} \newcommand{\Iy}{\mathcal{I}_y} \newcommand{\Iz}{\mathcal{I}_z} \newcommand{\It}{\mathcal{I}_t} \newcommand{\If}{\mathcal{I}_s} % for FEM \newcommand{\Ifd}{{I_d}} % for FEM \newcommand{\Ifb}{{I_b}} % for FEM \newcommand{\setb}[1]{#1^0} % set begin \newcommand{\sete}[1]{#1^{-1}} % set end \newcommand{\setl}[1]{#1^-} \newcommand{\setr}[1]{#1^+} \newcommand{\seti}[1]{#1^i} \newcommand{\sequencei}[1]{\left\{ {#1}_i \right\}_{i\in\If}} \newcommand{\basphi}{\varphi} \newcommand{\baspsi}{\psi} \newcommand{\refphi}{\tilde\basphi} \newcommand{\psib}{\boldsymbol{\psi}} \newcommand{\sinL}[1]{\sin\left((#1+1)\pi\frac{x}{L}\right)} \newcommand{\xno}[1]{x_{#1}} \newcommand{\Xno}[1]{X_{(#1)}} \newcommand{\yno}[1]{y_{#1}} \newcommand{\Yno}[1]{Y_{(#1)}} \newcommand{\xdno}[1]{\boldsymbol{x}_{#1}} \newcommand{\dX}{\, \mathrm{d}X} \newcommand{\dx}{\, \mathrm{d}x} \newcommand{\ds}{\, \mathrm{d}s} \newcommand{\Real}{\mathbb{R}} \newcommand{\Integerp}{\mathbb{N}} \newcommand{\Integer}{\mathbb{Z}} $$

Discretization of nonlinear differential equations

The section Linearization at the differential equation level presents methods for linearizing time-discrete PDEs directly prior to discretization in space. We can alternatively carry out the discretization in space of the time-discrete nonlinear PDE problem and get a system of nonlinear algebraic equations, which can be solved by Picard iteration or Newton's method as presented in the section Systems of nonlinear algebraic equations. This latter approach will now be described in detail.

We shall work with the 1D problem

$$ \begin{equation} -(\dfc(u)u')' + au = f(u),\quad x\in (0,L), \quad \dfc(u(0))u'(0) = C,\ u(L)=0 \tp \tag{23} \end{equation} $$ This problem is of the same nature as those arising from implicit time integration of a nonlinear diffusion PDE as outlined in the section Picard iteration (set \( a=1/\Delta t \) and let \( f(u) \) incorporate the nonlinear source term as well as known terms with the time-dependent unknown function at the previous time level).

Finite difference discretizations

Discretization

The nonlinearity in the differential equation (23) poses no more difficulty than a variable coefficient, as in \( (\dfc(x)u')' \). We can therefore use a standard approach to discretizing the Laplace term with a variable coefficient:

$$ [-D_x\dfc D_x u +au = f]_i\tp$$ Writing this out for a uniform mesh with points \( x_i=i\Delta x \), \( i=0,\ldots,N_x \), leads to

$$ \begin{align} -\frac{1}{\Delta x^2} \left(\dfc_{i+\half}(u_{i+1}-u_i) - \dfc_{i-\half}(u_{i}-u_{i-1})\right) + au_i &= f(u_i)\tp \tag{24} \end{align} $$ This equation is valid at all the mesh points \( i=0,1,\ldots,N_x-1 \). At \( i=N_x \) we have the Dirichlet condition \( u_i=0 \). The only difference from the case with \( (\dfc(x)u')' \) and \( f(x) \) is that now \( \dfc \) and \( f \) are functions of \( u \) and not only of \( x \): \( (\dfc(u(x))u')' \) and \( f(u(x)) \).

The quantity \( \dfc_{i+\half} \), evaluated between two mesh points, needs a comment. Since \( \dfc \) depends on \( u \) and \( u \) is only known at the mesh points, we need to express \( \dfc_{i+\half} \) in terms of \( u_i \) and \( u_{i+1} \). For this purpose we use an arithmetic mean, although a harmonic mean is also common in this context if \( \dfc \) features large jumps. There are two choices of arithmetic means:

$$ \begin{align} \dfc_{i+\half} &\approx \dfc\left(\half(u_i + u_{i+1})\right) = [\dfc(\overline{u}^x)]^{i+\half}, \tag{25} \\ \dfc_{i+\half} &\approx \half(\dfc(u_i) + \dfc(u_{i+1})) = [\overline{\dfc(u)}^x]^{i+\half}\tp \tag{26} \end{align} $$ Equation (24) with the latter approximation then looks like

$$ \begin{align} &-\frac{1}{2\Delta x^2} \left((\dfc(u_i)+\dfc(u_{i+1}))(u_{i+1}-u_i) - (\dfc(u_{i-1})+\dfc(u_{i}))(u_{i}-u_{i-1})\right)\nonumber\\ &\qquad\qquad + au_i = f(u_i), \tag{27} \end{align} $$ or written more compactly,

$$ [-D_x\overline{\dfc}^x D_x u +au = f]_i\tp$$

At mesh point \( i=0 \) we have the boundary condition \( \dfc(u)u'=C \), which is discretized by

$$ [\dfc(u)D_{2x}u = C]_0,$$ meaning

$$ \begin{equation} \dfc(u_0)\frac{u_{1} - u_{-1}}{2\Delta x} = C\tp \tag{28} \end{equation} $$ The fictitious value \( u_{-1} \) can be eliminated with the aid of (27) for \( i=0 \). Formally, (27) should be solved with respect to \( u_{i-1} \) and that value (for \( i=0 \)) should be inserted in (28), but it is algebraically much easier to do it the other way around. Alternatively, one can use a ghost cell \( [-\Delta x,0] \) and update the \( u_{-1} \) value in the ghost cell according to (28) after every Picard or Newton iteration. Such an approach means that we use a known \( u_{-1} \) value in (27) from the previous iteration.
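For reference, solving (28) for the fictitious value gives

$$ u_{-1} = u_1 - \frac{2\Delta x\, C}{\dfc(u_0)},$$ which is the expression to insert for \( u_{-1} \) in (27) for \( i=0 \), or to use as the ghost cell value after each iteration.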

Solution of algebraic equations

The nonlinear algebraic equations (27) are of the form \( A(u)u = b(u) \) with

$$ \begin{align*} A_{i,i} &= \frac{1}{2\Delta x^2}(\dfc(u_{i-1}) + 2\dfc(u_{i}) +\dfc(u_{i+1})) + a,\\ A_{i,i-1} &= -\frac{1}{2\Delta x^2}(\dfc(u_{i-1}) + \dfc(u_{i})),\\ A_{i,i+1} &= -\frac{1}{2\Delta x^2}(\dfc(u_{i}) + \dfc(u_{i+1})),\\ b_i &= f(u_i)\tp \end{align*} $$ The matrix \( A(u) \) is tridiagonal: \( A_{i,j}=0 \) for \( j>i+1 \) and \( j<i-1 \). Picard iteration for this system applies the most recently computed approximation \( u_{-} \) in \( A \) and \( b \) and solves the linear system \( A(u_{-})u = b(u_{-}) \) in each iteration, as explained in the section Systems of nonlinear algebraic equations.
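A minimal sketch of such a Picard loop in Python follows. The assumptions are not from the text: homogeneous Dirichlet conditions \( u=0 \) at both ends are used instead of the Neumann condition (28), and \( \dfc(u)=1+u^2 \), \( f(u)=1+u^2 \) are illustrative choices only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Minimal Picard sketch for the tridiagonal system A(u)u = b(u) above.
# Simplifying assumptions (not from the text): u = 0 at both ends, and
# alpha(u) = 1 + u**2, f(u) = 1 + u**2 are illustrative only.
alpha = lambda u: 1 + u**2
f = lambda u: 1 + u**2
a, L, Nx = 1.0, 1.0, 20
dx = L/Nx

u_ = np.zeros(Nx + 1)                            # previous iterate (initial guess)
for k in range(50):
    al = alpha(u_)
    lower = -(al[:-2] + al[1:-1])/(2*dx**2)      # A[i, i-1], i = 1..Nx-1
    diag = (al[:-2] + 2*al[1:-1] + al[2:])/(2*dx**2) + a
    upper = -(al[1:-1] + al[2:])/(2*dx**2)       # A[i, i+1]
    b = f(u_[1:-1])

    A = sp.diags([lower[1:], diag, upper[:-1]], [-1, 0, 1], format='csc')
    u_new = np.zeros(Nx + 1)
    u_new[1:-1] = spla.spsolve(A, b)             # u = 0 at i = 0 and i = Nx
    if np.max(np.abs(u_new - u_)) < 1e-10:
        break
    u_ = u_new
```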

Newton's method requires computation of the Jacobian. Here it means that we need to differentiate \( F(u)=A(u)u - b(u) \) with respect to \( u_0,u_1,\ldots,u_{N_x-1} \). Nonlinear equation number \( i \) has the structure

$$ F_i = A_{i,i-1}(u_{i-1},u_i)u_{i-1} + A_{i,i}(u_{i-1},u_i,u_{i+1})u_i + A_{i,i+1}(u_i, u_{i+1})u_{i+1} - b_i(u_i)\tp$$ The Jacobian becomes

$$ \begin{align*} J_{i,i} &= \frac{\partial F_i}{\partial u_i} = \frac{\partial A_{i,i-1}}{\partial u_i}u_{i-1} + \frac{\partial A_{i,i}}{\partial u_i}u_i + A_{i,i} + \frac{\partial A_{i,i+1}}{\partial u_i}u_{i+1} - \frac{\partial b_i}{\partial u_{i}}\\ &= \frac{1}{2\Delta x^2}( -\dfc'(u_i)u_{i-1} +2\dfc'(u_i)u_{i} -\dfc'(u_{i})u_{i+1} +\dfc(u_{i-1}) + 2\dfc(u_i) + \dfc(u_{i+1})) + a - b'(u_i),\\ J_{i,i-1} &= \frac{\partial F_i}{\partial u_{i-1}} = \frac{\partial A_{i,i-1}}{\partial u_{i-1}}u_{i-1} + A_{i,i-1} + \frac{\partial A_{i,i}}{\partial u_{i-1}}u_i - \frac{\partial b_i}{\partial u_{i-1}}\\ &= \frac{1}{2\Delta x^2}( -\dfc'(u_{i-1})u_{i-1} - (\dfc(u_{i-1}) + \dfc(u_i)) + \dfc'(u_{i-1})u_i),\\ J_{i,i+1} &= \frac{\partial F_i}{\partial u_{i+1}} = \frac{\partial A_{i,i+1}}{\partial u_{i+1}}u_{i+1} + A_{i,i+1} + \frac{\partial A_{i,i}}{\partial u_{i+1}}u_i - \frac{\partial b_i}{\partial u_{i+1}}\\ &=\frac{1}{2\Delta x^2}( -\dfc'(u_{i+1})u_{i+1} - (\dfc(u_{i}) + \dfc(u_{i+1})) + \dfc'(u_{i+1})u_i)\tp \end{align*} $$ The explicit expression for nonlinear equation number \( i \), \( F_i(u_0,u_1,\ldots) \), arises from moving all terms in (27) to the left-hand side. Then we have \( J_{i,j} \) and \( F_i \) (modulo the boundary conditions) and can implement Newton's method.

We have seen, and can see from the present example, that the linear system in Newton's method contains all the terms present in the system that arises in the Picard iteration method. The extra terms in Newton's method can be multiplied by a factor such that it is easy to program one linear system and set this factor to 0 or 1 to generate the Picard or Newton system.
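A sketch of this combined approach, under the same simplifying assumptions as the Picard sketch above: the factor gamma multiplies the Newton-specific terms, so gamma=0 reproduces Picard iteration (written as a correction equation) and gamma=1 gives Newton's method.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Combined Picard/Newton sketch: gamma=0 gives the Picard system, gamma=1 the
# full Newton system. Same simplifying assumptions as above (u = 0 at both
# ends; alpha, f and their derivatives are illustrative choices only).
alpha  = lambda u: 1 + u**2
dalpha = lambda u: 2*u
f  = lambda u: 1 + u**2
df = lambda u: 2*u
a, L, Nx, gamma = 1.0, 1.0, 20, 1.0
dx = L/Nx

u = np.zeros(Nx + 1)                        # current approximation u_-
for k in range(50):
    al, dal = alpha(u), dalpha(u)
    um, uc, up = u[:-2], u[1:-1], u[2:]     # u_{i-1}, u_i, u_{i+1}, i = 1..Nx-1

    # Picard matrix A(u_-), right-hand side b(u_-), and residual F
    A_lo = -(al[:-2] + al[1:-1])/(2*dx**2)
    A_di = (al[:-2] + 2*al[1:-1] + al[2:])/(2*dx**2) + a
    A_up = -(al[1:-1] + al[2:])/(2*dx**2)
    b = f(uc)
    F = A_lo*um + A_di*uc + A_up*up - b     # F_i with u = 0 at the ends

    # Newton's extra terms, switched on by gamma
    J_lo = A_lo + gamma*dal[:-2]*(uc - um)/(2*dx**2)
    J_di = A_di + gamma*(dal[1:-1]*(-um + 2*uc - up)/(2*dx**2) - df(uc))
    J_up = A_up + gamma*dal[2:]*(uc - up)/(2*dx**2)

    J = sp.diags([J_lo[1:], J_di, J_up[:-1]], [-1, 0, 1], format='csc')
    du = spla.spsolve(J, -F)
    u[1:-1] += du
    if np.max(np.abs(du)) < 1e-10:
        break
```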

Finite element discretizations

For the finite element discretization we first need to derive the variational problem. Let \( V \) be an appropriate function space with basis functions \( \sequencei{\baspsi} \). Because of the Dirichlet condition at \( x=L \) we require \( \baspsi_i(L)=0 \), \( i\in\If \). Using Galerkin's method, we multiply the differential equation by any \( v\in V \) and integrate terms with second-order derivatives by parts:

$$ \int_0^L \dfc(u)u'v'\dx + \int_0^L auv\dx = \int_0^L f(u)v\dx + [\dfc(u)u'v]_0^L,\quad \forall v\in V\tp $$ The Neumann condition at the boundary \( x=0 \) is inserted in the boundary term:

$$ [\dfc(u)u'v]_0^L = \dfc(u(L))u'(L)v(L) - \dfc(u(0))u'(0)v(0) = 0 - Cv(0)=-Cv(0)\tp $$ The variational problem is then: find \( u\in V \) such that $$ \begin{equation} \int_0^L \dfc(u)u'v'\dx + \int_0^L auv\dx = \int_0^L f(u)v\dx - Cv(0),\quad \forall v\in V\tp \tag{29} \end{equation} $$ To derive the algebraic equations we also require the above equation to hold for \( v=\baspsi_i \), \( i\in\If \), and we set \( u=\sum_{j\in\If}c_j\baspsi_j \). The result is

$$ \begin{equation} \sum_{j\in\If}\left( \int_0^L \dfc(\sum_{k\in\If}c_k\baspsi_k) \baspsi_j'\baspsi_i'\dx\right)c_j = \int_0^L f(\sum_{k\in\If}c_k\baspsi_k)\baspsi_i\dx - C\baspsi_i(0),\quad i\in\If\tp \end{equation} $$

Remark

Fundamental integration problem. Methods that use the Galerkin or weighted residual principle face a fundamental difficulty in nonlinear problems: how can we integrate terms like \( \int_0^L \dfc(\sum_{k}c_k\baspsi_k)\basphi_i'\baspsi_j'\dx \) and \( \int_0^L f(\sum_{k}c_k\baspsi_k)\baspsi_i\dx \) when we do not know the \( c_k \) coefficients in the argument of the \( \dfc \) function? We can resort to numerical integration, provided an approximation of \( \sum_kc_k\baspsi_k \) (e.g., from a previous iteration) can be used for the argument \( u \) in \( f \) and \( \dfc \). If we want to derive the structure of the nonlinear algebraic equations, we need to apply numerical integration based on the nodes only and/or the group finite element method.

The group finite element method

Finite element notation

Let us simplify the model problem for a while and set \( a=0 \), \( \dfc=1 \), \( f(u)=u^2 \), and have Dirichlet conditions at both ends such that we get a very simple nonlinear problem \( -u''=u^2 \). The variational form is then

$$ \int_0^L u'v'\dx = \int_0^L u^2v\dx,\quad\forall v\in V\tp$$ The term with \( u'v' \) is well known so the only new feature is the term \( \int u^2v\dx \).

Introduction of finite element basis functions \( \basphi_i \) means setting

$$ \baspsi_i = \basphi_{\nu(i)},\quad i\in\If,$$ where degree of freedom number \( \nu(j) \) in the mesh corresponds to unknown number \( j \). When the degrees of freedom are just the function values at nodes, we have that \( c_j=u(x_{\nu(j)})=u_{\nu(j)} \), i.e., the value of \( u \) at node number \( \nu(j) \). The finite element expansion for \( u \) is now

$$ u = \sum_{j\in\Ifb} U_j\basphi_j + \sum_{j\in\If}\basphi_{\nu(j)}u_{\nu(j)},$$ with the \( U_j \) quantities being prescribed Dirichlet values at some nodes with numbers in the index set \( \Ifb \). Instead of the \( \nu(j) \) indices in the sum \( \sum_{j\in\If}\basphi_{\nu(j)}u_{\nu(j)} \), we just write \( \sum_{j}\basphi_ju_j \). This is possible by saying that \( j \) runs over a transformed index set: \( \{\nu(0),\nu(1),\ldots,\nu(N)\} \). In the following we drop the boundary term \( \sum_j U_j\basphi_j \) and write \( u = \sum_j\basphi_j u_j \). The replacement of \( c_j \) by \( u_j \) as explained is motivated by a simpler interpretation of the nonlinear algebraic equations as a finite difference scheme.

Integrating nonlinear functions

Consider the term \( \int u^2v\dx \) in the variational formulation with \( v=\basphi_i \) and \( u=\sum_k\basphi_ku_k \):

$$ \int_0^L (\sum_ku_k\basphi_k)^2\basphi_i\dx\tp$$ Evaluating this integral for P1 elements (see Problem 9: Integrate functions of finite element expansions) results in the expression

$$ \frac{h}{12}(u_{i-1}^2 + 2u_i(u_{i-1} + u_{i+1}) + 6u_i^2 + u_{i+1}^2),$$ to be compared with the simple value \( u_i^2 \) that would arise in a finite difference discretization. More complicated \( f(u) \) functions give rise to much more lengthy expressions, if it is possible to carry out the integral symbolically.
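A quick symbolic check of this expression with sympy, assuming a uniform P1 mesh and an interior node \( i \); the variable names are introduced here for illustration only.

```python
import sympy as sp

# Verify the h/12 expression above by integrating cell by cell with a local
# coordinate t in [0, 1] on each cell, so dx = h dt.
h = sp.Symbol('h', positive=True)
um1, ui, up1 = sp.symbols('u_im1 u_i u_ip1')
t = sp.Symbol('t')

# Left cell [x_{i-1}, x_i]: u = u_{i-1}(1-t) + u_i*t and phi_i = t
left = sp.integrate((um1*(1 - t) + ui*t)**2 * t * h, (t, 0, 1))
# Right cell [x_i, x_{i+1}]: u = u_i*(1-t) + u_{i+1}*t and phi_i = 1 - t
right = sp.integrate((ui*(1 - t) + up1*t)**2 * (1 - t) * h, (t, 0, 1))

result = sp.expand(left + right)
expected = sp.expand(h/12*(um1**2 + 2*ui*(um1 + up1) + 6*ui**2 + up1**2))
print(sp.simplify(result - expected) == 0)   # True
```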

Finite element approximation of functions of \( u \)

Since we already expand \( u \) as \( \sum_j\basphi_ju_j \) we may use the same approximation for nonlinearities. That is, any function can be expanded as a sum of basis functions times the function values. In particular,

$$ f(u)\approx \sum_{j\in\Ifb} \basphi_jf(u_j) + \sum_{j} \basphi_{j}(x)f(u_j), $$ where the first sum contains \( f \) values at the boundary where \( u \) has Dirichlet conditions and the other sum is over the node values \( j \) where \( u \) is unknown. However, for \( f \) there is no reason to have two summations as we do not need to distinguish between the nodes where \( u \) is known or unknown. Therefore, we can collapse the two sums into one (over all nodes, \( j=0,\ldots,N_n \)) and write

$$ \begin{equation} f(u) \approx \sum_{j=0}^{N_n} \basphi_j(x)f(u_j)\tp \end{equation} $$ This approximation is known as the group finite element method or the product approximation technique.

The principal advantage of the group finite element method is for deriving the symbolic form of difference equations in nonlinear problems. The symbolic form is useful for comparing finite element and finite difference equations of nonlinear differential equation problems. Computer programs will always integrate \( \int f(u)\basphi_i\dx \) numerically by using an existing approximation of \( u \) in \( f(u) \) such that the integrand can be sampled at any spatial point.

Application

Let us use the group finite element method to derive the terms in the difference equation corresponding to \( f(u) \) in the differential equation. We have

$$ \int_0^L f(u)\basphi_i\dx \approx \int_0^L (\sum_j \basphi_jf(u_j))\basphi_i\dx = \sum_j \left(\int_0^L \basphi_i\basphi_j\dx\right) f(u_j)\tp$$ We recognize this expression as the mass matrix \( M \), arising from \( \int\basphi_i\basphi_j\dx \), times the vector \( f=(f(u_0),f(u_1),\ldots) \): \( Mf \). The associated terms in the difference equations are

$$ \frac{h}{6}(f(u_{i-1}) + 4f(u_i) + f(u_{i+1}))\tp$$ Occasionally, we want to interpret this expression in terms of finite differences and then a rewrite of this expression is convenient:

$$ \frac{h}{6}(f(u_{i-1}) + 4f(u_i) + f(u_{i+1})) = h[f(u) + \frac{h^2}{6}D_xD_x f(u)]_i\tp$$ We may lump the mass matrix through integration with the Trapezoidal rule. In that case the \( f(u) \) term in the differential equation gives rise to a single term \( hf(u_i) \), just as in the finite difference method.
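A small numerical illustration of the two representations of the \( f(u) \) term at interior nodes; \( h \), the nodal values, and \( f \) below are arbitrary example data.

```python
import numpy as np

# Consistent ("mass matrix") representation of the f(u) term versus
# Trapezoidal lumping at interior nodes. Example data only.
h = 0.1
u = np.sin(np.linspace(0, 1, 11))    # example nodal values of a smooth u(x)
f = lambda v: v**2                   # example nonlinearity
fu = f(u)

consistent = h/6*(fu[:-2] + 4*fu[1:-1] + fu[2:])   # (h/6)(f_{i-1} + 4 f_i + f_{i+1})
lumped = h*fu[1:-1]                                # h f(u_i)
print(np.max(np.abs(consistent - lumped)))         # the two agree to O(h^3) per node
```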

Numerical integration of nonlinear terms

Let us reconsider a term \( \int f(u)v\dx \) as treated in the previous section, but now we want to integrate this term numerically. Such an approach can lead to easy-to-interpret formulas if we apply a numerical integration rule that samples the integrand at the node points.

The term in question takes the form

$$ \int_0^L f(\sum_k u_k\basphi_k)\basphi_i\dx\tp$$ Evaluation of the integrand at a node \( \xno{\ell} \) leads to a collapse of the sum \( \sum_k u_k\basphi_k \) to one term because

$$ \sum_k u_k\basphi_k(\xno{\ell}) = u_\ell\tp$$

The integrand at node \( \xno{\ell} \) therefore becomes $$ f(\sum_k u_k\underbrace{\basphi_k(\xno{\ell})}_{\delta_{k\ell}}) \underbrace{\basphi_i(\xno{\ell})}_{\delta_{i\ell}} = f(u_\ell)\delta_{i\ell},$$ where we have used the Kronecker delta: \( \delta_{ij}=0 \) if \( i\neq j \) and \( \delta_{ij}=1 \) if \( i=j \).

Considering the Trapezoidal rule for integration, we have

$$ \int_0^L f(\sum_k u_k\basphi_k)(x)\basphi_i(x)\dx \approx h\sum_{\ell=0}^{N_n} f(u_\ell)\delta_{i\ell} - \mathcal{C}\\ = hf(u_i)\tp $$ The term \( \mathcal{C} \) contains the evaluations of the integrand at the ends with weight \( \half \), needed to make a true Trapezoidal rule. The answer \( hf(u_i) \) must therefore be multiplied by \( \half \) if \( i=0 \) or \( i=N_n \). (\( \mathcal{C} = \frac{h}{2}f(u_0)\basphi_i(0) + \frac{h}{2}f(u_{N_n})\basphi_i(L) \).)

One can easily use the Trapezoidal rule on the reference cell and assemble the contributions. It is a bit more work in this context, but working on the reference cell is safer as that approach is guaranteed to handle discontinuous derivatives of finite element functions correctly.

The conclusion is that it suffices to use the Trapezoidal rule if one wants to derive the difference equations in the finite element method and make them similar to those arising in the finite difference method. The Trapezoidal rule has sufficient accuracy for P1 elements, but for P2 elements one should turn to Simpson's rule.

Finite element discretization of a variable coefficient Laplace term

Turning back to the model problem (23), it remains to calculate the contribution of the \( (\dfc u')' \) and boundary terms to the difference equations. The integral in the variational form corresponding to \( (\dfc u')' \) is

$$ \int_0^L \dfc(\sum_k c_k\baspsi_k)\baspsi_i'\baspsi_j'\dx\tp$$ Numerical integration utilizing a value of \( \sum_k c_k\baspsi_k \) from a previous iteration must in general be used to compute the integral. Now our aim is to integrate symbolically, as much as we can, to obtain some insight into how the finite element method approximates this term.

To be able to derive symbolic expressions, we either turn to the group finite element method or to numerical integration in the node points. With finite element basis functions \( \basphi_i \), the group finite element method sets \( \dfc(u)\approx \sum_k\dfc(u_k)\basphi_k \), and we can then write

$$ \int_0^L \dfc(\sum_k c_k\basphi_k)\basphi_i'\basphi_j'\dx \approx \sum_k (\underbrace{\int_0^L \basphi_k\basphi_i'\basphi_j'\dx}_{L_{i,j,k}}) \dfc(u_k) = \sum_k L_{i,j,k}\dfc(u_k)\tp $$ Further calculations are now easiest to carry out in the reference cell. With P1 elements we can compute \( L_{i,j,k} \) for the two \( k \) values that are relevant on the reference cell. Turning to local indices, one gets

$$ L_{r,s,t}^{(e)} = \frac{1}{2h}\left(\begin{array}{rr} 1 & -1\\ -1 & 1 \end{array}\right),\quad t=0, 1, $$ where \( r,s,t=0,1 \) are indices over local degrees of freedom in the reference cell (\( i=q(e,r) \), \( j=q(e,s) \), and \( k=q(e,t) \)). The sum \( \sum_k L_{i,j,k}\dfc(u_k) \) at the cell level becomes \( \sum_{t=0}^1 L_{r,s,t}^{(e)}\dfc(\tilde u_t) \), where \( \tilde u_t \) is \( u(\xno{q(e,t)}) \), i.e., the value of \( u \) at local node number \( t \) in cell number \( e \). The element matrix becomes

$$ \begin{equation} \half (\dfc(\tilde u_0) + \dfc(\tilde u_1)) \frac{1}{h}\left(\begin{array}{rr} 1 & -1\\ -1 & 1 \end{array}\right)\tp \tag{30} \end{equation} $$ As usual, we employ a left-to-right numbering of cells and nodes. Row number \( i \) in the global matrix gets contributions from the first row of the element matrix in cell \( i-1 \) and the last row of the element matrix in cell \( i \). In cell number \( i-1 \) the sum \( \dfc(\tilde u_0) + \dfc(\tilde u_1) \) corresponds to \( \dfc(u_{i-1}) + \dfc(u_i) \). The same becomes \( \dfc(u_{i}) + \dfc(u_{i+1}) \) in cell number \( i \). We can with this insight assemble the contributions to row number \( i \) in the global matrix:

$$ \frac{1}{2h}(-(\dfc(u_{i-1}) + \dfc(u_i)),\quad \dfc(u_{i-1}) + 2\dfc(u_i) + \dfc(u_{i+1}),\quad -(\dfc(u_{i}) + \dfc(u_{i+1})))\tp $$ Multiplying these entries by the corresponding unknowns \( u_{i-1} \), \( u_i \), and \( u_{i+1} \) results in

$$ \begin{equation} -\frac{1}{h}(\half(\dfc(u_i) + \dfc(u_{i+1}))(u_{i+1}-u_i) - \half(\dfc(u_{i-1}) + \dfc(u_{i}))(u_{i}-u_{i-1})), \tag{31} \end{equation} $$ which is nothing but the standard finite difference discretization of \( -(\dfc(u)u')' \) with an arithmetic mean of \( \dfc(u) \) (and a factor \( h \) because of the integration in the finite element method).

Instead of using the group finite element method and exact integration we can turn to the Trapezoidal rule for computing \( \int_0^L \dfc(\sum_k u_k\basphi_k)\basphi_i'\basphi_j'\dx \), again at the cell level since that is most convenient:

$$ \begin{align} &\int_{-1}^1 \dfc(\sum_{t=0}^1 \tilde u_t\refphi_t)\frac{2}{h}\frac{d\refphi_r}{dX} \frac{2}{h}\frac{d\refphi_s}{dX}\frac{h}{2}dX = \frac{1}{2h}(-1)^r(-1)^s \int_{-1}^1 \dfc(\sum_{t=0}^1 \tilde u_t\refphi_t(X))dX \nonumber\\ & \quad \approx \frac{1}{2h}(-1)^r(-1)^s\left(\dfc\big( \sum_{t=0}^1\refphi_t(-1)\tilde u_t\big) + \dfc\big(\sum_{t=0}^1\refphi_t(1)\tilde u_t\big)\right) = \frac{1}{2h}(-1)^r(-1)^s(\dfc(\tilde u_0) + \dfc(\tilde u_1))\tp \tag{32} \end{align} $$ The element matrix in (32) is identical to the one in (30), showing that the group finite element method and Trapezoidal integration are equivalent to a standard finite difference discretization of a nonlinear Laplace term \( (\dfc(u)u')' \) using an arithmetic mean for \( \dfc \): \( [D_x\overline{\dfc}^xD_xu]_i \).

We might comment on integration in the physical coordinate system too. The common Trapezoidal rule in the section Numerical integration of nonlinear terms cannot be used to integrate derivatives like \( \basphi_i' \), because the formula is derived under the assumption of a continuous integrand. One must instead use the more basic version of the Trapezoidal rule where all the trapezoids are summed up. This is straightforward, but I think it is even more straightforward to apply the Trapezoidal rule on the reference cell and assemble the contributions.

The term \( \int auv\dx \) in the variational form is linear and gives these terms in the algebraic equations:

$$ \frac{ah}{6}(u_{i-1} + 4u_i + u_{i+1}) = ah[u + \frac{h^2}{6}D_xD_x u]_i\tp$$ The final term in the variational form is the Neumann condition at the boundary: \( Cv(0)=C\basphi_i(0) \). With a left-to-right numbering only \( i=0 \) gives a contribution \( Cv(0)=C\delta_{i0} \) (since \( \basphi_i(0)\neq 0 \) only for \( i=0 \)).

Summary. For the equation

$$ -(\dfc(u)u')' +au = f(u),$$ P1 finite elements result in difference equations where

- the term \( -(\dfc(u)u')' \) becomes \( h[-D_x\overline{\dfc(u)}^xD_x u]_i \), i.e., \( h \) times the standard finite difference approximation with an arithmetic mean of \( \dfc \), when the group finite element method or Trapezoidal integration is applied,
- the term \( au \) becomes the mass matrix form \( ah[u + \frac{h^2}{6}D_xD_x u]_i \),
- the term \( f(u) \) becomes \( h[f(u) + \frac{h^2}{6}D_xD_x f(u)]_i \) with the group finite element method, or simply \( hf(u_i) \) if the mass matrix is lumped through Trapezoidal integration.

As we now have explicit expressions for the nonlinear difference equations also in the finite element method, a Picard or Newton method can be defined as shown for the finite difference method. In the general case, however, such finite difference-style equations are not assembled by hand, and the linear system in the Picard or Newton method must then be defined directly through the variational form, as shown next.

Picard iteration defined from the variational form

We address again the problem (23) with the corresponding variational form (29). Our aim is to define a Picard iteration based on this variational form without any attempt to compute integrals symbolically as in the previous three sections. The idea in Picard iteration is to use a previously computed \( u \) value in the nonlinear functions \( \dfc(u) \) and \( f(u) \). Let \( u_{-} \) be the available approximation to \( u \) from the previous iteration. The linearized variational form for Picard iteration is then

$$ \begin{equation} \int_0^L (\dfc(u_{-})u'v' + auv)\dx = \int_0^L f(u_{-})v\dx - Cv(0),\quad \forall v\in V\tp \tag{33} \end{equation} $$ This is a linear problem \( a(u,v)=L(v) \) with bilinear and linear forms

$$ a(u,v) = \int_0^L (\dfc(u_{-})u'v' + auv)\dx,\quad L(v) = \int_0^L f(u_{-})v\dx - Cv(0)\tp$$ The associated linear system is computed the standard way. Technically, we are back to solving \( -(\dfc(x)u')' + au=f(x) \).
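For illustration, a hedged sketch of how such a Picard loop might look in legacy FEniCS (mentioned again later in the text): the unit interval (\( L=1 \)), \( C=0 \) (so the boundary term \( -Cv(0) \) drops out), and the choices \( \dfc(u)=1+u^2 \), \( f(u)=1+u^2 \) are assumptions made only for this example.

```python
# Hedged FEniCS (legacy dolfin) sketch of the Picard iteration (33).
from fenics import *

mesh = UnitIntervalMesh(20)
V = FunctionSpace(mesh, 'P', 1)
bc = DirichletBC(V, Constant(0.0), 'near(x[0], 1.0)')   # u(L) = 0

alpha = lambda w: 1 + w**2     # assumed nonlinearity
f = lambda w: 1 + w**2         # assumed source
a_coeff = Constant(1.0)

u = TrialFunction(V)
v = TestFunction(V)
u_ = interpolate(Constant(0.0), V)    # previous iterate u_-
u_new = Function(V)

for k in range(25):
    a_form = (alpha(u_)*dot(grad(u), grad(v)) + a_coeff*u*v)*dx
    L_form = f(u_)*v*dx
    solve(a_form == L_form, u_new, bc)
    if errornorm(u_new, u_, 'L2') < 1e-6:
        break
    u_.assign(u_new)
```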

Newton's method defined from the variational form

Application of Newton's method to the nonlinear variational form (29) arising from the problem (23) requires identification of the nonlinear algebraic equations \( F_i(c_0,\ldots,c_N)=0 \), \( i\in\If \), and the Jacobian \( J_{i,j}=\partial F_i/\partial c_j \) for \( i,j\in\If \).

The equations \( F_i=0 \) follow from the variational form

$$ \int_0^L (\dfc(u)u'v' + auv)\dx = \int_0^L f(u)v\dx - Cv(0),\quad \forall v\in V, $$ by choosing \( v=\baspsi_i \), \( i\in\If \), and setting \( u=\sum_{j\in\If}c_j\baspsi_j \), maybe with a boundary function to incorporate Dirichlet conditions.

With \( v=\baspsi_i \) we have

$$ \begin{equation} F_i = \int_0^L (\dfc(u)u'\baspsi_i' + au\baspsi_i - f(u)\baspsi_i)\dx + C\baspsi_i(0)=0,\quad i\in\If\tp \tag{34} \end{equation} $$ In the differentiations leading to the Jacobian we will frequently use the results

$$ \frac{\partial u}{\partial c_j} = \frac{\partial}{\partial c_j} \sum_kc_k\baspsi_k = \baspsi_j,\quad \frac{\partial u'}{\partial c_j} = \frac{\partial}{\partial c_j} \sum_kc_k\baspsi_k' = \baspsi_j'\tp$$ The derivation of the Jacobian goes as

$$ \begin{align} J_{i,j} = \frac{\partial F_i}{\partial c_j} & = \int_0^L \frac{\partial}{\partial c_j} (\dfc(u)u'\baspsi_i' + au\baspsi_i - f(u)\baspsi_i)\dx\nonumber\\ &= \int_0^L \left(\left(\dfc'(u)\frac{\partial u}{\partial c_j}u' + \dfc(u)\frac{\partial u'}{\partial c_j}\right)\baspsi_i' + a\frac{\partial u}{\partial c_j}\baspsi_i - f'(u)\frac{\partial u}{\partial c_j}\baspsi_i\right)\dx\nonumber\\ &= \int_0^L ((\dfc'(u)\baspsi_ju' + \dfc(u)\baspsi_j')\baspsi_i' + a\baspsi_j\baspsi_i - f'(u)\baspsi_j\baspsi_i)\dx\nonumber\\ &= \int_0^L (\dfc'(u)u'\baspsi_i'\baspsi_j + \dfc(u)\baspsi_i'\baspsi_j' + (a - f'(u))\baspsi_i\baspsi_j)\dx \tag{35} \end{align} $$

When calculating the right-hand side vector \( F_i \) and the coefficient matrix \( J_{i,j} \) in the linear system to be solved in each Newton iteration, one must use a previously computed \( u \), denoted by \( u_{-} \), for the \( u \) in (34) and (37). With this notation we have

$$ \begin{align} F_i &= \int_0^L\left( \dfc(u_{-})u_{-}'\baspsi_i' + (au_{-}-f(u_{-}))\baspsi_i\right)\dx + C\baspsi_i(0),\quad i\in\If, \tag{36}\\ J_{i,j} &= \int_0^L (\dfc'(u_{-})u_{-}'\baspsi_i'\baspsi_j + \dfc(u_{-})\baspsi_i'\baspsi_j' + (a - f'(u_{-}))\baspsi_i\baspsi_j)\dx, \quad i,j\in\If\tp \tag{37} \end{align} $$ These expressions can be used for any basis \( \sequencei{\baspsi} \). Choosing finite element functions for \( \baspsi_i \), one will normally want to compute the integral contribution cell by cell, working in a reference cell. To this end, we restrict the integration to one cell and transform the cell to \( [-1,1] \). The formulas (36) and (37) then change to

$$ \begin{align} \tilde F_r^{(e)} &= \int_{-1}^1\left( \dfc(\tilde u_{-})\tilde u_{-}'\refphi_r' + (a\tilde u_{-}-f(\tilde u_{-}))\refphi_r\right)\det J\dX + C\refphi_r(0), \tag{38}\\ \tilde J_{r,s}^{(e)} &= \int_{-1}^1 (\dfc'(\tilde u_{-})\tilde u_{-}'\refphi_r'\refphi_s + \dfc(\tilde u_{-})\refphi_r'\refphi_s' + (a - f'(\tilde u_{-}))\refphi_r\refphi_s)\det J\dX, \tag{39} \end{align} $$ where \( r,s\in\Ifd \) run over the local degrees of freedom, and the \( C \) term enters only in the cell containing the boundary point \( x=0 \). In the above formulas, \( \tilde u_{-}(X)=\sum_r \tilde c_{-r}\refphi_r(X) \) is the finite element expansion of \( u_{-} \) over the current cell.

Many finite element programs require the user to provide \( F_i \) and \( J_{i,j} \). Some programs, like FEniCS, are capable of automatically deriving \( J_{i,j} \) if \( F_i \) is specified.
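As an illustration of the latter, here is a hedged legacy FEniCS sketch where only \( F \) is specified and the Jacobian (37) is generated by derivative(); the domain, \( C=0 \), and the nonlinearities are the same illustrative assumptions as in the Picard sketch above.

```python
# Hedged sketch: Newton's method on (34) with legacy FEniCS, which derives
# the Jacobian automatically from the residual form F.
from fenics import *

mesh = UnitIntervalMesh(20)
V = FunctionSpace(mesh, 'P', 1)
bc = DirichletBC(V, Constant(0.0), 'near(x[0], 1.0)')   # u(L) = 0

alpha = lambda w: 1 + w**2     # assumed nonlinearity
f = lambda w: 1 + w**2         # assumed source
a_coeff = Constant(1.0)

u = Function(V)      # the unknown itself (no TrialFunction in a nonlinear F)
v = TestFunction(V)

F = (alpha(u)*dot(grad(u), grad(v)) + a_coeff*u*v - f(u)*v)*dx
J = derivative(F, u)             # symbolic Jacobian, cf. (37)
solve(F == 0, u, bc, J=J)        # Newton's method on F = 0
```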

Dirichlet conditions

Incorporation of the Dirichlet values by assembling contributions from all degrees of freedom and then modifying the linear system can obviously be applied to Picard iteration, as that method involves a standard linear system. In the Newton system, however, the unknown is a correction \( \delta u \) to the solution. Dirichlet conditions are implemented by inserting them in the initial guess \( u_{-} \) for the Newton iteration and by enforcing \( \delta u_i =0 \) for all known degrees of freedom. The manipulation of the linear system follows exactly the same algorithm as in linear problems, the only difference being that the known values are zero.
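A minimal sketch (dense NumPy arrays assumed; not from the text) of this manipulation of the Newton system \( J\delta u = -F \):

```python
import numpy as np

# Enforce known Dirichlet values in the Newton system J*delta_u = -F: the
# Dirichlet values live in the initial guess u_-, so the corresponding
# corrections are forced to zero. 'dirichlet_dofs' is an assumed index list.
def apply_dirichlet_to_newton_system(J, F, dirichlet_dofs):
    """Modify J and F in place so that delta_u[i] = 0 for i in dirichlet_dofs."""
    for i in dirichlet_dofs:
        J[i, :] = 0.0
        J[i, i] = 1.0
        F[i] = 0.0     # the right-hand side -F[i] then gives delta_u[i] = 0
    return J, F
```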
