The section Linearization at the differential equation level presents methods for linearizing time-discrete PDEs directly, prior to discretization in space. Alternatively, we can carry out the discretization in space of the time-discrete nonlinear PDE problem and get a system of nonlinear algebraic equations, which can be solved by Picard iteration or Newton's method as presented in the section Systems of nonlinear algebraic equations. This latter approach will now be described in detail.
We shall work with the 1D problem
This problem is of the same nature as those arising from implicit time integration of a nonlinear diffusion PDE as outlined in the section Picard iteration (3) (set \(a=1/\Delta t\) and let \(f(u)\) incorporate the nonlinear source term as well as known terms with the time-dependent unknown function at the previous time level).
The nonlinearity in the differential equation (1) poses no more difficulty than a variable coefficient, as in \(({\alpha}(x)u')'\). We can therefore use a standard approach to discretizing the Laplace term with a variable coefficient:
Writing this out for a uniform mesh with points \(x_i=i\Delta x\), \(i=0,\ldots,N_x\), leads to
This equation is valid at all the mesh points \(i=0,1,\ldots,N_x-1\). At \(i=N_x\) we have the Dirichlet condition \(u_i=0\). The only difference from the case with \(({\alpha}(x)u')'\) and \(f(x)\) is that now \({\alpha}\) and \(f\) are functions of \(u\) and not only of \(x\): \(({\alpha}(u(x))u')'\) and \(f(u(x))\).
The quantity \({\alpha}_{i+\frac{1}{2}}\), evaluated between two mesh points, needs a comment. Since \({\alpha}\) depends on \(u\) and \(u\) is only known at the mesh points, we need to express \({\alpha}_{i+\frac{1}{2}}\) in terms of \(u_i\) and \(u_{i+1}\). For this purpose we use an arithmetic mean, although a harmonic mean is also common in this context if \({\alpha}\) features large jumps. There are two choices of arithmetic means:
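In code, the two means can be sketched as follows (a minimal illustration with our own helper names and the sample choice \({\alpha}(u)=1+u^2\)):

```python
import numpy as np

def alpha_of_mean_u(alpha, u):
    """alpha evaluated at the arithmetic mean of u_i and u_{i+1}."""
    return alpha(0.5*(u[:-1] + u[1:]))

def mean_of_alpha(alpha, u):
    """Arithmetic mean of alpha(u_i) and alpha(u_{i+1})."""
    a = alpha(u)
    return 0.5*(a[:-1] + a[1:])

# Sample nodal values and a sample alpha (choices made for illustration)
alpha = lambda u: 1 + u**2
u = np.array([0.0, 1.0, 2.0])
print(alpha_of_mean_u(alpha, u))   # alpha at the midpoint values of u
print(mean_of_alpha(alpha, u))     # mean of the alpha values
```

For smooth \({\alpha}\) and \(u\) the two choices differ by \({\mathcal{O}}(\Delta x^2)\), so either preserves second-order accuracy of the scheme.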
Equation (2) with the latter approximation then looks like
or written more compactly,
At mesh point \(i=0\) we have the boundary condition \({\alpha}(u)u'=C\), which is discretized by
meaning
The fictitious value \(u_{-1}\) can be eliminated with the aid of (5) for \(i=0\). Formally, (5) should be solved with respect to \(u_{i-1}\) and that value (for \(i=0\)) should be inserted in (6), but it is algebraically much easier to do it the other way around. Alternatively, one can use a ghost cell \([-\Delta x,0]\) and update the \(u_{-1}\) value in the ghost cell according to (6) after every Picard or Newton iteration. Such an approach means that we use a known \(u_{-1}\) value in (5) from the previous iteration.
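The ghost-cell update can be sketched with a tiny helper. Assuming the Neumann condition is discretized by a centered difference as \({\alpha}(u_0)(u_1-u_{-1})/(2\Delta x) = C\) (one common choice; a mean value of \({\alpha}\) could be used instead), solving for the fictitious value gives:

```python
def ghost_value(u0, u1, C, dx, alpha):
    """Ghost value u_{-1} from the centered discretization
    alpha(u_0)*(u_1 - u_{-1})/(2*dx) = C of the flux condition at x=0."""
    return u1 - 2*dx*C/alpha(u0)
```

In a ghost-cell implementation this value is recomputed from the latest iterate after every Picard or Newton update.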
The nonlinear algebraic equations (5) are of the form \(A(u)u = b(u)\) with
The matrix \(A(u)\) is tridiagonal: \(A_{i,j}=0\) for \(j>i+1\) and \(j<i-1\). The obvious Picard iteration scheme is to use previously computed values of \(u_i\) in \(A(u)\) and \(b(u)\), as described in more detail in the section Systems of nonlinear algebraic equations.
Newton’s method requires computation of the Jacobian. Here it means that we need to differentiate \(F(u)=A(u)u - b(u)\) with respect to \(u_0,u_1,\ldots,u_{N_x-1}\). Nonlinear equation number \(i\) has the structure
The Jacobian becomes
The explicit expression for nonlinear equation number \(i\), \(F_i(u_0,u_1,\ldots)\), arises from moving all terms in (5) to the left-hand side. Then we have \(J_{i,j}\) and \(F_i\) (modulo the boundary conditions) and can implement Newton’s method.
We have seen, and can see from the present example, that the linear system in Newton’s method contains all the terms present in the system that arises in the Picard iteration method. The extra terms in Newton’s method can be multiplied by a factor such that it is easy to program one linear system and set this factor to 0 or 1 to generate the Picard or Newton system.
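The factor idea can be illustrated on the simple model system \(F(u) = Au + u\odot u - b = 0\), where \(A\) is the standard linear Laplacian matrix and the square is taken componentwise (this example is our construction, not the model problem above). Picard replaces \(u\odot u\) by \(\hbox{diag}(u_{-})\,u\), while the Newton correction system can be rearranged to \((A + (1+\gamma)D)u = b + \gamma Du_{-}\) with \(D=\hbox{diag}(u_{-})\); the factor \(\gamma=0\) gives Picard and \(\gamma=1\) gives Newton:

```python
import numpy as np

def iterate(gamma, N=20, n_iter=30):
    """Solve A u + u*u = b with gamma=0 (Picard) or gamma=1 (Newton).
    b is manufactured from a known exact discrete solution."""
    h = 1.0/N
    A = (np.diag(2.0*np.ones(N-1)) - np.diag(np.ones(N-2), 1)
         - np.diag(np.ones(N-2), -1))/h**2       # 1D Laplacian, Dirichlet ends
    x = np.linspace(0, 1, N+1)[1:-1]             # interior nodes
    u_exact = np.sin(np.pi*x)
    b = A @ u_exact + u_exact**2                 # manufactured right-hand side
    u = np.zeros(N-1)                            # initial guess u_-
    for _ in range(n_iter):
        D = np.diag(u)
        u = np.linalg.solve(A + (1 + gamma)*D, b + gamma*(D @ u))
    return u, u_exact
```

Both values of \(\gamma\) drive the iteration to the same solution; Newton just gets there in fewer iterations.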
For the finite element discretization we first need to derive the variational problem. Let \(V\) be an appropriate function space with basis functions \(\left\{ {{\psi}}_i \right\}_{i\in{\mathcal{I}_s}}\). Because of the Dirichlet condition at \(x=L\) we require \({\psi}_i(L)=0\), \(i\in{\mathcal{I}_s}\). Using Galerkin’s method, we multiply the differential equation by any \(v\in V\) and integrate terms with second-order derivatives by parts:
The Neumann condition at the boundary \(x=0\) is inserted in the boundary term:
To derive the algebraic equations we also demand the above equations to hold for \(v={\psi}_i\), \(i\in{\mathcal{I}_s}\), and we set \(u=\sum_{j\in{\mathcal{I}_s}}c_j{\psi}_j\). The result is
Fundamental integration problem

Methods that use the Galerkin or weighted residual principle face a fundamental difficulty in nonlinear problems: how can we integrate terms like \(\int_0^L {\alpha}(\sum_{k}c_k{\psi}_k){\psi}_i'{\psi}_j'{\, \mathrm{d}x}\) and \(\int_0^L f(\sum_{k}c_k{\psi}_k){\psi}_i{\, \mathrm{d}x}\) when we do not know the \(c_k\) coefficients in the argument of the \({\alpha}\) and \(f\) functions? We can resort to numerical integration, provided an approximate \(\sum_kc_k{\psi}_k\) can be used for the argument \(u\) in \(f\) and \({\alpha}\). If we want to derive the structure of the nonlinear algebraic equations, we need to apply numerical integration based on the nodes only and/or the group finite element method.
Let us simplify the model problem for a while and set \(a=0\), \({\alpha}=1\), \(f(u)=u^2\), and have Dirichlet conditions at both ends such that we get a very simple nonlinear problem \(-u''=u^2\). The variational form is then
The term with \(u'v'\) is well known so the only new feature is the term \(\int u^2v{\, \mathrm{d}x}\).
Introduction of finite element basis functions \({\varphi}_i\) means setting
where degree of freedom number \(\nu(j)\) in the mesh corresponds to unknown number \(j\). When the degrees of freedom are just the function values at nodes, we have that \(c_j=u(x_{\nu(j)})=u_{\nu(j)}\), i.e., the value of \(u\) at node number \(\nu(j)\). The finite element expansion for \(u\) is now
with the \(U_j\) quantities being prescribed Dirichlet values at the nodes with numbers in the index set \({I_b}\). Instead of the \(\nu(j)\) indices in the sum \(\sum_{j\in{\mathcal{I}_s}}{\varphi}_{\nu(j)}u_{\nu(j)}\), we just write \(\sum_{j}{\varphi}_ju_j\). This is possible by saying that \(j\) runs over a transformed index set: \(\{\nu(0),\nu(1),\ldots,\nu(N)\}\). In the following we drop the boundary term \(\sum_j U_j{\varphi}_j\) and write \(u = \sum_j{\varphi}_j u_j\). The replacement of \(c_j\) by \(u_j\) is motivated by the simpler interpretation of the nonlinear algebraic equations as a finite difference scheme.
Consider the term \(\int u^2v{\, \mathrm{d}x}\) in the variational formulation with \(v={\varphi}_i\) and \(u=\sum_k{\varphi}_ku_k\):
Evaluating this integral for P1 elements (see Problem 9: Integrate functions of finite element expansions) results in the expression
to be compared with the simple value \(u_i^2\) that would arise in a finite difference discretization. More complicated \(f(u)\) functions give rise to much more lengthy expressions, if it is possible to carry out the integral symbolically.
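The expression can be verified symbolically. A sympy sketch, assuming a uniform mesh and the standard P1 hat functions, assembles the contributions from the two cells sharing node \(i\):

```python
import sympy as sp

x, h = sp.symbols('x h', positive=True)
um1, ui, up1 = sp.symbols('u_im1 u_i u_ip1')  # u_{i-1}, u_i, u_{i+1}

# Left cell [0, h]: u = um1*(1 - x/h) + ui*(x/h), and phi_i = x/h
left = sp.integrate((um1*(1 - x/h) + ui*x/h)**2*(x/h), (x, 0, h))
# Right cell [h, 2h]: u = ui*(2 - x/h) + up1*(x/h - 1), and phi_i = 2 - x/h
right = sp.integrate((ui*(2 - x/h) + up1*(x/h - 1))**2*(2 - x/h), (x, h, 2*h))

total = sp.simplify(left + right)
expected = h/12*(um1**2 + 2*ui*(um1 + up1) + 6*ui**2 + up1**2)
print(sp.simplify(total - expected))  # 0
```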
Since we already expand \(u\) as \(\sum_j{\varphi}_ju_j\) we may use the same approximation for nonlinearities. That is, any function can be expanded as a sum of basis functions times the function values. In particular,
where the first sum contains \(f\) values at the boundary nodes where \(u\) has Dirichlet conditions and the other sum is over the nodes \(j\) where \(u\) is unknown. However, for \(f\) there is no reason to have two summations, as we do not need to distinguish between the nodes where \(u\) is known or unknown. Therefore, we can collapse the two sums into one (over all nodes, \(j=0,\ldots,N_n\)) and write
This approximation is known as the group finite element method or the product approximation technique.
The principal advantage of the group finite element method is for deriving the symbolic form of difference equations in nonlinear problems. The symbolic form is useful for comparing finite element and finite difference equations of nonlinear differential equation problems. Computer programs will always integrate \(\int f(u){\varphi}_i{\, \mathrm{d}x}\) numerically by using an existing approximation of \(u\) in \(f(u)\) such that the integrand can be sampled at any spatial point.
Let us use the group finite element method to derive the terms in the difference equation corresponding to \(f(u)\) in the differential equation. We have
We recognize this expression as the mass matrix \(M\), arising from \(\int{\varphi}_i{\varphi}_j{\, \mathrm{d}x}\), times the vector \(f=(f(u_0),f(u_1),\ldots)\): \(Mf\). The associated terms in the difference equations are
Occasionally, we want to interpret this expression in terms of finite differences and then a rewrite of this expression is convenient:
We may lump the mass matrix through integration with the Trapezoidal rule. In that case the \(f(u)\) term in the differential equation gives rise to a single term \(hf(u_i)\), just as in the finite difference method.
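The consistent (mass matrix) and lumped representations can be compared directly. A small numpy sketch on a uniform mesh (the assembly helper and the sample \(u\) are our own choices):

```python
import numpy as np

def mass_matrix(N, h):
    """Consistent P1 mass matrix on a uniform mesh with N cells."""
    M = np.zeros((N+1, N+1))
    Me = h/6.0*np.array([[2.0, 1.0], [1.0, 2.0]])  # P1 element mass matrix
    for e in range(N):
        M[e:e+2, e:e+2] += Me
    return M

N = 8
h = 1.0/N
x = np.linspace(0, 1, N+1)
f = (x**2)**2                       # f(u) = u^2 sampled at nodes, with u = x^2
consistent = mass_matrix(N, h) @ f  # group FE representation: M f
lumped = h*f                        # lumped (Trapezoidal) version: h f(u_i)
i = 4
print(consistent[i], h/6*(f[i-1] + 4*f[i] + f[i+1]), lumped[i])
```

Row \(i\) of \(Mf\) reproduces \(\frac{h}{6}(f(u_{i-1}) + 4f(u_i) + f(u_{i+1}))\) at interior nodes, while lumping collapses this to \(hf(u_i)\).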
Let us reconsider a term \(\int f(u)v{\, \mathrm{d}x}\) as treated in the previous section, but now we want to integrate this term numerically. Such an approach can lead to easy-to-interpret formulas if we apply a numerical integration rule that samples the integrand at the node points.
The term in question takes the form
Evaluation of the integrand at a node \(x_{\ell}\) leads to a collapse of the sum \(\sum_k u_k{\varphi}_k\) to one term because
where we have used the Kronecker delta \(\delta_{ij}=0\) if \(i\neq j\) and \(\delta_{ij}=1\) if \(i=j\).
Considering the Trapezoidal rule for integration, we have
The term \(\mathcal{C}\) contains the evaluations of the integrand at the ends with weight \(\frac{1}{2}\), needed to make a true Trapezoidal rule. The answer \(hf(u_i)\) must therefore be multiplied by \(\frac{1}{2}\) if \(i=0\) or \(i=N_n\). (\(\mathcal{C} = \frac{h}{2}f(u_0){\varphi}_i(0) + \frac{h}{2}f(u_{N_n}){\varphi}_i(L)\).)
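The collapse is easy to reproduce numerically: sampling the integrand only at the nodes makes \({\varphi}_i\) act as a Kronecker delta, so the composite Trapezoidal rule returns \(hf(u_i)\) at interior nodes and \(\frac{h}{2}f(u_i)\) at the two end nodes. A minimal sketch (the choices of \(u\) and \(f\) are ours):

```python
import numpy as np

N = 10
h = 1.0/N
x = np.linspace(0, 1, N+1)
u = np.sin(x)            # some nodal values of u
f = u**3                 # f(u) sampled at the nodes only

def trapezoidal(g, h):
    """Composite Trapezoidal rule with the mesh nodes as sample points."""
    return np.sum(0.5*h*(g[:-1] + g[1:]))

for i in [0, 5, N]:
    phi_i = np.zeros(N+1)
    phi_i[i] = 1.0       # phi_i at the nodes: the Kronecker delta
    print(i, trapezoidal(f*phi_i, h))
```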
One can easily use the Trapezoidal rule on the reference cell and assemble the contributions. It is a bit more work in this context, but working on the reference cell is safer as that approach is guaranteed to handle discontinuous derivatives of finite element functions correctly.
The conclusion is that it suffices to use the Trapezoidal rule if one wants to derive the difference equations in the finite element method and make them similar to those arising in the finite difference method. The Trapezoidal rule has sufficient accuracy for P1 elements, but for P2 elements one should turn to Simpson’s rule.
Turning back to the model problem (1), it remains to calculate the contribution of the \(({\alpha} u')'\) and boundary terms to the difference equations. The integral in the variational form corresponding to \(({\alpha} u')'\) is
Numerical integration utilizing a value of \(\sum_k c_k{\psi}_k\) from a previous iteration must in general be used to compute the integral. Now our aim is to integrate symbolically, as much as we can, to obtain some insight into how the finite element method approximates this term.
To be able to derive symbolic expressions, we either turn to the group finite element method or to numerical integration in the node points. Using finite element basis functions \({\varphi}_i\), we set \({\alpha}(u)\approx \sum_k\alpha(u_k){\varphi}_k\) and then write
Further calculations are now easiest to carry out in the reference cell. With P1 elements we can compute \(L_{i,j,k}\) for the two \(k\) values that are relevant on the reference cell. Turning to local indices, one gets
where \(r,s,t=0,1\) are indices over local degrees of freedom in the reference cell (\(i=q(e,r)\), \(j=q(e,s)\), and \(k=q(e,t)\)). The sum \(\sum_k L_{i,j,k}{\alpha}(u_k)\) at the cell level becomes \(\sum_{t=0}^1 L_{r,s,t}^{(e)}{\alpha}(\tilde u_t)\), where \(\tilde u_t\) is \(u(x_{q(e,t)})\), i.e., the value of \(u\) at local node number \(t\) in cell number \(e\). The element matrix becomes
As usual, we employ a left-to-right numbering of cells and nodes. Row number \(i\) in the global matrix gets contributions from the first row of the element matrix in cell \(i-1\) and the last row of the element matrix in cell \(i\). In cell number \(i-1\) the sum \({\alpha}(\tilde u_0) + {\alpha}(\tilde u_1)\) corresponds to \({\alpha}(u_{i-1}) + {\alpha}(u_i)\). The same becomes \({\alpha}(u_{i}) + {\alpha}(u_{i+1})\) in cell number \(i\). We can with this insight assemble the contributions to row number \(i\) in the global matrix:
Multiplying by the vector of unknowns \(u_i\) results in
which is nothing but the standard finite difference discretization of \(-({\alpha}(u)u')'\) with an arithmetic mean of \({\alpha}(u)\) (and a factor \(h\) because of the integration in the finite element method).
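The element matrix behind this result can be checked symbolically. A sympy sketch of the group finite element integral over one P1 cell \([0,h]\), with \({\alpha}\) expanded locally as \(\alpha_0{\varphi}_0 + \alpha_1{\varphi}_1\):

```python
import sympy as sp

x, h, a0, a1 = sp.symbols('x h alpha0 alpha1', positive=True)

phi = [1 - x/h, x/h]                  # local P1 basis on the cell [0, h]
dphi = [sp.diff(p, x) for p in phi]
alpha_h = a0*phi[0] + a1*phi[1]       # group FE expansion of alpha(u)

A_e = sp.Matrix(2, 2, lambda r, s:
                sp.integrate(alpha_h*dphi[r]*dphi[s], (x, 0, h)))
print(sp.simplify(A_e))
# The arithmetic mean (alpha0 + alpha1)/2 appears, times the usual
# (1/h)*[[1, -1], [-1, 1]] stiffness pattern.
```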
Instead of using the group finite element method and exact integration we can turn to the Trapezoidal rule for computing \(\int_0^L {\alpha}(\sum_k u_k{\varphi}_k){\varphi}_i'{\varphi}_j'{\, \mathrm{d}x}\), again at the cell level since that is most convenient:
The element matrix in (10) is identical to the one in (8), showing that the group finite element method and Trapezoidal integration are equivalent with a standard finite difference discretization of a nonlinear Laplace term \(({\alpha}(u)u')'\) using an arithmetic mean for \({\alpha}\): \([D_x\overline{{\alpha}(u)}^xD_xu]_i\).
We might comment on integration in the physical coordinate system too. The common Trapezoidal rule in the section Numerical integration of nonlinear terms cannot be used to integrate derivatives like \({\varphi}_i'\), because the formula is derived under the assumption of a continuous integrand. One must instead use the more basic version of the Trapezoidal rule where all the trapezoids are summed up. This is straightforward, but I think it is even more straightforward to apply the Trapezoidal rule on the reference cell and assemble the contributions.
The term \(\int auv{\, \mathrm{d}x}\) in the variational form is linear and gives these terms in the algebraic equations:
The final term in the variational form is the Neumann condition at the boundary: \(Cv(0)=C{\varphi}_i(0)\). With a left-to-right numbering only \(i=0\) will give a contribution \(Cv(0)=C\delta_{i0}\) (since \({\varphi}_i(0)\neq 0\) only for \(i=0\)).
Summary
For the equation
P1 finite elements result in difference equations where
- the term \(-({\alpha}(u)u')'\) becomes \(-h[D_x\overline{{\alpha}(u)}^xD_x u]_i\) if the group finite element method or Trapezoidal integration is applied,
- \(f(u)\) becomes \(hf(u_i)\) with Trapezoidal integration or the “mass matrix” representation \(h[f(u) - \frac{h}{6}D_xD_x f(u)]_i\) if computed by a group finite element method,
- \(au\) leads to the “mass matrix” form \(ah[u - \frac{h}{6}D_xD_x u]_i\).
As we now have explicit expressions for the nonlinear difference equations also in the finite element method, a Picard or Newton method can be defined as shown for the finite difference method. In general, however, we do not assemble finite difference-style equations by hand, and the linear system in the Picard or Newton method must then be defined directly through the variational form, as shown next.
We address again the problem (1) with the corresponding variational form (?). Our aim is to define a Picard iteration based on this variational form without any attempt to compute integrals symbolically as in the previous three sections. The idea in Picard iteration is to use a previously computed \(u\) value in the nonlinear functions \({\alpha}(u)\) and \(f(u)\). Let \(u_{-}\) be the available approximation to \(u\) from the previous iteration. The linearized variational form for Picard iteration is then
This is a linear problem \(a(u,v)=L(v)\) with bilinear and linear forms
The associated linear system is computed the standard way. Technically, we are back to solving \(-({\alpha}(x)u')' + au=f(x)\).
Application of Newton’s method to the nonlinear variational form (?) arising from the problem (1) requires identification of the nonlinear algebraic equations \(F_i(c_0,\ldots,c_N)=0\), \(i\in{\mathcal{I}_s}\), and the Jacobian \(J_{i,j}=\partial F_i/\partial c_j\) for \(i,j\in{\mathcal{I}_s}\).
The equations \(F_i=0\) follow from the variational form
by choosing \(v={\psi}_i\), \(i\in{\mathcal{I}_s}\), and setting \(u=\sum_{j\in{\mathcal{I}_s}}c_j{\psi}_j\), maybe with a boundary function to incorporate Dirichlet conditions.
With \(v={\psi}_i\) we have
In the differentiations leading to the Jacobian we will frequently use the results
The derivation of the Jacobian goes as
When calculating the right-hand side vector \(F_i\) and the coefficient matrix \(J_{i,j}\) in the linear system to be solved in each Newton iteration, one must use a previously computed \(u\), denoted by \(u_{-}\), for the \(u\) in (12) and (15). With this notation we have
These expressions can be used for any basis \(\left\{ {{\psi}}_i \right\}_{i\in{\mathcal{I}_s}}\). Choosing finite element functions for \({\psi}_i\), one will normally want to compute the integral contribution cell by cell, working in a reference cell. To this end, we restrict the integration to one cell and transform the cell to \([-1,1]\). The formulas (14) and (15) then change to
where \(r,s\in{I_d}\) run over the local degrees of freedom. In the above formulas, \(\tilde u_{-}(X)=\sum_r \tilde c_{-r}{\tilde{\varphi}}_r(X)\) is the finite element expansion of \(u_{-}\) over the current cell.
Many finite element programs require the user to provide \(F_i\) and \(J_{i,j}\). Some programs, like FEniCS, are capable of automatically deriving \(J_{i,j}\) if \(F_i\) is specified.
Incorporation of the Dirichlet values by assembling contributions from all degrees of freedom and then modifying the linear system can obviously be applied to Picard iteration as that method involves a standard linear system. In the Newton system, however, the unknown is a correction \(\delta u\) to the solution. Dirichlet conditions are implemented by inserting them in the initial guess \(u_{-}\) for the Newton iteration and implementing \(\delta u_i = 0\) for all known degrees of freedom. The manipulation of the linear system follows exactly the algorithm for linear problems, the only difference being that the known values are zero.