$$ \newcommand{\uex}{{u_{\small\mbox{e}}}} \newcommand{\Aex}{{A_{\small\mbox{e}}}} \newcommand{\half}{\frac{1}{2}} \newcommand{\tp}{\thinspace .} \newcommand{\Oof}[1]{\mathcal{O}(#1)} \newcommand{\x}{\boldsymbol{x}} \newcommand{\X}{\boldsymbol{X}} \renewcommand{\u}{\boldsymbol{u}} \renewcommand{\v}{\boldsymbol{v}} \newcommand{\e}{\boldsymbol{e}} \newcommand{\f}{\boldsymbol{f}} \newcommand{\dfc}{\alpha} % diffusion coefficient \newcommand{\Ix}{\mathcal{I}_x} \newcommand{\Iy}{\mathcal{I}_y} \newcommand{\Iz}{\mathcal{I}_z} \newcommand{\If}{\mathcal{I}_s} % for FEM \newcommand{\Ifd}{{I_d}} % for FEM \newcommand{\Ifb}{{I_b}} % for FEM \newcommand{\sequencei}[1]{\left\{ {#1}_i \right\}_{i\in\If}} \newcommand{\basphi}{\varphi} \newcommand{\baspsi}{\psi} \newcommand{\refphi}{\tilde\basphi} \newcommand{\psib}{\boldsymbol{\psi}} \newcommand{\sinL}[1]{\sin\left((#1+1)\pi\frac{x}{L}\right)} \newcommand{\xno}[1]{x_{#1}} \newcommand{\Xno}[1]{X_{(#1)}} \newcommand{\xdno}[1]{\boldsymbol{x}_{#1}} \newcommand{\dX}{\, \mathrm{d}X} \newcommand{\dx}{\, \mathrm{d}x} \newcommand{\ds}{\, \mathrm{d}s} $$

Study guide: Introduction to finite element methods

Hans Petter Langtangen [1, 2]

 

[1] Center for Biomedical Computing, Simula Research Laboratory
[2] Department of Informatics, University of Oslo

 

Oct 31, 2014

Why finite elements?

  • Can easily handle PDEs in domains with complex geometry
  • Can easily provide higher-order approximations
  • Has (for simpler stationary problems) a rigorous mathematical analysis framework (not much considered here)

Domain for flow around a dolphin

The flow

Solving PDEs by the finite element method

Stationary PDEs:

  • Transform the PDE problem to a variational form
  • Define function approximation over finite elements
  • Use a machinery to derive linear systems
  • Solve linear systems

Time-dependent PDEs:

  • Finite elements in space
  • Finite difference (or ODE solver) in time

We start with function approximation, then we treat PDEs

Learning strategy.

  • Start with approximation of functions, not PDEs
  • Introduce finite element approximations
  • See later how this machinery is applied to PDEs

Reason:

The finite element method has many concepts and a jungle of details. This learning strategy minimizes the mixing of ideas, concepts, and technical details.

Approximation in vector spaces

Approximation set-up

General idea of finding an approximation \( u(x) \) to some given \( f(x) \):

 
$$ u(x) = \sum_{i=0}^N c_i\baspsi_i(x) $$

 

where

  • \( \baspsi_i(x) \) are prescribed functions
  • \( c_i \), \( i=0,\ldots,N \) are unknown coefficients to be determined

How to determine the coefficients?

We shall address three approaches:

  • The least squares method
  • The projection (or Galerkin) method
  • The interpolation (or collocation) method

Underlying motivation for our notation.

Our mathematical framework for doing this is phrased in a way such that it becomes easy to understand and use the FEniCS software package for finite element computing.

Approximation of planar vectors; problem

Given a vector \( \f = (3,5) \), find an approximation to \( \f \) directed along a given line.

Approximation of planar vectors; vector space terminology

 
$$ V = \mbox{span}\,\{ \psib_0\} $$

 

  • \( \psib_0 \) is a basis vector in the space \( V \)
  • Seek \( \u = c_0\psib_0\in V \)
  • Determine \( c_0 \) such that \( \u \) is the "best" approximation to \( \f \)
  • Visually, "best" is obvious

Define

  • the error \( \e = \f - \u \)
  • the (Euclidean) scalar product of two vectors: \( (\u,\v) \)
  • the norm of \( \e \): \( ||\e|| = \sqrt{(\e, \e)} \)

The least squares method; principle

  • Idea: find \( c_0 \) such that \( ||\e|| \) is minimized
  • Mathematical convenience: minimize \( E=||\e||^2 \)

 
$$ \begin{equation*} \frac{\partial E}{\partial c_0} = 0 \end{equation*} $$

 

The least squares method; calculations

 
$$ \begin{align*} E(c_0) &= (\e,\e) = (\f - \u, \f - \u) = (\f - c_0\psib_0, \f - c_0\psib_0)\\ &= (\f,\f) - 2c_0(\f,\psib_0) + c_0^2(\psib_0,\psib_0) \end{align*} $$

 

 
$$ \begin{equation} \frac{\partial E}{\partial c_0} = -2(\f,\psib_0) + 2c_0 (\psib_0,\psib_0) = 0 \tag{1} \end{equation} $$

 

 
$$ c_0 = \frac{(\f,\psib_0)}{(\psib_0,\psib_0)} = \frac{3a + 5b}{a^2 + b^2},\quad\hbox{with } \psib_0 = (a,b) $$

 

Observation to be used later: the vanishing derivative (1) can be alternatively written as

 
$$ (\e, \psib_0) = 0 $$
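
A quick numerical check of the formula for \( c_0 \) (a minimal sketch; the choice \( \psib_0=(2,1) \) is arbitrary):

import numpy as np

f = np.array([3., 5.])
psi0 = np.array([2., 1.])                  # psi_0 = (a, b) with a=2, b=1
c0 = np.dot(f, psi0)/np.dot(psi0, psi0)    # (3a + 5b)/(a^2 + b^2) = 11/5
e = f - c0*psi0
print(np.dot(e, psi0))                     # 0: the error is orthogonal to psi_0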

 

The projection (or Galerkin) method

  • Last slide: \( \min E \) is equivalent with \( (\e, \psib_0)=0 \)
  • \( (\e, \psib_0)=0 \) implies \( (\e, \v)=0 \) for any \( \v\in V \)
  • That is: instead of using the least-squares principle, we can require that \( \e \) is orthogonal to any \( \v\in V \)
    (visually clear, but can easily be computed too)
  • Precise math: find \( c_0 \) such that \( (\e, \v) = 0,\quad\forall\v\in V \)
  • Equivalent (see notes): find \( c_0 \) such that \( (\e, \psib_0)=0 \)
  • Insert \( \e = \f - c_0\psib_0 \) and solve for \( c_0 \)
  • Same equation for \( c_0 \) and hence same solution as in the least squares method

Approximation of general vectors

Given a vector \( \f \), find an approximation \( \u\in V \):

 
$$ \begin{equation*} V = \hbox{span}\,\{\psib_0,\ldots,\psib_N\} \end{equation*} $$

 

We have a set of linearly independent basis vectors \( \psib_0,\ldots,\psib_N \). Any \( \u\in V \) can then be written as

 
$$ \u = \sum_{j=0}^Nc_j\psib_j$$

 

The least squares method

Idea: find \( c_0,\ldots,c_N \) such that \( E= ||\e||^2 \) is minimized, \( \e=\f-\u \).

 
$$ \begin{align*} E(c_0,\ldots,c_N) &= (\e,\e) = (\f -\sum_jc_j\psib_j,\f -\sum_jc_j\psib_j) \nonumber\\ &= (\f,\f) - 2\sum_{j=0}^Nc_j(\f,\psib_j) + \sum_{p=0}^N\sum_{q=0}^N c_pc_q(\psib_p,\psib_q) \end{align*} $$

 

 
$$ \begin{equation*} \frac{\partial E}{\partial c_i} = 0,\quad i=0,\ldots,N \end{equation*} $$

 

After some work we end up with a linear system

 
$$ \begin{align} \sum_{j=0}^N A_{i,j}c_j &= b_i,\quad i=0,\ldots,N\\ A_{i,j} &= (\psib_i,\psib_j)\\ b_i &= (\psib_i, \f) \end{align} $$
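
A small numerical illustration of this linear system (a sketch; the vectors below are arbitrary choices):

import numpy as np

f = np.array([3., 5., 1.])                 # vector to approximate
psi = np.array([[1., 0., 0.],              # rows are the basis vectors
                [0., 1., 1.]])
A = np.dot(psi, psi.T)                     # A[i,j] = (psi_i, psi_j)
b = np.dot(psi, f)                         # b[i] = (psi_i, f)
c = np.linalg.solve(A, b)
u = np.dot(c, psi)                         # best approximation of f in V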

 

The projection (or Galerkin) method

It can be shown that minimizing \( ||\e|| \) implies that \( \e \) is orthogonal to all \( \v\in V \):

 
$$ (\e,\v)=0,\quad \forall\v\in V $$

 
which implies that \( \e \) must be orthogonal to each basis vector:

 
$$ (\e,\psib_i)=0,\quad i=0,\ldots,N $$

 

This orthogonality condition is the principle of the projection (or Galerkin) method. Leads to the same linear system as in the least squares method.

Approximation of functions

Let \( V \) be a function space spanned by a set of basis functions \( \baspsi_0,\ldots,\baspsi_N \),

 
$$ \begin{equation*} V = \hbox{span}\,\{\baspsi_0,\ldots,\baspsi_N\} \end{equation*} $$

 

Find \( u\in V \) as a linear combination of the basis functions:

 
$$ u = \sum_{j\in\If} c_j\baspsi_j,\quad\If = \{0,1,\ldots,N\} $$

 

The least squares method can be extended from vectors to functions

As in the vector case, minimize the (square) norm of the error, \( E \), with respect to the coefficients \( c_j \), \( j\in\If \):

 
$$ E = (e,e) = (f-u,f-u) = (f(x)-\sum_{j\in\If} c_j\baspsi_j(x), f(x)-\sum_{j\in\If} c_j\baspsi_j(x)) $$

 

 
$$ \frac{\partial E}{\partial c_i} = 0,\quad i\in\If $$

 

But what is the scalar product when \( \baspsi_i \) is a function?

 
$$(f,g) = \int_\Omega f(x)g(x)\, dx$$

 
(natural extension of the Euclidean product \( (\u, \v) = \sum_j u_jv_j \))
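
For instance, with sympy (a minimal sketch with \( f=x \), \( g=x^2 \), \( \Omega=[0,1] \)):

import sympy as sp
x = sp.Symbol('x')
print(sp.integrate(x*x**2, (x, 0, 1)))   # (f, g) = 1/4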

The least squares method; details

 
$$ \begin{align*} E(c_0,\ldots,c_N) &= (e,e) = (f-u,f-u) \\ &= (f,f) -2\sum_{j\in\If} c_j(f,\baspsi_j) + \sum_{p\in\If}\sum_{q\in\If} c_pc_q(\baspsi_p,\baspsi_q) \end{align*} $$

 

 
$$ \frac{\partial E}{\partial c_i} = 0,\quad i\in\If $$

 

The computations are identical to the vector case, and consequently we get a linear system

 
$$ \sum_{j\in\If} A_{i,j}c_j = b_i,\ i\in\If,\quad A_{i,j} = (\baspsi_i,\baspsi_j),\ b_i = (f,\baspsi_i) $$

 

The projection (or Galerkin) method

As before, minimizing \( (e,e) \) is equivalent to

 
$$ (e,\baspsi_i)=0,\quad i\in\If \tag{2} $$

 

which is equivalent to

 
$$ (e,v)=0,\quad\forall v\in V \tag{3} $$

 
which is the projection (or Galerkin) method.

The algebra is the same as in the multi-dimensional vector case, and we get the same linear system as arose from the least squares method.

Example: linear approximation; problem

Problem.

Approximate a parabola \( f(x) = 10(x-1)^2 - 1 \) by a straight line.

 
$$ \begin{equation*} V = \hbox{span}\,\{1, x\} \end{equation*} $$

 
That is, \( \baspsi_0(x)=1 \), \( \baspsi_1(x)=x \), and \( N=1 \). We seek

 
$$ \begin{equation*} u=c_0\baspsi_0(x) + c_1\baspsi_1(x) = c_0 + c_1x \end{equation*} $$

 

Example: linear approximation; solution

 
$$ \begin{align*} A_{0,0} &= (\baspsi_0,\baspsi_0) = \int_1^21\cdot 1\, dx = 1\\ A_{0,1} &= (\baspsi_0,\baspsi_1) = \int_1^2 1\cdot x\, dx = 3/2\\ A_{1,0} &= A_{0,1} = 3/2\\ A_{1,1} &= (\baspsi_1,\baspsi_1) = \int_1^2 x\cdot x\,dx = 7/3\\ b_0 &= (f,\baspsi_0) = \int_1^2 (10(x-1)^2 - 1)\cdot 1 \, dx = 7/3\\ b_1 &= (f,\baspsi_1) = \int_1^2 (10(x-1)^2 - 1)\cdot x\, dx = 13/3 \end{align*} $$

 
Solution of 2x2 linear system:

 
$$ c_0 = -38/3,\quad c_1 = 10,\quad u(x) = 10x - \frac{38}{3} $$

 

Example: linear approximation; plot

Implementation of the least squares method; ideas

Consider symbolic computation of the linear system, where

  • \( f(x) \) is given as a sympy expression f (involving the symbol x),
  • psi is a list of \( \sequencei{\baspsi} \),
  • Omega is a 2-tuple/list holding the domain \( \Omega \)

Carry out the integrations, solve the linear system, and return \( u(x)=\sum_jc_j\baspsi_j(x) \)

Implementation of the least squares method; symbolic code

import sympy as sp

def least_squares(f, psi, Omega):
    N = len(psi) - 1
    A = sp.zeros((N+1, N+1))
    b = sp.zeros((N+1, 1))
    x = sp.Symbol('x')
    for i in range(N+1):
        for j in range(i, N+1):
            A[i,j] = sp.integrate(psi[i]*psi[j],
                                  (x, Omega[0], Omega[1]))
            A[j,i] = A[i,j]
        b[i,0] = sp.integrate(psi[i]*f, (x, Omega[0], Omega[1]))
    c = A.LUsolve(b)
    u = 0
    for i in range(len(psi)):
        u += c[i,0]*psi[i]
    return u, c

Observe: symmetric coefficient matrix so we can halve the integrations.

Improved code if symbolic integration fails

  • If sp.integrate fails, it returns an sp.Integral object. We can test on this object and fall back on numerical integration.
  • We can include a boolean argument symbolic to explicitly choose between symbolic and numerical computing.

def least_squares(f, psi, Omega, symbolic=True):
    ...
    for i in range(N+1):
        for j in range(i, N+1):
            integrand = psi[i]*psi[j]
            if symbolic:
                I = sp.integrate(integrand, (x, Omega[0], Omega[1]))
            if not symbolic or isinstance(I, sp.Integral):
                # Could not integrate symbolically,
                # fall back on numerical integration
                integrand = sp.lambdify([x], integrand)
                I = sp.mpmath.quad(integrand, [Omega[0], Omega[1]])
            A[i,j] = A[j,i] = I

        integrand = psi[i]*f
        if symbolic:
            I = sp.integrate(integrand, (x, Omega[0], Omega[1]))
        if not symbolic or isinstance(I, sp.Integral):
            integrand = sp.lambdify([x], integrand)
            I = sp.mpmath.quad(integrand, [Omega[0], Omega[1]])
        b[i,0] = I
    ...

Implementation of the least squares method; plotting

Compare \( f \) and \( u \) visually:

from scitools.std import linspace, plot, hold, legend, savefig  # plotting (assumed)

def comparison_plot(f, u, Omega, filename='tmp.pdf'):
    x = sp.Symbol('x')
    # Turn f and u to ordinary Python functions
    f = sp.lambdify([x], f, modules="numpy")
    u = sp.lambdify([x], u, modules="numpy")
    resolution = 401  # no of points in plot
    xcoor  = linspace(Omega[0], Omega[1], resolution)
    exact  = f(xcoor)
    approx = u(xcoor)
    plot(xcoor, approx)
    hold('on')
    plot(xcoor, exact)
    legend(['approximation', 'exact'])
    savefig(filename)

All code in module approx1D.py

Implementation of the least squares method; application

>>> from approx1D import *
>>> x = sp.Symbol('x')
>>> f = 10*(x-1)**2-1
>>> u, c = least_squares(f=f, psi=[1, x], Omega=[1, 2])
>>> comparison_plot(f, u, Omega=[1, 2])

Perfect approximation; parabola approximating parabola

  • What if we add \( \baspsi_2=x^2 \) to the space \( V \)?
  • That is, approximating a parabola by any parabola?
  • (Hopefully we get the exact parabola!)

>>> from approx1D import *
>>> x = sp.Symbol('x')
>>> f = 10*(x-1)**2-1
>>> u, c = least_squares(f=f, psi=[1, x, x**2], Omega=[1, 2])
>>> print u
10*x**2 - 20*x + 9
>>> print sp.expand(f)
10*x**2 - 20*x + 9

Perfect approximation; the general result

  • What if we use \( \baspsi_i(x)=x^i \) for \( i=0,\ldots,N=40 \)?
  • The output from least_squares is \( c_i=0 \) for \( i>2 \)

General result.

If \( f\in V \), least squares and projection/Galerkin give \( u=f \).

Perfect approximation; proof of the general result

If \( f\in V \), \( f=\sum_{j\in\If}d_j\baspsi_j \), for some \( \sequencei{d} \). Then

 
$$ \begin{equation*} b_i = (f,\baspsi_i) = \sum_{j\in\If}d_j(\baspsi_j, \baspsi_i) = \sum_{j\in\If} d_jA_{i,j} \end{equation*} $$

 
The linear system \( \sum_j A_{i,j}c_j = b_i \), \( i\in\If \), is then

 
$$ \begin{equation*} \sum_{j\in\If}c_jA_{i,j} = \sum_{j\in\If}d_jA_{i,j},\quad i\in\If \end{equation*} $$

 
which implies that \( c_i=d_i \) for \( i\in\If \) and \( u \) is identical to \( f \).

Finite-precision/numerical computations; question

The previous computations were symbolic. What if we solve the linear system numerically with standard arrays?

That is, \( f \) is parabola, but we approximate with

 
$$ u(x) = c_0 + c_1x + c_2x^2 + c_3x^3 +\cdots + c_Nx^N $$

 

We expect \( c_2=c_3=\cdots=c_N=0 \) since \( f\in V \) implies \( u=f \).

Will we get this result with finite precision computer arithmetic?

Finite-precision/numerical computations; results

exact      sympy    numpy32    numpy64
    9       9.62       5.57       8.98
  -20     -23.39      -7.65     -19.93
   10      17.74      -4.50       9.96
    0      -9.19       4.13      -0.26
    0       5.25       2.99       0.72
    0       0.18      -1.21      -0.93
    0      -2.48      -0.41       0.73
    0       1.81     -0.013      -0.36
    0      -0.66       0.08       0.11
    0       0.12       0.04      -0.02
    0      -0.001     -0.02      0.002

  • Column 2: matrix and lu_solve from sympy.mpmath.fp
  • Column 3: numpy matrix with 4-byte floats
  • Column 4: numpy matrix with 8-byte floats

The ill-conditioning is due to almost linearly dependent basis functions for large \( N \)

  • Significant round-off errors in the numerical computations (!)
  • But if we plot the approximations they look good (!)

Source of the problem: the \( x^i \) functions become almost linearly dependent as \( i \) grows:

Ill-conditioning: general conclusions

  • Almost linearly dependent basis functions give almost singular matrices
  • Such matrices are said to be ill conditioned, and Gaussian elimination is severely affected by round-off errors
  • The basis \( 1, x, x^2, x^3, x^4, \ldots \) is a bad basis
  • Polynomials are fine as basis, but the more orthogonal they are, \( (\baspsi_i,\baspsi_j)\approx 0 \), the better

Fourier series approximation; problem and code

Let's approximate \( f \) by a typical Fourier series expansion

 
$$ u(x) = \sum_i a_i\sin i\pi x = \sum_{j=0}^Nc_j\sin((j+1)\pi x) $$

 

which means that

 
$$ \begin{equation*} V = \hbox{span}\,\{ \sin \pi x, \sin 2\pi x,\ldots,\sin (N+1)\pi x\} \end{equation*} $$

 

Computations using the least_squares function:

N = 3
from sympy import sin, pi, Symbol
x = Symbol('x')
psi = [sin(pi*(i+1)*x) for i in range(N+1)]
f = 10*(x-1)**2 - 1
Omega = [0, 1]
u, c = least_squares(f, psi, Omega)
comparison_plot(f, u, Omega)

Fourier series approximation; plot

Left: \( N=3 \), right: \( N=11 \):

Problem:

All \( \baspsi_i(0)=0 \) and hence \( u(0)=0 \neq f(0)=9 \). Similar problem at \( x=1 \). The boundary values of \( u \) are always wrong!

Fourier series approximation; improvements

  • Considerable improvement with \( N=11 \), but still an undesired discrepancy at \( x=0 \) and \( x=1 \)
  • Possible remedy: add a term that leads to correct boundary values

 
$$ u(x) = {\color{red}f(0)(1-x) + xf(1)} + \sum_{j\in\If} c_j\baspsi_j(x) $$

 
The extra terms ensure \( u(0)=f(0) \) and \( u(1)=f(1) \) and give a striking improvement of the approximation!

Fourier series approximation; final results

\( N=3 \) vs \( N=11 \):

Orthogonal basis functions

This choice of sine functions as basis functions is popular because

  • the basis functions are orthogonal: \( (\baspsi_i,\baspsi_j)=0 \)
  • implying that \( A_{i,j} \) is a diagonal matrix
  • implying that we can solve for \( c_i = 2\int_0^1 f(x)\sin ((i+1)\pi x) dx \)
  • and what we get is the standard Fourier sine series of \( f \)

In general, for an orthogonal basis, \( A_{i,j} \) is diagonal and we can easily solve for \( c_i \):

 
$$ c_i = \frac{b_i}{A_{i,i}} = \frac{(f,\baspsi_i)}{(\baspsi_i,\baspsi_i)} $$

 

Function for the least squares method with orthogonal basis functions

def least_squares_orth(f, psi, Omega):
    N = len(psi) - 1
    A = [0]*(N+1)
    b = [0]*(N+1)
    x = sp.Symbol('x')
    for i in range(N+1):
        A[i] = sp.integrate(psi[i]**2, (x, Omega[0], Omega[1]))
        b[i] = sp.integrate(psi[i]*f,  (x, Omega[0], Omega[1]))
    c = [b[i]/A[i] for i in range(len(b))]
    u = 0
    for i in range(len(psi)):
        u += c[i]*psi[i]
    return u, c

Function for the least squares method with orthogonal basis functions; symbolic and numerical integration

Extensions:

  • We can choose between symbolic or numerical integration (symbolic argument).
  • If symbolic, we fall back on numerical integration after failure (sp.Integral is returned from sp.integrate).

def least_squares_orth(f, psi, Omega, symbolic=True):
    ...
    for i in range(N+1):
        # Diagonal matrix term
        A[i] = sp.integrate(psi[i]**2, (x, Omega[0], Omega[1]))

        # Right-hand side term
        integrand = psi[i]*f
        if symbolic:
            I = sp.integrate(integrand,  (x, Omega[0], Omega[1]))
        if not symbolic or isinstance(I, sp.Integral):
            print 'numerical integration of', integrand
            integrand = sp.lambdify([x], integrand)
            I = sp.mpmath.quad(integrand, [Omega[0], Omega[1]])
        b[i] = I
    ...

Assumption above: \( \int_\Omega\baspsi_i^2dx \) works symbolically (but there is no guarantee!)

The collocation or interpolation method; ideas and math

Here is another idea for approximating \( f(x) \) by \( u(x)=\sum_jc_j\baspsi_j \):

  • Force \( u(\xno{i}) = f(\xno{i}) \) at some selected collocation points \( \sequencei{x} \)
  • Then \( u \) is said to interpolate \( f \)
  • The method is known as interpolation or collocation

 
$$ u(\xno{i}) = \sum_{j\in\If} c_j \baspsi_j(\xno{i}) = f(\xno{i}),\quad i\in\If $$

 

This is a linear system with no need for integration:

 
$$ \begin{align} \sum_{j\in\If} A_{i,j}c_j &= b_i,\quad i\in\If\\ A_{i,j} &= \baspsi_j(\xno{i})\\ b_i &= f(\xno{i}) \end{align} $$

 

No symmetric matrix: \( \baspsi_j(\xno{i})\neq \baspsi_i(\xno{j}) \) in general

The collocation or interpolation method; implementation

points holds the interpolation/collocation points

def interpolation(f, psi, points):
    N = len(psi) - 1
    A = sp.zeros((N+1, N+1))
    b = sp.zeros((N+1, 1))
    x = sp.Symbol('x')
    # Turn psi and f into Python functions
    psi = [sp.lambdify([x], psi[i]) for i in range(N+1)]
    f = sp.lambdify([x], f)
    for i in range(N+1):
        for j in range(N+1):
            A[i,j] = psi[j](points[i])
        b[i,0] = f(points[i])
    c = A.LUsolve(b)
    u = 0
    for i in range(len(psi)):
        u += c[i,0]*psi[i](x)
    return u

The collocation or interpolation method; approximating a parabola by linear functions

  • Potential difficulty: how to choose \( \xno{i} \)?
  • The results are sensitive to the points!

\( (4/3,5/3) \) vs \( (1,2) \):

Lagrange polynomials; motivation and ideas

Motivation:

  • The interpolation/collocation method avoids integration
  • With a diagonal matrix \( A_{i,j} = \baspsi_j(\xno{i}) \) we can solve the linear system by hand

The Lagrange interpolating polynomials \( \baspsi_j \) have the property that

 
$$ \baspsi_i(\xno{j}) =\delta_{ij},\quad \delta_{ij} = \left\lbrace\begin{array}{ll} 1, & i=j\\ 0, & i\neq j \end{array}\right. $$

 

Hence, \( c_i = f(x_i) \) and

 
$$ u(x) = \sum_{i\in\If} f(\xno{i})\baspsi_i(x) $$

 

  • Lagrange polynomials and interpolation/collocation look convenient
  • Lagrange polynomials are very much used in the finite element method

Lagrange polynomials; formula and code

 
$$ \baspsi_i(x) = \prod_{j=0,j\neq i}^N \frac{x-\xno{j}}{\xno{i}-\xno{j}} = \frac{x-x_0}{\xno{i}-x_0}\cdots\frac{x-\xno{i-1}}{\xno{i}-\xno{i-1}}\frac{x-\xno{i+1}}{\xno{i}-\xno{i+1}} \cdots\frac{x-x_N}{\xno{i}-x_N} $$

 

def Lagrange_polynomial(x, i, points):
    p = 1
    for k in range(len(points)):
        if k != i:
            p *= (x - points[k])/(points[i] - points[k])
    return p

Lagrange polynomials; successful example

Lagrange polynomials; a less successful example

Lagrange polynomials; oscillatory behavior

12 points, degree 11, plot of two of the Lagrange polynomials - note that they are zero at all points except one.

Problem: strong oscillations near the boundaries for larger \( N \) values.

Lagrange polynomials; remedy for strong oscillations

The oscillations can be reduced by a more clever choice of interpolation points, called the Chebyshev nodes:

 
$$ \xno{i} = \half (a+b) + \half(b-a)\cos\left( \frac{2i+1}{2(N+1)}\pi\right),\quad i=0,\ldots,N $$

 
on an interval \( [a,b] \).
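
A small function for computing these nodes (a sketch):

import numpy as np

def Chebyshev_nodes(a, b, N):
    """Return N+1 Chebyshev nodes on [a, b]."""
    i = np.arange(N+1)
    return 0.5*(a+b) + 0.5*(b-a)*np.cos((2*i + 1)*np.pi/(2*(N+1)))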

Lagrange polynomials; recalculation with Chebyshev nodes

Lagrange polynomials; less oscillations with Chebyshev nodes

12 points, degree 11, plot of two of the Lagrange polynomials - note that they are zero at all points except one.

Finite element basis functions

The basis functions have so far been global: \( \baspsi_i(x) \neq 0 \) almost everywhere

In the finite element method we use basis functions with local support

  • Local support: \( \baspsi_i(x) \neq 0 \) for \( x \) in a small subdomain of \( \Omega \)
  • Typically hat-shaped
  • \( u(x) \) based on these \( \baspsi_i \) is a piecewise polynomial defined over many (small) subdomains
  • We introduce \( \basphi_i \) as the name of these finite element hat functions (and for now choose \( \baspsi_i=\basphi_i \))

The linear combination of hat functions is a piecewise linear function

Elements and nodes

Split \( \Omega \) into non-overlapping subdomains called elements:

 
$$ \Omega = \Omega^{(0)}\cup \cdots \cup \Omega^{(N_e)} $$

 

On each element, introduce points called nodes: \( \xno{0},\ldots,\xno{N_n} \)

  • The finite element basis functions are named \( \basphi_i(x) \)
  • \( \basphi_i=1 \) at node \( i \) and 0 at all other nodes
  • \( \basphi_i \) is a Lagrange polynomial on each element
  • For nodes at the boundary between two elements, \( \basphi_i \) is made up of a Lagrange polynomial over each element

Example on elements with two nodes (P1 elements)

Data structure: nodes holds the coordinates of the nodes, elements holds the node numbers in each element

nodes = [0, 1.2, 2.4, 3.6, 4.8, 5]
elements = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]]

Illustration of two basis functions on the mesh

Example on elements with three nodes (P2 elements)

nodes = [0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]
elements = [[0, 1, 2], [2, 3, 4], [4, 5, 6], [6, 7, 8]]

Some corresponding basis functions (P2 elements)

Examples on elements with four nodes (P3 elements)

d = 3  # d+1 nodes per element
num_elements = 4
num_nodes = num_elements*d + 1
nodes = [i*0.5 for i in range(num_nodes)]
elements = [[i*d+j for j in range(d+1)] for i in range(num_elements)]

Some corresponding basis functions (P3 elements)

The numbering does not need to be regular from left to right

nodes = [1.5, 5.5, 4.2, 0.3, 2.2, 3.1]
elements = [[2, 1], [4, 5], [0, 4], [3, 0], [5, 2]]

Interpretation of the coefficients \( c_i \)

Important property: \( c_i \) is the value of \( u \) at node \( i \), \( \xno{i} \):

 
$$ u(\xno{i}) = \sum_{j\in\If} c_j\basphi_j(\xno{i}) = c_i\basphi_i(\xno{i}) = c_i $$

 

because \( \basphi_j(\xno{i}) =0 \) if \( i\neq j \) and \( \basphi_i(\xno{i}) =1 \)

Properties of the basis functions

  • \( \basphi_i(x) \neq 0 \) only on those elements that contain global node \( i \)
  • \( \basphi_i(x)\basphi_j(x) \neq 0 \) if and only if \( i \) and \( j \) are global node numbers in the same element

Since \( A_{i,j}=\int\basphi_i\basphi_j\dx \), most of the elements in the coefficient matrix will be zero

How to construct quadratic \( \basphi_i \) (P2 elements)

  1. Associate Lagrange polynomials with the nodes in an element
  2. When the polynomial is 1 on the element boundary, combine it with the polynomial in the neighboring element that is also 1 at the same point

Example on linear \( \basphi_i \) (P1 elements)

 
$$ \basphi_i(x) = \left\lbrace\begin{array}{ll} 0, & x < \xno{i-1}\\ (x - \xno{i-1})/h, & \xno{i-1} \leq x < \xno{i}\\ 1 - (x - \xno{i})/h, & \xno{i} \leq x < \xno{i+1}\\ 0, & x\geq \xno{i+1} \end{array} \right. $$
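
A direct translation of this formula to code (a sketch, assuming a uniform mesh with nodes \( \xno{i}=ih \)):

def phi_hat(i, x, h):
    """P1 hat function for node x_i = i*h on a uniform mesh."""
    if x < (i-1)*h or x >= (i+1)*h:
        return 0.0
    elif x < i*h:
        return (x - (i-1)*h)/h
    else:
        return 1 - (x - i*h)/h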

 

Example on cubic \( \basphi_i \) (P3 elements)

Calculating the linear system for \( c_i \)

Computing a specific matrix entry (1)

\( A_{2,3}=\int_\Omega\basphi_2\basphi_3 dx \): \( \basphi_2\basphi_3\neq 0 \) only over element 2. There,

 
$$ \basphi_3(x) = (x-x_2)/h,\quad \basphi_2(x) = 1- (x-x_2)/h$$

 

 
$$ A_{2,3} = \int_\Omega \basphi_2\basphi_{3}\dx = \int_{\xno{2}}^{\xno{3}} \left(1 - \frac{x - \xno{2}}{h}\right) \frac{x - x_{2}}{h} \dx = \frac{h}{6} $$

 

Computing a specific matrix entry (2)

 
$$ A_{2,2} = \int_{\xno{1}}^{\xno{2}} \left(\frac{x - \xno{1}}{h}\right)^2\dx + \int_{\xno{2}}^{\xno{3}} \left(1 - \frac{x - \xno{2}}{h}\right)^2\dx = \frac{2h}{3} $$
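
These integrals are easily checked with sympy (a minimal sketch; on a uniform mesh, \( \xno{1}=\xno{2}-h \) and \( \xno{3}=\xno{2}+h \)):

import sympy as sp
x, h, x_2 = sp.symbols('x h x_2')
A_23 = sp.integrate((1 - (x - x_2)/h)*(x - x_2)/h, (x, x_2, x_2 + h))
A_22 = sp.integrate(((x - x_2 + h)/h)**2, (x, x_2 - h, x_2)) + \
       sp.integrate((1 - (x - x_2)/h)**2, (x, x_2, x_2 + h))
print(sp.simplify(A_23), sp.simplify(A_22))   # h/6 2*h/3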

 

Calculating a general row in the matrix; figure

 
$$ A_{i,i-1} = \int_\Omega \basphi_i\basphi_{i-1}\dx = \hbox{?}$$

 

Calculating a general row in the matrix; details

 
$$ \begin{align*} A_{i,i-1} &= \int_\Omega \basphi_i\basphi_{i-1}\dx\\ &= \underbrace{\int_{\xno{i-2}}^{\xno{i-1}} \basphi_i\basphi_{i-1}\dx}_{\basphi_i=0} + \int_{\xno{i-1}}^{\xno{i}} \basphi_i\basphi_{i-1}\dx + \underbrace{\int_{\xno{i}}^{\xno{i+1}} \basphi_i\basphi_{i-1}\dx}_{\basphi_{i-1}=0}\\ &= \int_{\xno{i-1}}^{\xno{i}} \underbrace{\left(\frac{x - \xno{i-1}}{h}\right)}_{\basphi_i(x)} \underbrace{\left(1 - \frac{x - \xno{i-1}}{h}\right)}_{\basphi_{i-1}(x)} \dx = \frac{h}{6} \end{align*} $$

 

  • \( A_{i,i+1}=A_{i,i-1} \) due to symmetry
  • \( A_{i,i}=2h/3 \) (same calculation as for \( A_{2,2} \))
  • \( A_{0,0}=A_{N,N}=h/3 \) (only one element)

Calculation of the right-hand side

 
$$ b_i = \int_\Omega\basphi_i(x)f(x)\dx = \int_{\xno{i-1}}^{\xno{i}} \frac{x - \xno{i-1}}{h} f(x)\dx + \int_{x_{i}}^{\xno{i+1}} \left(1 - \frac{x - x_{i}}{h}\right) f(x) \dx $$

 

Need a specific \( f(x) \) to do more...

Specific example with two elements; linear system and solution

  • \( f(x)=x(1-x) \) on \( \Omega=[0,1] \)
  • Two equal-sized elements \( [0,0.5] \) and \( [0.5,1] \)

 
$$ \begin{equation*} A = \frac{h}{6}\left(\begin{array}{ccc} 2 & 1 & 0\\ 1 & 4 & 1\\ 0 & 1 & 2 \end{array}\right),\quad b = \frac{h^2}{12}\left(\begin{array}{c} 2 - h\\ 12 - 14h\\ 10 -17h \end{array}\right) \end{equation*} $$

 

 
$$ \begin{equation*} c_0 = \frac{h^2}{6},\quad c_1 = h - \frac{5}{6}h^2,\quad c_2 = 2h - \frac{23}{6}h^2 \end{equation*} $$
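
A quick symbolic check of this solution (a sketch using sympy):

import sympy as sp
h = sp.Symbol('h')
A = h/6*sp.Matrix([[2, 1, 0], [1, 4, 1], [0, 1, 2]])
b = h**2/12*sp.Matrix([2 - h, 12 - 14*h, 10 - 17*h])
c = A.LUsolve(b).applyfunc(sp.simplify)
print(c.T)   # [h**2/6, h - 5*h**2/6, 2*h - 23*h**2/6]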

 

Specific example with two elements; plot

 
$$ \begin{equation*} u(x)=c_0\basphi_0(x) + c_1\basphi_1(x) + c_2\basphi_2(x)\end{equation*} $$

 

Specific example with four elements; plot

Specific example: what about P2 elements?

Recall: if \( f\in V \), \( u \) becomes exact. When \( f \) is a parabola, any choice of P2 elements (1 or many) will give \( u=f \) exactly. The same is true for P3, P4, ... elements since all of them can represent a 2nd-degree polynomial exactly.

Assembly of elementwise computations

Split the integrals into elementwise integrals

 
$$ A_{i,j} = \int_\Omega\basphi_i\basphi_jdx = \sum_{e} \int_{\Omega^{(e)}} \basphi_i\basphi_jdx,\quad A^{(e)}_{i,j}=\int_{\Omega^{(e)}} \basphi_i\basphi_jdx $$

 

Important observations:

  • \( A^{(e)}_{i,j}\neq 0 \) if and only if \( i \) and \( j \) are nodes in element \( e \) (otherwise no overlap between the basis functions)
  • All the nonzero elements in \( A^{(e)}_{i,j} \) are collected in an element matrix
  • The element matrix has contributions from the \( \basphi_i \) functions associated with the nodes in the element
  • It is convenient to introduce a local numbering of the nodes in an element: \( 0,1,\ldots,d \)

The element matrix and local vs global node numbers

 
$$ \tilde A^{(e)} = \{ \tilde A^{(e)}_{r,s}\},\quad \tilde A^{(e)}_{r,s} = \int_{\Omega^{(e)}}\basphi_{q(e,r)}\basphi_{q(e,s)}dx, \quad r,s\in\Ifd=\{0,\ldots,d\} $$

 

Now,

  • \( r,s \) run over local node numbers in an element: \( 0, 1,\ldots, d \)
  • \( i,j \) run over global node numbers \( i,j\in\If = \{0,1,\ldots,N\} \)
  • \( i=q(e,r) \): mapping of local node number \( r \) in element \( e \) to the global node number \( i \) (math equivalent to i=elements[e][r])
  • Add \( \tilde A^{(e)}_{r,s} \) into the global \( A_{i,j} \) (assembly)

 
$$ A_{q(e,r),q(e,s)} := A_{q(e,r),q(e,s)} + \tilde A^{(e)}_{r,s},\quad r,s\in\Ifd $$

 

Illustration of the matrix assembly: regularly numbered P1 elements

Animation

Illustration of the matrix assembly: regularly numbered P3 elements

Animation

Illustration of the matrix assembly: irregularly numbered P1 elements

Animation

Assembly of the right-hand side

 
$$ b_i = \int_\Omega f(x)\basphi_i(x)dx = \sum_{e} \int_{\Omega^{(e)}} f(x)\basphi_i(x)dx,\quad b^{(e)}_{i}=\int_{\Omega^{(e)}} f(x)\basphi_i(x)dx $$

 

Important observations:

  • \( b_i^{(e)}\neq 0 \) if and only if global node \( i \) is a node in element \( e \) (otherwise \( \basphi_i=0 \))
  • The \( d+1 \) nonzero \( b_i^{(e)} \) can be collected in an element vector \( \tilde b^{(e)}=\{ \tilde b^{(e)}_r\} \), \( r\in\Ifd \)

Assembly:

 
$$ b_{q(e,r)} := b_{q(e,r)} + \tilde b^{(e)}_{r},\quad r\in\Ifd $$

 

Mapping to a reference element

Instead of computing

 
$$ \begin{equation*} \tilde A^{(e)}_{r,s} = \int_{\Omega^{(e)}}\basphi_{q(e,r)}(x)\basphi_{q(e,s)}(x)dx = \int_{x_L}^{x_R}\basphi_{q(e,r)}(x)\basphi_{q(e,s)}(x)dx \end{equation*} $$

 
we now map \( [x_L, x_R] \) to a standardized reference element domain \( [-1,1] \) with local coordinate \( X \)

We use affine mapping: linear stretch of \( X\in [-1,1] \) to \( x\in [x_L,x_R] \)

 
$$ x = \half (x_L + x_R) + \half (x_R - x_L)X $$

 
or rewritten as

 
$$ x = x_m + {\half}hX, \qquad x_m=(x_L+x_R)/2,\quad h=x_R-x_L $$
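
In code, the mapping is a one-liner (a minimal sketch):

def affine_mapping(X, Omega_e):
    """Map local X in [-1,1] to global x in the element Omega_e."""
    x_L, x_R = Omega_e
    return 0.5*(x_L + x_R) + 0.5*(x_R - x_L)*X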

 

Integral transformation

Reference element integration: just change integration variable from \( x \) to \( X \). Introduce local basis function

 
$$ \refphi_r(X) = \basphi_{q(e,r)}(x(X)) $$

 

 
$$ \tilde A^{(e)}_{r,s} = \int_{\Omega^{(e)}}\basphi_{q(e,r)}(x)\basphi_{q(e,s)}(x)dx = \int\limits_{-1}^1 \refphi_r(X)\refphi_s(X)\underbrace{\frac{dx}{dX}}_{\det J = h/2}dX = \int\limits_{-1}^1 \refphi_r(X)\refphi_s(X)\det J\,dX $$

 

 
$$ \tilde b^{(e)}_{r} = \int_{\Omega^{(e)}}f(x)\basphi_{q(e,r)}(x)dx = \int\limits_{-1}^1 f(x(X))\refphi_r(X)\det J\,dX $$

 

Advantages of the reference element

  • Always the same domain for integration: \( [-1,1] \)
  • We only need formulas for \( \refphi_r(X) \) over one element (no piecewise polynomial definition)
  • \( \refphi_r(X) \) is the same for all elements: no dependence on element length and location, which is "factored out" in the mapping and \( \det J \)

Standardized basis functions for P1 elements

 
$$ \begin{align} \refphi_0(X) &= \half (1 - X) \tag{4}\\ \refphi_1(X) &= \half (1 + X) \tag{5} \end{align} $$

 

Note: simple polynomial expressions (no need to consider piecewisely defined functions)

Standardized basis functions for P2 elements

 
$$ \begin{align} \refphi_0(X) &= \half (X-1)X\\ \refphi_1(X) &= 1 - X^2\\ \refphi_2(X) &= \half (X+1)X \end{align} $$

 

Easy to generalize to arbitrary order!

How to find the polynomial expressions?

Three alternatives:

  1. Map the global basis function \( \basphi_i(x) \) over an element to \( X \) coordinates
  2. Compute \( \refphi_r(X) \) from scratch using
    1. a given polynomial order \( d \)
    2. \( \refphi_r(X)=1 \) at local node \( r \)
    3. \( \refphi_r(X)=0 \) at all other local nodes

  3. Use formulas for Lagrange interpolating polynomials on the element

Integration over a reference element; element matrix

P1 elements and \( f(x)=x(1-x) \).

 
$$ \begin{align} \tilde A^{(e)}_{0,0} &= \int_{-1}^1 \refphi_0(X)\refphi_0(X)\frac{h}{2} dX\nonumber\\ &=\int_{-1}^1 \half(1-X)\half(1-X) \frac{h}{2} dX = \frac{h}{8}\int_{-1}^1 (1-X)^2 dX = \frac{h}{3} \tag{6}\\ \tilde A^{(e)}_{1,0} &= \int_{-1}^1 \refphi_1(X)\refphi_0(X)\frac{h}{2} dX\nonumber\\ &=\int_{-1}^1 \half(1+X)\half(1-X) \frac{h}{2} dX = \frac{h}{8}\int_{-1}^1 (1-X^2) dX = \frac{h}{6}\\ \tilde A^{(e)}_{0,1} &= \tilde A^{(e)}_{1,0} \tag{7}\\ \tilde A^{(e)}_{1,1} &= \int_{-1}^1 \refphi_1(X)\refphi_1(X)\frac{h}{2} dX\nonumber\\ &=\int_{-1}^1 \half(1+X)\half(1+X) \frac{h}{2} dX = \frac{h}{8}\int_{-1}^1 (1+X)^2 dX = \frac{h}{3} \tag{8} \end{align} $$

 

Integration over a reference element; element vector

 
$$ \begin{align} \tilde b^{(e)}_{0} &= \int_{-1}^1 f(x(X))\refphi_0(X)\frac{h}{2} dX\nonumber\\ &= \int_{-1}^1 (x_m + \half hX)(1-(x_m + \half hX)) \half(1-X)\frac{h}{2} dX \nonumber\\ &= - \frac{1}{24} h^{3} + \frac{1}{6} h^{2} x_{m} - \frac{1}{12} h^{2} - \half h x_{m}^{2} + \half h x_{m} \tag{9}\\ \tilde b^{(e)}_{1} &= \int_{-1}^1 f(x(X))\refphi_1(X)\frac{h}{2} dX\nonumber\\ &= \int_{-1}^1 (x_m + \half hX)(1-(x_m + \half hX)) \half(1+X)\frac{h}{2} dX \nonumber\\ &= - \frac{1}{24} h^{3} - \frac{1}{6} h^{2} x_{m} + \frac{1}{12} h^{2} - \half h x_{m}^{2} + \half h x_{m} \end{align} $$

 

\( x_m \): element midpoint.

Tedious calculations! Let's use symbolic software

>>> import sympy as sp
>>> x, x_m, h, X = sp.symbols('x x_m h X')
>>> sp.integrate(h/8*(1-X)**2, (X, -1, 1))
h/3
>>> sp.integrate(h/8*(1+X)*(1-X), (X, -1, 1))
h/6
>>> x = x_m + h/2*X
>>> b_0 = sp.integrate(h/4*x*(1-x)*(1-X), (X, -1, 1))
>>> print b_0
-h**3/24 + h**2*x_m/6 - h**2/12 - h*x_m**2/2 + h*x_m/2

Can print out in LaTeX too (convenient for copying into reports):

>>> print sp.latex(b_0, mode='plain')
- \frac{1}{24} h^{3} + \frac{1}{6} h^{2} x_{m}
- \frac{1}{12} h^{2} - \half h x_{m}^{2}
+ \half h x_{m}

Implementation

  • Coming functions appear in fe_approx1D.py
  • Functions can operate in symbolic or numeric mode
  • The code documents all steps in finite element calculations!

Compute finite element basis functions in the reference element

Let \( \refphi_r(X) \) be a Lagrange polynomial of degree d:

import sympy as sp
import numpy as np

def phi_r(r, X, d):
    if isinstance(X, sp.Symbol):
        h = sp.Rational(1, d)  # node spacing
        nodes = [2*i*h - 1 for i in range(d+1)]
    else:
        # assume X is numeric: use floats for nodes
        nodes = np.linspace(-1, 1, d+1)
    return Lagrange_polynomial(X, r, nodes)

def Lagrange_polynomial(x, i, points):
    p = 1
    for k in range(len(points)):
        if k != i:
            p *= (x - points[k])/(points[i] - points[k])
    return p

def basis(d=1):
    """Return the complete basis."""
    X = sp.Symbol('X')
    phi = [phi_r(r, X, d) for r in range(d+1)]
    return phi

Compute the element matrix

def element_matrix(phi, Omega_e, symbolic=True):
    n = len(phi)
    A_e = sp.zeros((n, n))
    X = sp.Symbol('X')
    if symbolic:
        h = sp.Symbol('h')
    else:
        h = Omega_e[1] - Omega_e[0]
    detJ = h/2  # dx/dX
    for r in range(n):
        for s in range(r, n):
            A_e[r,s] = sp.integrate(phi[r]*phi[s]*detJ, (X, -1, 1))
            A_e[s,r] = A_e[r,s]
    return A_e

Example on symbolic vs numeric element matrix

>>> from fe_approx1D import *
>>> phi = basis(d=1)
>>> phi
[1/2 - X/2, 1/2 + X/2]
>>> element_matrix(phi, Omega_e=[0.1, 0.2], symbolic=True)
[h/3, h/6]
[h/6, h/3]
>>> element_matrix(phi, Omega_e=[0.1, 0.2], symbolic=False)
[0.0333333333333333, 0.0166666666666667]
[0.0166666666666667, 0.0333333333333333]

Compute the element vector

def element_vector(f, phi, Omega_e, symbolic=True):
    n = len(phi)
    b_e = sp.zeros((n, 1))
    # Make f a function of X
    X = sp.Symbol('X')
    if symbolic:
        h = sp.Symbol('h')
    else:
        h = Omega_e[1] - Omega_e[0]
    x = (Omega_e[0] + Omega_e[1])/2 + h/2*X  # mapping
    f = f.subs('x', x)  # substitute mapping formula for x
    detJ = h/2  # dx/dX
    for r in range(n):
        b_e[r] = sp.integrate(f*phi[r]*detJ, (X, -1, 1))
    return b_e

Note f.subs('x', x): replace x by \( x(X) \) such that f contains X

Fallback on numerical integration if symbolic integration of \( \int f\refphi_r dx \) fails

  • Element matrix: the integrand contains only polynomials, so sympy always succeeds
  • Element vector: \( \int f\refphi_r \dx \) can fail (sympy then returns an Integral object instead of a number)

def element_vector(f, phi, Omega_e, symbolic=True):
        ...
        I = sp.integrate(f*phi[r]*detJ, (X, -1, 1))  # try...
        if isinstance(I, sp.Integral):
            h = Omega_e[1] - Omega_e[0]  # Ensure h is numerical
            detJ = h/2
            integrand = sp.lambdify([X], f*phi[r]*detJ)
            I = sp.mpmath.quad(integrand, [-1, 1])
        b_e[r] = I
        ...

Linear system assembly and solution

def assemble(nodes, elements, phi, f, symbolic=True):
    N_n, N_e = len(nodes), len(elements)
    zeros = sp.zeros if symbolic else np.zeros
    A = zeros((N_n, N_n))
    b = zeros((N_n, 1))
    for e in range(N_e):
        Omega_e = [nodes[elements[e][0]], nodes[elements[e][-1]]]

        A_e = element_matrix(phi, Omega_e, symbolic)
        b_e = element_vector(f, phi, Omega_e, symbolic)

        for r in range(len(elements[e])):
            for s in range(len(elements[e])):
                A[elements[e][r],elements[e][s]] += A_e[r,s]
            b[elements[e][r]] += b_e[r]
    return A, b

Linear system solution

if symbolic:
    c = A.LUsolve(b)           # sympy arrays, symbolic Gaussian elim.
else:
    c = np.linalg.solve(A, b)  # numpy arrays, numerical solve

Note: the symbolic computation of A, b and A.LUsolve(b) can be very tedious.

Example on computing symbolic approximations

>>> h, x = sp.symbols('h x')
>>> nodes = [0, h, 2*h]
>>> elements = [[0, 1], [1, 2]]
>>> phi = basis(d=1)
>>> f = x*(1-x)
>>> A, b = assemble(nodes, elements, phi, f, symbolic=True)
>>> A
[h/3,   h/6,   0]
[h/6, 2*h/3, h/6]
[  0,   h/6, h/3]
>>> b
[     h**2/6 - h**3/12]
[      h**2 - 7*h**3/6]
[5*h**2/6 - 17*h**3/12]
>>> c = A.LUsolve(b)
>>> c
[                           h**2/6]
[12*(7*h**2/12 - 35*h**3/72)/(7*h)]
[  7*(4*h**2/7 - 23*h**3/21)/(2*h)]

Example on computing numerical approximations

>>> nodes = [0, 0.5, 1]
>>> elements = [[0, 1], [1, 2]]
>>> phi = basis(d=1)
>>> x = sp.Symbol('x')
>>> f = x*(1-x)
>>> A, b = assemble(nodes, elements, phi, f, symbolic=False)
>>> A
[ 0.166666666666667, 0.0833333333333333,                  0]
[0.0833333333333333,  0.333333333333333, 0.0833333333333333]
[                 0, 0.0833333333333333,  0.166666666666667]
>>> b
[          0.03125]
[0.104166666666667]
[          0.03125]
>>> c = A.LUsolve(b)
>>> c
[0.0416666666666666]
[ 0.291666666666667]
[0.0416666666666666]

The structure of the coefficient matrix

>>> d=1; N_e=8; Omega=[0,1]  # 8 linear elements on [0,1]
>>> phi = basis(d)
>>> f = x*(1-x)
>>> nodes, elements = mesh_symbolic(N_e, d, Omega)
>>> A, b = assemble(nodes, elements, phi, f, symbolic=True)
>>> A
[h/3,   h/6,     0,     0,     0,     0,     0,     0,   0]
[h/6, 2*h/3,   h/6,     0,     0,     0,     0,     0,   0]
[  0,   h/6, 2*h/3,   h/6,     0,     0,     0,     0,   0]
[  0,     0,   h/6, 2*h/3,   h/6,     0,     0,     0,   0]
[  0,     0,     0,   h/6, 2*h/3,   h/6,     0,     0,   0]
[  0,     0,     0,     0,   h/6, 2*h/3,   h/6,     0,   0]
[  0,     0,     0,     0,     0,   h/6, 2*h/3,   h/6,   0]
[  0,     0,     0,     0,     0,     0,   h/6, 2*h/3, h/6]
[  0,     0,     0,     0,     0,     0,     0,   h/6, h/3]

Note: do this by hand to understand what is going on!

General result: the coefficient matrix is sparse

  • Sparse = most of the entries are zeros
  • Below: P1 elements

 
$$ A = \frac{h}{6} \left( \begin{array}{cccccccccc} 2 & 1 & 0 &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ 1 & 4 & 1 & \ddots & & & & & \vdots \\ 0 & 1 & 4 & 1 & \ddots & & & & \vdots \\ \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & 0 & 1 & 4 & 1 & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ \vdots & & & & &\ddots & 1 & 4 & 1 \\ 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & 1 & 2 \end{array} \right) $$

 

Exemplifying the sparsity for P2 elements

 
$$ A = \frac{h}{30} \left( \begin{array}{ccccccccc} 4 & 2 & - 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 2 & 16 & 2 & 0 & 0 & 0 & 0 & 0 & 0\\- 1 & 2 & 8 & 2 & - 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 2 & 16 & 2 & 0 & 0 & 0 & 0\\ 0 & 0 & - 1 & 2 & 8 & 2 & - 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 2 & 16 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & - 1 & 2 & 8 & 2 & - 1 \\0 & 0 & 0 & 0 & 0 & 0 & 2 & 16 & 2\\0 & 0 & 0 & 0 & 0 & 0 & - 1 & 2 & 4 \end{array} \right) $$

 

Matrix sparsity pattern for regular/random numbering of P1 elements

  • Left: number nodes and elements from left to right
  • Right: number nodes and elements arbitrarily

Matrix sparsity pattern for regular/random numbering of P3 elements

  • Left: number nodes and elements from left to right
  • Right: number nodes and elements arbitrarily

Sparse matrix storage and solution

The minimum storage requirements for the coefficient matrix \( A_{i,j} \):

  • P1 elements: only 3 nonzero entries per row
  • P2 elements: only 5 nonzero entries per row
  • P3 elements: only 7 nonzero entries per row
  • It is important to utilize sparse storage and sparse solvers
  • In Python: the scipy.sparse package (see the sketch below)
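
A minimal sketch of sparse storage and solution for the P1 matrix (the right-hand side here is just a placeholder):

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

N = 8; h = 1.0/N
# P1 matrix (h/6)*tridiag(1, 4, 1), with 2 in the corners
main = h/6*np.array([2.] + [4.]*(N-1) + [2.])
off = h/6*np.ones(N)
A = scipy.sparse.diags([off, main, off], [-1, 0, 1], format='csr')
b = np.ones(N+1)                      # placeholder right-hand side
c = scipy.sparse.linalg.spsolve(A, b)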

Approximate \( f\sim x^9 \) by various elements; code

Compute a mesh with \( N_e \) elements, basis functions of degree \( d \), and approximate a given symbolic expression \( f(x) \) by a finite element expansion \( u(x) = \sum_jc_j\basphi_j(x) \):

import sympy as sp
from fe_approx1D import approximate
x = sp.Symbol('x')

approximate(f=x*(1-x)**8, symbolic=False, d=1, N_e=4)
approximate(f=x*(1-x)**8, symbolic=False, d=2, N_e=2)
approximate(f=x*(1-x)**8, symbolic=False, d=1, N_e=8)
approximate(f=x*(1-x)**8, symbolic=False, d=2, N_e=4)

Approximate \( f\sim x^9 \) by various elements; plot

Comparison of finite element and finite difference approximation

  • Finite difference approximation \( u_i \) of a function \( f(x) \): simply choose \( u_i = f(x_i) \)
  • This is the same as \( u\approx\sum_i c_i\basphi_i \) + interpolation
    (see next slide)
  • \( u\approx\sum_i c_i\basphi_i \) + Galerkin/projection or least squares method: must derive and solve a linear system
  • What is really the difference in the approximation \( u \)?

Interpolation/collocation with finite elements

Let \( \{\xno{i}\}_{i\in\If} \) be the nodes in the mesh. Collocation/interpolation means

 
$$ u(\xno{i})=f(\xno{i}),\quad i\in\If, $$

 
which translates to

 
$$ \sum_{j\in\If} c_j \basphi_j(\xno{i}) = f(\xno{i}),$$

 
but \( \basphi_j(\xno{i})=0 \) if \( i\neq j \) so the sum collapses to one term \( c_i\basphi_i(\xno{i}) = c_i \), and we have the result

 
$$ c_i = f(\xno{i}) $$

 

Same result as the standard finite difference approach, but finite elements define \( u \) also between the \( \xno{i} \) points

Galerkin/projection and least squares vs collocation/interpolation or finite differences

  • Scope: work with P1 elements
  • Use projection/Galerkin or least squares (equivalent)
  • Interpret the resulting linear system as finite difference equations

The P1 finite element machinery results in a linear system where equation no \( i \) is

 
$$ \frac{h}{6}(u_{i-1} + 4u_i + u_{i+1}) = (f,\basphi_i) $$

 

Note:

  • We have used \( u_i \) for \( c_i \) to make notation similar to finite differences
  • The finite difference counterpart is just \( u_i=f_i \)

Expressing the left-hand side in finite difference operator notation

Rewrite the left-hand side of finite element equation no \( i \):

 
$$ h(u_i + \frac{1}{6}(u_{i-1} - 2u_i + u_{i+1})) = [h(u + \frac{h^2}{6}D_x D_x u)]_i $$

 
This is the standard finite difference approximation of

 
$$ h(u + \frac{h^2}{6}u'')$$

 

Treating the right-hand side; Trapezoidal rule

 
$$ (f,\basphi_i) = \int_{\xno{i-1}}^{\xno{i}} f(x)\frac{1}{h} (x - \xno{i-1}) dx + \int_{\xno{i}}^{\xno{i+1}} f(x)\left(1 - \frac{x - \xno{i}}{h}\right) dx $$

 
Cannot do much unless we specialize \( f \) or use numerical integration.

Trapezoidal rule using the nodes:

 
$$ (f,\basphi_i) = \int_\Omega f\basphi_i dx\approx h\half( f(\xno{0})\basphi_i(\xno{0}) + f(\xno{N})\basphi_i(\xno{N})) + h\sum_{j=1}^{N-1} f(\xno{j})\basphi_i(\xno{j}) $$

 
\( \basphi_i(\xno{j})=\delta_{ij} \), so this formula collapses to one term:

 
$$ (f,\basphi_i) \approx hf(\xno{i}),\quad i=1,\ldots,N-1\thinspace. $$

 

Same result as in collocation (interpolation) and the finite difference method!

Treating the right-hand side; Simpson's rule

 
$$ \int_\Omega g(x)dx \approx \frac{h}{6}\left( g(\xno{0}) + 2\sum_{j=1}^{N-1} g(\xno{j}) + 4\sum_{j=0}^{N-1} g(\xno{j+\half}) + g(\xno{N})\right), $$

 
Our case: \( g=f\basphi_i \). The sums collapse because \( \basphi_i=0 \) at most of the points.

 
$$ (f,\basphi_i) \approx \frac{h}{3}(f_{i-\half} + f_i + f_{i+\half}) $$

 

Conclusions:

  • While the finite difference method just samples \( f \) at \( x_i \), the finite element method applies an average (smoothing) of \( f \) around \( x_i \)
  • On the left-hand side we have a term \( \sim hu'' \), and \( u'' \) also contributes to smoothing
  • There is some inherent smoothing in the finite element method

Finite element approximation vs finite differences

With Trapezoidal integration of \( (f,\basphi_i) \), the finite element method essentially solves

 
$$ u + \frac{h^2}{6} u'' = f,\quad u'(0)=u'(L)=0, $$

 
by the finite difference method

 
$$ [u + \frac{h^2}{6} D_x D_x u = f]_i $$

 

With Simpson integration of \( (f,\basphi_i) \) we essentially solve

 
$$ [u + \frac{h^2}{6} D_x D_x u = \bar f]_i, $$

 
where

 
$$ \bar f_i = \frac{1}{3}(f_{i-1/2} + f_i + f_{i+1/2}) $$

 

Note: as \( h\rightarrow 0 \), \( hu''\rightarrow 0 \) and \( \bar f_i\rightarrow f_i \).

Making finite elements behave as finite differences

  • Can we adjust the finite element method so that we do not get the extra \( hu'' \) smoothing term and averaging of \( f \)?
  • This allows finite elements to inherit (desired) properties of finite differences

Result:

  • Compute all integrals by the Trapezoidal method and P1 elements
  • Specifically, the coefficient matrix becomes diagonal ("lumped") - no linear system (!)
  • Loss of accuracy? The Trapezoidal rule has error \( \Oof{h^2} \), the same as the approximation error in P1 elements

Limitations of the nodes and element concepts

So far,

  • Nodes: points for defining \( \basphi_i \) and computing \( u \) values
  • Elements: subdomain (containing a few nodes)
  • This is a common notion of nodes and elements

One problem:

  • Our algorithms need nodes at the element boundaries
  • This is often not desirable, so we need to throw the nodes and elements arrays away and find a more generalized element concept

The generalized element concept has cells, vertices, nodes, and degrees of freedom

  • We introduce cell for the subdomain that we up to now called element
  • A cell has vertices (interval end points)
  • Nodes are, almost as before, points where we want to compute unknown functions
  • Degrees of freedom are what the \( c_j \) represent (usually function values at nodes)

The concept of a finite element

  1. a reference cell in a local reference coordinate system
  2. a set of basis functions \( \refphi_r \) defined on the cell
  3. a set of degrees of freedom (e.g., function values) that uniquely determine the basis functions such that \( \refphi_r=1 \) for degree of freedom number \( r \) and \( \refphi_r=0 \) for all other degrees of freedom
  4. a mapping between local and global degree of freedom numbers (dof map)
  5. a geometric mapping of the reference cell onto the cell in the physical domain: \( [-1,1]\ \Rightarrow\ [x_L,x_R] \)

Basic data structures: vertices, cells, dof_map

  • Cell vertex coordinates: vertices (equals nodes for P1 elements)
  • Element vertices: cells[e][r] holds global vertex number of local vertex no r in element e (same as elements for P1 elements)
  • dof_map[e,r] maps local dof r in element e to global dof number (same as elements for Pd elements)

The assembly process now applies dof_map:

A[dof_map[e][r], dof_map[e][s]] += A_e[r,s]
b[dof_map[e][r]] += b_e[r]

Example: data structures for P2 elements

vertices = [0, 0.4, 1]
cells = [[0, 1], [1, 2]]
dof_map = [[0, 1, 2], [2, 3, 4]]

Example: P0 elements

Example: Same mesh, but \( u \) is piecewise constant in each cell (P0 element). Same vertices and cells, but

dof_map = [[0], [1]]

May think of one node in the middle of each element.

Note:

We will hereafter work with cells, vertices, and dof_map.

A program with the fundamental algorithmic steps

# Use modified fe_approx1D module
from fe_approx1D_numint import *

x = sp.Symbol('x')
f = x*(1 - x)

N_e = 10
# Create mesh with P3 (cubic) elements
vertices, cells, dof_map = mesh_uniform(N_e, d=3, Omega=[0,1])

# Create basis functions on the mesh
phi = [basis(len(dof_map[e])-1) for e in range(N_e)]

# Create linear system and solve it
A, b = assemble(vertices, cells, dof_map, phi, f)
c = np.linalg.solve(A, b)

# Make very fine mesh and sample u(x) on this mesh for plotting
x_u, u = u_glob(c, vertices, cells, dof_map,
                resolution_per_element=51)
plot(x_u, u)

Approximating a parabola by P0 elements

The approximate function automates the steps in the previous slide:

from fe_approx1D_numint import *
x=sp.Symbol("x")
for N_e in 4, 8:
    approximate(x*(1-x), d=0, N_e=N_e, Omega=[0,1])

Computing the error of the approximation; principles

 
$$ L^2 \hbox{ error: }\quad ||e||_{L^2} = \left(\int_{\Omega} e^2 dx\right)^{1/2}$$

 

Accurate approximation of the integral:

  • Sample \( u(x) \) at many points in each element (call u_glob, returns x and u)
  • Use the Trapezoidal rule based on the samples
  • It is important to integrate \( u \) accurately over the elements
  • (In a finite difference method we would just sample the mesh point values)

Computing the error of the approximation; details

Note.

We need a version of the Trapezoidal rule valid for non-uniformly spaced points:

 
$$ \int_\Omega g(x) dx \approx \sum_{j=0}^{n-1} \half(g(x_j) + g(x_{j+1}))(x_{j+1}-x_j)$$

 

# Given c, compute x and u values on a very fine mesh
x, u = u_glob(c, vertices, cells, dof_map,
              resolution_per_element=101)
# Compute the error on the very fine mesh; f must here be an
# ordinary Python function (e.g., a lambdified sympy expression)
e = f(x) - u
e2 = e**2
# Vectorized Trapezoidal rule
E = np.sqrt(0.5*np.sum((e2[:-1] + e2[1:])*(x[1:] - x[:-1])))

How does the error depend on \( h \) and \( d \)?

Theory and experiments show that the least squares or projection/Galerkin method in combination with Pd elements of equal length \( h \) has an error

 
$$ ||e||_{L^2} = Ch^{d+1} $$

 
where \( C \) depends on \( f \), but not on \( h \) or \( d \).
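
This suggests a standard way of verifying implementations by estimating the rate \( r \) in \( E=Ch^r \) from two meshes (a sketch; the error values below are made-up illustrations):

import numpy as np
# r = ln(E1/E2)/ln(h1/h2)
h = [0.1, 0.05]          # element sizes (assumed values)
E = [2.0e-3, 5.0e-4]     # corresponding L2 errors (assumed values)
r = np.log(E[0]/E[1])/np.log(h[0]/h[1])
print(r)                 # 2.0: d+1 with P1 elements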

Cubic Hermite polynomials; definition

  • Can we construct \( \basphi_i(x) \) with continuous derivatives? Yes!

Consider a reference cell \( [-1,1] \). We introduce two nodes, \( X=-1 \) and \( X=1 \). The degrees of freedom are

  • 0: value of function at \( X=-1 \)
  • 1: value of first derivative at \( X=-1 \)
  • 2: value of function at \( X=1 \)
  • 3: value of first derivative at \( X=1 \)

Derivatives as unknowns ensure the same \( \basphi_i'(x) \) value at nodes and thereby continuous derivatives.

Cubic Hermite polynomials; derivation

4 constraints on \( \refphi_r \) (1 for dof \( r \), 0 for all others):

  • \( \refphi_0(\Xno{0}) = 1 \), \( \refphi_0(\Xno{1}) = 0 \), \( \refphi_0'(\Xno{0}) = 0 \), \( \refphi_0'(\Xno{1}) = 0 \)
  • \( \refphi_1'(\Xno{0}) = 1 \), \( \refphi_1'(\Xno{1}) = 0 \), \( \refphi_1(\Xno{0}) = 0 \), \( \refphi_1(\Xno{1}) = 0 \)
  • \( \refphi_2(\Xno{1}) = 1 \), \( \refphi_2(\Xno{0}) = 0 \), \( \refphi_2'(\Xno{0}) = 0 \), \( \refphi_2'(\Xno{1}) = 0 \)
  • \( \refphi_3'(\Xno{1}) = 1 \), \( \refphi_3'(\Xno{0}) = 0 \), \( \refphi_3(\Xno{0}) = 0 \), \( \refphi_3(\Xno{1}) = 0 \)

This gives 4 linear, coupled equations for each \( \refphi_r \) to determine the 4 coefficients in the cubic polynomial

Cubic Hermite polynomials; result

 
$$ \begin{align} \refphi_0(X) &= 1 - \frac{3}{4}(X+1)^2 + \frac{1}{4}(X+1)^3\\ \refphi_1(X) &= -(X+1)(1 - \half(X+1))^2\\ \refphi_2(X) &= \frac{3}{4}(X+1)^2 - \half(X+1)^3\\ \refphi_3(X) &= -\half(X+1)(\half(X+1)^2 - (X+1)) \end{align} $$

 

Numerical integration

  • \( \int_\Omega f\basphi_idx \) must in general be computed by numerical integration
  • Numerical integration is often used for the matrix too

Common form of a numerical integration rule

 
$$ \int_{-1}^{1} g(X)dX \approx \sum_{j=0}^M w_jg(\bar X_j), $$

 
where

  • \( \bar X_j \) are integration points
  • \( w_j \) are integration weights

Different rules correspond to different choices of points and weights
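
The general rule is trivial to implement (a sketch; the point/weight choices below correspond to the rules on the next slides):

def quadrature(g, points, weights):
    """Apply the rule sum_j w_j*g(X_j) on [-1, 1]."""
    return sum(w*g(Xj) for Xj, w in zip(points, weights))

print(quadrature(lambda X: X**2, [0], [2]))           # Midpoint: 0
print(quadrature(lambda X: X**2, [-1, 1], [1, 1]))    # Trapezoidal: 2
print(quadrature(lambda X: X**2, [-1, 0, 1],
                 [1./3, 4./3, 1./3]))                 # Simpson: 2/3 (exact)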

The Midpoint rule

Simplest possibility: the Midpoint rule,

 
$$ \int_{-1}^{1} g(X)dX \approx 2g(0),\quad \bar X_0=0,\ w_0=2, $$

 

Exact for linear integrands

Newton-Cotes rules apply the nodes

  • Idea: use a fixed, uniformly distributed set of points in \( [-1,1] \)
  • The points often coincide with the nodes
  • Very useful for making \( \basphi_i\basphi_j=0 \) and getting diagonal ("mass") matrices ("lumping")

The Trapezoidal rule:

 
$$ \int_{-1}^{1} g(X)dX \approx g(-1) + g(1),\quad \bar X_0=-1,\ \bar X_1=1,\ w_0=w_1=1, $$

 

Simpson's rule:

 
$$ \int_{-1}^{1} g(X)dX \approx \frac{1}{3}\left(g(-1) + 4g(0) + g(1)\right), $$

 
where

 
$$ \bar X_0=-1,\ \bar X_1=0,\ \bar X_2=1,\ w_0=w_2=\frac{1}{3},\ w_1=\frac{4}{3} $$

 

Gauss-Legendre rules apply optimized points

  • Optimize the location of points to get higher accuracy
  • Gauss-Legendre rules (quadrature) adjust points and weights to integrate polynomials exactly

 
$$ \begin{align} M=1&:\quad \bar X_0=-\frac{1}{\sqrt{3}},\ \bar X_1=\frac{1}{\sqrt{3}},\ w_0=w_1=1\\ M=2&:\quad \bar X_0=-\sqrt{\frac{3}{5}},\ \bar X_1=0,\ \bar X_2= \sqrt{\frac{3}{5}},\ w_0=w_2=\frac{5}{9},\ w_1=\frac{8}{9} \end{align} $$

 

  • \( M=1 \): integrates 3rd degree polynomials exactly
  • \( M=2 \): integrates 5th degree polynomials exactly
  • In general, \( M \)-point rule integrates a polynomial of degree \( 2M+1 \) exactly.

See numint.py for a large collection of Gauss-Legendre rules.
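
numpy also provides Gauss-Legendre points and weights (a minimal sketch):

import numpy as np
X, w = np.polynomial.legendre.leggauss(2)   # M=1: two points
print(X, w)               # [-1/sqrt(3), 1/sqrt(3)], [1, 1]
print(np.sum(w*X**2))     # 2/3: exact for a 2nd-degree polynomial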

Approximation of functions in 2D

Extensibility of 1D ideas.

All the concepts and algorithms developed for approximation of 1D functions \( f(x) \) can readily be extended to 2D functions \( f(x,y) \) and 3D functions \( f(x,y,z) \). Key formulas stay the same.

Quick overview of the 2D case

Inner product in 2D:

 
$$ (f,g) = \int_\Omega f(x,y)g(x,y) dx dy $$

 

Least squares and projection/Galerkin lead to a linear system

 
$$ \begin{align*} \sum_{j\in\If} A_{i,j}c_j &= b_i,\quad i\in\If\\ A_{i,j} &= (\baspsi_i,\baspsi_j)\\ b_i &= (f,\baspsi_i) \end{align*} $$

 

Challenge: How to construct 2D basis functions \( \baspsi_i(x,y) \)?

2D basis functions as tensor products of 1D functions

Use a 1D basis for \( x \) variation and a similar for \( y \) variation:

 
$$ \begin{align} V_x &= \mbox{span}\{ \hat\baspsi_0(x),\ldots,\hat\baspsi_{N_x}(x)\} \tag{10}\\ V_y &= \mbox{span}\{ \hat\baspsi_0(y),\ldots,\hat\baspsi_{N_y}(y)\} \tag{11} \end{align} $$

 

The 2D vector space can be defined as a tensor product \( V = V_x\otimes V_y \) with basis functions

 
$$ \baspsi_{p,q}(x,y) = \hat\baspsi_p(x)\hat\baspsi_q(y) \quad p\in\Ix,q\in\Iy\tp $$

 

Tensor products

Given two vectors \( a=(a_0,\ldots,a_M) \) and \( b=(b_0,\ldots,b_N) \) their outer tensor product, also called the dyadic product, is \( p=a\otimes b \), defined through

 
$$ p_{i,j}=a_ib_j,\quad i=0,\ldots,M,\ j=0,\ldots,N\tp$$

 
Note: \( p \) has two indices (as a matrix or two-dimensional array)
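
With numpy, the outer product is a one-liner (a minimal sketch):

import numpy as np
a = np.array([1., 2., 3.])
b = np.array([10., 20.])
p = np.outer(a, b)   # p[i,j] = a[i]*b[j], shape (3, 2)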

Example: 2D basis as tensor product of 1D spaces,

 
$$ \baspsi_{p,q}(x,y) = \hat\baspsi_p(x)\hat\baspsi_q(y), \quad p\in\Ix,q\in\Iy$$

 

Double or single index?

The 2D basis can employ a double index and double sum:

 
$$ u = \sum_{p\in\Ix}\sum_{q\in\Iy} c_{p,q}\baspsi_{p,q}(x,y) $$

 

Or just a single index:

 
$$ u = \sum_{j\in\If} c_j\baspsi_j(x,y)$$

 

with an index mapping \( (p,q)\rightarrow i \):

 
$$ \baspsi_i(x,y) = \hat\baspsi_p(x)\hat\baspsi_q(y), \quad i=p (N_y+1) + q\hbox{ or } i=q (N_x+1) + p $$
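
A tiny Python sketch of the first mapping, with hypothetical sizes \( N_x=2 \), \( N_y=3 \):

Nx, Ny = 2, 3                      # hypothetical 1D basis sizes
i_map = {(p, q): p*(Ny+1) + q
         for p in range(Nx+1) for q in range(Ny+1)}
print i_map[(1, 2)]                # 1*(3+1) + 2 = 6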

 

Example on 2D (bilinear) basis functions; formulas

In 1D we use the basis

 
$$ \{ 1, x \} $$

 

2D tensor product (all combinations):

 
$$ \baspsi_{0,0}=1,\quad \baspsi_{1,0}=x, \quad \baspsi_{0,1}=y, \quad \baspsi_{1,1}=xy $$

 
or with a single index:

 
$$ \baspsi_0=1,\quad \baspsi_1=x, \quad \baspsi_2=y,\quad\baspsi_3 =xy $$

 

See notes for details of a hand-calculation.

Example on 2D (bilinear) basis functions; plot

Quadratic \( f(x,y) = (1+x^2)(1+2y^2) \) (left), bilinear \( u \) (right):

Implementation; principal changes to the 1D code

Very small modification of approx1D.py:

  • Omega = [[0, L_x], [0, L_y]]
  • Symbolic integration in 2D
  • Construction of 2D (tensor product) basis functions

Implementation; 2D integration

import sympy as sp
x, y = sp.symbols('x y')   # psi, i, j, Omega as defined in approx2D.py

integrand = psi[i]*psi[j]
I = sp.integrate(integrand,
                 (x, Omega[0][0], Omega[0][1]),
                 (y, Omega[1][0], Omega[1][1]))

# Fall back on numerical integration if symbolic integration
# was unsuccessful (newer sympy versions: import mpmath; mpmath.quad)
if isinstance(I, sp.Integral):
    integrand = sp.lambdify([x, y], integrand)
    I = sp.mpmath.quad(integrand,
                       [Omega[0][0], Omega[0][1]],
                       [Omega[1][0], Omega[1][1]])

Implementation; 2D basis functions

Tensor product of 1D "Taylor-style" polynomials \( x^i \):

def taylor(x, y, Nx, Ny):
    return [x**i*y**j for i in range(Nx+1) for j in range(Ny+1)]

Tensor product of 1D sine functions \( \sin((i+1)\pi x) \):

def sines(x, y, Nx, Ny):
    return [sp.sin(sp.pi*(i+1)*x)*sp.sin(sp.pi*(j+1)*y)
            for i in range(Nx+1) for j in range(Ny+1)]

Complete code in approx2D.py

Implementation; application

\( f(x,y) = (1+x^2)(1+2y^2) \)

>>> from approx2D import *
>>> f = (1+x**2)*(1+2*y**2)
>>> psi = taylor(x, y, 1, 1)
>>> Omega = [[0, 2], [0, 2]]
>>> u, c = least_squares(f, psi, Omega)
>>> print u
8*x*y - 2*x/3 + 4*y/3 - 1/9
>>> print sp.expand(f)
2*x**2*y**2 + x**2 + 2*y**2 + 1

Implementation; trying a perfect expansion

Add higher powers to the basis such that \( f\in V \):

>>> psi = taylor(x, y, 2, 2)
>>> u, c = least_squares(f, psi, Omega)
>>> print u
2*x**2*y**2 + x**2 + 2*y**2 + 1
>>> print u-f
0

Expected: \( u=f \) when \( f\in V \)

Generalization to 3D

Key idea:

 
$$ V = V_x\otimes V_y\otimes V_z$$

 

Repeated outer tensor product of multiple vectors.

 
$$ \begin{align*} a^{(q)} &= (a^{(q)}_0,\ldots,a^{(q)}_{N_q}),\quad q=0,\ldots,m\\ p &= a^{(0)}\otimes\cdots\otimes a^{(m)}\\ p_{i_0,i_1,\ldots,i_m} &= a^{(0)}_{i_0}a^{(1)}_{i_1}\cdots a^{(m)}_{i_m} \end{align*} $$

 

 
$$ \begin{align*} \baspsi_{p,q,r}(x,y,z) &= \hat\baspsi_p(x)\hat\baspsi_q(y)\hat\baspsi_r(z)\\ u(x,y,z) &= \sum_{p\in\Ix}\sum_{q\in\Iy}\sum_{r\in\Iz} c_{p,q,r} \baspsi_{p,q,r}(x,y,z) \end{align*} $$

 

Finite elements in 2D and 3D

The two great advantages of the finite element method:

  • Can handle complex-shaped domains in 2D and 3D
  • Can easily provide higher-order polynomials in the approximation

Finite elements in 1D: mostly for learning, insight, debugging

Examples on cell types

2D:

  • triangles
  • quadrilaterals

3D:

  • tetrahedra
  • hexahedra

Rectangular domain with 2D P1 elements

Deformed geometry with 2D P1 elements

Rectangular domain with 2D Q1 elements

Basis functions over triangles in the physical domain

The P1 triangular 2D element: \( u \) is linear \( ax + by + c \) over each triangular cell

Basic features of 2D elements

  • Cells = triangles
  • Vertices = corners of the cells
  • Nodes = vertices
  • Degrees of freedom = function values at the nodes

Linear mapping of reference element onto general triangular cell

\( \basphi_i \): pyramid shape, composed of planes

  • \( \basphi_i(x,y) \) varies linearly over each cell
  • \( \basphi_i=1 \) at vertex (node) \( i \), 0 at all other vertices (nodes)

Element matrices and vectors

  • As in 1D, the contribution from one cell to the matrix involves just a few entries, collected in the element matrix and vector
  • \( \basphi_i\basphi_j\neq 0 \) only if \( i \) and \( j \) are degrees of freedom (vertices/nodes) in the same element
  • The 2D P1 element has a \( 3\times 3 \) element matrix

Basis functions over triangles in the reference cell

 
$$ \begin{align} \refphi_0(X,Y) &= 1 - X - Y\\ \refphi_1(X,Y) &= X\\ \refphi_2(X,Y) &= Y \end{align} $$

 

Higher-degree \( \refphi_r \) introduce more nodes (dof = node values)
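
A minimal Python sketch of the three P1 functions; at any point in the reference triangle they sum to 1 (partition of unity):

def phi_ref(r, X, Y):
    """P1 basis function no r on the reference triangle (sketch)."""
    return [1 - X - Y, X, Y][r]

X, Y = 0.25, 0.5                                 # a point in the triangle
print sum(phi_ref(r, X, Y) for r in range(3))    # 1.0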

2D P1, P2, P3, P4, P5, and P6 elements

P1 elements in 1D, 2D, and 3D

P2 elements in 1D, 2D, and 3D

  • Interval, triangle, tetrahedron: simplex elements (plural of simplex: simplices)
  • A side of a cell is called a face
  • The tetrahedron also has edges

Affine mapping of the reference cell; formula

Mapping of local \( \X = (X,Y) \) coordinates in the reference cell to global, physical \( \x = (x,y) \) coordinates:

 
$$ \begin{equation} \x = \sum_{r} \refphi_r^{(1)}(\X)\xdno{q(e,r)} \tag{12} \end{equation} $$

 

where

  • \( r \) runs over the local vertex numbers in the cell
  • \( \xdno{i} \) are the \( (x,y) \) coordinates of vertex \( i \)
  • \( \refphi_r^{(1)} \) are P1 basis functions

This mapping preserves the straight/planar faces and edges.

Affine mapping of the reference cell; figure

Isoparametric mapping of the reference cell

Idea: Use the basis functions of the element (not only the P1 functions) to map the element

 
$$ \x = \sum_{r} \refphi_r(\X)\xdno{q(e,r)} $$

 

Advantage: higher-order polynomial basis functions now map the reference cell to a curved triangle or tetrahedron.

Computing integrals

Integrals must be transformed from \( \Omega^{(e)} \) (physical cell) to \( \tilde\Omega^r \) (reference cell):

 
$$ \begin{align} \int_{\Omega^{(e)}}\basphi_i (\x) \basphi_j (\x) \dx &= \int_{\tilde\Omega^r} \refphi_i (\X) \refphi_j (\X) \det J\, \dX\\ \int_{\Omega^{(e)}}\basphi_i (\x) f(\x) \dx &= \int_{\tilde\Omega^r} \refphi_i (\X) f(\x(\X)) \det J\, \dX \end{align} $$

 
where \( \dx = dx dy \) or \( \dx = dxdydz \) and \( \det J \) is the determinant of the Jacobian of the mapping \( \x(\X) \).

 
$$ J = \left[\begin{array}{cc} \frac{\partial x}{\partial X} & \frac{\partial x}{\partial Y}\\ \frac{\partial y}{\partial X} & \frac{\partial y}{\partial Y} \end{array}\right], \quad \det J = \frac{\partial x}{\partial X}\frac{\partial y}{\partial Y} - \frac{\partial x}{\partial Y}\frac{\partial y}{\partial X} $$

 

Affine mapping (12): \( \det J=2\Delta \), \( \Delta = \hbox{cell volume} \)
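
A small sketch (assuming NumPy, with hypothetical vertex coordinates) of the Jacobian of the affine mapping for a P1 triangle: the columns of \( J \) are the edge vectors \( \xdno{1}-\xdno{0} \) and \( \xdno{2}-\xdno{0} \), and \( \det J \) equals twice the cell area:

import numpy as np

x0, x1, x2 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
J = np.column_stack([x1 - x0, x2 - x0])   # x(X) = x0 + J*(X,Y)
detJ = np.linalg.det(J)
print detJ, 0.5*abs(detJ)                 # det J = 2*Delta (Delta = area)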

Remark on going from 1D to 2D/3D

Finite elements in 2D and 3D build on the same ideas and concepts as in 1D, but there is simply much more to compute: the specific mathematical formulas in 2D and 3D are more complicated, and the bookkeeping with dof maps also gets more involved. The manual work is tedious, lengthy, and error-prone, so automation by the computer is a must.

Basic principles for approximating differential equations

We shall apply least squares, Galerkin/projection, and collocation to differential equation models

Our aim is to extend the ideas for approximating \( f \) by \( u \), or solving

 
$$ u = f $$

 

to real, spatial differential equations like

 
$$ -u'' + bu = f,\quad u(0)=C,\ u'(L)=D $$

 

Emphasis will be on the Galerkin/projection method

Abstract differential equation

 
$$ \begin{equation*} \mathcal{L}(u) = 0,\quad x\in\Omega \end{equation*} $$

 

Examples (1D problems):

 
$$ \begin{align*} \mathcal{L}(u) &= \frac{d^2u}{dx^2} - f(x),\\ \mathcal{L}(u) &= \frac{d}{dx}\left(\dfc(x)\frac{du}{dx}\right) + f(x),\\ \mathcal{L}(u) &= \frac{d}{dx}\left(\dfc(u)\frac{du}{dx}\right) - au + f(x),\\ \mathcal{L}(u) &= \frac{d}{dx}\left(\dfc(u)\frac{du}{dx}\right) + f(u,x) \end{align*} $$

 

Abstract boundary conditions

 
$$ \begin{equation*} \mathcal{B}_0(u)=0,\ x=0,\quad \mathcal{B}_1(u)=0,\ x=L \end{equation*} $$

 

Examples:

 
$$ \begin{align*} \mathcal{B}_i(u) &= u - g,\quad &\hbox{Dirichlet condition}\\ \mathcal{B}_i(u) &= -\dfc \frac{du}{dx} - g,\quad &\hbox{Neumann condition}\\ \mathcal{B}_i(u) &= -\dfc \frac{du}{dx} - h(u-g),\quad &\hbox{Robin condition} \end{align*} $$

 

Reminder about notation

  • \( \uex(x) \) is the symbol for the exact solution of \( \mathcal{L}(\uex)=0 \) + \( \mathcal{B}_i=0 \)
  • \( u(x) \) denotes an approximate solution
  • \( V = \hbox{span}\{ \baspsi_0(x),\ldots,\baspsi_N(x)\} \), \( V \) has basis \( \sequencei{\baspsi} \)
  • We seek \( u\in V \)
  • \( \If =\{0,\ldots,N\} \) is an index set
  • \( u(x) = \sum_{j\in\If} c_j\baspsi_j(x) \)
  • Inner product: \( (u,v) = \int_\Omega uv\dx \)
  • Norm: \( ||u||=\sqrt{(u,u)} \)

New topics: variational formulation and boundary conditions

Much is similar to approximating a function (solving \( u=f \)), but two new topics are needed:

  • Variational formulation of the differential equation problem (including integration by parts)
  • Handling of boundary conditions

Residual-minimizing principles

  • When solving \( u=f \) we knew the error \( e=f-u \) and could use principles for minimizing the error
  • When solving \( \mathcal{L}(\uex)=0 \) we do not know \( \uex \) and cannot work with the error \( e=\uex - u \)
  • We can only know the error in the equation: the residual \( R \)

Inserting \( u=\sum_jc_j\baspsi_j \) in \( \mathcal{L}=0 \) gives a residual \( R \)

 
$$ \begin{equation*} \mathcal{L}(u) = \mathcal{L}(\sum_j c_j \baspsi_j) = R \neq 0 \end{equation*} $$

 

Goal: minimize \( R \) with respect to \( \sequencei{c} \) (and hope it makes a small \( e \) too)

 
$$ R=R(c_0,\ldots,c_N; x)$$

 

The least squares method

Idea: minimize

 
$$ \begin{equation*} E = ||R||^2 = (R,R) = \int_{\Omega} R^2 dx \end{equation*} $$

 

Minimization wrt \( \sequencei{c} \) implies

 
$$ \frac{\partial E}{\partial c_i} = \int_{\Omega} 2R\frac{\partial R}{\partial c_i} dx = 0\quad \Leftrightarrow\quad (R,\frac{\partial R}{\partial c_i})=0,\quad i\in\If $$

 

\( N+1 \) equations for \( N+1 \) unknowns \( \sequencei{c} \)

The Galerkin method

Idea: make \( R \) orthogonal to \( V \),

 
$$ (R,v)=0,\quad \forall v\in V $$

 

This implies

 
$$ (R,\baspsi_i)=0,\quad i\in\If $$

 

\( N+1 \) equations for \( N+1 \) unknowns \( \sequencei{c} \)

The Method of Weighted Residuals

Generalization of the Galerkin method: demand \( R \) orthogonal to some space \( W \), possibly \( W\neq V \):

 
$$ (R,v)=0,\quad \forall v\in W $$

 

If \( \{w_0,\ldots,w_N\} \) is a basis for \( W \):

 
$$ (R,w_i)=0,\quad i\in\If $$

 

  • \( N+1 \) equations for \( N+1 \) unknowns \( \sequencei{c} \)
  • Weighted residual with \( w_i = \partial R/\partial c_i \) gives least squares

New terminology: test and trial functions

  • \( \baspsi_j \) used in \( \sum_jc_j\baspsi_j \) is called trial function
  • \( \baspsi_i \) or \( w_i \) used as weight in Galerkin's method is called test function

The collocation method

Idea: demand \( R=0 \) at \( N+1 \) points in space

 
$$ R(\xno{i}; c_0,\ldots,c_N)=0,\quad i\in\If$$

 

The collocation method is a weighted residual method with delta functions as weights

 
$$ 0 = \int_\Omega R(x;c_0,\ldots,c_N) \delta(x-\xno{i})\dx = R(\xno{i}; c_0,\ldots,c_N)$$

 

 
$$ \hbox{property of } \delta(x):\quad \int_{\Omega} f(x)\delta (x-\xno{i}) dx = f(\xno{i}),\quad \xno{i}\in\Omega $$

 

Examples on using the principles

Goal.

Exemplify the least squares, Galerkin, and collocation methods in a simple 1D problem with global basis functions.

The first model problem

 
$$ -u''(x) = f(x),\quad x\in\Omega=[0,L],\quad u(0)=0,\ u(L)=0$$

 

Basis functions:

 
$$ \baspsi_i(x) = \sinL{i},\quad i\in\If$$

 

Residual:

 
$$ \begin{align*} R(x;c_0,\ldots,c_N) &= u''(x) + f(x),\nonumber\\ &= \frac{d^2}{dx^2}\left(\sum_{j\in\If} c_j\baspsi_j(x)\right) + f(x),\nonumber\\ &= \sum_{j\in\If} c_j\baspsi_j''(x) + f(x) \end{align*} $$

 

Boundary conditions

Since \( u(0)=u(L)=0 \) we must ensure that all \( \baspsi_i(0)=\baspsi_i(L)=0 \), because then

 
$$ u(0) = \sum_jc_j{\color{red}\baspsi_j(0)} = 0,\quad u(L) = \sum_jc_j{\color{red}\baspsi_j(L)} =0 $$

 

  • \( u \) known: Dirichlet boundary condition
  • \( u' \) known: Neumann boundary condition
  • Must have \( \baspsi_i=0 \) where Dirichlet conditions apply

The least squares method; principle

 
$$ (R,\frac{\partial R}{\partial c_i}) = 0,\quad i\in\If $$

 

 
$$ \begin{equation*} \frac{\partial R}{\partial c_i} = \frac{\partial}{\partial c_i} \left(\sum_{j\in\If} c_j\baspsi_j''(x) + f(x)\right) = \baspsi_i''(x) \end{equation*} $$

 

Because:

 
$$ \frac{\partial}{\partial c_i}\left(c_0\baspsi_0'' + c_1\baspsi_1'' + \cdots + c_{i-1}\baspsi_{i-1}'' + {\color{red}c_i\baspsi_{i}''} + c_{i+1}\baspsi_{i+1}'' + \cdots + c_N\baspsi_N'' \right) = \baspsi_{i}'' $$

 

The least squares method; equation system

 
$$ \begin{equation*} (\sum_j c_j \baspsi_j'' + f,\baspsi_i'')=0,\quad i\in\If \end{equation*} $$

 

Rearrangement:

 
$$ \begin{equation*} \sum_{j\in\If}(\baspsi_i'',\baspsi_j'')c_j = -(f,\baspsi_i''),\quad i\in\If \end{equation*} $$

 

This is a linear system

 
$$ \begin{equation*} \sum_{j\in\If}A_{i,j}c_j = b_i,\quad i\in\If \end{equation*} $$

 

The least squares method; matrix and right-hand side expressions

 
$$ \begin{align*} A_{i,j} &= (\baspsi_i'',\baspsi_j'')\nonumber\\ & = \pi^4(i+1)^2(j+1)^2L^{-4}\int_0^L \sinL{i}\sinL{j}\, dx\nonumber\\ &= \left\lbrace \begin{array}{ll} {1\over2}L^{-3}\pi^4(i+1)^4 & i=j \\ 0, & i\neq j \end{array}\right. \\ b_i &= -(f,\baspsi_i'') = (i+1)^2\pi^2L^{-2}\int_0^Lf(x)\sinL{i}\, dx \end{align*} $$

 

Orthogonality of the basis functions gives diagonal matrix

Useful property of the chosen basis functions:

 
$$ \begin{equation*} \int\limits_0^L \sinL{i}\sinL{j}\, dx = \delta_{ij},\quad \quad\delta_{ij} = \left\lbrace \begin{array}{ll} \half L & i=j \\ 0, & i\neq j \end{array}\right. \end{equation*} $$

 

\( \Rightarrow\ (\baspsi_i'',\baspsi_j'')=\pi^4(i+1)^4L^{-4}\delta_{ij} \), i.e., diagonal \( A_{i,j} \), and we can easily solve for \( c_i \):

 
$$ \begin{equation*} c_i = \frac{2L}{\pi^2(i+1)^2}\int_0^Lf(x)\sinL{i}\, dx \end{equation*} $$

 

Least squares method; solution

Let sympy do the work (\( f(x)=2 \)):

from sympy import *

i, j = symbols('i j', integer=True)
x, L = symbols('x L')
f = 2
a = 2*L/(pi**2*(i+1)**2)
c_i = a*integrate(f*sin((i+1)*pi*x/L), (x, 0, L))
c_i = simplify(c_i)
print c_i

 
$$ \begin{equation*} c_i = 4 \frac{L^{2} \left(\left(-1\right)^{i} + 1\right)}{\pi^{3} \left(i^{3} + 3 i^{2} + 3 i + 1\right)},\quad u(x) = \sum_{k=0}^{N/2} \frac{8L^2}{\pi^3(2k+1)^3}\sinL{2k} \end{equation*} $$

 

Fast decay: \( c_2 = c_0/27 \), \( c_4=c_0/125 \); only one term might be good enough:

 
$$ \begin{equation*} u(x) \approx \frac{8L^2}{\pi^3}\sin\left(\pi\frac{x}{L}\right) \end{equation*} $$

 

The Galerkin method; principle

\( R=u''+f \):

 
$$ \begin{equation*} (u''+f,v)=0,\quad \forall v\in V, \end{equation*} $$

 
or rearranged,

 
$$ \begin{equation*} (u'',v) = -(f,v),\quad\forall v\in V \end{equation*} $$

 

This is a variational formulation of the differential equation problem.

\( \forall v\in V \) is equivalent with \( \forall v\in\baspsi_i \), \( i\in\If \), resulting in

 
$$ \begin{equation*} (\sum_{j\in\If} c_j\baspsi_j'', \baspsi_i)=-(f,\baspsi_i),\quad i\in\If \end{equation*} $$

 

 
$$ \begin{equation*} \sum_{j\in\If}(\baspsi_j'', \baspsi_i) c_j=-(f,\baspsi_i),\quad i\in\If \end{equation*} $$

 

The Galerkin method; solution

Since \( \baspsi_i''\propto -\baspsi_i \), Galerkin's method gives the same linear system and the same solution as the least squares method (in this particular example).

The collocation method

\( R=0 \) (i.e., the differential equation) must be satisfied at \( N+1 \) points:

 
$$ \begin{equation*} -\sum_{j\in\If} c_j\baspsi_j''(\xno{i}) = f(\xno{i}),\quad i\in\If \end{equation*} $$

 

This is a linear system \( \sum_j A_{i,j}c_j=b_i \) (here with \( f(x)=2 \)) with entries

 
$$ \begin{equation*} A_{i,j}=-\baspsi_j''(\xno{i})= (j+1)^2\pi^2L^{-2}\sin\left((j+1)\pi \frac{x_i}{L}\right), \quad b_i=2 \end{equation*} $$

 

Choose: \( N=0 \), \( x_0=L/2 \)

 
$$ c_0=2L^2/\pi^2 $$
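
A quick sympy verification of this value (a sketch):

import sympy as sp

x, L, c0 = sp.symbols('x L c0')
u = c0*sp.sin(sp.pi*x/L)
R = -sp.diff(u, x, 2) - 2            # residual of -u''=2
print sp.solve(R.subs(x, L/2), c0)   # [2*L**2/pi**2]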

 

Comparison of the methods

  • Exact solution: \( u(x)=x(L-x) \)
  • Galerkin or least squares (\( N=0 \)): \( u(x)=8L^2\pi^{-3}\sin (\pi x/L) \)
  • Collocation method (\( N=0 \)): \( u(x)=2L^2\pi^{-2}\sin (\pi x/L) \).

>>> import sympy as sp
>>> # Computing with Dirichlet conditions: -u''=2 and sines
>>> x, L = sp.symbols('x L')
>>> e_Galerkin = x*(L-x) - 8*L**2*sp.pi**(-3)*sp.sin(sp.pi*x/L)
>>> e_colloc = x*(L-x) - 2*L**2*sp.pi**(-2)*sp.sin(sp.pi*x/L)

>>> # Verify max error for x=L/2
>>> dedx_Galerkin = sp.diff(e_Galerkin, x)
>>> dedx_Galerkin.subs(x, L/2)
0
>>> dedx_colloc = sp.diff(e_colloc, x)
>>> dedx_colloc.subs(x, L/2)
0

>>> # Compute max error: insert x=L/2, evaluate numerically, and simplify
>>> sp.simplify(e_Galerkin.subs(x, L/2).evalf(n=3))
-0.00812*L**2
>>> sp.simplify(e_colloc.subs(x, L/2).evalf(n=3))
0.0473*L**2

Useful techniques

Integration by parts has many advantages

Second-order derivatives will hereafter be integrated by parts

 
$$ \begin{align*} \int_0^L u''(x)v(x) dx &= - \int_0^Lu'(x)v'(x)dx + [vu']_0^L\nonumber\\ &= - \int_0^Lu'(x)v'(x) dx + u'(L)v(L) - u'(0)v(0) \end{align*} $$

 

Motivation:

  • Lowers the order of derivatives
  • Gives more symmetric forms (incl. matrices)
  • Enables easy handling of Neumann boundary conditions
  • Finite element basis functions \( \basphi_i \) have discontinuous derivatives (at cell boundaries) and are not suited for terms with \( \basphi_i'' \)

We use a boundary function to deal with non-zero Dirichlet boundary conditions

  • What about nonzero Dirichlet conditions? Say \( u(L)=D \)
  • We always require \( \baspsi_i(L)=0 \) (i.e., \( \baspsi_i=0 \) where Dirichlet conditions apply)
  • Problem: \( u(L) = \sum_j c_j\baspsi_j(L)=\sum_j c_j\cdot 0=0\neq D \) - always!
  • Solution: \( u(x) = B(x) + \sum_j c_j\baspsi_j(x) \)
  • \( B(x) \): user-constructed boundary function that fulfills the Dirichlet conditions
  • If \( u(L)=D \), make sure \( B(L)=D \)
  • No restrictions on how \( B(x) \) varies in the interior of \( \Omega \)

Example on constructing a boundary function for two Dirichlet conditions

Dirichlet conditions: \( u(0)=C \) and \( u(L)=D \). Choose for example

 
$$ B(x) = \frac{1}{L}(C(L-x) + Dx):\qquad B(0)=C,\ B(L)=D $$

 

 
$$ \begin{equation*} u(x) = B(x) + \sum_{j\in\If} c_j\baspsi_j(x), \end{equation*} $$

 

 
$$ u(0) = B(0)= C,\quad u(L) = B(L) = D $$
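
A one-line sympy check (a sketch) that this \( B \) takes the required boundary values:

import sympy as sp

x, L, C, D = sp.symbols('x L C D')
B = (C*(L - x) + D*x)/L
print B.subs(x, 0), sp.simplify(B.subs(x, L))   # C and D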

 

Example on constructing a boundary function for one Dirichlet condition

Dirichlet condition: \( u(L)=D \). Choose for example

 
$$ B(x) = D:\qquad B(L)=D $$

 

 
$$ \begin{equation*} u(x) = B(x) + \sum_{j\in\If} c_j\baspsi_j(x), \end{equation*} $$

 

 
$$ u(L) = B(L) = D $$

 

With a \( B(x) \), \( u\not\in V \), but \( \sum_{j}c_j\baspsi_j\in V \)

  • \( \sequencei{\baspsi} \) is a basis for \( V \)
  • \( \sum_{j\in\If}c_j\baspsi_j(x)\in V \)
  • But \( u\not\in V \)!
  • Reason: if \( u(0)=C \) and \( u\in V \), then any \( v\in V \) must have \( v(0)=C \); but then \( 2u\not\in V \) because \( 2u(0)=2C \) (wrong value), so such a \( V \) is not a vector space
  • When \( u(x) = B(x) + \sum_{j\in\If}c_j\baspsi_j(x) \), \( B\not\in V \) (in general) and \( u\not\in V \), but \( (u-B)\in V \) since \( \sum_{j}c_j\baspsi_j\in V \)

Abstract notation for variational formulations

The finite element literature (and much FEniCS documentation) applies an abstract notation for the variational formulation:

Find \( (u-B)\in V \) such that

 
$$ a(u,v) = L(v)\quad \forall v\in V $$

 

Example on abstract notation

 
$$ -u''=f, \quad u'(0)=C,\ u(L)=D,\quad u=D + \sum_jc_j\baspsi_j$$

 

Variational formulation:

 
$$ \int_{\Omega} u' v'dx = \int_{\Omega} fvdx - v(0)C \quad\hbox{or}\quad (u',v') = (f,v) - v(0)C \quad\forall v\in V $$

 

Abstract formulation: find \( (u-B)\in V \) such that

 
$$ a(u,v) = L(v)\quad \forall v\in V$$

 

We identify

 
$$ a(u,v) = (u',v'),\quad L(v) = (f,v) -v(0)C $$

 

Bilinear and linear forms

  • \( a(u,v) \) is a bilinear form
  • \( L(v) \) is a linear form

Linear form means

 
$$ L(\alpha_1 v_1 + \alpha_2 v_2) =\alpha_1 L(v_1) + \alpha_2 L(v_2), $$

 

Bilinear form means

 
$$ \begin{align*} a(\alpha_1 u_1 + \alpha_2 u_2, v) &= \alpha_1 a(u_1,v) + \alpha_2 a(u_2, v), \\ a(u, \alpha_1 v_1 + \alpha_2 v_2) &= \alpha_1 a(u,v_1) + \alpha_2 a(u, v_2) \end{align*} $$

 

In nonlinear problems: Find \( (u-B)\in V \) such that \( F(u;v)=0\ \forall v\in V \)

The linear system associated with the abstract form

 
$$ a(u,v) = L(v)\quad \forall v\in V\quad\Leftrightarrow\quad a(u,\baspsi_i) = L(\baspsi_i)\quad i\in\If$$

 

We can now derive the corresponding linear system once and for all by inserting \( u = B + \sum_jc_j\baspsi_j \):

 
$$ a(B + \sum_{j\in\If} c_j \baspsi_j,\baspsi_i) = L(\baspsi_i)\quad i\in\If$$

 

Because of linearity,

 
$$ \sum_{j\in\If} \underbrace{a(\baspsi_j,\baspsi_i)}_{A_{i,j}}c_j = \underbrace{L(\baspsi_i) - a(B,\baspsi_i)}_{b_i}\quad i\in\If$$

 

Equivalence with minimization problem

If \( a \) is symmetric: \( a(u,v)=a(v,u) \),

 
$$ a(u,v)=L(v)\quad\forall v\in V$$

 

is equivalent to minimizing the functional

 
$$ F(v) = {\half}a(v,v) - L(v) $$

 
over all functions \( v\in V \). That is,

 
$$ F(u)\leq F(v)\quad \forall v\in V $$

 

  • Much used in the early days of finite elements
  • Still much used in structural analysis and elasticity
  • Not as general as Galerkin's method (since we require \( a(u,v)=a(v,u) \))

Examples on variational formulations

Goal.

Derive variational formulations for some prototype differential equations in 1D that include

  • variable coefficients
  • mixed Dirichlet and Neumann conditions
  • nonlinear coefficients

Variable coefficient; problem

 
$$ \begin{equation*} -\frac{d}{dx}\left( \dfc(x)\frac{du}{dx}\right) = f(x),\quad x\in\Omega =[0,L],\ u(0)=C,\ u(L)=D \end{equation*} $$

 

  • Variable coefficient \( \dfc(x) \)
  • \( V = \hbox{span}\{\baspsi_0,\ldots,\baspsi_N\} \)
  • Nonzero Dirichlet conditions at \( x=0 \) and \( x=L \)
  • Must have \( \baspsi_i(0)=\baspsi_i(L)=0 \)
  • Any \( v\in V \) has then \( v(0)=v(L)=0 \)
  • \( B(x) = C + \frac{1}{L}(D-C)x \)

 
$$ u(x) = B(x) + \sum_{j\in\If} c_j\baspsi_j(x) $$

 

Variable coefficient; Galerkin principle

 
$$ R = -\frac{d}{dx}\left( \dfc\frac{du}{dx}\right) -f $$

 

Galerkin's method:

 
$$ (R, v) = 0,\quad \forall v\in V $$

 

or with integrals:

 
$$ \int_{\Omega} \left(\frac{d}{dx}\left( \dfc\frac{du}{dx}\right) +f\right)v \dx = 0,\quad \forall v\in V $$

 

Variable coefficient; integration by parts

 
$$ -\int_{\Omega} \frac{d}{dx}\left( \dfc(x)\frac{du}{dx}\right) v \dx = \int_{\Omega} \dfc(x)\frac{du}{dx}\frac{dv}{dx}\dx - \left[\dfc\frac{du}{dx}v\right]_0^L $$

 

Boundary terms vanish since \( v(0)=v(L)=0 \)

Variable coefficient; variational formulation

Variational formulation.

Find \( (u-B)\in V \) such that

 
$$ \int_{\Omega} \dfc(x)\frac{du}{dx}\frac{dv}{dx}dx = \int_{\Omega} f(x)vdx,\quad \forall v\in V $$

 

Compact notation:

 
$$ \underbrace{(\dfc u',v')}_{a(u,v)} = \underbrace{(f,v)}_{L(v)}, \quad \forall v\in V $$

 

Variable coefficient; linear system (the easy way)

With

 
$$ a(u,v) = (\dfc u', v'),\quad L(v) = (f,v) $$

 

we can just use the formula for the linear system:

 
$$ \begin{align*} A_{i,j} &= a(\baspsi_j,\baspsi_i) = (\dfc \baspsi_j', \baspsi_i') = \int_\Omega \dfc \baspsi_j' \baspsi_i'\dx = \int_\Omega \baspsi_i' \dfc \baspsi_j'\dx \quad (= a(\baspsi_i,\baspsi_j) = A_{j,i})\\ b_i &= (f,\baspsi_i) - (\dfc B',\baspsi_i') = \int_\Omega (f\baspsi_i - \dfc L^{-1}(D-C)\baspsi_i')\dx \end{align*} $$

 

Variable coefficient; linear system (full derivation)

\( v=\baspsi_i \) and \( u=B + \sum_jc_j\baspsi_j \):

 
$$ (\dfc B' + \dfc \sum_{j\in\If} c_j \baspsi_j', \baspsi_i') = (f,\baspsi_i), \quad i\in\If $$

 

Reorder to form linear system:

 
$$ \sum_{j\in\If} (\dfc\baspsi_j', \baspsi_i')c_j = (f,\baspsi_i) - (\dfc L^{-1}(D-C), \baspsi_i'), \quad i\in\If $$

 

This is \( \sum_j A_{i,j}c_j=b_i \) with

 
$$ \begin{align*} A_{i,j} &= (\dfc\baspsi_j', \baspsi_i') = \int_{\Omega} \dfc(x)\baspsi_j'(x) \baspsi_i'(x)\dx\\ b_i &= (f,\baspsi_i) - (\dfc L^{-1}(D-C),\baspsi_i')= \int_{\Omega} \left(f\baspsi_i - \dfc\frac{D-C}{L}\baspsi_i'\right) \dx \end{align*} $$
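
A small sympy sketch of these formulas, with hypothetical choices \( \dfc(x)=1+x \), \( f=1 \), \( L=1 \), and \( C=D=0 \) (so the \( B' \) term vanishes):

import sympy as sp

x = sp.symbols('x')
N = 2
alpha, f, L_len = 1 + x, 1, 1              # hypothetical data
psi = [sp.sin((j+1)*sp.pi*x/L_len) for j in range(N+1)]
A = sp.zeros(N+1, N+1)
b = sp.zeros(N+1, 1)
for i in range(N+1):
    for j in range(N+1):
        A[i,j] = sp.integrate(
            alpha*sp.diff(psi[j], x)*sp.diff(psi[i], x), (x, 0, L_len))
    b[i] = sp.integrate(f*psi[i], (x, 0, L_len))
c = A.LUsolve(b)                           # coefficients c_0,...,c_N
print c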

 

First-order derivative in the equation and boundary condition; problem

 
$$ -u''(x) + bu'(x) = f(x),\quad x\in\Omega =[0,L],\ u(0)=C,\ u'(L)=E $$

 

New features:

  • first-order derivative \( u' \) in the equation
  • boundary condition with \( u' \): \( u'(L)=E \)

Initial steps:

  • Must force \( \baspsi_i(0)=0 \) because of Dirichlet condition at \( x=0 \)
  • Boundary function: \( B(x)=C(L-x)/L \) or just \( B(x)=C \)
  • No requirements on \( \baspsi_i(L) \) (no Dirichlet condition at \( x=L \))

First-order derivative in the equation and boundary condition; details

 
$$ u = C + \sum_{j\in\If} c_j \baspsi_j(x)$$

 

Galerkin's method: multiply by \( v \), integrate over \( \Omega \), integrate by parts.

 
$$ (-u'' + bu' - f, v) = 0,\quad\forall v\in V$$

 

 
$$ (u',v') + (bu',v) = (f,v) + [u' v]_0^L, \quad\forall v\in V$$

 

\( [u' v]_0^L = u'(L)v(L) - u'(0)v(0)= E v(L) \) since \( v(0)=0 \) and \( u'(L)=E \)

 
$$ (u',v') + (bu',v) = (f,v) + Ev(L), \quad\forall v\in V$$

 

First-order derivative in the equation and boundary condition; observations

 
$$ (u',v') + (bu',v) = (f,v) + Ev(L), \quad\forall v\in V$$

 

Important observations:

  • The boundary term can be used to implement Neumann conditions
  • Forgetting the boundary term implies the condition \( u'=0 \) (!)
  • Such conditions are called natural boundary conditions

First-order derivative in the equation and boundary condition; abstract notation (optional)

Abstract notation:

 
$$ a(u,v)=L(v)\quad\forall v\in V$$

 

With

 
$$ (u',v') + (bu',v) = (f,v) + Ev(L), \quad\forall v\in V$$

 

we have

 
$$ \begin{align*} a(u,v)&=(u',v') + (bu',v)\\ L(v)&= (f,v) + E v(L) \end{align*} $$

 

First-order derivative in the equation and boundary condition; linear system

Insert \( u=C+\sum_jc_j\baspsi_j \) and \( v=\baspsi_i \) in

 
$$ (u',v') + (bu',v) = (f,v) + Ev(L), \quad\forall v\in V$$

 
and manipulate to get

 
$$ \sum_{j\in\If} \underbrace{((\baspsi_j',\baspsi_i') + (b\baspsi_j',\baspsi_i))}_{A_{i,j}} c_j = \underbrace{(f,\baspsi_i) + E \baspsi_i(L)}_{b_i},\quad i\in\If $$

 

Observation: \( A_{i,j} \) is not symmetric because of the term

 
$$ (b\baspsi_j',\baspsi_i)=\int_{\Omega} b\baspsi_j'\baspsi_i dx \neq \int_{\Omega} b \baspsi_i' \baspsi_jdx = (\baspsi_i',b\baspsi_j) $$

 

Terminology: natural and essential boundary conditions

 
$$ (u',v') + (bu',v) = (f,v) + u'(L)v(L) - u'(0)v(0)$$

 

  • Note: forgetting the boundary terms implies \( u'(L)=u'(0)=0 \) (unless we prescribe a Dirichlet condition)
  • Conditions on \( u' \) are simply inserted in the variational form and called natural conditions
  • Conditions on \( u \) at \( x=0 \) require modifying \( V \) (through \( \baspsi_i(0)=0 \)) and are known as essential conditions

Lesson learned.

It is easy to forget the boundary term when integrating by parts. That mistake may prescribe a condition on \( u' \)!

Nonlinear coefficient; problem

Problem:

 
$$ \begin{equation*} -(\dfc(u)u')' = f(u),\quad x\in [0,L],\ u(0)=0,\ u'(L)=E \end{equation*} $$

 

  • \( V \): basis \( \sequencei{\baspsi} \) with \( \baspsi_i(0)=0 \) because of \( u(0)=0 \)
  • How do the nonlinear coefficients \( \dfc(u) \) and \( f(u) \) impact the variational formulation?
    (Not much!)

Nonlinear coefficient; variational formulation

Galerkin: multiply by \( v \), integrate, integrate by parts

 
$$ \int_0^L \dfc(u)\frac{du}{dx}\frac{dv}{dx}\dx = \int_0^L f(u)v\dx + [\dfc(u)vu']_0^L\quad\forall v\in V $$

 

  • \( \dfc(u(0))v(0)u'(0)=0 \) since \( v(0)=0 \)
  • \( \dfc(u(L))v(L)u'(L) = \dfc(u(L))v(L)E \) since \( u'(L)=E \)

 
$$ \int_0^L \dfc(u)\frac{du}{dx}\frac{dv}{dx}\dx = \int_0^L f(u)v\dx + \dfc(u(L))v(L)E\quad\forall v\in V $$

 

or

 
$$ (\dfc(u)u', v') = (f(u),v) + \dfc(u(L))v(L)E\quad\forall v\in V $$

 

Nonlinear coefficient; where does the nonlinearity cause challenges?

  • Abstract notation: no \( a(u,v) \) and \( L(v) \) because \( a \) and \( L \) get nonlinear
  • Abstract notation for nonlinear problems: \( F(u;v)=0\ \forall v\in V \)
  • What about forming a linear system? We get a nonlinear system of algebraic equations
  • Must use methods like Picard iteration or Newton's method to solve nonlinear algebraic equations
  • But: the variational formulation was not much affected by nonlinearities

Examples on detailed computations by hand

Dirichlet and Neumann conditions; problem

 
$$ \begin{equation*} -u''(x)=f(x),\quad x\in \Omega=[0,1],\quad u'(0)=C,\ u(1)=D \end{equation*} $$

 

  • Use a global polynomial basis \( \baspsi_i\sim x^i \) on \( [0,1] \)
  • Because of \( u(1)=D \): \( \baspsi_i(1)=0 \)
  • Basis: \( \baspsi_i(x)=(1-x)^{i+1},\quad i\in\If \)
  • Boundary function: \( B(x)=Dx \)
  • \( u(x) = B(x) + \sum_{j\in\If}c_j\baspsi_j = Dx + \sum_{j\in\If} c_j(1-x)^{j+1} \)

Variational formulation: find \( (u-B)\in V \) such that

 
$$ (u',\baspsi_i') = (f,\baspsi_i) - C\baspsi_i(0),\quad i\in\If $$

 

Dirichlet and Neumann conditions; linear system

Insert \( u(x) = B(x) + \sum_{j\in\If}c_j\baspsi_j \) and derive

 
$$ \sum_{j\in\If} A_{i,j}c_j = b_i,\quad i\in\If$$

 

with

 
$$ A_{i,j} = (\baspsi_j',\baspsi_i') $$

 

 
$$ b_i = (f,\baspsi_i) - (D,\baspsi_i') -C\baspsi_i(0) $$

 

Dirichlet and Neumann conditions; integration

 
$$ A_{i,j} = (\baspsi_j',\baspsi_i') = \int_{0}^1 \baspsi_i'(x)\baspsi_j'(x)dx = \int_0^1 (i+1)(j+1)(1-x)^{i+j} dx $$

 

Choose \( f(x)=2 \):

 
$$ \begin{align*} b_i &= (2,\baspsi_i) - (D,\baspsi_i') -C\baspsi_i(0)\\ &= \int_0^1 \left( 2(1-x)^{i+1} + D(i+1)(1-x)^i\right)dx -C\baspsi_i(0) \end{align*} $$

 

Dirichlet and Neumann conditions; \( 2\times 2 \) system

Can easily do the integrals with sympy. \( N=1 \) and \( \If = \{0,1\} \):

 
$$ \begin{equation*} \left(\begin{array}{cc} 1 & 1\\ 1 & 4/3 \end{array}\right) \left(\begin{array}{c} c_0\\ c_1 \end{array}\right) = \left(\begin{array}{c} -C+D+1\\ 2/3 -C + D \end{array}\right) \end{equation*} $$

 

 
$$ c_0=-C+D+2, \quad c_1=-1,$$

 

 
$$ u(x) = 1 -x^2 + D + C(x-1)\quad\hbox{(exact solution)} $$

 

When is the numerical method exact?

Assume that apart from boundary conditions, \( \uex \) lies in the same space \( V \) as where we seek \( u \):

 
$$ \begin{align*} u &= B + {\color{red}F},\quad F\in V\\ a(B+F, v) &= L(v),\quad\forall v\in V\\ \uex & = B + {\color{red}E},\quad E\in V\\ a(B+E, v) &= L(v),\quad\forall v\in V \end{align*} $$

 

Subtract: \( a(F-E,v)=0\ \forall v\in V \); choosing \( v=F-E \) gives \( F=E \), hence \( u = \uex \)

Computing with finite elements

Tasks:

  • Address the model problem \( -u''(x)=2 \), \( u(0)=u(L)=0 \)
  • Uniform finite element mesh with P1 elements
  • Show all finite element computations in detail

Variational formulation

 
$$ -u''(x) = 2,\quad x\in (0,L),\ u(0)=u(L)=0,$$

 

Variational formulation:

 
$$ (u',v') = (2,v)\quad\forall v\in V $$

 

How to deal with the boundary conditions?

Since \( u(0)=0 \) and \( u(L)=0 \), we must force

 
$$ v(0)=v(L)=0,\quad \baspsi_i(0)=\baspsi_i(L)=0$$

 

Now we choose the finite element basis: \( \baspsi_i=\basphi_i \), \( i=0,\ldots,N_n \)

Problem: \( \basphi_0(0)\neq 0 \) and \( \basphi_{N_n}(L)\neq 0 \)

Solution: we just exclude \( \basphi_0 \) and \( \basphi_{N_n} \) from the basis and work with

 
$$ \baspsi_i=\basphi_{i+1},\quad i=0,\ldots,N=N_n-2$$

 

Introduce index mapping \( \nu(i) \): \( \baspsi_i = \basphi_{\nu(i)} \)

 
$$ u = \sum_{j\in\If}c_j\basphi_{\nu(j)},\quad j=0,\ldots,N,\quad \nu(j) = j+1$$

 

Irregular numbering: more complicated \( \nu(j) \) table

Computation in the global physical domain; formulas

 
$$ \begin{equation*} A_{i,j}=\int_0^L\basphi_{i+1}'(x)\basphi_{j+1}'(x) dx,\quad b_i=\int_0^L2\basphi_{i+1}(x) dx \end{equation*} $$

 

Many will prefer to change indices to obtain a \( \basphi_i'\basphi_j' \) product: \( i+1\rightarrow i \), \( j+1\rightarrow j \)

 
$$ \begin{equation*} A_{i-1,j-1}=\int_0^L\basphi_{i}'(x)\basphi_{j}'(x) \dx,\quad b_{i-1}=\int_0^L2\basphi_{i}(x) \dx \end{equation*} $$

 

Computation in the global physical domain; details

 
$$ \basphi_i' \sim \pm h^{-1} $$

 

 
$$ A_{i-1,i-1} = h^{-2}2h = 2h^{-1},\quad A_{i-1,i-2} = h^{-1}(-h^{-1})h = -h^{-1}$$

 
and \( A_{i-1,i}=A_{i-1,i-2} \)

 
$$ b_{i-1} = 2({\half}h + {\half}h) = 2h$$

 

Computation in the global physical domain; linear system

 
$$ \begin{equation*} \frac{1}{h}\left( \begin{array}{ccccccccc} 2 & -1 & 0 &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ -1 & 2 & -1 & \ddots & & & & & \vdots \\ 0 & -1 & 2 & -1 & \ddots & & & & \vdots \\ \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & 0 & -1 & 2 & -1 & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ \vdots & & & & &\ddots & \ddots &\ddots & -1 \\ 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & -1 & 2 \end{array} \right) \left( \begin{array}{c} c_0 \\ \vdots\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ \vdots\\ c_{N} \end{array} \right) = \left( \begin{array}{c} 2h \\ \vdots\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ \vdots\\ 2h \end{array} \right) \end{equation*} $$
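
A small NumPy sketch (with a hypothetical mesh size) that assembles and solves this system and confirms that the nodal values reproduce the exact solution \( u=x(L-x) \) for this particular problem:

import numpy as np

L_dom = 1.0
N_n = 5                          # highest node index: N_n+1 nodes
h = L_dom/N_n
N = N_n - 2                      # unknowns c_0,...,c_N (interior nodes)
A = (np.diag(2.0*np.ones(N+1)) + np.diag(-np.ones(N), 1)
     + np.diag(-np.ones(N), -1))/h
b = 2*h*np.ones(N+1)
c = np.linalg.solve(A, b)
x_int = np.linspace(0, L_dom, N_n+1)[1:-1]      # interior nodes
print np.abs(c - x_int*(L_dom - x_int)).max()   # ~ 1e-16: exact at nodes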

 

Write out the corresponding difference equation

General equation at node \( i \):

 
$$ -\frac{1}{h}c_{i-1} + \frac{2}{h}c_{i} - \frac{1}{h}c_{i+1} = 2h $$

 

Now, \( c_i = u(\xno{i+1})\equiv u_{i+1} \). Writing out the equation at node \( i-1 \),

 
$$ -\frac{1}{h}c_{i-2} + \frac{2}{h}c_{i-1} - \frac{1}{h}c_{i} = 2h $$

 

translates directly to

 
$$ -\frac{1}{h}u_{i-1} + \frac{2}{h}u_{i} - \frac{1}{h}u_{i+1} = 2h $$

 

Comparison with a finite difference discretization

The standard finite difference method for \( -u''=2 \) is

 
$$ -\frac{1}{h^2}u_{i-1} + \frac{2}{h^2}u_{i} - \frac{1}{h^2}u_{i+1} = 2 $$

 

Multiply by \( h \)!

The finite element method and the finite difference method are identical in this example.

(Remains to study the equations at the end points, which involve boundary values - but these are also the same for the two methods)

Cellwise computations; formulas

  • Repeat the previous example, but apply the cellwise algorithm
  • Work with one cell at a time
  • Transform physical cell to reference cell \( X\in [-1,1] \)

 
$$ \begin{equation*} A_{i-1,j-1}^{(e)}=\int_{\Omega^{(e)}} \basphi_i'(x)\basphi_j'(x) \dx = \int_{-1}^1 \frac{d}{dx}\refphi_r(X)\frac{d}{dx}\refphi_s(X) \frac{h}{2} \dX, \end{equation*} $$

 

 
$$ \refphi_0(X)=\half(1-X),\quad\refphi_1(X)=\half(1+X)$$

 

 
$$ \frac{d\refphi_0}{dX} = -\half,\quad \frac{d\refphi_1}{dX} = \half $$

 

From the chain rule

 
$$ \frac{d\refphi_r}{dx} = \frac{d\refphi_r}{dX}\frac{dX}{dx} = \frac{2}{h}\frac{d\refphi_r}{dX}$$

 

Cellwise computations; details

 
$$ \begin{equation*} A_{i-1,j-1}^{(e)}=\int_{\Omega^{(e)}} \basphi_i'(x)\basphi_j'(x) \dx = \int_{-1}^1 \frac{2}{h}\frac{d\refphi_r}{dX}\frac{2}{h}\frac{d\refphi_s}{dX} \frac{h}{2} \dX = \tilde A_{r,s}^{(e)} \end{equation*} $$

 

 
$$ \begin{equation*} b_{i-1}^{(e)} = \int_{\Omega^{(e)}} 2\basphi_i(x) \dx = \int_{-1}^12\refphi_r(X)\frac{h}{2} \dX = \tilde b_{r}^{(e)}, \quad i=q(e,r),\ r=0,1 \end{equation*} $$

 

Must run through all \( r,s=0,1 \) for the element matrix and \( r=0,1 \) for the element vector, and compute each entry:

 
$$ \begin{equation*} \tilde A^{(e)} =\frac{1}{h}\left(\begin{array}{rr} 1 & -1\\ -1 & 1 \end{array}\right),\quad \tilde b^{(e)} = h\left(\begin{array}{c} 1\\ 1 \end{array}\right) \end{equation*} $$

 

Example:

 
$$ \tilde A^{(e)}_{0,1} = \int_{-1}^1 \frac{2}{h}\frac{d\refphi_0}{dX}\frac{2}{h}\frac{d\refphi_1}{dX} \frac{h}{2} \dX = \frac{2}{h}(-\half)\frac{2}{h}\half\frac{h}{2} \int_{-1}^1\dX = -\frac{1}{h} $$

 

Cellwise computations; details of boundary cells

  • The boundary cells involve only one unknown
  • \( \Omega^{(0)} \): left node value known, only a contribution from right node
  • \( \Omega^{(N_e)} \): right node value known, only a contribution from left node

For \( e=0 \) and \( e=N_e \):

 
$$ \tilde A^{(e)} =\frac{1}{h}\left(\begin{array}{r} 1 \end{array}\right),\quad \tilde b^{(e)} = h\left(\begin{array}{c} 1 \end{array}\right) $$

 

Only one degree of freedom ("node") in these cells (\( r=0 \) counts the only dof)

Cellwise computations; assembly

4 P1 elements:

vertices = [0, 0.5, 1, 1.5, 2]
cells = [[0, 1], [1, 2], [2, 3], [3, 4]]
dof_map = [[0], [0, 1], [1, 2], [2]]       # only 1 dof in elm 0, 3

Python code for the assembly algorithm:

# Ae[e][r,s]: element matrix, be[e][r]: element vector
# A[i,j]: coefficient matrix, b[i]: right-hand side

for e in range(len(Ae)):
    for r in range(Ae[e].shape[0]):
        for s in range(Ae[e].shape[1]):
            A[dof_map[e][r], dof_map[e][s]] += Ae[e][r,s]
        b[dof_map[e][r]] += be[e][r]

Result: same linear system as arose from computations in the physical domain
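
A runnable NumPy sketch of the assembly loop above, using the four-element mesh and dof_map just listed (the element matrices and vectors are those computed earlier; the boundary cells have a single dof):

import numpy as np

h = 0.5
dof_map = [[0], [0, 1], [1, 2], [2]]
A_ref = np.array([[1., -1.], [-1., 1.]])/h      # interior element matrix
b_ref = h*np.ones(2)                            # interior element vector
Ae = [A_ref[1:,1:], A_ref, A_ref, A_ref[:1,:1]] # 1x1 in boundary cells
be = [b_ref[1:],    b_ref, b_ref, b_ref[:1]]

A = np.zeros((3, 3)); b = np.zeros(3)
for e in range(len(Ae)):
    for r in range(Ae[e].shape[0]):
        for s in range(Ae[e].shape[1]):
            A[dof_map[e][r], dof_map[e][s]] += Ae[e][r,s]
        b[dof_map[e][r]] += be[e][r]
print A*h   # [[2,-1,0],[-1,2,-1],[0,-1,2]]
print b     # [2h, 2h, 2h]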

General construction of a boundary function

  • Now we address nonzero Dirichlet conditions
  • \( B(x) \) is not always easy to construct (i.e., extend to the interior of \( \Omega \)), especially not in 2D and 3D
  • With finite element basis functions, \( \basphi_i \), \( B(x) \) can be constructed in a completely general way (!)

Define

  • \( \Ifb \): set of indices with nodes where \( u \) is known
  • \( U_i \): Dirichlet value of \( u \) at node \( i \), \( i\in\Ifb \)

The general formula for \( B \) is now

 
$$ \begin{equation*} B(x) = \sum_{j\in\Ifb} U_j\basphi_j(x) \end{equation*} $$

 

Explanation

Suppose we have a Dirichlet condition \( u(\xno{k})=U_k \), \( k\in\Ifb \):

 
$$ u(\xno{k}) = \sum_{j\in\Ifb} U_j\underbrace{\basphi_j(\xno{k})}_{\neq 0 \hbox{ only for }j=k} + \sum_{j\in\If} c_j\underbrace{\basphi_{\nu(j)}(\xno{k})}_{=0,\ \nu(j)\neq k} = U_k $$

 

Example with two nonzero Dirichlet values; variational formulation

 
$$ -u''=2, \quad u(0)=C,\ u(L)=D $$

 

 
$$ \int_0^L u'v'\dx = \int_0^L2v\dx\quad\forall v\in V$$

 

 
$$ (u',v') = (2,v)\quad\forall v\in V$$

 

Example with two Dirichlet values; boundary function

 
$$ \begin{equation*} B(x) = \sum_{j\in\Ifb} U_j\basphi_j(x) \end{equation*} $$

 

Here \( \Ifb = \{0,N_n\} \), \( U_0=C \), \( U_{N_n}=D \); \( \baspsi_i \) are the internal \( \basphi_i \) functions:

 
$$ \baspsi_i = \basphi_{\nu(i)}, \quad \nu(i)=i+1,\quad i\in\If = \{0,\ldots,N=N_n-2\} $$

 

 
$$ \begin{align*} u(x) &= \underbrace{C\cdot\basphi_0 + D\basphi_{N_n}}_{B(x)} + \sum_{j\in\If} c_j\basphi_{j+1}\\ &= C\cdot\basphi_0 + D\basphi_{N_n} + c_0\basphi_1 + c_1\basphi_2 +\cdots + c_N\basphi_{N_n-1} \end{align*} $$

 

Example with two Dirichlet values; details

Insert \( u = B + \sum_j c_j\baspsi_j \) in variational formulation:

 
$$ (u',v') = (2,v)\quad\Rightarrow\quad (\sum_jc_j\baspsi_j',\baspsi_i') = (2,\baspsi_i) - (B',\baspsi_i'),\quad i\in\If$$

 

 
$$ \begin{align*} A_{i-1,j-1} &= \int_0^L \basphi_i'(x)\basphi_j'(x) \dx\\ b_{i-1} &= \int_0^L (f(x)\basphi_i(x) - B'(x)\basphi_i'(x))\dx,\quad B'(x)=C\basphi_{0}'(x) + D\basphi_{N_n}'(x) \end{align*} $$

 
for \( i,j = 1,\ldots,N+1=N_n-1 \).

New terms in the right-hand side from \( -\int B'\basphi_i'\dx \): \( C/h \) for \( i=1 \) and \( D/h \) for \( i=N_n-1 \)

Example with two Dirichlet values; cellwise computations

  • All element matrices are as in the previous example
  • New element vector in the first and last cell

From the last cell:

 
$$ \tilde b_0^{(N_e)} = \int_{-1}^1 \left(f\refphi_0 - D\frac{2}{h} \frac{d\refphi_1}{dX}\frac{2}{h}\frac{d\refphi_0}{dX}\right) \frac{h}{2} \dX = h + \frac{D}{h} $$

 

From the first cell:

 
$$ \tilde b_0^{(0)} = \int_{-1}^1 \left(f\refphi_1 - C\frac{2}{h} \frac{d\refphi_0}{dX}\frac{2}{h}\frac{d\refphi_1}{dX}\right) \frac{h}{2} \dX = h + \frac{C}{h} $$

 


Modification of the linear system; ideas

  • Method 1: incorporate Dirichlet values through a \( B(x) \) function and demand \( \baspsi_i=0 \) where Dirichlet values apply
  • Method 2: drop \( B(x) \), drop demands to \( \baspsi_i \), just assemble as if there were no Dirichlet conditions, and modify the linear system instead

Method 2: always choose \( \baspsi_i = \basphi_i \) for all \( i\in\If \) and set

 
$$ \begin{equation*} u(x) = \sum_{j\in\If}c_j\basphi_j(x),\quad \If=\{0,\ldots,N=N_n\} \end{equation*} $$

 

Attractive way of incorporating Dirichlet conditions.

\( u \) is treated as unknown at all boundaries when computing entries in the linear system

Modification of the linear system; original system

 
$$ -u''=2,\quad u(0)=0,\ u(L)=D$$

 

Assemble as if there were no Dirichlet conditions:

 
$$ \begin{equation*} \frac{1}{h}\left( \begin{array}{ccccccccc} 1 & -1 & 0 &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ -1 & 2 & -1 & \ddots & & & & & \vdots \\ 0 & -1 & 2 & -1 & \ddots & & & & \vdots \\ \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & 0 & -1 & 2 & -1 & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ \vdots & & & & &\ddots & \ddots &\ddots & -1 \\ 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & -1 & 1 \end{array} \right) \left( \begin{array}{c} c_0 \\ \vdots\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ \vdots\\ c_{N} \end{array} \right) = \left( \begin{array}{c} h \\ 2h\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 2h\\ h \end{array} \right) \end{equation*} $$

 

Modification of the linear system; row replacement

  • Dirichlet condition \( u(\xno{k})= U_k \) means \( c_k=U_k \)
    (since \( c_k=u(\xno{k}) \))
  • Replace first row by \( c_0=0 \)
  • Replace last row by \( c_N=D \)

 
$$ \begin{equation*} \frac{1}{h}\left( \begin{array}{ccccccccc} h & 0 & 0 &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ -1 & 2 & -1 & \ddots & & & & & \vdots \\ 0 & -1 & 2 & -1 & \ddots & & & & \vdots \\ \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & 0 & -1 & 2 & -1 & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ \vdots & & & & &\ddots & \ddots &\ddots & -1 \\ 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & 0 & h \end{array} \right) \left( \begin{array}{c} c_0 \\ \vdots\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ \vdots\\ c_{N} \end{array} \right) = \left( \begin{array}{c} 0 \\ 2h\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 2h\\ D \end{array} \right) \end{equation*} $$

 

Modification of the linear system; element matrix/vector

In cell 0 we know \( u \) for local node (degree of freedom) \( r=0 \). Replace the first cell equation by \( \tilde c_0 = 0 \):

 
$$ \begin{equation*} \tilde A^{(0)} = \frac{1}{h}\left(\begin{array}{rr} h & 0\\ -1 & 1 \end{array}\right),\quad \tilde b^{(0)} = \left(\begin{array}{c} 0\\ h \end{array}\right) \end{equation*} $$

 

In cell \( N_e \) we know \( u \) for local node \( r=1 \). Replace the last equation in the cell system by \( \tilde c_1=D \):

 
$$ \begin{equation*} \tilde A^{(N_e)} = \frac{1}{h}\left(\begin{array}{rr} 1 & -1\\ 0 & h \end{array}\right),\quad \tilde b^{(N_e)} = \left(\begin{array}{c} h\\ D \end{array}\right) \end{equation*} $$

 

Symmetric modification of the linear system; algorithm

  • The modification above destroys symmetry of the matrix: e.g., \( A_{0,1}\neq A_{1,0} \)
  • Symmetry is often important in 2D and 3D
    (faster computations, less storage)
  • A more complex modification can preserve symmetry!

Algorithm for incorporating \( c_i=U_i \) in a symmetric way (a code sketch follows the list):

  1. Subtract column \( i \) times \( U_i \) from the right-hand side
  2. Zero out column and row no \( i \)
  3. Place 1 on the diagonal
  4. Set \( b_i=U_i \)
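
A minimal NumPy sketch of these four steps, operating on an already assembled \( A \) and \( b \) (the function name is ours):

import numpy as np

def symmetric_dirichlet(A, b, i, U_i):
    """Impose c_i = U_i in A c = b while keeping A symmetric (sketch)."""
    b -= U_i*A[:,i]    # 1. subtract column i times U_i from the rhs
    A[i,:] = 0.0       # 2. zero out row i ...
    A[:,i] = 0.0       #    ... and column i
    A[i,i] = 1.0       # 3. place 1 on the diagonal
    b[i] = U_i         # 4. set b_i = U_i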

Symmetric modification of the linear system; example

 
$$ \begin{equation*} \frac{1}{h}\left( \begin{array}{ccccccccc} h & 0 & 0 &\cdots & \cdots & \cdots & \cdots & \cdots & 0 \\ 0 & 2 & -1 & \ddots & & & & & \vdots \\ 0 & -1 & 2 & -1 & \ddots & & & & \vdots \\ \vdots & \ddots & & \ddots & \ddots & 0 & & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & 0 & -1 & 2 & -1 & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \ddots &\ddots & 0 \\ \vdots & & & & &\ddots & \ddots &\ddots & 0 \\ 0 &\cdots & \cdots &\cdots & \cdots & \cdots & 0 & 0 & h \end{array} \right) \left( \begin{array}{c} c_0 \\ \vdots\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ \vdots\\ c_{N} \end{array} \right) = \left( \begin{array}{c} 0 \\ 2h\\ \vdots\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 2h +\frac{D}{h}\\ D \end{array} \right) \end{equation*} $$

 

Symmetric modification of the linear system; element level

Symmetric modification applied to \( \tilde A^{(N_e)} \):

 
$$ \begin{equation*} \tilde A^{(N_e)} = \frac{1}{h}\left(\begin{array}{rr} 1 & 0\\ 0 & h \end{array}\right),\quad \tilde b^{(N_e)} = \left(\begin{array}{c} h + D/h\\ D \end{array}\right) \end{equation*} $$

 

Boundary conditions: specified derivative

Neumann conditions.

How can we incorporate \( u'(0)=C \) with finite elements?

 
$$ -u''=f,\quad u'(0)=C,\ u(L)=D$$

 

  • \( \baspsi_i(L)=0 \) because of Dirichlet condition \( u(L)=D \)
    (or no demand and modify linear system)
  • No demand to \( \baspsi_i(0) \)
  • The condition \( u'(0)=C \) will be handled (as usual) through a boundary term arising from integration by parts

The variational formulation

Galerkin's method:

 
$$ \begin{equation*} \int_0^L(u''(x)+f(x))\baspsi_i(x) dx = 0,\quad i\in\If \end{equation*} $$

 

Integration of \( u''\baspsi_i \) by parts:

 
$$ \begin{equation*} \int_0^Lu'(x)\baspsi_i'(x) \dx -(u'(L)\baspsi_i(L) - u'(0)\baspsi_i(0)) - \int_0^L f(x)\baspsi_i(x) \dx =0 \end{equation*} $$

 

  • \( u'(L){\baspsi_i(L)}=0 \) since \( \baspsi_i(L)=0 \)
  • \( u'(0)\baspsi_i(0) = C\baspsi_i(0) \) since \( u'(0)=C \)

Method 1: Boundary function and exclusion of Dirichlet degrees of freedom

  • \( \baspsi_i = \basphi_i \), \( i\in\If =\{0,\ldots,N=N_n-1\} \)
  • \( B(x)=D\basphi_{N_n}(x) \), \( u= B + \sum_{j=0}^N c_j\basphi_j \)

 
$$ \begin{equation*} \int_0^Lu'(x)\basphi_i'(x) dx = \int_0^L f(x)\basphi_i(x) dx - C\basphi_i(0),\quad i\in\If \end{equation*} $$

 

 
$$ \begin{equation*} \sum_{j=0}^{N}\left( \int_0^L \basphi_i'\basphi_j' dx \right)c_j = \int_0^L\left(f\basphi_i -D\basphi_{N_n}'\basphi_i'\right) dx - C\basphi_i(0) \end{equation*} $$

 
for \( i=0,\ldots,N=N_n-1 \).

Method 2: Use all \( \basphi_i \) and insert the Dirichlet condition in the linear system

  • Now \( \baspsi_i=\basphi_i \), \( i=0,\ldots,N=N_n \)
  • \( \basphi_N(L)\neq 0 \), so \( u'(L)\basphi_N(L)\neq 0 \)
  • However, the term \( u'(L)\basphi_N(L) \) in \( b_N \) will be erased when we insert the Dirichlet value in \( b_N=D \)

We can therefore forget about the term \( u'(L)\basphi_i(L) \)!

Boundary terms \( u'\basphi_i \) at points \( \xno{i} \) where Dirichlet values apply can always be forgotten.

 
$$ \begin{equation*} u(x) = \sum_{j=0}^{N=N_n} c_j\basphi_j(x) \end{equation*} $$

 

 
$$ \begin{equation*} \sum_{j=0}^{N=N_n}\left( \int_0^L \basphi_i'(x)\basphi_j'(x) dx \right)c_j = \int_0^L f(x)\basphi_i(x) dx - C\basphi_i(0) \end{equation*} $$

 

Assemble entries for \( i=0,\ldots,N=N_n \) and then modify the last equation to \( c_N=D \)

How the Neumann condition impacts the element matrix and vector

The extra term \( C\basphi_0(0) \) affects only the element vector from the first cell since \( \basphi_0=0 \) on all other cells.

 
$$ \begin{equation*} \tilde A^{(0)} = \frac{1}{h}\left(\begin{array}{rr} 1 & -1\\ -1 & 1 \end{array}\right),\quad \tilde b^{(0)} = \left(\begin{array}{c} h - C\\ h \end{array}\right) \end{equation*} $$

 

The finite element algorithm

The differential equation problem defines the integrals in the variational formulation.

Request these functions from the user:

integrand_lhs(phi, r, s, x)
boundary_lhs(phi, r, s, x)
integrand_rhs(phi, r, x)
boundary_rhs(phi, r, x)

Must also have a mesh with vertices, cells, and dof_map

Python pseudo code; the element matrix and vector

<Declare global matrix, global rhs: A, b>

# Loop over all cells
for e in range(len(cells)):

    # Compute element matrix and vector
    n = len(dof_map[e])  # no of dofs in this element
    h = vertices[cells[e][1]] - vertices[cells[e][0]]
    <Declare element matrix, element vector: A_e, b_e>

    # Integrate over the reference cell
    points, weights = <numerical integration rule>
    for X, w in zip(points, weights):
        phi = <basis functions + derivatives at X>
        detJ = h/2
        x = <affine mapping from X>
        for r in range(n):
            for s in range(n):
                A_e[r,s] += integrand_lhs(phi, r, s, x)*detJ*w
            b_e[r] += integrand_rhs(phi, r, x)*detJ*w

    # Add boundary terms
    for r in range(n):
        for s in range(n):
            A_e[r,s] += boundary_lhs(phi, r, s, x)*detJ*w
        b_e[r] += boundary_rhs(phi, r, x)*detJ*w

Python pseudo code; boundary conditions and assembly

for e in range(len(cells)):
    ...

    # Incorporate essential boundary conditions
    for r in range(n):
        global_dof = dof_map[e][r]
        if global_dof in essbc_dofs:
            # dof r is subject to an essential condition
            value = essbc_dofs[global_dof]
            # Symmetric modification
            b_e -= value*A_e[:,r]
            A_e[r,:] = 0
            A_e[:,r] = 0
            A_e[r,r] = 1
            b_e[r] = value

    # Assemble
    for r in range(n):
        for s in range(n):
            A[dof_map[e][r], dof_map[e][s]] += A_e[r,s]
        b[dof_map[e][r]] += b_e[r]

<solve linear system>

Variational formulations in 2D and 3D

How to do integration by parts is the major difference when moving to 2D and 3D.

Integration by parts

Rule for multi-dimensional integration by parts.

 
$$ \begin{equation*} -\int_{\Omega} \nabla\cdot (a(\x)\nabla u) v\dx = \int_{\Omega} a(\x)\nabla u\cdot\nabla v \dx - \int_{\partial\Omega} a\frac{\partial u}{\partial n} v \ds \end{equation*} $$

 

  • \( \int_\Omega ()\dx \): area (2D) or volume (3D) integral
  • \( \int_{\partial\Omega} ()\ds \): line (2D) or surface (3D) integral

  • \( \partial\Omega_N \): Neumann conditions \( -a\frac{\partial u}{\partial n} = g \)
  • \( \partial\Omega_D \): Dirichlet conditions \( u = u_0 \)
  • \( v\in V \) must vanish on \( \partial\Omega_D \) (in method 1)

Example on integration by parts; problem

 
$$ \begin{align*} \v\cdot\nabla u + \alpha u &= \nabla\cdot\left( a\nabla u\right) + f, \quad & \x\in\Omega\\ u &= u_0,\quad &\x\in\partial\Omega_D\\ -a\frac{\partial u}{\partial n} &= g,\quad &\x\in\partial\Omega_N \end{align*} $$

 

  • Known: \( a \), \( \alpha \), \( f \), \( u_0 \), and \( g \).
  • Second-order PDE: must have exactly one boundary condition at each point of the boundary

Method 1 with boundary function and \( \baspsi_i=0 \) on \( \partial\Omega_D \):

 
$$ u(\x) = B(\x) + \sum_{j\in\If} c_j\baspsi_j(\x),\quad B(\x)=u_0(\x) $$

 

Example on integration by parts in 1D/2D/3D

Galerkin's method: multiply by \( v\in V \) and integrate over \( \Omega \),

 
$$ \int_{\Omega} (\v\cdot\nabla u + \alpha u)v\dx = \int_{\Omega} \nabla\cdot\left( a\nabla u\right)v\dx + \int_{\Omega}fv \dx $$

 

Integrate the second-order term by parts:

 
$$ \int_{\Omega} \nabla\cdot\left( a\nabla u\right) v \dx = -\int_{\Omega} a\nabla u\cdot\nabla v\dx + \int_{\partial\Omega} a\frac{\partial u}{\partial n} v\ds, $$

 

Result:

 
$$ \int_{\Omega} (\v\cdot\nabla u + \alpha u)v\dx = -\int_{\Omega} a\nabla u\cdot\nabla v\dx + \int_{\partial\Omega} a\frac{\partial u}{\partial n} v\ds + \int_{\Omega} fv \dx $$

 

Incorporation of the Neumann condition in the variational formulation

Note: on the boundary, \( v\neq 0 \) only on \( \partial\Omega_N \) (since \( v=0 \) on \( \partial\Omega_D \)):

 
$$ \int_{\partial\Omega} a\frac{\partial u}{\partial n} v\ds = \int_{\partial\Omega_N} \underbrace{a\frac{\partial u}{\partial n}}_{-g} v\ds = -\int_{\partial\Omega_N} gv\ds $$

 

The final variational form:

 
$$ \int_{\Omega} (\v\cdot\nabla u + \alpha u)v\dx = -\int_{\Omega} a\nabla u\cdot\nabla v \dx - \int_{\partial\Omega_N} g v\ds + \int_{\Omega} fv \dx $$

 

Or with inner product notation:

 
$$ (\v\cdot\nabla u, v) + (\alpha u,v) = - (a\nabla u,\nabla v) - (g,v)_{N} + (f,v) $$

 

\( (g,v)_{N} \): line or surface integral over \( \partial\Omega_N \).

Derivation of the linear system

  • \( \forall v\in V \) is replaced by for all \( \baspsi_i \), \( i\in\If \)
  • Insert \( u = B + \sum_{j\in\If} c_j\baspsi_j \), \( B = u_0 \), in the variational form
  • Identify \( i,j \) terms (matrix) and \( i \) terms (right-hand side)
  • Write on the form \( \sum_{j\in\If}A_{i,j}c_j = b_i \), \( i\in\If \)

 
$$ A_{i,j} = (\v\cdot\nabla \baspsi_j, \baspsi_i) + (\alpha \baspsi_j ,\baspsi_i) + (a\nabla \baspsi_j,\nabla \baspsi_i) $$

 

 
$$ b_i = (f,\baspsi_i) - (g,\baspsi_i)_{N} - (\v\cdot\nabla u_0, \baspsi_i) - (\alpha u_0 ,\baspsi_i) - (a\nabla u_0,\nabla \baspsi_i) $$

 

Transformation to a reference cell in 2D/3D (1)

We want to compute an integral in the physical domain by integrating over the reference cell.

 
$$ \begin{equation*} \int_{{\Omega}^{(e)}} a(\x)\nabla\basphi_i\cdot\nabla\basphi_j\dx \end{equation*} $$

 

Mapping from reference to physical coordinates:

 
$$ \x(\X) $$

 

with Jacobian \( J \),

 
$$ J_{i,j}=\frac{\partial x_j}{\partial X_i} $$

 

Transformation to a reference cell in 2D/3D (2)

  • \( \dx \rightarrow \det J\dX \).
  • Must express \( \nabla\basphi_i \) by an expression with \( \refphi_r \), \( i=q(e,r) \): \( \nabla\refphi_r(\X) \)
  • We want \( \nabla_{\x}\refphi_r(\X) \) (derivatives wrt \( \x \))
  • What we readily have is \( \nabla_{\X}\refphi_r(\X) \) (derivative wrt \( \X \))
  • Need to transform \( \nabla_{\X}\refphi_r(\X) \) to \( \nabla_{\x}\refphi_r(\X) \)

Transformation to a reference cell in 2D/3D (3)

Can derive

 
$$ \begin{align*} \nabla_{\X}\refphi_r &= J\cdot\nabla_{\x}\basphi_i\\ \nabla_{\x}\basphi_i &= \nabla_{\x}\refphi_r(\X) = J^{-1}\cdot\nabla_{\X}\refphi_r(\X) \end{align*} $$

 

Integral transformation from physical to reference coordinates:

 
$$ \begin{equation*} \int_{\Omega^{(e)}} a(\x)\nabla_{\x}\basphi_i\cdot\nabla_{\x}\basphi_j\dx = \int_{\tilde\Omega^r} a(\x(\X))(J^{-1}\cdot\nabla_{\X}\refphi_r)\cdot (J^{-1}\cdot\nabla_{\X}\refphi_s)\det J\dX \end{equation*} $$

 

Numerical integration

Numerical integration over reference cell triangles and tetrahedra:

 
$$ \int_{\tilde\Omega^r} g\dX = \sum_{j=0}^{n-1} w_j g(\bar\X_j)$$

 

Module numint.py contains different rules:

>>> import numint
>>> x, w = numint.quadrature_for_triangles(num_points=3)
>>> x
[(0.16666666666666666, 0.16666666666666666),
 (0.66666666666666666, 0.16666666666666666),
 (0.16666666666666666, 0.66666666666666666)]
>>> w
[0.16666666666666666, 0.16666666666666666, 0.16666666666666666]

  • Triangle: rules with \( n=1,3,4,7 \) integrate exactly polynomials of degree \( 1,2,3,4 \), resp.
  • Tetrahedron: rules with \( n=1,4,5,11 \) integrate exactly polynomials of degree \( 1,2,3,4 \), resp.
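
A quick check of the triangle rule above on a degree-2 integrand, reusing the points and weights just shown (assuming numint.py is on the path):

import numint

x, w = numint.quadrature_for_triangles(num_points=3)
g = lambda X, Y: X**2   # exact integral over the reference triangle: 1/12
print sum(wj*g(*xj) for xj, wj in zip(x, w))   # 0.08333... = 1/12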

Time-dependent problems

  • So far: used the finite element framework for discretizing in space
  • What about \( u_t = u_{xx} + f \)?
    1. Use finite differences in time to obtain a set of recursive spatial problems
    2. Solve the spatial problems by the finite element method

Example: diffusion problem

 
$$ \begin{align*} \frac{\partial u}{\partial t} &= \dfc\nabla^2 u + f(\x, t),\quad &\x\in\Omega, t\in (0,T]\\ u(\x, 0) & = I(\x),\quad &\x\in\Omega\\ \frac{\partial u}{\partial n} &= 0,\quad &\x\in\partial\Omega,\ t\in (0,T] \end{align*} $$

 

A Forward Euler scheme; ideas

 
$$ \begin{equation*} [D_t^+ u = \dfc\nabla^2 u + f]^n,\quad n=0,1,\ldots,N_t-1 \end{equation*} $$

 

Solving wrt \( u^{n+1} \):

 
$$ \begin{equation*} u^{n+1} = u^n + \Delta t \left( \dfc\nabla^2 u^n + f(\x, t_n)\right) \end{equation*} $$

 

  • \( u^n = \sum_jc_j^n\baspsi_j\, \in V \), \( u^{n+1} = \sum_jc_j^{n+1}\baspsi_j\,\in V \)
  • Compute \( u^0 \) from \( I \)
  • Compute \( u^{n+1} \) from \( u^n \) by solving the PDE for \( u^{n+1} \) at each time level

A Forward Euler scheme; stages in the discretization

  • \( \uex(\x,t) \): exact solution of the PDE problem
  • \( \uex^n(\x) \): exact solution of time-discrete problem (after applying a finite difference scheme in time)
  • \( \uex^n(\x)\approx u^n = \sum_{j\in\If}c_j^n\baspsi_j = \) solution of the time- and space-discrete problem (after applying a Galerkin method in space)

 
$$ \begin{equation*} \frac{\partial \uex}{\partial t} = \dfc\nabla^2 \uex + f(\x, t) \end{equation*} $$

 

 
$$ \begin{equation*} \uex^{n+1} = \uex^n + \Delta t \left( \dfc\nabla^2 \uex^n + f(\x, t_n)\right) \end{equation*} $$

 

 
$$ \uex^n \approx u^n = \sum_{j=0}^{N} c_j^{n}\baspsi_j(\x),\quad \uex^{n+1} \approx u^{n+1} = \sum_{j=0}^{N} c_j^{n+1}\baspsi_j(\x) $$

 

 
$$ R = u^{n+1} - u^n - \Delta t \left( \dfc\nabla^2 u^n + f(\x, t_n)\right)$$

 

A Forward Euler scheme; weighted residual (or Galerkin) principle

 
$$ R = u^{n+1} - u^n - \Delta t \left( \dfc\nabla^2 u^n + f(\x, t_n)\right)$$

 

The weighted residual principle:

 
$$ \int_\Omega Rw\dx = 0,\quad \forall w\in W$$

 

results in

 
$$ \int_\Omega \left\lbrack u^{n+1} - u^n - \Delta t \left( \dfc\nabla^2 u^n + f(\x, t_n)\right) \right\rbrack w \dx =0, \quad \forall w \in W $$

 

Galerkin: \( W=V \), \( w=v \)

A Forward Euler scheme; integration by parts

Isolating the unknown \( u^{n+1} \) on the left-hand side:

 
$$ \int_{\Omega} u^{n+1}v\dx = \int_{\Omega} \left\lbrack u^n + \Delta t \left( \dfc\nabla^2 u^n + f(\x, t_n)\right) \right\rbrack v\dx $$

 

Integration by parts of \( \int\dfc(\nabla^2 u^n) v\dx \):

 
$$ \int_{\Omega}\dfc(\nabla^2 u^n)v \dx = -\int_{\Omega}\dfc\nabla u^n\cdot\nabla v\dx + \underbrace{\int_{\partial\Omega}\dfc\frac{\partial u^n}{\partial n}v \ds}_{=0\quad\Leftarrow\quad\partial u^n/\partial n=0} $$

 

Variational form:

 
$$ \begin{equation*} \int_{\Omega} u^{n+1} v\dx = \int_{\Omega} u^n v\dx - \Delta t \int_{\Omega}\dfc\nabla u^n\cdot\nabla v\dx + \Delta t\int_{\Omega}f^n v\dx,\quad\forall v\in V \end{equation*} $$

 

New notation for the solution at the most recent time levels

  • \( u \) (u in the code): the spatial unknown function to be computed
  • \( u_1 \) (u_1 in the code): the spatial function at the previous time level \( t-\Delta t \)
  • \( u_2 \) (u_2 in the code): the spatial function at \( t-2\Delta t \)
  • This new notation gives close correspondence between code and math

 
$$ \begin{equation*} \int_{\Omega} u v\dx = \int_{\Omega} u_1 v\dx - \Delta t \int_{\Omega}\dfc\nabla u_1\cdot\nabla v\dx + \Delta t\int_{\Omega}f^n v\dx \end{equation*} $$

 

or shorter

 
$$ \begin{equation*} (u,v) = (u_1,v) - \Delta t (\dfc\nabla u_1,\nabla v) + \Delta t (f^n, v) \end{equation*} $$

 

Deriving the linear systems

  • \( u = \sum_{j=0}^{N}c_j\baspsi_j(\x) \)
  • \( u_1 = \sum_{j=0}^{N} c_{1,j}\baspsi_j(\x) \)
  • \( \forall v\in V \): it suffices to require the equation for \( v=\baspsi_i \), \( i=0,\ldots,N \)

Insert these in

 
$$ (u, \baspsi_i) = (u_1,\baspsi_i) - \Delta t (\dfc\nabla u_1,\nabla\baspsi_i) + \Delta t (f^n,\baspsi_i) $$

 

and order terms as matrix-vector products (\( \quad i=0,\ldots,N \)):

 
$$ \begin{equation*} \sum_{j=0}^{N} \underbrace{(\baspsi_i,\baspsi_j)}_{M_{i,j}} c_j = \sum_{j=0}^{N} \underbrace{(\baspsi_i,\baspsi_j)}_{M_{i,j}} c_{1,j} -\Delta t \sum_{j=0}^{N} \underbrace{(\nabla\baspsi_i,\dfc\nabla\baspsi_j)}_{K_{i,j}} c_{1,j} + \Delta t (f^n,\baspsi_i) \end{equation*} $$

 

Structure of the linear systems

 
$$ \begin{equation*} Mc = Mc_1 - \Delta t Kc_1 + \Delta t f \end{equation*} $$

 

 
$$ \begin{align*} M &= \{M_{i,j}\},\quad M_{i,j}=(\baspsi_i,\baspsi_j),\quad i,j\in\If\\ K &= \{K_{i,j}\},\quad K_{i,j}=(\nabla\baspsi_i,\dfc\nabla\baspsi_j),\quad i,j\in\If\\ f &= \{(f(\x,t_n),\baspsi_i)\}_{i\in\If}\\ c &= \{c_i\}_{i\in\If}\\ c_1 &= \{c_{1,i}\}_{i\in\If} \end{align*} $$

 

Computational algorithm

  1. Compute \( M \) and \( K \).
  2. Initialize \( u^0 \) by either interpolation or projection
  3. For \( n=1,2,\ldots,N_t \):
    1. compute \( b = Mc_1 - \Delta t Kc_1 + \Delta t f \)
    2. solve \( Mc = b \)
    3. set \( c_1 = c \)

Initial condition:

  • Either interpolation: \( c_{1,j} = I(\x_j) \) (finite elements)
  • Or projection: solve \( \sum_j M_{i,j}c_{1,j} = (I,\baspsi_i) \), \( i\in\If \)
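
A minimal sketch of this algorithm, assuming \( M \) and \( K \) are pre-assembled scipy.sparse matrices and load_vector(t) is a hypothetical helper returning the vector \( \{(f(\x,t),\baspsi_i)\}_{i\in\If} \):

import scipy.sparse.linalg as spla

def forward_euler_diffusion(M, K, c0, load_vector, dt, Nt):
    solve = spla.factorized(M.tocsc())   # factor M once, reuse every step
    c_1 = c0.copy()                      # c_1: coefficients at level n-1
    for n in range(1, Nt + 1):
        t_prev = (n - 1)*dt              # Forward Euler evaluates f at the old level
        b = M.dot(c_1) - dt*K.dot(c_1) + dt*load_vector(t_prev)
        c_1 = solve(b)                   # solve M c = b, then shuffle c_1 <- c
    return c_1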

Comparing P1 elements with the finite difference method; ideas

  • P1 elements in 1D
  • Uniform mesh on \( [0,L] \) with cell length \( h \)
  • No Dirichlet conditions: \( \baspsi_i=\basphi_i \), \( i=0,\ldots,N=N_n \)
  • Have found formulas for \( M \) and \( K \) at the element level
  • Have assembled the global matrices
  • Have developed corresponding finite difference operator formulas
  • \( M \): \( h[D_t^+(u + \frac{1}{6}h^2D_xD_x u)]^n_i \)
  • \( K \): \( h[\dfc D_xD_x u]^n_i \)

Comparing P1 elements with the finite difference method; results

The diffusion equation solved with P1 finite elements is equivalent to

 
$$ \begin{equation*} [D_t^+(u + \frac{1}{6}h^2D_xD_x u) = \dfc D_xD_x u + f]^n_i \end{equation*} $$

 

Can lump the mass matrix by Trapezoidal integration and get the standard finite difference scheme

 
$$ \begin{equation*} [D_t^+u = \dfc D_xD_x u + f]^n_i \end{equation*} $$
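
For P1 elements, lumping by the Trapezoidal rule amounts to replacing \( M \) by the diagonal matrix of its row sums; a sketch for a scipy.sparse matrix M:

import numpy as np
import scipy.sparse as sp

def lump(M):
    # Diagonal matrix of row sums; for P1 mass matrices this equals
    # the Trapezoidal-rule (lumped) mass matrix
    return sp.diags(np.asarray(M.sum(axis=1)).ravel())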

 

Discretization in time by a Backward Euler scheme

Backward Euler scheme in time:

 
$$ [D_t^- u = \dfc\nabla^2 u + f(\x, t)]^n $$

 

 
$$ \begin{equation*} \uex^{n} - \Delta t \left( \dfc\nabla^2 \uex^n + f(\x, t_{n})\right) = \uex^{n-1} \end{equation*} $$

 

 
$$ \uex^n \approx u^n = \sum_{j=0}^{N} c_j^{n}\baspsi_j(\x),\quad \uex^{n-1} \approx u^{n-1} = \sum_{j=0}^{N} c_j^{n-1}\baspsi_j(\x)$$

 

The variational form of the time-discrete problem

 
$$ \begin{equation*} \int_{\Omega} \left( u^{n}v + \Delta t \dfc\nabla u^n\cdot\nabla v\right)\dx = \int_{\Omega} u^{n-1} v\dx + \Delta t\int_{\Omega}f^n v\dx,\quad\forall v\in V \end{equation*} $$

 

or

 
$$ \begin{equation*} (u,v) + \Delta t (\dfc\nabla u,\nabla v) = (u_1,v) + \Delta t (f^n,v) \end{equation*} $$

 

The linear system: insert \( u=\sum_j c_j\baspsi_j \) and \( u_1=\sum_j c_{1,j}\baspsi_j \),

 
$$ \begin{equation*} (M + \Delta t K)c = Mc_1 + \Delta t f \end{equation*} $$
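
A sketch of the Backward Euler update with scipy.sparse (names are illustrative); since \( A=M+\Delta t K \) is constant in time, it pays to factor it once:

import scipy.sparse.linalg as spla

def backward_euler_solver(M, K, dt):
    A = (M + dt*K).tocsc()
    solve = spla.factorized(A)       # factor A = M + dt*K once
    def step(c_1, f_n):
        # f_n: the vector {(f(x, t_n), psi_i)}; returns c at the new level
        return solve(M.dot(c_1) + dt*f_n)
    return step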

 

Calculations with P1 elements in 1D

Can interpret the resulting equation system as

 
$$ \begin{equation*} [D_t^-(u + \frac{1}{6}h^2D_xD_x u) = \dfc D_xD_x u + f]^n_i \end{equation*} $$

 

Lumped mass matrix (by Trapezoidal integration) gives a standard finite difference method:

 
$$ \begin{equation*} [D_t^- u = \dfc D_xD_x u + f]^n_i \end{equation*} $$

 

Dirichlet boundary conditions

Dirichlet condition on \( \partial\Omega_D \) and Neumann condition on \( \partial\Omega_N \):

 
$$ \begin{align*} u(\x,t) &= u_0(\x,t),\quad & \x\in\partial\Omega_D\\ -\dfc\frac{\partial}{\partial n} u(\x,t) &= g(\x,t),\quad & \x\in\partial{\Omega}_N \end{align*} $$

 

Forward Euler in time, Galerkin's method, and integration by parts:

 
$$ \begin{equation*} \int_\Omega u^{n+1}v\dx = \int_\Omega (u^n v - \Delta t\dfc\nabla u^n\cdot\nabla v)\dx + \Delta t\int_\Omega f^n v\dx - \Delta t\int_{\partial\Omega_N} gv\ds,\quad \forall v\in V \end{equation*} $$

 

Requirement: \( v=0 \) on \( \partial\Omega_D \)

Boundary function

 
$$ u^n(\x) = u_0(\x,t_n) + \sum_{j\in\If}c_j^n\baspsi_j(\x)$$

 

 
$$ \begin{align*} \sum_{j\in\If} & \left(\int_\Omega \baspsi_i\baspsi_j\dx\right) c^{n+1}_j = \sum_{j\in\If} \left(\int_\Omega\left( \baspsi_i\baspsi_j - \Delta t\dfc\nabla \baspsi_i\cdot\nabla\baspsi_j\right)\dx\right) c_j^n - \\ &\quad \int_\Omega\left( u_0(\x,t_{n+1}) - u_0(\x,t_n) + \Delta t\dfc\nabla u_0(\x,t_n)\cdot\nabla \baspsi_i\right)\dx \\ & \quad + \Delta t\int_\Omega f\baspsi_i\dx - \Delta t\int_{\partial\Omega_N} g\baspsi_i\ds, \quad i\in\If \end{align*} $$

 

Finite element basis functions

  • \( B(\x,t_n)=\sum_{j\in\Ifb} U_j^n\basphi_j \)
  • \( \baspsi_i = \basphi_{\nu(i)} \), \( i\in\If \)
  • \( \nu(j) \), \( j\in\If \), are the node numbers corresponding to all nodes without a Dirichlet condition

 
$$ \begin{align*} u^n &= \sum_{j\in\Ifb} U_j^n\basphi_j + \sum_{j\in\If}c_{1,j}\basphi_{\nu(j)},\\ u^{n+1} &= \sum_{j\in\Ifb} U_j^{n+1}\basphi_j + \sum_{j\in\If}c_{j}\basphi_{\nu(j)} \end{align*} $$

 

 
$$ \begin{align*} \sum_{j\in\If} & \left(\int_\Omega \basphi_i\basphi_j\dx\right) c_j = \sum_{j\in\If} \left(\int_\Omega\left( \basphi_i\basphi_j - \Delta t\dfc\nabla \basphi_i\cdot\nabla\basphi_j\right)\dx\right) c_{1,j} - \\ &\quad \sum_{j\in\Ifb}\int_\Omega\left( \basphi_i\basphi_j(U_j^{n+1} - U_j^n) + \Delta t\dfc\nabla \basphi_i\cdot\nabla \basphi_jU_j^n\right)\dx \\ &\quad + \Delta t\int_\Omega f\basphi_i\dx - \Delta t\int_{\partial\Omega_N} g\basphi_i\ds, \quad i\in\If \end{align*} $$

 

Modification of the linear system; the raw system

  • Drop boundary function
  • Compute as if there were no Dirichlet conditions
  • Modify the linear system to incorporate Dirichlet conditions
  • \( \If \) holds the indices of all nodes \( \{0,1,\ldots,N=N_n\} \)

 
$$ \begin{align*} \sum_{j\in\If} \biggl(\underbrace{\int_\Omega \basphi_i\basphi_j\dx}_{M_{i,j}}\biggr) c_j &= \sum_{j\in\If} \biggl(\underbrace{\int_\Omega \basphi_i\basphi_j \dx}_{M_{i,j}} - \Delta t\underbrace{\int_\Omega \dfc\nabla \basphi_i\cdot\nabla\basphi_j\dx}_{K_{i,j}}\biggr) c_{1,j} \\ &\quad + \underbrace{\Delta t\int_\Omega f\basphi_i\dx - \Delta t\int_{\partial\Omega_N} g\basphi_i\ds}_{\Delta t\, f_i},\quad i\in\If \end{align*} $$

 

Modification of the linear system; setting Dirichlet conditions

 
$$ \begin{equation*} Mc = b,\quad b = Mc_1 - \Delta t Kc_1 + \Delta t f \end{equation*} $$

 

For each \( k \) where a Dirichlet condition applies, \( u(\xno{k},t_{n+1})=U_k^{n+1} \),

  • set row \( k \) in \( M \) to zero and 1 on the diagonal: \( M_{k,j}=0 \), \( j\in\If \), \( M_{k,k}=1 \)
  • \( b_k = U_k^{n+1} \)

Or apply the slightly more complicated modification which preserves symmetry of \( M \)
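
A sketch of the simple (non-symmetric) modification for a dense NumPy system, where the hypothetical dict dirichlet maps each constrained node \( k \) to its boundary value \( U_k^{n+1} \):

import numpy as np

def apply_dirichlet(A, b, dirichlet):
    # dirichlet: dict mapping node k -> boundary value U_k
    for k, U_k in dirichlet.items():
        A[k, :] = 0.0      # zero out row k ...
        A[k, k] = 1.0      # ... and put 1 on the diagonal
        b[k] = U_k         # right-hand side carries the boundary value
    return A, b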

Modification of the linear system; Backward Euler example

Backward Euler discretization in time gives a more complicated coefficient matrix:

 
$$ \begin{equation*} Ac=b,\quad A = M + \Delta t K,\quad b = Mc_1 + \Delta t f \end{equation*} $$

 

  • Set row \( k \) to zero and 1 on the diagonal: \( M_{k,j}=0 \), \( j\in\If \), \( M_{k,k}=1 \)
  • Set row \( k \) to zero: \( K_{k,j}=0 \), \( j\in\If \)
  • \( b_k = U_k^{n+1} \)

Observe: \( A_{k,k} = M_{k,k} + \Delta t K_{k,k} = 1 + 0 \), so \( c_k = U_k^{n+1} \)

Analysis of the discrete equations

The diffusion equation \( u_t = \dfc u_{xx} \) allows a (Fourier) wave component

 
$$ \begin{equation*} u = \Aex^n e^{ikx},\quad \Aex = e^{-\dfc k^2\Delta t} \end{equation*} $$

 

Numerical schemes often allow a similar solution at mesh point \( q \):

 
$$ \begin{equation*} u^n_q = A^n e^{ikq\Delta x} \end{equation*} $$

 

  • \( A \): amplification factor to be computed
  • How good is this \( A \) compared to the exact one?

Handy formulas

 
$$ \begin{align*} [D_t^+ A^n e^{ikq\Delta x}]^n &= A^n e^{ikq\Delta x}\frac{A-1}{\Delta t},\\ [D_t^- A^n e^{ikq\Delta x}]^n &= A^n e^{ikq\Delta x}\frac{1-A^{-1}}{\Delta t},\\ [D_t A^n e^{ikq\Delta x}]^{n+\half} &= A^{n+\half} e^{ikq\Delta x}\frac{A^{\half}-A^{-\half}}{\Delta t} = A^ne^{ikq\Delta x}\frac{A-1}{\Delta t},\\ [D_xD_x A^ne^{ikq\Delta x}]_q &= -A^n \frac{4}{\Delta x^2}\sin^2\left(\frac{k\Delta x}{2}\right) \end{align*} $$

 

Amplification factor for the Forward Euler method; results

Introduce \( p=k\Delta x/2 \) and \( C=\dfc\Delta t/\Delta x^2 \):

 
$$ A = 1 - 4C\frac{\sin^2 p}{1 - \underbrace{\frac{2}{3}\sin^2 p}_{\hbox{from }M}}$$

 

(See notes for details)

Stability: \( |A|\leq 1 \):

 
$$ \begin{equation*} C\leq \frac{1}{6}\quad\Rightarrow\quad \Delta t\leq \frac{\Delta x^2}{6\dfc} \end{equation*} $$

 

Finite differences: \( C\leq {\half} \), so finite elements give a stricter stability criterion for this PDE!
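
A quick numerical check of this bound (a sketch; \( p=k\Delta x/2 \) sampled over \( [0,\pi/2] \)):

import numpy as np

def A_FE(p, C):
    # Forward Euler/P1 amplification factor derived above
    return 1 - 4*C*np.sin(p)**2/(1 - (2.0/3)*np.sin(p)**2)

p = np.linspace(0, np.pi/2, 101)
for C in (1.0/6, 0.2):
    print(C, np.all(np.abs(A_FE(p, C)) <= 1 + 1e-12))
# prints True for C=1/6 (stable) and False for C=0.2 (unstable)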

Amplification factor for the Backward Euler method; results

Behavior on coarse meshes (plots in the notes):

 
$$ A = \left( 1 + 4C\frac{\sin^2 p}{1 - \frac{2}{3}\sin^2 p}\right)^{-1} \hbox{ (unconditionally stable)} $$

 

Amplification factors for smaller time steps; Forward Euler

(See plots in the notes)

Amplification factors for smaller time steps; Backward Euler

(See plots in the notes)