Look up the topic of vector space in your favorite linear algebra book or search for the term at Wikipedia. Prove that vectors in the plane \( (a,b) \) form a vector space by showing that all the axioms of a vector space are satisfied. Similarly, prove that all linear functions of the form \( ax+b \) constitute a vector space, \( a,b\in\Real \).
In contrast, show that all quadratic functions of the form \( 1 + ax^2 + bx \) do not constitute a vector space.
Filename: linalg1.pdf.
As an extension of Exercise 1: Linear algebra refresher I, check out
the topic of inner product spaces. Suggest a possible inner product
for the space of all linear functions of the form \( ax+b \), \( a,b\in\Real \).
Show that this inner product satisfies the
general requirements of an inner product in a vector space.
Filename: linalg2.pdf.
Given \( \f = (1,1,1) \) in \( \Real^3 \), find the best approximation vector \( \u \) in the plane spanned by the unit vectors \( (1,0,0) \) and \( (0,1,0) \). Repeat the calculations using the vectors \( (2,1,0) \) and \( (1,2,0) \).
Filename: vec111_approx.pdf.
Let \( V \) be a function space with basis functions \( x^i \), \( i=0,1,\ldots,N \). Find the best approximation to \( f(x)=\exp(-x) \) on \( \Omega =[0,8] \) among all functions in \( V \) for \( N=2,4,6 \). Illustrate the three approximations in three separate plots.
Hint. The exercise is easy to solve if you apply the least_squares and comparison_plot functions in the approx1D.py module.
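A minimal sketch of such a loop, assuming least_squares(f, psi, Omega) returns the approximation u (and its coefficients) and comparison_plot(f, u, Omega, filename) makes the plot, as indicated in the hint:

import sympy as sym
from approx1D import least_squares, comparison_plot  # module from the text (assumed API)

x = sym.Symbol('x')
f = sym.exp(-x)
Omega = [0, 8]

for N in (2, 4, 6):
    psi = [x**i for i in range(N+1)]        # basis 1, x, ..., x^N
    u, c = least_squares(f, psi, Omega)     # best approximation in span(psi)
    comparison_plot(f, u, Omega, 'exp_powers_N%d' % N)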
Filename: exp_powers.py.
In this exercise we want to approximate the sine function by polynomials of order \( N+1 \). Consider two bases: $$ \begin{align*} V_1 &= \{x, x^3, x^5, \ldots, x^{N-2}, x^N \}, \\ V_2 &= \{1,x,x^2,x^3,\ldots, x^N\}\tp \end{align*} $$ The basis \( V_1 \) is motivated by the fact that the Taylor polynomial approximation to the sine function has only odd powers, while \( V_2 \) is motivated by the assumption that the even powers could also improve the approximation in a least-squares setting.
Compute the best approximation to \( f(x)=\sin(x) \) among all functions in \( V_1 \) and \( V_2 \) on two domains of increasing sizes: \( \Omega_{1,k} = [0, k\pi] \), \( k=2,3,\ldots,6 \) and \( \Omega_{2,k} = [-k\pi /2, k\pi/2] \), \( k=2,3,\ldots,6 \). Make plots for all combinations of \( V_1 \), \( V_2 \), \( \Omega_1 \), \( \Omega_2 \), \( k=2,3,\ldots,6 \).
Add a plot of the \( N \)-th degree Taylor polynomial approximation of \( \sin(x) \) around \( x=0 \).
Hint. You can make a loop over \( V_1 \) and \( V_2 \), a loop over \( \Omega_1 \) and \( \Omega_2 \), and a loop over \( k \). Inside the loops, call the functions least_squares and comparison_plot from the approx1D module. \( N=9 \) is a suggested value.
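A sketch of the loop structure, under the same assumption about the approx1D API as above (the basis construction and file names are only illustrative):

from math import pi
import sympy as sym
from approx1D import least_squares, comparison_plot  # assumed API as in the hint

x = sym.Symbol('x')
f = sym.sin(x)
N = 9

bases = {'V1': [x**i for i in range(1, N+1, 2)],   # odd powers x, x^3, ..., x^N
         'V2': [x**i for i in range(N+1)]}         # all powers 1, x, ..., x^N

for name, psi in bases.items():
    for k in range(2, 7):
        for dname, Omega in [('Omega1', [0, k*pi]),
                             ('Omega2', [-k*pi/2, k*pi/2])]:
            u, c = least_squares(f, psi, Omega)
            comparison_plot(f, u, Omega,
                            'sin_powers_%s_%s_k%d' % (name, dname, k))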
Filename: sin_powers.py.
Find the best approximation of \( f(x) = \tanh (s(x-\pi)) \) on \( [0, 2\pi] \) in the space \( V \) with basis \( \baspsi_i(x) = \sin((2i+1)x) \), \( i\in\If = \{0,\ldots,N\} \). Make a movie showing how \( u=\sum_{j\in\If}c_j\baspsi_j(x) \) approximates \( f(x) \) as \( N \) grows. Choose \( s \) such that \( f \) is steep (\( s=20 \) may be appropriate).
Hint. One may naively call the least_squares_orth and comparison_plot functions from the approx1D module in a loop and extend the basis with one new element in each pass. This approach implies a lot of recomputation. A more efficient strategy is to let least_squares_orth compute with only one basis function at a time and accumulate the corresponding term in the total solution u.
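A self-contained sketch of the efficient strategy (plain numpy/scipy quadrature is used here instead of least_squares_orth, so all names are illustrative). Since \( (\baspsi_i,\baspsi_j)=\pi\delta_{ij} \) on \( [0,2\pi] \), each coefficient can be computed independently and \( u \) accumulated term by term:

import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

s = 20
f = lambda x: np.tanh(s*(x - np.pi))
x = np.linspace(0, 2*np.pi, 501)
N_max = 20

u = np.zeros_like(x)                       # accumulated sum_j c_j*psi_j
for j in range(N_max + 1):
    psi_j = lambda t, j=j: np.sin((2*j + 1)*t)
    # c_j = (f, psi_j)/(psi_j, psi_j), with (psi_j, psi_j) = pi on [0, 2*pi]
    c_j = quad(lambda t: f(t)*psi_j(t), 0, 2*np.pi, limit=200)[0]/np.pi
    u += c_j*psi_j(x)
    plt.clf()
    plt.plot(x, f(x), label='f')
    plt.plot(x, u, label='u, N=%d' % j)
    plt.legend()
    plt.savefig('tmp_frame%04d.png' % j)   # combine the frames into a movie afterwards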
Filename: tanh_sines_approx.py.
Approximation of a discontinuous (or steep) \( f(x) \) by sines results in slow convergence and oscillatory behavior of the approximation close to the abrupt changes in \( f \). This is known as the Gibbs phenomenon.
Make a movie that shows how the approximation
in Exercise 6: Approximate a steep function by sines is improved as \( N \) grows.
Illustrate a smooth case where \( s=0.5 \) and a steep case where \( s=20 \)
in the \( \tanh (s(x-\pi)) \) function.
Filename: tanh_sines_approx_movie.py.
Given a function \( f(x) \) on an interval \( [0,L] \), look up the formula for the coefficients \( a_j \) and \( b_j \) in the Fourier series of \( f \): $$ \begin{equation*} f(x) = a_0 + \sum_{j=1}^\infty a_j\cos \left(j\frac{\pi x}{L}\right) + \sum_{j=1}^\infty b_j\sin \left(j\frac{\pi x}{L}\right)\tp \end{equation*} $$
Let an infinite-dimensional vector space \( V \) have the basis functions \( \cos j\frac{\pi x}{L} \) for \( j=0,1,\dots,\infty \) and \( \sin j\frac{\pi x}{L} \) for \( j=1,\dots,\infty \). Show that the least squares approximation method from the section Approximation of functions leads to a linear system whose solution coincides with the standard formulas for the coefficients in a Fourier series of \( f(x) \) (see also the section Fourier series). You may choose $$ \begin{equation*} \baspsi_{2i} = \cos\left( i\frac{\pi}{L}x\right),\quad \baspsi_{2i+1} = \sin\left( i\frac{\pi}{L}x\right),\end{equation*} $$ for \( i=0,1,\ldots,N\rightarrow\infty \).
Choose \( f(x) = \tanh(s(x-\half)) \) on \( \Omega=[0,1] \), which is
a smooth function, but with considerable steepness around \( x=1/2 \)
as \( s \) grows in size.
Calculate the coefficients in the Fourier expansion by hand, by solving the linear system arising from the least squares or Galerkin methods. Plot
some truncated versions of the series together with \( f(x) \) to show how
the series expansion converges for \( s=10 \) and \( s=100 \).
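A numerical cross-check of the hand calculation: assemble the linear system for a truncated basis, solve it, and plot the truncated series against \( f \). Only numpy/scipy are assumed; all names are illustrative.

import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

s, L, N = 10, 1.0, 20
f = lambda x: np.tanh(s*(x - 0.5))

# truncated basis: 1, cos(j*pi*x/L), sin(j*pi*x/L) for j = 1, ..., N
psi = [lambda x: 0*x + 1.0]
psi += [lambda x, j=j: np.cos(j*np.pi*x/L) for j in range(1, N+1)]
psi += [lambda x, j=j: np.sin(j*np.pi*x/L) for j in range(1, N+1)]

n = len(psi)
A = np.zeros((n, n)); b = np.zeros(n)
for i in range(n):
    b[i] = quad(lambda x: f(x)*psi[i](x), 0, L, limit=200)[0]
    for j in range(i, n):
        A[i, j] = A[j, i] = quad(lambda x: psi[i](x)*psi[j](x), 0, L, limit=200)[0]
c = np.linalg.solve(A, b)

xc = np.linspace(0, L, 501)
u = sum(c[i]*psi[i](xc) for i in range(n))
plt.plot(xc, f(xc), label='f'); plt.plot(xc, u, label='truncated series')
plt.legend(); plt.savefig('Fourier_approx_check.png')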
Filename: Fourier_approx.py.
Use interpolation/collocation with uniformly distributed
points and Chebyshev nodes to approximate
$$
\begin{equation*} f(x) = -\tanh(s(x-\half)),\quad x\in [0,1],\end{equation*}
$$
by Lagrange polynomials for \( s=10 \) and \( s=100 \), and \( N=3,6,9,11 \).
Make separate plots of the approximation for each combination of
\( s \), point type (Chebyshev or uniform), and \( N \).
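A sketch of the node construction and the interpolation, assuming plain scipy is acceptable (scipy.interpolate.lagrange builds the interpolating polynomial; the Chebyshev node formula is the standard one mapped to \( [0,1] \)):

import numpy as np
from scipy.interpolate import lagrange
import matplotlib.pyplot as plt

def uniform_nodes(a, b, N):
    return np.linspace(a, b, N+1)

def chebyshev_nodes(a, b, N):
    i = np.arange(N+1)
    return 0.5*(a+b) + 0.5*(b-a)*np.cos((2*i+1)*np.pi/(2*(N+1)))

a, b = 0.0, 1.0
xfine = np.linspace(a, b, 501)
for s in (10, 100):
    f = lambda x: -np.tanh(s*(x - 0.5))
    for node_func, name in [(uniform_nodes, 'uniform'), (chebyshev_nodes, 'Chebyshev')]:
        for N in (3, 6, 9, 11):
            xp = node_func(a, b, N)
            p = lagrange(xp, f(xp))          # interpolating polynomial (poly1d)
            plt.clf()
            plt.plot(xfine, f(xfine), label='f')
            plt.plot(xfine, p(xfine), label='interpolant')
            plt.plot(xp, f(xp), 'o')
            plt.legend(); plt.title('s=%d, %s nodes, N=%d' % (s, name, N))
            plt.savefig('tanh_Lagrange_s%d_%s_N%d.png' % (s, name, N))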
Filename: tanh_Lagrange.py.
Consider a domain \( \Omega =[0,2] \) divided into the three P2 elements \( [0,1] \), \( [1,1.2] \), and \( [1.2,2] \).
For P1 and P2 elements, set up the list of coordinates and nodes (nodes) and the numbers of the nodes that belong to each element (elements) in two cases: 1) nodes and elements numbered from left to right, and 2) nodes and elements numbered from right to left.
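To illustrate the expected format of the data structures, here is a hand-written sketch for P1 elements with left-to-right numbering (the P2 case and the right-to-left numbering follow the same pattern):

# P1 elements on the cells [0,1], [1,1.2], [1.2,2],
# nodes and elements numbered from left to right
nodes = [0, 1, 1.2, 2]               # coordinates of the nodes
elements = [[0, 1], [1, 2], [2, 3]]  # global node numbers of each element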
Filename: fe_numberings1.py.
Repeat Exercise 10: Define nodes and elements, but define the data structures vertices, cells, and dof_map instead of nodes and elements.
Filename: fe_numberings2.py.
Exercise 10: Define nodes and elements describes an element mesh with a total of five elements, but with two different element and node orderings. For each of the two orderings, make a \( 5\times 5 \) matrix and fill in the entries that will be nonzero.
Hint. A matrix entry \( (i,j) \) is nonzero if \( i \) and \( j \) are nodes in the same element.
Filename: fe_sparsity_pattern.pdf.
Perform symbolic calculations to find formulas for the coefficient matrix
and right-hand side
when approximating \( f(x) = \sin (x) \) on
\( \Omega=[0, \pi] \) by two P1 elements of size \( \pi/2 \).
Solve the system and compare \( u(\pi/2) \) with
the exact value 1.
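A sketch of how the symbolic computation can be organized with sympy, integrating element by element with the hand-derived restrictions of the three hat functions (all variable names are illustrative):

import sympy as sym

x = sym.Symbol('x')
h = sym.pi/2                      # two P1 elements of length pi/2 on [0, pi]

# restrictions of the three hat functions to each element:
# element 1 is [0, h], element 2 is [h, 2h]
phi_e1 = [1 - x/h, x/h, 0]        # phi_0, phi_1, phi_2 on element 1
phi_e2 = [0, 2 - x/h, x/h - 1]    # phi_0, phi_1, phi_2 on element 2

A = sym.zeros(3, 3)
b = sym.zeros(3, 1)
for i in range(3):
    b[i] = (sym.integrate(sym.sin(x)*phi_e1[i], (x, 0, h)) +
            sym.integrate(sym.sin(x)*phi_e2[i], (x, h, 2*h)))
    for j in range(3):
        A[i, j] = (sym.integrate(phi_e1[i]*phi_e1[j], (x, 0, h)) +
                   sym.integrate(phi_e2[i]*phi_e2[j], (x, h, 2*h)))

c = A.LUsolve(b)                  # coefficients c_0, c_1, c_2
print('A =', A)
print('b =', [sym.simplify(bi) for bi in b])
print('u(pi/2) = c_1 =', sym.simplify(c[1]), '=', float(c[1]))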
Filename: sin_approx_P1.py.
Given
$$
\begin{equation*} f(x) = \tanh(s(x-\half))\end{equation*}
$$
use the Galerkin or least squares method with finite elements to find
an approximate function \( u(x) \). Choose \( s=40 \) and try
\( N_e=4,8,16 \) P1 elements and
\( N_e=2,4,8 \) P2 elements.
Integrate \( f\basphi_i \) numerically.
Filename: tanh_fe_P1P2_approx.py.
Solve Exercise 14: Approximate a steep function by P1 and P2 elements using \( N_e=1,2,4 \) P3 and P4
elements. How will a collocation/interpolation method work in
this case with the same number of nodes?
Filename: tanh_fe_P3P4_approx.py.
The theory (40) from the section Computing the error of the approximation predicts that the error in the Pd approximation of a function should behave as \( h^{d+1} \), where \( h \) is the length of the element. Use experiments to verify this asymptotic behavior (i.e., for small enough \( h \)). Choose three examples: \( f(x)=Ae^{-\omega x} \) on \( [0,3/\omega] \), \( f(x) = A\sin (\omega x) \) on \( \Omega=[0, 2\pi/\omega] \) for constant \( A \) and \( \omega \), and \( f(x)=\sqrt{x} \) on \( [0,1] \).
Hint. Run a series of experiments: \( (h_i,E_i) \), \( i=0,\ldots,m \), where \( E_i \) is the \( L^2 \) norm of the error corresponding to element length \( h_i \). Assume an error model \( E=Ch^r \) and compute \( r \) from two successive experiments: $$ r_i = \ln (E_{i+1}/E_i)/\ln (h_{i+1}/h_i),\quad i=0,\ldots,m-1\tp$$ Hopefully, the sequence \( r_0,\ldots,r_{m-1} \) converges to the true \( r \), and \( r_{m-1} \) can be taken as an approximation to \( r \). Run such experiments for different \( d \) for the different \( f(x) \) functions.
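Once the pairs \( (h_i,E_i) \) are collected, the rate estimates are a couple of lines (the numbers below are placeholders for measured values):

import numpy as np

# (h_i, E_i) pairs from a series of experiments; the values below are
# placeholders -- insert the L2 errors measured in your own runs
h = np.array([0.5, 0.25, 0.125, 0.0625])
E = np.array([1.0, 0.26, 0.066, 0.017])

r = np.log(E[1:]/E[:-1])/np.log(h[1:]/h[:-1])
print('estimated convergence rates:', r)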
Filename: Pd_approx_error.py.
Approximate the step function $$ \begin{equation*} f(x) = \left\lbrace\begin{array}{ll} 1 & 0\leq x < \halfi,\\ 2 & x \geq \halfi \end{array}\right. \end{equation*} $$ by 2, 4, and 8 P1 and P2 elements. Compare approximations visually.
Hint. This \( f \) can also be expressed in terms of the Heaviside function \( H(x) \): \( f(x) = 1 + H(x-\halfi) \). Therefore, \( f \) can be defined by
f = 1 + sp.Heaviside(x - sp.Rational(1,2))
making the approximate function in the fe_approx1D.py module an obvious candidate to solve the problem. However, sympy does not handle symbolic integration with this particular integrand, and the approximate function runs into trouble when converting f to a Python function (for plotting) since Heaviside is not an available function in numpy. It is better to write special-purpose code for this case or perform all calculations by hand.
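For the special-purpose route, a minimal sketch: define \( f \) directly with numpy instead of sympy's Heaviside (the function name is illustrative); the integrals of \( f\basphi_i \) must then be done by numerical quadrature.

import numpy as np

def f(x):
    # step function: 1 for x < 1/2, 2 for x >= 1/2 (vectorized)
    # (recent numpy versions: 1 + np.heaviside(x - 0.5, 1.0) does the same)
    return np.where(x < 0.5, 1.0, 2.0)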
Filename: Heaviside_approx_P1P2.py.
Assume we have basis functions \( \basphi_i(x,y) \) in 2D that are orthogonal such that \( (\basphi_i,\basphi_j)=0 \) when \( i\neq j \). The function least_squares in the file approx2D.py will then spend much time on computing off-diagonal terms in the coefficient matrix that we know are zero. To speed up the computations, make a version least_squares_orth that utilizes the orthogonality among the basis functions. Apply the function to approximate
$$ f(x,y) = x(1-x)y(1-y)e^{-x-y}$$
on \( \Omega = [0,1]\times [0,1] \) via basis functions
$$ \basphi_i(x,y) = \sin ((p+1)\pi x)\sin((q+1)\pi y),\quad i=q(N_x+1) + p,
$$
where \( p=0,\ldots,N_x \) and \( q=0,\ldots,N_y \).
Hint. Get ideas from the function least_squares_orth in the section Orthogonal basis functions and the file approx1D.py.
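A sketch of the idea with plain numerical quadrature (scipy.integrate.dblquad) rather than the module's own routines: with an orthogonal basis each coefficient is simply \( c_i = (f,\basphi_i)/(\basphi_i,\basphi_i) \), so no coefficient matrix needs to be assembled. All names below are illustrative.

import numpy as np
from scipy.integrate import dblquad

f = lambda x, y: x*(1 - x)*y*(1 - y)*np.exp(-x - y)

Nx, Ny = 3, 3
def phi(x, y, p, q):
    # basis function with single index i = q*(Nx+1) + p
    return np.sin((p + 1)*np.pi*x)*np.sin((q + 1)*np.pi*y)

c = np.zeros((Nx + 1)*(Ny + 1))
for q in range(Ny + 1):
    for p in range(Nx + 1):
        i = q*(Nx + 1) + p
        # (f, phi_i); dblquad integrates the inner variable (y) first
        num = dblquad(lambda y, x: f(x, y)*phi(x, y, p, q),
                      0, 1, lambda x: 0, lambda x: 1)[0]
        c[i] = num/0.25        # (phi_i, phi_i) = 1/4 on the unit square
print(c)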
Filename: approx2D_least_squares_orth.py.
Consider approximation of some \( f(x) \) on an interval \( \Omega \) using
the least squares or Galerkin methods with P1 elements. Derive
the element matrix and vector using the
Trapezoidal rule (41) for calculating
integrals on the reference element. Assemble the contributions, assuming
a uniform cell partitioning, and show that the resulting linear system
has the form \( c_i=f(\xno{i}) \) for \( i\in\If \).
Filename: fe_P1_trapez.pdf.
We shall approximate the function $$ f(x) = 1 + \epsilon\sin (2\pi nx),\quad x\in \Omega = [0,1],$$ where \( n\in\Integer \) and \( \epsilon \geq 0 \).
a) Plot \( f(x) \) for \( n=1,2,3 \) and find the wavelength of the function.
b) We want to use \( N_P \) elements per wavelength. Show that the number of elements is then \( nN_P \).
c) The critical quantity for accuracy is the number of elements per wavelength, not the element size in itself. It therefore suffices to study an \( f \) with just one wavelength in \( \Omega = [0,1] \). Set \( \epsilon = 0.5 \).
Run the least squares or projection/Galerkin method for \( N_P=2,4,8,16,32 \). Compute the error \( E=||u-f||_{L^2} \).
Hint. Use the fe_approx1D_numint module to compute \( u \) and use the technique from the section Computing the error of the approximation to compute the norm of the error.
d) Repeat the set of experiments in the above point, but use interpolation/collocation based on the node points to compute \( u(x) \) (recall that \( c_i \) is now simply \( f(\xno{i}) \)). Compute the error \( E=||u-f||_{L^2} \). Which method seems to be most accurate?
Filename: P1_vs_interp.py.
Extend the approx2D.py code to 3D, applying ideas from the section Extension to 3D.
Use a 3D generalization of the test problem in the section Implementation to test the implementation.
Filename: approx3D.py.
Redo Exercise 19: Use the Trapezoidal rule and P1 elements, but use P2
elements and Simpson's rule based on sampling the integrands at
the nodes in the reference cell.
Filename: fe_P2_simpson.pdf.