Look up the topic of *vector space* in your favorite linear algebra
book or search for the term at Wikipedia.
Prove that vectors in the plane \((a,b)\) form a vector space
by showing that all the axioms of a vector space
are satisfied. Similarly,
prove that all linear functions of the form \(ax+b\), with \(a,b\in\mathbb{R}\),
constitute a vector space.

In contrast,
show that the quadratic functions of the form \(1 + ax^2 + bx\) do *not*
constitute a vector space.
Filename: `linalg1.pdf`.

As an extension of *Exercise 1: Linear algebra refresher I*, check out
the topic of *inner product spaces*. Suggest a possible inner product
for the space of all linear functions of the form \(ax+b\), \(a,b\in\mathbb{R}\).
Show that this inner product satisfies the
general requirements of an inner product in a vector space.
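One candidate inner product (an assumption, not the only possibility) identifies \(p(x) = a_p x + b_p\) with its coefficient vector \((a_p, b_p)\) and uses the Euclidean dot product,

\[\langle p, q\rangle = a_p a_q + b_p b_q{\thinspace .}\]

Symmetry and bilinearity are then immediate, and \(\langle p, p\rangle = a_p^2 + b_p^2 > 0\) unless \(p\equiv 0\). Alternatively, \(\langle p, q\rangle = \int_0^1 p(x)q(x)\,dx\) on any fixed interval also qualifies.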
Filename: `linalg2.pdf`.

Given \(\boldsymbol{f} = (1,1,1)\) in \(\mathbb{R}^3\), find the best approximation vector
\(\boldsymbol{u}\) in the plane spanned by the unit vectors \((1,0,0)\) and \((0,1,0)\).
Repeat the calculations using the vectors \((2,1,0)\) and \((1,2,0)\).
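The computation can be checked with a small numpy sketch (the spanning vectors are assumed embedded in \(\mathbb{R}^3\) with a zero third component):

```python
import numpy as np

f = np.array([1.0, 1.0, 1.0])
# Columns of B span the plane (unit vectors embedded in R^3, an assumption)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
# Least squares solution of B c = f, i.e., the normal equations B^T B c = B^T f
c, *_ = np.linalg.lstsq(B, f, rcond=None)
u = B @ c
print(u)  # the projection of f onto the plane: [1. 1. 0.]
```

Using \((2,1,0)\) and \((1,2,0)\) as columns spans the same plane, so the best approximation vector \(\boldsymbol{u}\) is unchanged.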
Filename: `vec111_approx.pdf`.

Let \(V\) be a function space with basis functions \(x^i\),
\(i=0,1,\ldots,N\). Find the best approximation to \(f(x)=\exp(-x)\) on
\(\Omega =[0,4]\) among all functions in \(V\) for \(N=2,4,6\). Illustrate
the three approximations in three separate plots. Add the
corresponding Taylor polynomial approximation of degree \(N\) in each
plot.
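A self-contained sketch of the least squares computation with the monomial basis (the `least_squares` and `comparison_plot` functions from the book's `approx1D` module could be used instead; plotting is left out here):

```python
import numpy as np
from scipy.integrate import quad

def least_squares_poly(f, N, Omega):
    """Coefficients c_j of the best L2 approximation sum_j c_j x^j on Omega."""
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(N + 1):
        b[i] = quad(lambda x: f(x)*x**i, *Omega)[0]
        for j in range(N + 1):
            A[i, j] = quad(lambda x: x**(i + j), *Omega)[0]
    return np.linalg.solve(A, b)

f = lambda x: np.exp(-x)
coeffs = {N: least_squares_poly(f, N, (0, 4)) for N in (2, 4, 6)}
# For each N: plot f, the approximation, and the degree-N Taylor polynomial
```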
Filename: `exp_powers.py`.

Let \(V\) be a function space with basis functions \(x^{2i+1}\), \(i=0,1,\ldots,N\). Find the best approximation to \(f(x)=\sin(x)\) among all functions in \(V\), using \(N=8\) for a domain that includes more and more half-periods of the sine function: \(\Omega = [0, k\pi/2]\), \(k=2,3,\ldots,12\). How does a Taylor series of \(\sin(x)\) around \(x=0\), up to degree 9, behave for the largest domain?

**Hint.**
One can make a loop over \(k\) and call the functions `least_squares` and
`comparison_plot` from the `approx1D` module.

Filename: `sin_powers.py`.

Find the best approximation of \(f(x) = \tanh (s(x-\pi))\) on \([0, 2\pi]\) in the space \(V\) with basis \({\psi}_i(x) = \sin((2i+1)x)\), \(i\in{\mathcal{I}_s} = \{0,\ldots,N\}\). Make a movie showing how \(u=\sum_{j\in{\mathcal{I}_s}}c_j{\psi}_j(x)\) approximates \(f(x)\) as \(N\) grows. Choose \(s\) such that \(f\) is steep (\(s=20\) may be appropriate).

**Hint.**
One may naively call the `least_squares_orth` and `comparison_plot`
functions from the `approx1D` module in a loop and extend the basis with
one new element in each pass. This approach
implies a lot of recomputations.
A more efficient strategy is to let `least_squares_orth`
compute with only one basis function at a time and accumulate
the corresponding `u` in the total solution.
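A minimal sketch of the accumulation strategy (self-contained instead of calling `least_squares_orth`; the normalization \(({\psi}_i,{\psi}_i)=\pi\) on \([0,2\pi]\) is assumed below):

```python
import numpy as np
from scipy.integrate import quad

s = 20
f = lambda x: np.tanh(s*(x - np.pi))
x = np.linspace(0, 2*np.pi, 501)

u = np.zeros_like(x)      # accumulated approximation sum_j c_j psi_j
errors = []
for i in range(20):       # one new basis function per pass, no recomputation
    # c_i = (f, psi_i)/(psi_i, psi_i) with psi_i = sin((2i+1)x)
    c_i = quad(lambda t: f(t)*np.sin((2*i + 1)*t), 0, 2*np.pi,
               limit=200)[0]/np.pi
    u += c_i*np.sin((2*i + 1)*x)
    E = np.sqrt(np.sum((u - f(x))**2)*(x[1] - x[0]))
    errors.append(E)      # save a plot of u here for each movie frame
```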

Filename: `tanh_sines_approx1.py`.

Make a movie where the steepness (\(s\)) of the \(\tanh\) function in
*Exercise 6: Approximate a steep function by sines* grows in “time”,
and for each value of the steepness, the movie shows how the
approximation improves with increasing \(N\).
Filename: `tanh_sines_approx2.py`.

Given a function \(f(x)\) on an interval \([0,L]\), look up the formula for the coefficients \(a_j\) and \(b_j\) in the Fourier series of \(f\):

\[f(x) = a_0 + \sum_{j=1}^\infty a_j\cos \left(j\frac{\pi x}{L}\right)
+ \sum_{j=1}^\infty b_j\sin \left(j\frac{\pi x}{L}\right){\thinspace .}\]

Let an infinite-dimensional vector space \(V\) have the basis functions
\(\cos j\frac{\pi x}{L}\) for \(j=0,1,\dots,\infty\) and
\(\sin j\frac{\pi x}{L}\) for \(j=1,\dots,\infty\). Show that the least squares
approximation method from the section *Approximation of functions* leads to a
linear system whose solution coincides with the standard formulas for
the coefficients in a Fourier series of \(f(x)\) (see also
the section *Fourier series*). You may choose

\[{\psi}_{2i} = \cos\left( i\frac{\pi}{L}x\right),\quad
{\psi}_{2i+1} = \sin\left( i\frac{\pi}{L}x\right),\]

for \(i=0,1,\ldots,N\rightarrow\infty\).

Choose \(f(x) = \tanh(s(x-\frac{1}{2}))\) on \(\Omega=[0,1]\), which is
a smooth function, but with considerable steepness around \(x=1/2\)
as \(s\) grows in size.
Calculate the coefficients in the Fourier expansion by
solving the linear system, arising from the least squares or Galerkin
methods, by hand. Plot
some truncated versions of the series together with \(f(x)\) to show how
the series expansion converges for \(s=10\) and \(s=100\).
Filename: `Fourier_approx.py`.

Use interpolation/collocation with uniformly distributed points and Chebyshev nodes to approximate

\[f(x) = -\tanh(s(x-\frac{1}{2})),\quad x\in [0,1],\]

by Lagrange polynomials for \(s=10\) and \(s=100\), and \(N=3,6,9,11\).
Make separate plots of the approximation for each combination of
\(s\), point type (Chebyshev or uniform), and \(N\).
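A sketch using `scipy.interpolate.BarycentricInterpolator` for the Lagrange interpolant (the formula below maps the standard Chebyshev nodes from \([-1,1]\) to \([0,1]\)):

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def interpolate_tanh(s, N, points='uniform'):
    """Degree-N Lagrange interpolant of f(x) = -tanh(s*(x - 0.5)) on [0, 1]."""
    f = lambda x: -np.tanh(s*(x - 0.5))
    if points == 'uniform':
        xp = np.linspace(0, 1, N + 1)
    else:  # Chebyshev nodes, mapped from [-1, 1] to [0, 1]
        k = np.arange(N + 1)
        xp = 0.5*(1 + np.cos((2*k + 1)*np.pi/(2*(N + 1))))
    return xp, BarycentricInterpolator(xp, f(xp))

xp, p = interpolate_tanh(s=10, N=9, points='chebyshev')
# plot p on a fine grid against f for each combination of s, points, N
```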
Filename: `tanh_Lagrange.py`.

Consider a domain \(\Omega =[0,2]\) divided into the three elements \([0,1]\), \([1,1.2]\), and \([1.2,2]\).

For P1 and P2 elements, set up the list of coordinates and nodes
(`nodes`) and the numbers of the nodes that belong to each element
(`elements`) in two cases: 1) nodes and elements numbered from left to
right, and 2) nodes and elements numbered from right to left.
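For the left-to-right numbering, the data structures may look as follows (a sketch; the P2 midpoint nodes are assumed to lie at the cell centers):

```python
# P1, numbered left to right: one node per cell vertex
nodes = [0, 1, 1.2, 2]
elements = [[0, 1], [1, 2], [2, 3]]

# P2, numbered left to right: vertices plus cell midpoints
nodes_P2 = [0, 0.5, 1, 1.1, 1.2, 1.6, 2]
elements_P2 = [[0, 1, 2], [2, 3, 4], [4, 5, 6]]

# Right-to-left numbering reverses the coordinate list, e.g. for P1:
nodes_rtl = [2, 1.2, 1, 0]
elements_rtl = [[0, 1], [1, 2], [2, 3]]
```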
Filename: `fe_numberings1.py`.

Repeat *Exercise 10: Define nodes and elements*, but define the
data structures `vertices`, `cells`, and `dof_map` instead of
`nodes` and `elements`.
Filename: `fe_numberings2.py`.

*Exercise 10: Define nodes and elements* describes an element mesh
with a total of five elements, but with two different element and
node orderings. For each of the two orderings,
make a \(5\times 5\) matrix and fill in the entries that will be nonzero.

**Hint.**
A matrix entry \((i,j)\) is nonzero if \(i\) and \(j\) are nodes in the
same element.

Filename: `fe_sparsity_pattern.pdf`.

Perform hand calculations to find formulas for the coefficient matrix and right-hand side when approximating \(f(x) = \sin (x)\) on \(\Omega=[0, \pi]\) by two P1 elements of size \(\pi/2\). Solve the system and compare \(u(\pi/2)\) with the exact value 1.
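The hand calculations can be verified numerically (a sketch; the coefficient matrix is the assembled P1 mass matrix on a uniform mesh with \(h=\pi/2\)):

```python
import numpy as np
from scipy.integrate import quad

h = np.pi/2
x_nodes = np.array([0, np.pi/2, np.pi])

def phi(i, x):
    """P1 hat function associated with node i (uniform mesh assumed)."""
    return np.maximum(0, 1 - abs(x - x_nodes[i])/h)

# Assembled mass matrix from two element matrices (h/6)*[[2, 1], [1, 2]]
A = h/6*np.array([[2, 1, 0],
                  [1, 4, 1],
                  [0, 1, 2]])
b = np.array([quad(lambda x: np.sin(x)*phi(i, x), 0, np.pi)[0]
              for i in range(3)])
c = np.linalg.solve(A, b)
print(c[1])  # u(pi/2) = 24/pi**2 - 4/pi ~ 1.16, overshooting the exact value 1
```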

Filename: `sin_approx_P1.py`.

Given

\[f(x) = \tanh(s(x-\frac{1}{2}))\]

use the Galerkin or least squares method with finite elements to find
an approximate function \(u(x)\). Choose \(s=40\) and try
\(N_e=4,8,16\) P1 elements and
\(N_e=2,4,8\) P2 elements.
Integrate \(f{\varphi}_i\) numerically.
Filename: `tanh_fe_P1P2_approx.py`.

Solve the preceding exercise using \(N_e=1,2,4\) P3 and P4
elements. How will a collocation/interpolation method work in
this case with the same number of nodes?
Filename: `tanh_fe_P3P4_approx.py`.

The theory *(7.1)* from
the section on the error of finite element approximations predicts that the
error in the Pd approximation of a function
should behave as \(h^{d+1}\). Use experiments to verify this
asymptotic behavior (i.e., for small enough \(h\)).
Choose two examples: \(f(x)=Ae^{-\omega x}\) on \([0,3/\omega]\)
and \(f(x) = A\sin (\omega x)\) on \(\Omega=[0, 2\pi/\omega]\) for
constants \(A\) and \(\omega\). What happens if you try
\(f(x)=\sqrt{x}\) on \([0,1]\)?

**Hint.**
Run a series of experiments: \((h_i,E_i)\), \(i=0,\ldots,m\), where \(E_i\)
is the \(L^2\) norm of the error corresponding to element length \(h_i\).
Assume an error model \(E=Ch^r\) and compute \(r\) from two successive
experiments:

\[r_i = \ln (E_{i+1}/E_i)/\ln (h_{i+1}/h_i),\quad i=0,\ldots,m-1{\thinspace .}\]

Hopefully, the sequence \(r_0,\ldots,r_{m-1}\) converges to the true \(r\), and \(r_{m-1}\) can be taken as an approximation to \(r\).
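A sketch for P1 elements, using nodal interpolation to compute \(u\) (an assumption that suffices for observing the rate; a projection-based \(u\) has the same asymptotics):

```python
import numpy as np

def interp_error(f, N_e, Omega):
    """L2 norm of the error in P1 (piecewise linear) interpolation of f."""
    xn = np.linspace(Omega[0], Omega[1], N_e + 1)     # element end nodes
    x = np.linspace(Omega[0], Omega[1], 100*N_e + 1)  # fine grid for the norm
    e2 = (np.interp(x, xn, f(xn)) - f(x))**2
    return np.sqrt(np.sum((e2[:-1] + e2[1:])/2)*(x[1] - x[0]))

A, w = 1.0, 1.0
f = lambda x: A*np.sin(w*x)
h, E = [], []
for N_e in (4, 8, 16, 32, 64):
    h.append(2*np.pi/(w*N_e))
    E.append(interp_error(f, N_e, (0, 2*np.pi/w)))
r = [np.log(E[i+1]/E[i])/np.log(h[i+1]/h[i]) for i in range(len(E) - 1)]
print(r)  # should approach d + 1 = 2 for P1
```

Rerunning with `f = np.sqrt` on \([0,1]\) gives a smaller \(r\), since \(f'\) is unbounded at \(x=0\).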

Filename: `Asinwt_interpolation_error.py`.

Approximate the step function

\[\begin{split}f(x) = \left\lbrace\begin{array}{ll}
1 & x < {1/2},\\
2 & x \geq {1/2}
\end{array}\right.\end{split}\]

by 2, 4, and 8 P1 and P2 elements. Compare approximations visually.

**Hint.**
This \(f\) can also be expressed in terms of the Heaviside function \(H(x)\):
\(f(x) = H(x-{1/2})\).
Therefore, \(f\) can be defined by

```
import sympy as sp

x = sp.Symbol('x')
f = sp.Heaviside(x - sp.Rational(1,2))
```

making the `approximate` function in the
`fe_approx1D.py` module an obvious candidate to solve the
problem. However, `sympy` does not handle symbolic integration
with this particular integrand, and the `approximate` function faces a problem
when converting `f` to a Python function (for plotting) since
`Heaviside` is not an available function in `numpy`. It is better to make
special-purpose code for this case or perform all
calculations by hand.

Filename: `Heaviside_approx_P1P2.py`.

Assume we have basis functions \({\varphi}_i(x,y)\) in 2D that are
orthogonal
such that \(({\varphi}_i,{\varphi}_j)=0\) when \(i\neq j\).
The function `least_squares` in the
file `approx2D.py` will then spend much time computing off-diagonal terms
in the coefficient matrix that we know are zero.
To speed up the computations, make a
version `least_squares_orth` that utilizes the orthogonality among the
basis functions. Apply the function to approximate

\[f(x,y) = x(1-x)y(1-y)e^{-x-y}\]

on \(\Omega = [0,1]\times [0,1]\) via basis functions

\[{\varphi}_i(x,y) = \sin (p\pi x)\sin(q\pi y),\quad i=q N_x + p
{\thinspace .}\]

**Hint.**
Get ideas from the function `least_squares_orth` in
the section *Orthogonal basis functions* and
the file `approx1D.py`.
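A sketch of the orthogonality shortcut in 2D (assuming the indices start at \(p,q=1\), since \(p=0\) or \(q=0\) gives the zero function, and using \(({\varphi}_i,{\varphi}_i)=1/4\) on the unit square):

```python
import numpy as np
from scipy.integrate import dblquad

f = lambda x, y: x*(1 - x)*y*(1 - y)*np.exp(-x - y)

def coeff(p, q):
    """c_i = (f, phi_i)/(phi_i, phi_i); the denominator equals 1/4."""
    integral, _ = dblquad(
        lambda y, x: f(x, y)*np.sin(p*np.pi*x)*np.sin(q*np.pi*y),
        0, 1, 0, 1)
    return 4*integral

# Sanity check: the truncated expansion at the midpoint of the domain
u_mid = sum(coeff(p, q)*np.sin(p*np.pi/2)*np.sin(q*np.pi/2)
            for p in range(1, 5) for q in range(1, 5))
```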

Filename: `approx2D_lsorth_sin.py`.

Consider approximation of some \(f(x)\) on an interval \(\Omega\) using
the least squares or Galerkin methods with P1 elements. Derive
the element matrix and vector using the
Trapezoidal rule *(8.1)* for calculating
integrals on the reference element. Assemble the contributions, assuming
a uniform cell partitioning, and show that the resulting linear system
has the form \(c_i=f(x_{i})\) for \(i\in{\mathcal{I}_s}\).
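The core of the derivation can be sketched as follows: the Trapezoidal rule on the reference cell \([-1,1]\) samples only the end points \(X=\pm 1\), where one of the two P1 basis functions equals 1 and the other 0. Hence

\[\tilde A^{(e)}_{r,s} = \frac{h}{2}\sum_{k=0}^{1}
{\tilde{\varphi}}_r(X_k){\tilde{\varphi}}_s(X_k) = \frac{h}{2}\delta_{rs},\qquad
\tilde b^{(e)}_{r} = \frac{h}{2}\sum_{k=0}^{1}
\tilde f(X_k){\tilde{\varphi}}_r(X_k) = \frac{h}{2} f(x_{q(e,r)}),\]

where \(x_{q(e,r)}\) denotes the global node corresponding to local node \(r\). Assembling these contributions gives a diagonal coefficient matrix, and each equation reduces to \(c_i = f(x_i)\) (the diagonal factor is \(h\) at interior nodes and \(h/2\) at the two boundary nodes, but it cancels in either case).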
Filename: `fe_P1_trapez.pdf`.

We shall approximate the function

\[f(x) = 1 + \epsilon\sin (2\pi nx),\quad x\in \Omega = [0,1],\]

where \(n\in\mathbb{Z}\) and \(\epsilon \geq 0\).

**a)**
Sketch \(f(x)\) and find the wave length of the function.

**b)**
We want to use \(N_P\) elements per wave length. Show that the number
of elements is then \(nN_P\).

**c)**
The critical quantity for accuracy is the number of elements per
wave length, not the element size in itself. It therefore suffices
to study an \(f\) with just one wave length in \(\Omega = [0,1]\).
Set \(\epsilon = 0.5\).

Run the least squares or projection/Galerkin method for \(N_P=2,4,8,16,32\). Compute the error \(E=||u-f||_{L^2}\).

**Hint.**
Use the `fe_approx1D_numint` module to compute \(u\) and use
the technique from the section *Computing the error of the approximation* to
compute the norm of the error.

**d)**
Repeat the set of experiments from point c), but
use interpolation/collocation based on the node points to
compute \(u(x)\) (recall that \(c_i\) is now simply \(f(x_{i})\)).
Compute the error \(E=||u-f||_{L^2}\).
Which method seems to be most accurate?

Filename: `P1_vs_interp.py`.

Extend the `approx2D.py` code to 3D
applying ideas from the section *Extension to 3D*.
Use a 3D generalization of the test problem in the section *Implementation (3)* to test the implementation.
Filename: `approx3D.py`.

Redo *Exercise 19: Use the Trapezoidal rule and P1 elements*, but use P2
elements and Simpson’s rule based on sampling the integrands at
the nodes in the reference cell.

Filename: `fe_P2_simpson.pdf`.