
I am interested in the mathematical justification for methods of approximating functions.

For functions in $(C[a, b], ||\cdot||_{\infty})$ we know that we can get an arbitrarily good approximation by using polynomials of high enough order (the Weierstrass approximation theorem).

Suppose that $x \in (C[a, b], ||\cdot||_{\infty})$. Let $y_n$ be defined by linearly interpolating $x$ on a uniform partition of $[a, b]$ (equidistant nodes). Is it true that \begin{equation} \lim_{n \to \infty} ||y_n - x||_{\infty} = 0? \end{equation}

Do we need to impose stronger conditions? For example \begin{equation} x(t) = \begin{cases} t \sin\left(\frac{1}{t}\right), & t \in (0, 1] \\ 0, & t = 0 \end{cases} \end{equation} is in $C[0, 1]$, yet it seems to me that we cannot get a good approximation near $t = 0$.

More generally, can anyone recommend a reference containing the theory of linear interpolation and splines? It would have to include conditions under which these approximation methods converge (in some metric) to the true function.

1 Answer


Given an arbitrary $x \in C[a, b]$ and defining $y_n$ to be the linear interpolant of $x$ on the uniform partition of $[a, b]$ with $n + 1$ nodes, we have

\begin{equation} \lim_{n \to \infty} ||y_n - x||_{\infty} = 0. \end{equation}

Proof. As $x$ is continuous on the compact set $[a, b]$, it is uniformly continuous. Fix $\varepsilon > 0$. By uniform continuity there exists $\delta > 0$ such that for all $r, s \in [a, b]$ we have

\begin{equation} |r - s| < \delta \quad \Rightarrow \quad |x(r) - x(s)| < \varepsilon. \end{equation}

Every $n \in \mathbb{N}$ defines a unique uniform partition $a = t_0 < \ldots < t_n = b$ of $[a, b]$, with common mesh width $\Delta t_n = t_{k+1} - t_k = (b - a)/n$ for all $k \in \{0, \ldots, n - 1\}$. Choose $N \in \mathbb{N}$ so that $\Delta t_N < \delta$. Let $I_k = [t_k, t_{k+1}]$, $\,k \in \{0, \ldots, N - 1\}$. Then for all $t \in I_k$ we have

\begin{equation} |y_N(t) - x(t)| \leq |y_N(t_k) - x(t)| + |y_N(t_{k+1}) - x(t)| < 2 \varepsilon, \end{equation}

where the first inequality holds because $y_N$ is linear on $I_k$, so $y_N(t) \in [\min(y_N(t_k), y_N(t_{k+1})), \max(y_N(t_k), y_N(t_{k+1}))]$, and the second holds because $y_N$ interpolates $x$ at the nodes, i.e. $y_N(t_k) = x(t_k)$ and $y_N(t_{k+1}) = x(t_{k+1})$, while $|t - t_k| \leq \Delta t_N < \delta$ and $|t - t_{k+1}| \leq \Delta t_N < \delta$.

Q.E.D.
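As a numerical sanity check of this claim (not part of the proof), the sketch below builds $y_n$ with NumPy's `np.interp` on $n + 1$ equidistant nodes and estimates $||y_n - x||_{\infty}$ on a fine evaluation grid; the helper name `sup_error` is mine, chosen for illustration.

```python
# Numerical check: the sup-norm error of the piecewise-linear interpolant
# on n + 1 equidistant nodes shrinks as n grows, for a continuous x on [a, b].
import numpy as np

def sup_error(x, a, b, n, n_eval=10_001):
    """Estimate ||y_n - x||_inf on a fine evaluation grid."""
    nodes = np.linspace(a, b, n + 1)
    t = np.linspace(a, b, n_eval)
    y_n = np.interp(t, nodes, x(nodes))  # piecewise-linear interpolant y_n
    return float(np.max(np.abs(y_n - x(t))))

# Any continuous function will do; np.cos is used here as a stand-in.
errors = [sup_error(np.cos, 0.0, 3.0, n) for n in (4, 16, 64, 256)]
print(errors)
```

The errors decrease as $n$ grows, consistent with the theorem.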

If anyone knows a reference for a proof along these lines, then I would be grateful to know it.

Also, the function $x$ in the OP can certainly be well approximated near zero. Here is a picture of the function; the dashed lines are $y = t$ and $y = -t$.
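To make that concrete, here is a small numerical sketch (my own, assuming the domain $[0, 1]$ from the OP) measuring the sup-norm error of the linear interpolants of this $x$:

```python
# x(t) = t*sin(1/t) with x(0) = 0 is continuous on [0, 1], so the theorem
# applies; near 0 the envelope |x(t)| <= t also keeps the error small there.
import numpy as np

def x(t):
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)  # x(0) = 0 by definition
    mask = t != 0
    out[mask] = t[mask] * np.sin(1.0 / t[mask])
    return out

def sup_error(n, n_eval=100_001):
    """Estimate ||y_n - x||_inf for the interpolant on n + 1 equidistant nodes."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    t = np.linspace(0.0, 1.0, n_eval)
    return float(np.max(np.abs(np.interp(t, nodes, x(nodes)) - x(t))))

for n in (10, 100, 1000):
    print(n, sup_error(n))
```

The error shrinks with $n$ despite the infinitely many oscillations near $0$, because the amplitude of those oscillations is bounded by $t$ itself.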

[Figure: plot of $x(t) = t \sin(1/t)$ on $[0, 1]$, with dashed envelope lines $y = t$ and $y = -t$.]

  • There's a proof in de Boor's book, "A Practical Guide to Splines". The key point is that your "linear interpolant" is a spline of degree 1, so all the machinery of spline theory is applicable. (2013-01-06)