
I have a function $f(x)$ sampled at 11 $x$ positions: [plot of the sampled data]

I want to approximate the function by a Chebyshev expansion:

$ \ f(x) \simeq \sum\limits_{i=0}^m c_i T_i(y) - \frac{1}{2}c_0,\qquad y=2(x-x_1)/(x_{11}-x_1) - 1 $

The first $3$ Chebyshev polynomials are:

$ T_0(y) = 1\qquad T_1(y) = y\qquad T_2(y) = 2y^2-1 $
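Higher-order polynomials follow from the recurrence $T_{n+1}(y) = 2yT_n(y) - T_{n-1}(y)$. A minimal sketch (the function name is my own choice, not a standard API):

```python
import numpy as np

def chebyshev_T(n, y):
    """Evaluate T_n(y) via the recurrence T_{n+1}(y) = 2*y*T_n(y) - T_{n-1}(y)."""
    y = np.asarray(y, dtype=float)
    if n == 0:
        return np.ones_like(y)
    t_prev, t_curr = np.ones_like(y), y
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * y * t_curr - t_prev
    return t_curr
```

A useful sanity check is the identity $T_n(\cos\theta)=\cos(n\theta)$.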

The coefficients $ c_i $ are given by: $ c_i = \frac{2}{\pi}\int_{-1}^{1}\frac{f(y)T_i(y)}{\sqrt{1-y^2}}dy $

In practice I think it may be easiest to change the integration variable to $t=\arccos(y)$ and then integrate numerically?
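With $t=\arccos(y)$ one gets $T_i(\cos t)=\cos(it)$ and the weight cancels, so $c_i = \frac{2}{\pi}\int_0^{\pi} f(\cos t)\cos(it)\,dt$, and the endpoint singularity disappears. A sketch using the composite trapezoidal rule (function name and step count are illustrative choices):

```python
import numpy as np

def cheb_coeff(f, i, n=1024):
    """c_i = (2/pi) * integral_0^pi f(cos t) cos(i t) dt, trapezoidal rule.

    f takes y in [-1, 1]; after t = arccos(y), the factor
    T_i(y) dy / sqrt(1 - y^2) turns into cos(i t) dt.
    """
    t = np.linspace(0.0, np.pi, n + 1)
    g = f(np.cos(t)) * np.cos(i * t)
    dt = np.pi / n
    # trapezoidal rule: full weight interior points, half weight endpoints
    return (2.0 / np.pi) * dt * (g.sum() - 0.5 * (g[0] + g[-1]))
```

As a check, feeding in $f=T_2$ should give a coefficient of $1$ for $i=2$ and $0$ for $i=0,1$.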

My main goal with this question however is to understand the coefficients. From what I understand the coefficients can be viewed as "projections" that tell how similar the function is to the basis functions.

I would therefore think that $c_i = 0$ means $f$ is completely different from the $i$-th basis function $T_i(y)$ (taken with the weight $\frac{1}{\sqrt{1-y^2}}$).

A large positive $c_i$ would mean $f$ is very similar to the $i$-th basis function, and a large negative $c_i$ would mean that $-f(x)$ is very similar to the $i$-th basis function.

Is this correct?

Is it possible to normalize the coefficients so that: $ -1 \leq c_i \leq 1 $?

Do you have any tips on how to "interpret" the polynomials (I am a physicist)?

Thanks in advance for any answers!

  • Interesting question! Never heard of Chebyshev approximation before... 2012-06-10

1 Answer

  1. Chebyshev polynomials are orthogonal vectors in the weighted $L^2$ space $L^2([-1,1],(1-x^2)^{-1/2})$. However, they are not normalized: for example, $\|T_0\|^2=\pi$ in this space, not $1$. The coefficient formula $c_i=\langle f,T_i\rangle$ applies only to orthonormal vectors. I think the correct version would be $c_0=\frac{1}{\pi}\langle f,T_0\rangle$ and $c_i=\frac{2}{\pi}\langle f,T_i\rangle$ for $i>0$. This is rigged so that if $f$ happens to be equal to $T_i$, the corresponding coefficient $c_i$ will be $1$.
  2. I just learned from Wikipedia that the set of $T_i$, $i<N$, is also orthogonal with respect to a discrete measure supported on the zeros of $T_N$. (The Wikipedia article neglects to mention the index constraint $i<N$; a more reliable source even gives two different discrete orthogonality relations.) I think the discrete orthogonality should be more efficient for numerical purposes: $c_i=\frac{1}{K_i}\sum_{k}f(x_k)T_i(x_k)$ is easier to implement than an integral.
  3. $c_i=0$ means that $f$ is orthogonal to $T_i$. Yes, one can interpret orthogonality as dissimilarity, but perhaps it's better to think of it as zero correlation.
  4. If $f$ is replaced by $100f$, the coefficients $c_i$ are replaced by $100c_i$, so you cannot hope for $-1\le c_i\le 1$. However, you may be interested in the correlation coefficient $r_i=\frac{\langle f, T_i\rangle}{\|f\|\,\|T_i\|}$, which does lie in $[-1,1]$ and can be thought of as a normalized $c_i$. The inner product and norms here can be taken either from the weighted $L^2$ space in item 1 or from its discrete analogue in item 2.
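Items 2 and 4 can be sketched in Python, using the zeros of $T_N$, $x_k=\cos\bigl(\pi(k+\tfrac12)/N\bigr)$, as the discrete nodes (function names and the default $N$ are my own choices; the correlation assumes $f$ is not identically zero at the nodes):

```python
import numpy as np

def cheb_coeffs_discrete(f, m, N=64):
    """c_i = (2/N) * sum_k f(x_k) T_i(x_k) at the zeros x_k of T_N.

    Uses T_i(cos theta) = cos(i*theta); valid for i < N. With the
    question's convention (subtracting c_0/2 from the sum), the
    factor 2/N applies to i = 0 as well.
    """
    theta = np.pi * (np.arange(N) + 0.5) / N   # x_k = cos(theta_k)
    fx = f(np.cos(theta))
    return np.array([(2.0 / N) * np.sum(fx * np.cos(i * theta))
                     for i in range(m + 1)])

def correlation(f, i, N=64):
    """Normalized coefficient r_i = <f, T_i> / (||f|| ||T_i||) in [-1, 1],
    using the discrete inner product over the same nodes."""
    theta = np.pi * (np.arange(N) + 0.5) / N
    fx, ti = f(np.cos(theta)), np.cos(i * theta)
    return float(np.dot(fx, ti) / (np.linalg.norm(fx) * np.linalg.norm(ti)))
```

For $f=T_2$ the discrete sum recovers the coefficient vector $(0,0,1,0,\dots)$ exactly, and $r_2=1$ (resp. $-1$ for $-T_2$).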
  • OK, I get it. I agree that finding $x_k$, e.g. by linear interpolation, and then using the discrete orthogonality sum to find the coefficients is easier than the integral :-). Thanks! 2012-06-11