3

I have a sequence of unit vectors $\vec{v}_0,\vec{v}_1,\ldots,\vec{v}_k,\ldots$ with the following property: $\lim_{i\rightarrow\infty}\vec{v}_{i} = \vec{\alpha}$, i.e. the sequence converges to a unit vector $\vec{\alpha}$.

As the sequence is generated by a poorly understood process, I am interested in modelling $\vec{v}_k$ given the previously generated vectors $\vec{v}_0,\vec{v}_1,\ldots,\vec{v}_{k-1}$.

What mathematical tools are available that allow me to discover a vector function $\vec{f}$ such that $\vec{v}_k\approx \vec{f}(\vec{v}_{k-1},\vec{v}_{k-2},\ldots,\vec{v}_{k-n})$, for a given $n$, in the $L_p$-norm sense?

EDIT: I am looking for something along the lines of Newton's Forward Difference Formula, which predicts interpolated values between tabulated points, except for two differences in my problem: 1) Newton's Forward Difference Formula applies to a scalar sequence, and 2) I am extrapolating at one end of the sequence, not interpolating between given values.

ADDITIONAL INFO: Below are plots of the individual components of an 8-tuple unit vector from a sequence of 200:

[Plot: individual components of the 8-tuple unit vector sequence]

  • 0
@Tunococ: I have added plots of an 8-tuple unit vector sequence to the problem description. You may inspect how each component varies. (2012-08-25)

3 Answers

2

Seeing how well behaved the sequences are, I would suggest a Richardson extrapolation, performed on every coordinate separately. The reason is that the underlying model of the Richardson extrapolation seems to fit your data nicely. In the following I will make some assumptions that I hope are true. Mainly:

  • you are not as much interested in $v_{n+1}$ as you are in the limiting value $\alpha$
  • (your data is generated by some iterative calculations)

Let us assume that the sequence of a single coordinate behaves like $$ b_n \sim \alpha + c_1 n^{-1} + c_2 n^{-2} + \cdots \tag{*}$$ This is a simple form of a Richardson extrapolation (which allows general exponents $n^{k}$). The nice thing about this simple version is how easily it can be solved.

Truncating the series (*) after $c_N n^{-N}$ we need $(N+1)$ data points to define all coefficients. As we are only interested in the variable $\alpha$ we basically have to solve a set of $(N+1)$ equations:
$$\begin{align} b_n &\sim \alpha + c_1 n^{-1} + c_2 n^{-2} + \cdots + c_N n^{-N} \\ b_{n+1} &\sim \alpha + c_1 (n+1)^{-1} + c_2 (n+1)^{-2} + \cdots + c_N (n+1)^{-N} \\ &\ \ \vdots \\ b_{n+N} &\sim \alpha + c_1 (n+N)^{-1} + c_2 (n+N)^{-2} + \cdots + c_N (n+N)^{-N} \end{align}$$
Solving these equations for $\alpha$ we get a nice closed form
$$\alpha(n,N) = \sum_{k=0}^N \frac{b_{n+k}\,(n+k)^N\,(-1)^{k+N}}{k!\,(N-k)!}.$$
For a given $N$, $\alpha(n)$ thus defines a new sequence that (should) converge faster as long as the original sequence was close to what we assumed in the model (*).
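For concreteness, here is a small Python sketch of this closed form; the array layout, index convention, and the final renormalization of the vector estimate are illustrative choices, not part of the derivation:

```python
from math import factorial
import numpy as np

def richardson_limit(b, n0=1):
    """Estimate the limit alpha of a scalar sequence from the closed form above,
    assuming b_m ~ alpha + c_1/m + ... + c_N/m**N.

    b  : the N+1 consecutive terms b_{n0}, b_{n0+1}, ..., b_{n0+N}
    n0 : the index of the first supplied term (n0 >= 1)
    """
    b = np.asarray(b, dtype=float)
    N = len(b) - 1
    alpha = 0.0
    for k in range(N + 1):
        weight = (-1.0) ** (k + N) * (n0 + k) ** N / (factorial(k) * factorial(N - k))
        alpha += weight * b[k]
    return alpha

# Usage on a (num_steps, 8) array V of observed unit vectors, taking the last
# N+1 = 5 points of each component and renormalizing the extrapolated vector:
# N = 4
# alpha_hat = np.array([richardson_limit(V[-(N + 1):, j], n0=len(V) - N)
#                       for j in range(V.shape[1])])
# alpha_hat /= np.linalg.norm(alpha_hat)
```

Keep $N$ small in practice; the $(n+k)^N$ weights grow quickly and a large $N$ leads to cancellation in floating point.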

Now, assuming that you get your data points by recursively using the result of the previous calculation, you might be tempted to do the Richardson extrapolation as soon as possible and append whatever point you get after Richardson + a new iteration to the original sequence. I would not advise you to do so, however. It is possible that the Richardson extrapolation overshoots. The sequence would thus become an alternating series, and all of a sudden the model (*) is not very good anymore. Here is what you can do instead (a rough sketch in code follows the list):

  • calculate some 10-20 data points
  • use the Richardson extrapolation on these points to calculate a possible $\alpha_1$
  • restart the recursive calculation at this $\alpha_1$ and iterate for another 10-20 rounds
  • use these last 10-20 rounds (completely disregarding the data points from before the first Richardson extrapolation) to calculate another Richardson extrapolation
  • and so on...
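A rough sketch of this restart loop, assuming `iterate` stands in for one step of your own (computationally intensive) process and reusing `richardson_limit` from the sketch above; the block length and extrapolation order are illustrative:

```python
import numpy as np

def restarted_richardson(v0, iterate, block=15, rounds=5, order=4):
    """Alternate `block` iterations of the underlying process with one
    Richardson extrapolation per block, restarting from the extrapolated vector."""
    v = np.asarray(v0, dtype=float)
    for _ in range(rounds):
        history = []
        for _ in range(block):
            v = iterate(v)
            v = v / np.linalg.norm(v)        # keep the iterate on the unit sphere
            history.append(v)
        H = np.array(history)                # shape (block, dim), only this block
        # extrapolate each coordinate separately from the last order+1 points;
        # a small order limits round-off in the (n+k)^N weights
        n0 = block - order
        alpha = np.array([richardson_limit(H[-(order + 1):, j], n0=n0)
                          for j in range(H.shape[1])])
        v = alpha / np.linalg.norm(alpha)    # restart from the extrapolated point
    return v
```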

If your data is not calculated recursively, you can simply use the last $N$ points to do one Richardson extrapolation and take that value as your result $\alpha$.

  • 0
You are correct on the two assumptions. I am interested in obtaining an accurate approximation of the limiting $\vec{\alpha}$. The data is generated by computationally intensive iterations. (2012-08-25)
0

You probably want to use generalized vector finite differences on your known vectors, paying attention to whether a forward, backward, or central difference will be stable for your situation (consult a numerical differential equations text for the pitfalls), and then use Euler's method (or something more advanced) to extrapolate. You can either renormalize to the unit sphere after each step, or work the renormalization into your differential equation.
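A rough sketch of the idea; the function name, step size, difference order, and the assumption that the observed vectors are rows of a NumPy array are illustrative choices only:

```python
import numpy as np

def extrapolate_next(V, h=1.0, order=2):
    """Predict the next unit vector from the last few observed vectors using a
    backward difference as the tangent estimate, one explicit Euler step, and a
    renormalization back onto the unit sphere."""
    V = np.asarray(V, dtype=float)
    if order == 1:
        dv = V[-1] - V[-2]                               # first-order backward difference
    else:
        dv = (3.0 * V[-1] - 4.0 * V[-2] + V[-3]) / 2.0   # second-order backward difference
    v_next = V[-1] + h * dv                              # explicit Euler step
    return v_next / np.linalg.norm(v_next)               # renormalize to the unit sphere
```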

I may have more time in a few days to give you a more concrete example.

  • 0
@Ang Zhi Ping: I wasn't able to add this comment to _his_ answer, but the Richardson extrapolation that 'example' showed is one of the "more advanced" Numerical Analysis extrapolation methods I mentioned. (2012-08-25)
0

A lot of methods effectively work by fitting a polynomial to your data, and then using that polynomial to guess a new value. The main reason for polynomials is that they are easy to work with.

Given that you know your functions have asymptotes, you may have better success by choosing a form that incorporates that fact, such as a rational function. If nothing else, you can always use a sledgehammer to derive a method -- e.g. use the method of least squares to select the coefficients of your rational functions.
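As a rough sketch, using one possible rational model with a horizontal asymptote, $v_j(n) \approx a + b/(n+c)$ per component, and SciPy's least-squares fitter (the model and starting values are illustrative choices):

```python
import numpy as np
from scipy.optimize import curve_fit

def rational_model(n, a, b, c):
    # a is the horizontal asymptote as n -> infinity
    return a + b / (n + c)

def fit_limit(V):
    """Fit the rational model to every component of the observed vectors
    (rows of V) by least squares and return the renormalized asymptote."""
    V = np.asarray(V, dtype=float)
    n = np.arange(1, len(V) + 1, dtype=float)
    limits = []
    for j in range(V.shape[1]):
        y = V[:, j]
        p0 = (y[-1], y[0] - y[-1], 1.0)   # start the asymptote at the last observed value
        params, _ = curve_fit(rational_model, n, y, p0=p0, maxfev=10000)
        limits.append(params[0])          # params[0] is the fitted asymptote
    alpha = np.array(limits)
    return alpha / np.linalg.norm(alpha)
```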

  • 0
I agree. Looking at the plots, a Richardson extrapolation should work fine (which is basically a fit of the form $v(n) = a+b n^{k_1}+c n^{k_2} + \cdots$). (2012-08-25)