
Let $S_k$ be an arbitrary list of numbers (not necessarily ordered).

List $S_{k+1}$ is created via the cumulative sum of elements from list $S_k$.

For example, if $S_k = [2,5,7,9]$ then $S_{k+1} = [2,7,14,23]$.

Is there a way to tell what numbers will be in list $S_n$ with $n>k$ without needing to create all the intermediate lists?

3 Answers


Using $S_k(i)$ to denote the $i^{\text{th}}$ term of $S_k$, we have

$$S_n(j) = \sum_{i \le j} {n-k-1+j-i \choose j-i} S_k(i)$$ so you only need to do weighted sums over the original sequence.
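The closed form can be checked numerically against repeated cumulative summation; here is a minimal Python sketch (function names are mine, indices 0-based):

```python
from math import comb
from itertools import accumulate

def iterate_cumsum(s, steps):
    """Apply the cumulative-sum operation `steps` times."""
    for _ in range(steps):
        s = list(accumulate(s))
    return s

def direct(s_k, steps):
    """Compute S_{k+steps} directly via the binomial weights:
    S_n(j) = sum_{i<=j} C(n-k-1+j-i, j-i) * S_k(i), with steps = n-k."""
    return [sum(comb(steps - 1 + j - i, j - i) * s_k[i] for i in range(j + 1))
            for j in range(len(s_k))]

s = [2, 5, 7, 9]
print(iterate_cumsum(s, 1))                    # [2, 7, 14, 23], the example above
print(direct(s, 3) == iterate_cumsum(s, 3))    # True
```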

  • Fantastic answer, thank you (2012-12-31)
  • @Henry, nice solution. I am curious whether there is some connection with the $L$-matrix solution that WimC gave? (2012-12-31)
  • @Amzoti: Yes: all the non-zero terms in the powers of his matrix are these binomial coefficients, as Marvis now seems to have noted (2012-12-31)
  • @Amzoti As I have indicated in the comment to WimC's answer, we can prove that $L^k(i,j) = \dbinom{k-1+i-j}{k-1}$ (2012-12-31)
  • @Marvis and Henry, thank you both - nice result! (2012-12-31)
  • @Amzoti: the entries of $L^m$ are polynomials involving the unsigned Stirling numbers of the first kind. Evaluated for some $m$, they give binomials, leading to Henry's formula. (2012-12-31)
  • @Gottfried - thanks, I always feel so stupid given the quickness of the replies, and hence my participation on MSE! (2012-12-31)
  • @Henry Curious how one even goes about creating such a formula? I've been trying to do it myself, but with great difficulty. (2012-12-31)
  • @Amzoti I can imagine that another link is: how do you efficiently compute these weights? (2012-12-31)
  • @Amzoti: I started with $S_k=[1,100,10000,1000000]$ and then looked at the next few iterations: there was a strong link to [Pascal's triangle](http://en.wikipedia.org/wiki/Pascal%27s_triangle) (2012-12-31)
  • @WimC, is there an answer to the question on an efficient calculation of the weights? (2012-12-31)
  • @user51819: You can use pattern recognition by simply looking at example powers of $L$. More systematically, one can find the symbolic description of the entries of $L^k$: determine the matrix logarithm of $L$, say $LL=\log(L)$, multiply it by a symbolic scalar, say $k$, and then build the matrix exponential $LK=\exp(k \cdot \log(L))$ again. You'll get polynomial expressions in each entry, which are easily recognizable using the Stirling numbers of the first kind with the parameter $k$. (2012-12-31)
  • @Amzoti The $L^n$ method is $O(\log(n))$, which doesn't sound too bad. (2012-12-31)
  • @Henry Is there a faster way to compute the entries that does not require iterating over the old $S[i]$ repeatedly? (Should I make this a separate question?) (2012-12-31)

You get the next list by multiplication from the left with a lower triangular matrix $L$ of all $1$'s on and below the diagonal.

$$ L = \begin{pmatrix} 1 & 0 & \dotsc & 0 \\ \vdots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ 1 & \dotsc & \dotsc & 1 \end{pmatrix} $$

Then you can quickly find $S_n$ by computing $L^{n-k} S_k$; the matrix power can be computed efficiently, e.g. by the square-and-multiply method.
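A minimal Python sketch of square-and-multiply on $L$, using plain lists of lists (function names are mine):

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(m, e):
    """Square-and-multiply: O(log e) matrix multiplications."""
    n = len(m)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        e >>= 1
    return result

n = 4
L = [[int(j <= i) for j in range(n)] for i in range(n)]  # lower triangle of 1's
L3 = mat_pow(L, 3)
S = [2, 5, 7, 9]
print([sum(L3[i][j] * S[j] for j in range(n)) for i in range(n)])  # [2, 11, 34, 80]
```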

  • Is there any way to do it by iterating through the positions of $S_n$ instead of needing a large matrix? (2012-12-31)
  • You don't actually need a large matrix, since all entries along a diagonal of $L^n$ are equal. (2012-12-31)
  • In fact $L^k(i,j) = \dbinom{k-1+i-j}{k-1}$ (2012-12-31)
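The closed form for the entries claimed in the last comment can be spot-checked by computing a small power of $L$ directly; a Python sketch (0-based indices, where only the difference $i-j$ matters; names are mine):

```python
from math import comb

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

# Check L^k(i,j) = C(k-1+i-j, k-1) on a small case.
n, k = 5, 4
L = [[int(j <= i) for j in range(n)] for i in range(n)]
P = [row[:] for row in L]
for _ in range(k - 1):
    P = mat_mul(P, L)

ok = all(P[i][j] == (comb(k - 1 + i - j, k - 1) if j <= i else 0)
         for i in range(n) for j in range(n))
print(ok)  # True
```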

This is an answer to the request in some of the comments to show how to arrive at the formula for the entries of $L^h$.

Here is an example of how we can find the entries of $L^h$ symbolically. Working with the $4 \times 4$ triangular matrix $L$, I compute $$ M(h) = \exp( h \cdot \log (L)) = L^h $$ and get for $M$ $$ M(h) = \left[ \begin{array} {rrrr} 1 & . & . & . \\ 1h & 1 & . & . \\ \frac 12( 1 h^2+ 1 h) & 1h & 1 & . \\ \frac 16( 1h^3+ 3 h^2+ 2 h) & \frac 12 (1h^2+ 1h) & 1h & 1 \end{array} \right]$$

Here a trained eye recognizes the Stirling numbers of the first kind as the coefficients of the powers of $h$, and because the matrix has constant diagonals it is easy to state the transfer formula: $$ M(h)\cdot S_k = S_{k+h} $$

One more step shows that evaluating the polynomials in the entries leads to binomial coefficients, which is a well-known property of the Stirling numbers of the first kind: the Vandermonde matrix LDU-decomposes into the matrix of Stirling numbers of the 2nd kind and the matrix of binomial coefficients, so multiplying by the matrix of Stirling numbers of the 1st kind (the inverse of the 2nd-kind matrix) reduces it to binomials.


I had fun proceeding a bit further. Factorizing the symbolic entries, under the hypothesis that the coefficients are always Stirling numbers of the first kind and the fractional cofactors are reciprocal factorials, gives
$$ M(h) = \left[ \begin{array} {llll} 1 & . & . & . \\ 1(h) & 1 & . & . \\ \frac 12( h(h+1)) & 1(h) & 1 & . \\ \frac 16( h(h+1)(h+2)) & \frac 12 ( h(h+1)) & 1(h) & 1 \end{array} \right]$$ and this gives immediately the binomial coefficients $$ M(h) = \left[ \begin{array} {cccc} 1 & . & . & . \\ \binom{h}{1} & 1 & . & . \\ \binom{h+1}{2} & \binom{h}{1} & 1 & . \\ \binom{h+2}{3} & \binom{h+1}{2} & \binom{h}{1} & 1 \end{array} \right]$$

and a routine could solve the problem, given the vector $S$ of dimension $n$, in the following way:

    T = 0*S;          \\ initialize an empty array of the size of S
    b = 1;            \\ contains the current binomial weight
    for(j=1, n,
        for(k=j, n, T[k] += S[k+1-j]*b);
        b *= (h+j-1)/j;
    );
    return(T);

So the looping takes about $n^2/2$ operations.
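For readers without PARI/GP, here is a sketch of the same routine in Python (0-based indexing; the function name and the integer-exact binomial update are my choices, not from the original):

```python
def step_h(s, h):
    """Compute T = M(h)·S in ~n^2/2 scalar operations.

    b walks along the binomial weights C(h-1+d, d) for diagonal
    offset d = 0, 1, ...; integer arithmetic keeps it exact.
    """
    n = len(s)
    t = [0] * n
    b = 1                            # current weight, starts at C(h-1, 0) = 1
    for d in range(n):               # diagonal offset (j-1 in the GP code)
        for k in range(d, n):
            t[k] += s[k - d] * b
        b = b * (h + d) // (d + 1)   # C(h-1+d, d) -> C(h+d, d+1), exact
    return t

print(step_h([2, 5, 7, 9], 3))  # [2, 11, 34, 80]
```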

  • Is there a way to do it in better than $n^2/2$ operations? (2012-12-31)
  • @user51819: hmm, I don't think we can get below $n^2/2$ scalar operations. If we instead measure in array/matrix operations, the inner loop counts as one vector operation, giving an order of $n$ operations, and so on; but I think this is only (and even then only remotely) interesting in the unlikely case that we deal with **huge** arrays $S$ and a near-hardware implementation of matrix operations makes them similar to scalar operations in their time consumption. (But I have no expertise on that latter question.) (2012-12-31)
  • @user51819: to reduce the computing time of pre-multiplying a vector by a lower triangular matrix below the magical $n^2/2$ scalar operations, you might look for something like Karatsuba multiplication (don't have the name at hand) and the discrete fast Fourier transform. I vaguely remember that those methods help to break that lower limit. (2012-12-31)
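As the last comment hints, the weighted sum is exactly a truncated linear convolution of $S_k$ with the binomial weights, which is what makes FFT-style fast-multiplication methods applicable. A minimal Python sketch, with a naive $O(n^2)$ convolution standing in for an FFT-based one (names are mine):

```python
from math import comb

def convolve(a, b):
    """Naive linear convolution; an FFT-based version runs in O(n log n)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def step_h_conv(s, h):
    """S_{k+h} is S_k convolved with the weights C(h-1+d, d), truncated."""
    n = len(s)
    w = [comb(h - 1 + d, d) for d in range(n)]
    return convolve(s, w)[:n]

print(step_h_conv([2, 5, 7, 9], 3))  # [2, 11, 34, 80]
```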