
Is there a quick way to invert the $n\times n$ Vandermonde matrix with columns $(1, x, x^2, ..., x^{n-1})$ where $x$ takes values $0,1,...,n-1$ (in ascending order from left to right)?

Perhaps by row operations $(A|I)\to (I|A^{-1})$ where $A$ is the Vandermonde matrix, and $I$ the identity matrix? I don't see how to do it though... or maybe there is another way?

Thank you.

  • See [here](http://www.proofwiki.org/wiki/Inverse_of_Vandermonde's_Matrix).
  • The "quick" ($O(n^2)$) method is to use [divided differences](http://mathworld.wolfram.com/DividedDifference.html) (i.e., the [Björck-Pereyra algorithm](http://dx.doi.org/10.1090/S0025-5718-1970-0290541-1)).
  • Somewhat related: the determinant of such a matrix, for fixed $n$, is $G(n+1)$, where $G$ is the [Barnes G-function](http://en.wikipedia.org/wiki/Barnes_G-function).

1 Answer


(Might as well...)

The Björck-Pereyra idea is based on the fact that the usual divided-difference algorithms for polynomial interpolation can be recast as matrix decompositions of the Vandermonde matrix. I suggest having a look at the original paper, as well as Golub and Van Loan's treatment, for the details; for convenience, I'll only give the final results here.

First, we set up notation. Let

$$\mathbf V(x_0,\dots,x_n)=\begin{pmatrix}1&1&\cdots&1\\x_0&x_1&\cdots&x_n\\\vdots&\vdots&\ddots&\vdots\\x_0^n&x_1^n&\cdots&x_n^n\end{pmatrix}$$

and introduce the diagonal matrix

$$\mathbf D_k=\mathrm{diag}\left(\underbrace{1,\dots,1}_{k+1},\frac1{x_{k+1}-x_0},\dots,\frac1{x_n-x_{n-k-1}}\right)$$

and the lower bidiagonal matrix

$$\mathbf L_k(h)=\left(\begin{array}{ccc|cccc} 1&&&&&&\\&\ddots&&&&&\\&&1&&&&\\\hline&&&1&&&\\&&&-h&\ddots&&\\&&&&\ddots&\ddots&\\&&&&&-h&1\end{array}\right)$$

where the upper left block is a $k\times k$ identity matrix.
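For concreteness (a small instance I've added for illustration): with $n=3$ and $k=1$, these are the $4\times 4$ matrices

$$\mathbf L_1(h)=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&-h&1&0\\0&0&-h&1\end{pmatrix},\qquad\mathbf D_1=\mathrm{diag}\left(1,\,1,\,\frac1{x_2-x_0},\,\frac1{x_3-x_1}\right)$$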

Then,

$$\mathbf V^{-1}=\left(\mathbf L_0(1)^\top\cdot\mathbf D_0\cdots \mathbf L_{n-1}(1)^\top\cdot\mathbf D_{n-1}\right)\left(\mathbf L_{n-1}(x_{n-1})\cdots\mathbf L_1(x_1)\cdot\mathbf L_0(x_0)\right)$$
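As a quick sanity check (a worked case I've added, not from the original sources), take $n=1$:

$$\mathbf L_0(1)^\top\cdot\mathbf D_0\cdot\mathbf L_0(x_0)=\begin{pmatrix}1&-1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\0&\frac1{x_1-x_0}\end{pmatrix}\begin{pmatrix}1&0\\-x_0&1\end{pmatrix}=\frac1{x_1-x_0}\begin{pmatrix}x_1&-1\\-x_0&1\end{pmatrix}$$

which is indeed the inverse of $\mathbf V(x_0,x_1)=\begin{pmatrix}1&1\\x_0&x_1\end{pmatrix}$.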

It takes less effort than usual to multiply diagonal and bidiagonal matrices together; in particular, sweeping a vector through all $2n$ factors costs only $O(n)$ per factor, hence $O(n^2)$ in total, which gives Björck-Pereyra its edge over the $O(n^3)$ of vanilla Gaussian elimination. (A matrix-free sketch of this sweep follows the code below.)


Here's some sundry Mathematica code:

(* L_k(h): unit lower bidiagonal matrix with -h on the subdiagonal, below a k x k identity block *)
lmat[n_Integer, k_Integer, h_] := SparseArray[{Band[{1, 1}] -> 1, 
   Band[{2, 1}] -> PadLeft[ConstantArray[-h, {n - k}], n]},
  {n + 1, n + 1}]

(* D_k: k + 1 ones followed by the reciprocal node differences *)
dmat[k_Integer, vec_?VectorQ] := 
 DiagonalMatrix[Join[ConstantArray[1, k + 1], 
   1/(Drop[vec, k + 1] - Drop[vec, -k - 1])]]

(* assemble V^(-1) from the factorization above *)
vaninv[vec_?VectorQ] := 
 Module[{n = Length[vec] - 1}, 
  Apply[Dot, 
    Table[Transpose[lmat[n, k, 1]].dmat[k, vec], {k, 0, n - 1}]] .
  Apply[Dot, 
    Reverse[MapIndexed[lmat[n, First[#2] - 1, #1] &, Most[vec]]]]]
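
As an aside (this is a sketch of my own, not part of the original answer, and the name vansolve is invented): the same factors can be applied to a right-hand side one at a time, solving $\mathbf V\,\mathbf a=\mathbf f$ in $O(n^2)$ operations without ever forming the inverse:

vansolve[vec_?VectorQ, f_?VectorQ] :=
 Module[{n = Length[vec] - 1, y},
  (* apply the right-hand group L_{n-1}(x_{n-1}) ... L_0(x_0), rightmost factor first *)
  y = Fold[lmat[n, #2, vec[[#2 + 1]]].#1 &, f, Range[0, n - 1]];
  (* then the left-hand group, pair by pair: D_k, then L_k(1)^T, for k = n - 1 down to 0 *)
  Fold[Transpose[lmat[n, #2, 1]].(dmat[#2, vec].#1) &, y, Range[n - 1, 0, -1]]]

Each sparse matrix-vector product here costs $O(n)$, and vansolve[vec, f] should agree with vaninv[vec].f.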

Here's a test:

vec = {a, b, c, d, e, f};

vaninv[vec].LinearAlgebra`VandermondeMatrix[vec] ==
  LinearAlgebra`VandermondeMatrix[vec].vaninv[vec] ==
  IdentityMatrix[Length[vec]] // FullSimplify

should yield True.
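
To tie this back to the question's matrix (a usage example of my own), take the nodes $0,1,\dots,n-1$ with, say, $n=4$:

nodes = Range[0, 3];
mat = Transpose[Table[If[i == 0, 1, x^i], {x, nodes}, {i, 0, 3}]];  (* columns (1, x, x^2, x^3); the If avoids 0^0 *)
vaninv[nodes].mat == IdentityMatrix[4]

This should likewise return True, and vaninv[nodes] itself has exact rational entries.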