(Might as well...)
The Björck-Pereyra idea is based on the fact that the usual divided-difference algorithms for polynomial interpolation can be recast as a matrix decomposition of the Vandermonde matrix. I suggest having a look at the original paper, as well as Golub and Van Loan's treatment, for details; for convenience, I'll only give the final results here.
First, we set up notation. Let
$\mathbf V(x_0,\dots,x_n)=\begin{pmatrix}1&1&\cdots&1\\x_0&x_1&\cdots&x_n\\\vdots&\vdots&\ddots&\vdots\\x_0^n&x_1^n&\cdots&x_n^n\end{pmatrix}$
and introduce the diagonal matrix
$\mathbf D_k=\mathrm{diag}\left(\underbrace{1,\dots,1}_{k+1},\frac1{x_{k+1}-x_0},\dots,\frac1{x_n-x_{n-k-1}}\right)$
and the lower bidiagonal matrix
$\mathbf L_k(h)=\left(\begin{array}{ccc|cccc} 1&&&&&&\\&\ddots&&&&&\\&&1&&&&\\\hline&&&1&&&\\&&&-h&\ddots&&\\&&&&\ddots&\ddots&\\&&&&&-h&1\end{array}\right)$
where the upper left block is a $k\times k$ identity matrix.
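To make the block structure concrete, here are the $n=3$, $k=1$ instances, written out by hand to match the pictures above (x0, ..., x3 and h are placeholder symbols):

(* D_1 for nodes {x0, x1, x2, x3}: two leading ones, then reciprocal node differences *)
DiagonalMatrix[{1, 1, 1/(x2 - x0), 1/(x3 - x1)}] // MatrixForm

(* L_1(h): a 1×1 identity block, then -h on the subdiagonal of the trailing block *)
{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, -h, 1, 0}, {0, 0, -h, 1}} // MatrixForm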
Then,
$\mathbf V^{-1}=\left(\mathbf L_0(1)^\top\cdot\mathbf D_0\cdots \mathbf L_{n-1}(1)^\top\cdot\mathbf D_{n-1}\right)\left(\mathbf L_{n-1}(x_{n-1})\cdots\mathbf L_1(x_1)\cdot\mathbf L_0(x_0)\right)$
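As a quick sanity check, the $n=1$ case reproduces the familiar inverse of the $2\times2$ Vandermonde matrix:
$\mathbf L_0(1)^\top\cdot\mathbf D_0\cdot\mathbf L_0(x_0)=\begin{pmatrix}1&-1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\0&\frac1{x_1-x_0}\end{pmatrix}\begin{pmatrix}1&0\\-x_0&1\end{pmatrix}=\frac1{x_1-x_0}\begin{pmatrix}x_1&-1\\-x_0&1\end{pmatrix}=\mathbf V(x_0,x_1)^{-1}$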
Multiplying diagonal and bidiagonal matrices together takes much less effort than multiplying general matrices (each factor touches only $O(n)$ entries), and applying all the factors to a right-hand side costs $O(n^2)$ operations in total, giving Björck-Pereyra an edge over vanilla Gaussian elimination's $O(n^3)$; a sketch of such a solver follows the code below.
Here's some sundry Mathematica code:
(* L_k(h): identity, with -h on the subdiagonal starting at position (k+2, k+1) *)
lmat[n_Integer, k_Integer, h_] := 
  SparseArray[{Band[{1, 1}] -> 1, 
    Band[{2, 1}] -> PadLeft[ConstantArray[-h, {n - k}], n]}, {n + 1, n + 1}]

(* D_k: k + 1 leading ones, then the reciprocal node differences *)
dmat[k_Integer, vec_?VectorQ] := 
  DiagonalMatrix[Join[ConstantArray[1, k + 1], 
    1/(Drop[vec, k + 1] - Drop[vec, -k - 1])]]

(* assemble the two groups of factors in the order given above *)
vaninv[vec_?VectorQ] := Module[{n = Length[vec] - 1}, 
  Apply[Dot, Table[Transpose[lmat[n, k, 1]].dmat[k, vec], {k, 0, n - 1}]] . 
  Apply[Dot, Reverse[MapIndexed[lmat[n, First[#2] - 1, #1] &, Most[vec]]]]]
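Since the whole point of the factored form is that $\mathbf V^{-1}$ never has to be formed explicitly, here is a sketch (my own addition, reusing lmat[] and dmat[] above) of a solver that applies the factors to a right-hand side one at a time, each dot costing only $O(n)$ flops:

(* apply V^(-1) to f factor by factor, without forming the inverse *)
vansolve[vec_?VectorQ, f_?VectorQ] := Module[{n = Length[vec] - 1, c},
  (* right group: L_{n-1}(x_{n-1}) ... L_0(x_0), rightmost factor applied first *)
  c = Fold[lmat[n, #2, vec[[#2 + 1]]].#1 &, f, Range[0, n - 1]];
  (* left group: L_0(1)^T D_0 ... L_{n-1}(1)^T D_{n-1}, again right to left *)
  Fold[Transpose[lmat[n, #2, 1]].(dmat[#2, vec].#1) &, c, Range[n - 1, 0, -1]]]

Symbolically, vansolve[vec, f] should agree with vaninv[vec].f (FullSimplify the difference to check).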
Here's a test:
vec = {a, b, c, d, e, f};
vaninv[vec].LinearAlgebra`VandermondeMatrix[vec] == 
  LinearAlgebra`VandermondeMatrix[vec].vaninv[vec] == 
  IdentityMatrix[Length[vec]] // FullSimplify
which should yield True.
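As a further numerical spot check (my addition; the nodes here are arbitrary), one can build $\mathbf V$ explicitly and look at the residual, which should come out at roundoff level:

(* build V(x_0, ..., x_n) as defined above and test vaninv[] against it *)
SeedRandom[1];
nodes = RandomReal[{0, 1}, 6];
vmat = Table[x^i, {i, 0, 5}, {x, nodes}];
Max[Abs[vaninv[nodes].vmat - IdentityMatrix[6]]]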