
I was working on something in MATLAB a couple of months back and found a tweak that speeds up Gaussian elimination in MATLAB: divide the original matrix into four block matrices and then solve the system block-wise (sketched below).
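For concreteness, here is a minimal sketch of what such a block-wise solve typically looks like. The original MATLAB code isn't shown, so this is an assumption: it uses the standard Schur-complement elimination, and the function name `block_solve` and split index `k` are purely illustrative.

```python
import numpy as np

def block_solve(M, b, k):
    """Solve M x = b by partitioning M into four blocks at index k.

    A sketch of the standard Schur-complement approach; the split
    index k plays the role of the "split parameter" below.
    """
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    b1, b2 = b[:k], b[k:]

    # Eliminate the top-left block: form the Schur complement S = D - C A^{-1} B.
    A_inv_B = np.linalg.solve(A, B)    # A^{-1} B
    A_inv_b1 = np.linalg.solve(A, b1)  # A^{-1} b1
    S = D - C @ A_inv_B

    # Back-substitute block-wise.
    x2 = np.linalg.solve(S, b2 - C @ A_inv_b1)
    x1 = A_inv_b1 - A_inv_B @ x2
    return np.concatenate([x1, x2])

# Check against a direct solve on a well-conditioned matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)
b = rng.standard_normal(6)
assert np.allclose(block_solve(M, b, k=4), np.linalg.solve(M, b))
```

With $n = 6$ and $k = 4$, the split sits at roughly $\tfrac{2}{3}$ of the matrix, which is the "split parameter" in question 1 below.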

I am, however, wondering about a couple of things.

  1. Why is the optimal "split parameter" $\approx \frac{2}{3}$? Is there a heuristic or quick argument for this?

  2. Why is the number of additions independent of how we block-partition the matrix?

1 Answer


Volker Strassen amazed the algorithmic world in 1969 by producing an $O(n^{\log_2 7})$ algorithm for matrix multiplication, based on a similar blocking idea but applied recursively. The algorithm can also be adapted to matrix inversion, and hence to solving linear systems.
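To make the recursion concrete, here is a minimal sketch of Strassen's scheme (this example is mine, not from the original answer; it assumes $n$ is a power of two for simplicity):

```python
import numpy as np

def strassen(X, Y):
    """Strassen's algorithm for n x n matrices, n a power of 2."""
    n = X.shape[0]
    if n == 1:
        return X * Y
    k = n // 2
    A, B, C, D = X[:k, :k], X[:k, k:], X[k:, :k], X[k:, k:]
    E, F, G, H = Y[:k, :k], Y[:k, k:], Y[k:, :k], Y[k:, k:]

    # Strassen's seven half-size products (instead of the naive eight).
    P1 = strassen(A, F - H)
    P2 = strassen(A + B, H)
    P3 = strassen(C + D, E)
    P4 = strassen(D, G - E)
    P5 = strassen(A + D, E + H)
    P6 = strassen(B - D, G + H)
    P7 = strassen(A - C, E + F)

    # Recombine into the four blocks of the product.
    top = np.hstack([P5 + P4 - P2 + P6, P1 + P2])
    bottom = np.hstack([P3 + P4, P1 + P5 - P3 - P7])
    return np.vstack([top, bottom])

X = np.arange(16.0).reshape(4, 4)
Y = np.ones((4, 4))
assert np.allclose(strassen(X, Y), X @ Y)
```

The point is that only seven half-size multiplications are needed where the naive block product uses eight, giving the recurrence $T(n) = 7\,T(n/2) + O(n^2) = O(n^{\log_2 7})$.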

Since then, researchers have produced even better algorithms. The best known so far is by Coppersmith and Winograd (1987), who achieve $O(n^{2.376})$. Recent work suggests that matrix multiplication is actually possible in $O_\epsilon(n^{2+\epsilon})$ for any $\epsilon > 0$. Usually this is stated as $\omega = 2$, where $\omega$ is the infimum of all exponents $k$ such that matrix multiplication is possible in time $O(n^k)$.

Matrix multiplication is in fact equivalent to matrix inversion, in the sense that the two problems have the same exponent $\omega$. A proof can be found in the well-known textbook CLRS.
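One direction of the reduction is easy to sketch (this is the standard Schur-complement argument, as in CLRS, not something stated in the original post; it assumes the leading block $A$ and the Schur complement $S$ are invertible, with the general case handled by a standard symmetrization trick):

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix}, \qquad S = D - C A^{-1} B.$$

So one $n \times n$ inversion costs two inversions of size $n/2$ plus a constant number of half-size multiplications; the recurrence $I(n) = 2\,I(n/2) + O(M(n))$ then gives $I(n) = O(M(n))$ for any multiplication cost $M(n) = \Omega(n^2)$ satisfying mild regularity conditions.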

All these algorithms - even Strassen's - are generally considered impractical (the constants hidden inside the big O are huge), and so are rarely, if ever, used. However, "blocking" is a widespread technique, since accessing memory "locally" is more efficient (due to the way caches work).
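As an illustration of that access pattern (my sketch, not from the answer; in Python the cache effect is largely hidden by NumPy's own optimized kernels, so read this as pseudocode for the loop structure), a tiled multiplication looks like:

```python
import numpy as np

def blocked_matmul(X, Y, bs=64):
    """Multiply X and Y by iterating over bs x bs tiles.

    Each tile is reused across a whole block row/column while it is
    still hot in cache; that locality, not a better operation count,
    is why blocking helps in practice.
    """
    n, m = X.shape
    m2, p = Y.shape
    assert m == m2
    Z = np.zeros((n, p))
    for i in range(0, n, bs):
        for k in range(0, m, bs):
            for j in range(0, p, bs):
                # Accumulate the contribution of one tile product.
                Z[i:i+bs, j:j+bs] += X[i:i+bs, k:k+bs] @ Y[k:k+bs, j:j+bs]
    return Z

X = np.random.default_rng(1).standard_normal((200, 150))
Y = np.random.default_rng(2).standard_normal((150, 100))
assert np.allclose(blocked_matmul(X, Y), X @ Y)
```

The block size `bs` is a tuning knob chosen to make a few tiles fit in cache at once; its best value is hardware-dependent.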

  • Here are [Strassen's](http://dx.doi.org/10.1007/BF02165411) and [Coppersmith/Winograd's](http://dx.doi.org/10.1145/28395.28396) articles. (2011-04-11)