Volker Strassen amazed the algorithmic world in 1969 by producing an $O(n^{\log_2 7}) \approx O(n^{2.81})$ algorithm for matrix multiplication, based on a similar idea but applied recursively: a $2 \times 2$ block product is computed with seven multiplications instead of eight. The algorithm can also be adapted to matrix inversion, and hence to solving systems of linear equations.
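To make the recursion concrete, here is a minimal sketch, assuming NumPy and square matrices whose dimension is a power of two; the function name `strassen` and the cutoff below which it falls back to the classical product are illustrative choices, not part of any standard library.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursion: multiply two n x n matrices (n a power of two)
    using seven half-size products instead of eight. Illustrative sketch."""
    n = A.shape[0]
    if n <= cutoff:                      # small case: ordinary multiplication
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products that replace the eight of the naive block scheme.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four quadrants of the result from the seven products.
    C = np.empty((n, n), dtype=np.result_type(A, B))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Solving the resulting recurrence $T(n) = 7\,T(n/2) + O(n^2)$ gives the $O(n^{\log_2 7})$ bound.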
Since then, researchers have produced even better algorithms. For a long time the record was held by Coppersmith and Winograd (1987), who achieve $O(n^{2.376})$; later refinements have pushed the exponent slightly below $2.373$. It is widely conjectured that matrix multiplication is in fact possible in $O_\epsilon(n^{2+\epsilon})$ for any $\epsilon > 0$. Usually this is stated as $\omega = 2$, where $\omega$ is the infimum of all exponents $k$ such that matrix multiplication is possible in time $O(n^k)$.
Matrix multiplication is in fact equivalent to matrix inversion, in the sense that the two problems have the same exponent $\omega$. A proof can be found in the well-known textbook CLRS.
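One half of the equivalence is easy to exhibit: the product $AB$ of two $n \times n$ matrices can be read off from the inverse of a single $3n \times 3n$ block triangular matrix, so a fast inversion routine immediately yields fast multiplication:

$$
\begin{pmatrix} I & A & 0 \\ 0 & I & B \\ 0 & 0 & I \end{pmatrix}^{-1}
=
\begin{pmatrix} I & -A & AB \\ 0 & I & -B \\ 0 & 0 & I \end{pmatrix}.
$$

The converse direction, inverting a matrix using fast multiplication, goes through block decomposition and Schur complements.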
All these algorithms, even Strassen's, are largely impractical (the constants hidden inside the big O are large) and are rarely, if ever, used. However, "blocking" is a widespread technique, since accessing memory "locally" is more efficient (due to the way caches work).
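As a sketch of what blocking means, here is a tiled multiplication in Python, purely for illustration; the block size 32 is an arbitrary tuning parameter, and production BLAS libraries perform the same tiling in highly optimized low-level code.

```python
import numpy as np

def blocked_matmul(A, B, block=32):
    """Cache-blocked (tiled) matrix multiplication: work on block x block
    tiles so each tile stays resident in cache while it is being reused."""
    n = A.shape[0]
    C = np.zeros((n, n), dtype=np.result_type(A, B))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                # Each update touches only three small tiles, so the data
                # stays in cache while it is reused.
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                )
    return C
```

The operation count is unchanged from the classical $O(n^3)$ algorithm; the gain comes entirely from reduced memory traffic.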