
I'm writing a program that multiplies a matrix by its transpose, and was trying to find efficiency hacks I could exploit, considering that the two matrices being multiplied are related. Any ideas?

  • @Jonas: +1. Your comment can go as an answer. – 2011-03-22

1 Answer


After Sivaram's suggestion I'm turning my comment into an answer.

In general, $(AB)^\textrm{T}=B^\textrm{T}A^\textrm{T}$, so for the product of a matrix with its transpose you have $AA^\textrm{T}=(AA^\textrm{T})^\textrm{T}$; that is, $AA^\textrm{T}$ is symmetric. You can also see this directly by observing that if the $ij$ entry of $A$ is $a_{ij}$, then the $ij$ entry of $AA^\textrm{T}$ is $\sum_{k=1}^n a_{ik}a_{jk}$, which is unchanged if you switch $i$ and $j$. Therefore, if you compute the entries with $i\leq j$, you get the entries with $i>j$ for free. There are $1+2+3+\cdots+n=\frac{n(n+1)}{2}$ such entries, instead of the $n^2$ you generally need.

(I was thinking of $n$-by-$n$ matrices, but if your input is $m$-by-$n$, this will give you $\frac{m(m+1)}{2}$ entry computations, each of which is a sum of $n$ products.)
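As a minimal sketch of the idea above (assuming the $m$-by-$n$ matrix is stored as a list of row lists; the function name `gram` is my own choice, not from the question):

```python
def gram(A):
    """Compute G = A @ A^T, filling only the upper triangle.

    Each G[i][j] with i <= j is the dot product of rows i and j of A;
    the lower triangle comes for free by symmetry.
    """
    m, n = len(A), len(A[0])
    G = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i, m):  # only the m(m+1)/2 entries with i <= j
            s = sum(A[i][k] * A[j][k] for k in range(n))
            G[i][j] = s
            G[j][i] = s        # mirror: the i > j entries are free
    return G
```

Each of the $\frac{m(m+1)}{2}$ computed entries is a sum of $n$ products, matching the count in the answer.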

  • Good answer. Here's a related question, for which I don't know the answer: Can you compute this product in-place, that is, overwriting $A$? (Note that even computing the transpose in-place is not trivial if $A$ is not square: see http://en.wikipedia.org/wiki/In-place_matrix_transposition) – 2011-03-22