
Let $A$ be an $m \times n$ matrix. Show that, even though they may be of different sizes, both Gram matrices $K = A^TA$ and $L = AA^T$ have the same rank.

My attempt:

We have that $K$ and $L$ are Gram matrices, so $K = A^TA = (A^TA)^T = AA^T = L$, and by definition we have that $\mathrm{rank}(A) = \mathrm{rank}(A^T)$.

  • Can one of you show me how to properly prove this question, please? (2012-10-13)

3 Answers


I think there are two good ways to see this, so I will give two proofs.

1) If we are given two matrices $A$ and $B$, then the columns of $AB$ are linear combinations of the columns of $A$, and the rows of $AB$ are linear combinations of the rows of $B$. This follows immediately from block multiplication $AB = \begin{pmatrix} A\mathbf{b_1} & \cdots & A\mathbf{b_n}\end{pmatrix} = \begin{pmatrix} \mathbf{a_1}^\mathrm{T}B \\ \vdots \\ \mathbf{a_m}^\mathrm{T}B\end{pmatrix}$ where $\mathbf{a_i}$ and $\mathbf{b_i}$ denote the row and column vectors of $A$ and $B$ respectively. From this, we can see that the columns of $A^\mathrm{T}A$ are linear combinations of the columns of $A^\mathrm{T}$, i.e. of the rows of $A$. Likewise, the columns of $AA^\mathrm{T}$ are linear combinations of the columns of $A$. Since the row rank and column rank of a matrix coincide, we can immediately conclude that the ranks of the two matrices are the same.
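The claim is easy to check numerically. Here is a minimal sketch, assuming NumPy is available: we build a random rectangular matrix of known rank and confirm that $A^\mathrm{T}A$ and $AA^\mathrm{T}$, despite having different sizes, share the rank of $A$.

```python
import numpy as np

# A 5x7 matrix built as a product through a 3-dimensional space, so rank(A) = 3
# (with probability 1 for random Gaussian factors).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

K = A.T @ A   # 7 x 7 Gram matrix
L = A @ A.T   # 5 x 5 Gram matrix

# All three ranks agree even though K and L have different sizes.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(K), np.linalg.matrix_rank(L))
```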

2) For the second solution, we use a very general and useful trick for showing that a vector is $\mathbf{0}$. Notice that $\mathbf{x} = \mathbf{0} \iff \|\mathbf{x}\| = 0$. We exploit this fact.

Lemma: $\ker(A) = \ker(A^\mathrm{T}A)$

Proof: The forward inclusion is easy. If we have $A\mathbf{x} = \mathbf{0}$, then we immediately have $A^\mathrm{T}A\mathbf{x} = \mathbf{0}$, so $\ker(A) \subseteq \ker(A^\mathrm{T}A)$. For the backwards inclusion, suppose that $A^\mathrm{T}A\mathbf{x} = \mathbf{0}$. Pre-multiplying by $\mathbf{x}^\mathrm{T}$ gives $\mathbf{x}^\mathrm{T}A^\mathrm{T}A\mathbf{x} = (A\mathbf{x})\cdot(A\mathbf{x}) = \|A\mathbf{x}\|^2 = 0$. It must follow that $A\mathbf{x} = \mathbf{0}$. Therefore we also have $\ker(A^\mathrm{T}A) \subseteq \ker(A)$, and the lemma follows. $\square$

From this, we have $\mathrm{nullity}(A) = \mathrm{nullity}(A^\mathrm{T}A)$, and since both matrices have the same number of columns, the rank-nullity theorem gives $\mathrm{rank}(A) = \mathrm{rank}(A^\mathrm{T}A)$. Applying the same result to $A^\mathrm{T}$ gives $\mathrm{rank}(A^\mathrm{T}) = \mathrm{rank}(AA^\mathrm{T})$. The final result follows by noting that $\mathrm{rank}(A) = \mathrm{rank}(A^\mathrm{T})$.
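The lemma itself can also be illustrated numerically. The following sketch (assuming NumPy) computes a null-space basis of $A^\mathrm{T}A$ from its SVD and checks that $A$ annihilates every basis vector, i.e. that $\ker(A^\mathrm{T}A) \subseteq \ker(A)$ up to round-off.

```python
import numpy as np

# A 4x6 matrix of rank 2, so K = A^T A is 6x6 with nullity 4.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))

K = A.T @ A
_, s, Vt = np.linalg.svd(K)
r = np.linalg.matrix_rank(K)
null_basis = Vt[r:]          # rows of Vt beyond the rank span ker(K)

# Every null vector of K is also a null vector of A: ||A x|| is ~ 0.
for x in null_basis:
    print(np.linalg.norm(A @ x))
```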

If anything is unclear, or if you would like to know the logic behind any of the steps, please do not hesitate to ask me.

  • Really, really nice and explanatory answer! Thank you very much! (2014-06-30)

Excellent answers from @Euyu! I would like to add another proof. Assume $A = U\Sigma V^{T}$ is the singular value decomposition (SVD) of $A$. Then $A^{T}A = V\Sigma^{T}\Sigma V^{T}$ and $AA^{T} = U\Sigma\Sigma^{T}U^{T}$ (this follows from substituting the SVD). So the eigenvalues of the symmetric matrices $AA^{T}$ and $A^{T}A$ are the squares of the singular values of $A$, padded with zeros where the sizes differ. Hence they have the same number of nonzero eigenvalues, and therefore the same rank.
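This SVD argument can be checked directly. A minimal sketch, assuming NumPy: the eigenvalues of $A^{T}A$ and $AA^{T}$ both match the squared singular values of $A$ (up to trailing zeros).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))   # full row rank, 4 singular values

s = np.linalg.svd(A, compute_uv=False)              # singular values of A
eig_K = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # 6 eigenvalues, descending
eig_L = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]  # 4 eigenvalues, descending

# The nonzero eigenvalues of both Gram matrices are sigma_i^2;
# K additionally has two zero eigenvalues because it is 6x6 with rank 4.
print(np.allclose(eig_K[:4], s**2))  # True
print(np.allclose(eig_L, s**2))      # True
```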


Let's say that $A$ is a single column vector, so it has rank one. What then is the rank of $AA^T$? HINT: what happens when you row reduce it?
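To spell out the hint with a small numeric example (a sketch assuming NumPy): for a column vector $\mathbf{a}$, the product $\mathbf{a}\mathbf{a}^T$ is an outer product whose rows are all multiples of $\mathbf{a}^T$, so row reduction leaves a single nonzero row and the rank is one, matching the $1 \times 1$ Gram matrix $\mathbf{a}^T\mathbf{a} = \|\mathbf{a}\|^2 \neq 0$.

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # 3 x 1 column vector

outer = a @ a.T   # 3 x 3: every row is a multiple of a^T, so rank 1
inner = a.T @ a   # 1 x 1: the squared norm ||a||^2 = 1 + 4 + 9

print(np.linalg.matrix_rank(outer))   # 1
print(inner.item())                   # 14.0
```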

  • Euyu gave me a detailed explanation, but still, thanks for your help! (2012-10-13)