In my math lectures, we talked about the Gram determinant, where a matrix is multiplied by its transpose.
Is $A A^\mathrm T$ something special for any matrix $A$?
The main thing is presumably that $AA^T$ is symmetric. Indeed $(AA^T)^T=(A^T)^TA^T=AA^T$. For symmetric matrices one has the spectral theorem, which says that there is an orthonormal basis of eigenvectors and that every eigenvalue is real.
Moreover, if $A$ is invertible, then $AA^T$ is also positive definite, since for $x\neq 0$ $$x^TAA^Tx=(A^Tx)^T(A^Tx)> 0,$$ where the inequality is strict because $A^T$ is invertible and hence $A^Tx\neq 0$.
Then we have: a matrix is positive definite if and only if it is the Gram matrix of a linearly independent set of vectors.
Last but not least, if one is interested in how much the linear map represented by $A$ changes the norm of a vector, one can compute
$$\sqrt{\left\langle Ax,Ax\right\rangle}=\sqrt{\left\langle A^TAx,x\right\rangle},$$
which simplifies for eigenvectors $x$ of $A^TA$ with eigenvalue $\lambda$ to
$$\sqrt{\left\langle Ax,Ax\right\rangle}=\sqrt{\lambda}\,\sqrt{\left\langle x,x\right\rangle}.$$
The determinant is just the product of these eigenvalues.
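To make this concrete, here is a small worked example (the matrix is chosen purely for illustration). With
$$A=\begin{pmatrix}1&2\\0&1\end{pmatrix},\qquad AA^T=\begin{pmatrix}5&2\\2&1\end{pmatrix},$$
the product is symmetric, its eigenvalues $3\pm 2\sqrt 2$ are real and positive (as $A$ is invertible), and $\det(AA^T)=(3+2\sqrt2)(3-2\sqrt2)=1=(\det A)^2$, the product of the eigenvalues.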
$AA^T$ is positive semi-definite, and in the case in which $A$ is a column matrix, it is a rank-1 matrix with only one non-zero eigenvalue, which equals $A^TA$ and whose corresponding eigenvector is $A$ itself. The remaining eigenvectors all have eigenvalue $0$; they are the vectors $v$ with $v^TA = 0$, i.e. the null space of $A^T$.
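As a quick sanity check (the vector is chosen just for illustration): for $A=(1,2)^T$ one gets
$$AA^T=\begin{pmatrix}1&2\\2&4\end{pmatrix},\qquad AA^T\begin{pmatrix}1\\2\end{pmatrix}=5\begin{pmatrix}1\\2\end{pmatrix},\qquad AA^T\begin{pmatrix}2\\-1\end{pmatrix}=0,$$
so the non-zero eigenvalue is $A^TA=5$ with eigenvector $A$, and the vector orthogonal to $A$ is an eigenvector with eigenvalue $0$.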
Indeed, regardless of the size of $A$, there is a useful relation between the eigenvectors of $AA^T$ and the eigenvectors of $A^TA$, based on the property that $\operatorname{rank}(AA^T)=\operatorname{rank}(A^TA)$. The identical rank implies that the number of non-zero eigenvalues is identical. Moreover, we can infer the eigenvectors of $A^TA$ from those of $AA^T$ and vice versa. The eigenvectors of $AA^T$ satisfy $AA^Tv_i = \lambda_i v_i$. In case $A$ is not a square matrix and $AA^T$ is too large to compute its eigenvectors efficiently (as frequently happens in covariance-matrix computations), it is easier to compute the eigenvectors of $A^TA$, given by $A^TAu_i = \lambda_i u_i$. Pre-multiplying both sides of this equation by $A$ yields
$$AA^T(Au_i)=\lambda_i(Au_i).$$
Now the originally sought eigenvectors $v_i$ of $AA^T$ can easily be obtained as $v_i:=Au_i$. Note that the resulting eigenvectors are not yet normalized.
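One small addition to the last remark: if $u_i$ has unit length and $\lambda_i\neq 0$, the normalization constant comes for free, since
$$\|Au_i\|^2=u_i^TA^TAu_i=\lambda_i\,u_i^Tu_i=\lambda_i,$$
so the normalized eigenvector of $AA^T$ is $v_i = Au_i/\sqrt{\lambda_i}$.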
One could name some properties; for example, if $B=AA^T$ then
$$B^T=(AA^T)^T=(A^T)^TA^T=AA^T=B,$$
so
$$\langle v,Bw\rangle=\langle Bv,w\rangle=\langle A^Tv,A^Tw\rangle.$$
The product $A^TA$ plays a key role in the normal equations $A^TAx=A^T b$ for solving linear least squares problems.
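As a brief reminder of where these come from (the standard least-squares derivation): minimizing $\|Ax-b\|^2=x^TA^TAx-2b^TAx+b^Tb$ and setting the gradient with respect to $x$ to zero gives
$$2A^TAx-2A^Tb=0 \iff A^TAx=A^Tb,$$
and the coefficient matrix $A^TA$ is exactly the Gram matrix of the columns of $A$.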
Something that occurred to me while reading this answer for help with my homework: there is a pretty common and important special case, namely when the linear operator $A$ is normal, i.e. $A^TA=AA^T$. Note that this is a stronger condition than saying that $A^TA$ is symmetric, which is always true. In this case we have that $A$ is diagonalizable. As an obvious special case, $A$ is normal if $A$ is Hermitian (symmetric in the real case).
To see that $A$ normal implies $A$ is diagonalizable, let $\lambda$ be an eigenvalue of $A^TA$ corresponding to the eigenvector $x$. Then $$(Ax,Ax)=(A^TAx,x)=\lambda(x,x)$$ and similarly $$(A^Tx,A^Tx)=(AA^Tx,x)=\lambda(x,x),$$ where I have used the normality of $A$. Subtracting these two equations, we have $$((A^T-A)x,(A^T-A)x)=0,$$ which by the positive-definiteness of the inner product implies that $$(A^T-A)x=0, \quad\text{i.e.}\quad A^Tx=Ax.$$ Note that this is of course true whenever $A=A^T$, i.e. when $A$ is symmetric, but it is a more general condition here, since $x$ is an eigenvector rather than an arbitrary vector. Applying $A$ to both sides of this equation, we have $$A^2x=A^TAx=\lambda x.$$
This shows that if $x$ is an eigenvector of $A^TA$, then it is also an eigenvector of $A^2$, which in turn means it is an eigenvector of $A$, since powers of matrices share the same eigenspaces. Therefore we have constructed a full set of eigenvectors of $A$, one for each element of an eigenvector basis of the symmetric matrix $A^TA$, meaning that $A$ is diagonalizable.
If you have a real vector space equipped with a scalar product and an orthogonal matrix $A$, then $AA^T=I$ holds. A matrix is orthogonal if $\langle v,w \rangle = \langle Av, Aw \rangle$ holds for all $v,w \in V$.
However, I don't see a direct link to the Gram determinant.
To my surprise, no one has mentioned yet that the square root of the Gram determinant $\det(A^TA)$ of an $n\times k$ matrix $A$ is the $k$-dimensional volume of the parallelepiped spanned by the $k$ column vectors of $A$.
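For example, for $k=2$ columns $a_1,a_2\in\mathbb{R}^3$ (a standard identity, stated here only to illustrate the claim),
$$\det(A^TA)=\det\begin{pmatrix}a_1\cdot a_1 & a_1\cdot a_2\\ a_2\cdot a_1 & a_2\cdot a_2\end{pmatrix}=\|a_1\|^2\|a_2\|^2-(a_1\cdot a_2)^2=\|a_1\times a_2\|^2,$$
so its square root is the area of the parallelogram spanned by $a_1$ and $a_2$.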
Special? Yes! The matrix $AA^T$ is abundant in the Least Squares Finite Element Method in Numerical Analysis:
Software demo (Delphi Pascal):
program Demo;

type
  matrix = array of array of double;

procedure Inverteren(var b : matrix);
{
  Matrix inversion in place;
  pivots are taken from the diagonal (no pivot search)
}
var
  pe : double;
  NDM,i,j,k : integer;
begin
  NDM := Length(b);
  for k := 0 to NDM-1 do
  begin
    pe := b[k,k];
    b[k,k] := 1;
    for i := 0 to NDM-1 do
    begin
      b[i,k] := b[i,k]/pe;
      if (i = k) then Continue;
      for j := 0 to NDM-1 do
      begin
        if (j = k) then Continue;
        b[i,j] := b[i,j] - b[i,k]*b[k,j];
      end;
    end;
    for j := 0 to NDM-1 do
      b[k,j] := - b[k,j]/pe;
    b[k,k] := 1/pe;
  end;
end;

procedure inverse(b : matrix; var q : matrix);
{
  Matrix inversion without pivot search, using
  B^(-1) = (B^T*B)^(-1)*B^T ;
  B^T*B is symmetric positive definite when B is invertible
}
var
  NDM,i,j,k : integer;
  p : matrix;
  h : double;
begin
  NDM := Length(b);
  SetLength(p,NDM,NDM);
  { p := B^T * B  (transpose times original) }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      h := 0;
      for k := 0 to NDM-1 do
        h := h + b[k,i]*b[k,j];
      p[i,j] := h;
    end;
  end;
  Inverteren(p);
  SetLength(q,NDM,NDM);
  { q := (B^T*B)^(-1)*B^T = B^(-1) }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      h := 0;
      for k := 0 to NDM-1 do
        h := h + p[i,k]*b[j,k];
      q[i,j] := h;
    end;
  end;
end;

procedure test(NDM : integer);
var
  i,j,k : integer;
  b,q : matrix;
  h : double;
begin
  SetLength(b,NDM,NDM);
  Random; Random;
  { Fill B with pseudo-random entries }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      b[i,j] := Random;
    end;
  end;
  inverse(b,q);
  { Print q*B, which should be the identity matrix }
  for i := 0 to NDM-1 do
  begin
    for j := 0 to NDM-1 do
    begin
      h := 0;
      for k := 0 to NDM-1 do
        h := h + q[i,k]*b[k,j];
      Write(h:5:2,' ');
    end;
    Writeln;
  end;
end;

BEGIN
  test(9);
END.
Output:
1.00 0.00 0.00 -0.00 0.00 0.00 0.00 0.00 -0.00
0.00 1.00 0.00 -0.00 0.00 0.00 0.00 0.00 -0.00
-0.00 0.00 1.00 0.00 -0.00 -0.00 -0.00 -0.00 0.00
-0.00 -0.00 -0.00 1.00 -0.00 -0.00 -0.00 -0.00 0.00
0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00
0.00 0.00 0.00 -0.00 -0.00 1.00 0.00 0.00 -0.00
-0.00 -0.00 -0.00 -0.00 -0.00 -0.00 1.00 -0.00 0.00
-0.00 -0.00 -0.00 0.00 -0.00 -0.00 -0.00 1.00 0.00
-0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 -0.00 1.00