
Let $V$ be a $k$-dimensional subspace of $\mathbb{R}^n$, with $k < n$. Choose an orthonormal basis $(a_1, \ldots, a_k)$ of $V$ and let $A$ be the $k \times n$ matrix whose rows are $(a_1^T, \ldots , a_k^T)$. Define a linear mapping $L: V \rightarrow \mathbb{R}^k$ by

$ L(x) = Ax. $

From the definition of $A$ it is clear that

$ L^{-1}(y) = A^T y. $

Therefore, $L$ has an inverse while $A$ (being rectangular) does not. This seems odd...


While writing this it came to me that $A$ probably isn't the matrix representing $L$ in any basis, since such should be a $k \times k$ matrix, but I am a bit confused - what then is $A$?
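The claimed inverse can be checked on a concrete example. Below is a minimal sketch (not part of the original question) with $n = 3$, $k = 2$, using an orthonormal basis with rational entries so all comparisons are exact; the basis vectors and helper functions are illustrative choices.

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Orthonormal basis (a1, a2) of a 2-dimensional subspace V of R^3,
# chosen with exact rational entries.
a1 = [F(3, 5), F(4, 5), F(0)]
a2 = [F(0),    F(0),    F(1)]
A = [a1, a2]   # the 2x3 matrix whose rows are a1, a2

# A vector x in V, written in the basis: x = 3*a1 - 2*a2 (column vector).
x = [[3 * u - 2 * v] for u, v in zip(a1, a2)]

y = matmul(A, x)                   # L(x) = Ax gives the coordinates (3, -2)
x_back = matmul(transpose(A), y)   # A^T y reassembles 3*a1 - 2*a2

print(y == [[F(3)], [F(-2)]])      # True
print(x_back == x)                 # True: on V, A^T does undo A
```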

  • These are the subtle points of linear algebra that shock graduate students the second time around when they realize they didn't really know it anywhere near as well as they thought they did. (2012-01-28)

2 Answers


It seems people would like me to answer my own question... well then, why not.


My original assumption was that if a linear mapping $L$ can be written as

$ L(x) = Ax $

for some matrix $A \in \mathbb{R}^{k \times n}$, then $A$ represents $L$ with respect to some basis (in fact, the canonical bases of $\mathbb{R}^n$ and $\mathbb{R}^k$ in this case, as $L$ seems to go from $\mathbb{R}^n$ to $\mathbb{R}^k$).

However, there is a subtlety about the domain of $L$ in the problem as posted originally - here, $L$ goes from a $k$-dimensional subspace of $\mathbb{R}^n$ to $\mathbb{R}^k$. Therefore, both the domain and range of $L$ are actually $k$-dimensional vector spaces, and any matrix representing $L$ will necessarily be $k \times k$.

Now, what does such a matrix look like? To find it, choose bases for the domain $V$ and range $\mathbb{R}^k$ of $L$. For example, $(a_1, \ldots, a_k)$ and $(e_1, \ldots, e_k)$ will do, in which case the matrix will turn out to be the $k \times k$ identity matrix (which is obviously invertible).
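This can be seen computationally: the $j$-th column of the representing matrix is $L(a_j) = A a_j$, so the whole matrix is $A A^T$. A small sketch with an illustrative rational orthonormal basis (exact arithmetic via `fractions`, so the comparison is exact):

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# A 2x3 matrix A whose rows form an orthonormal basis of a subspace V of R^3.
A = [[F(3, 5), F(4, 5), F(0)],
     [F(0),    F(0),    F(1)]]

# Matrix of L w.r.t. the bases (a1, a2) of V and (e1, e2) of R^2:
# column j is L(a_j) = A a_j, i.e. the matrix is A A^T.
M = matmul(A, transpose(A))
print(M == [[F(1), F(0)], [F(0), F(1)]])  # True: the 2x2 identity, as claimed
```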

Thus, to answer the question in the title, if a linear map is invertible, any matrix representing it will be so as well.


I think that a great part of your difficulty was that you were suffering from a mathematical education that hadn’t paid sufficient attention to the importance of the domain, the target space, and the image (=“range”) of a map. Do you know the following about ordinary maps $f\colon X \to Y$ between sets?

The map $f$ is one-to-one if and only if it has a left inverse, i.e. there is a map $g\colon Y \to X$ such that $g\circ f=$ identity on $X$. And $f$ is onto (meaning that the image is equal to all of the target space) if and only if $f$ has a right inverse, i.e. there is a map $h\colon Y \to X$ such that $f\circ h=$ identity on $Y$. The latter is not quite trivial: in fact it depends on (and I think is equivalent to) the Axiom of Choice. The proofs are easy enough, nothing fancy, and if you haven’t seen the facts, you should spend a little time to prove them for yourself.

The significance of the above is that the same results are true for maps between vector spaces. You have constructed a left inverse of the inclusion map $V\to {\mathbb{R}}^n$, which of course is a one-to-one map, but your new map is a left inverse only. Try composing them in the opposite direction!
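Composing in the opposite direction gives $A^T A$, an $n \times n$ matrix. A sketch of what happens, continuing the same kind of illustrative example (exact rationals, $n = 3$, $k = 2$): $A^T A$ is not the identity on $\mathbb{R}^n$; it is the orthogonal projection onto $V$, which is idempotent and kills the directions orthogonal to $V$.

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# 2x3 matrix A with orthonormal rows spanning a subspace V of R^3.
A = [[F(3, 5), F(4, 5), F(0)],
     [F(0),    F(0),    F(1)]]

P = matmul(transpose(A), A)   # the "wrong order" composition: 3x3
I3 = [[F(1) if i == j else F(0) for j in range(3)] for i in range(3)]

print(P == I3)                # False: not the identity on R^3
print(matmul(P, P) == P)      # True: P is idempotent, a projection onto V

# P annihilates the direction orthogonal to V:
n_vec = [[F(4, 5)], [F(-3, 5)], [F(0)]]
print(matmul(P, n_vec) == [[F(0)], [F(0)], [F(0)]])  # True
```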

  • Yes, $A$ is the coordinate matrix of a transformation from all of $\mathbb{R}^n$ to $\mathbb{R}^k$, which happens to restrict to the identity on $V$. (2012-01-29)