
I was working through some problems when I noticed something funny. Say I have the matrix: $ A=\begin{bmatrix} -2 & 1 & 3\\ -1.5 & 1 & 2\\ -1.5 & 1 & 2 \end{bmatrix}$

Here $\operatorname{rank}(A)=2$. I want to find the null space of $A$: $ \begin{bmatrix} -2 & 1 & 3\\ -1.5 & 1 & 2\\ -1.5 & 1 & 2 \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}=0$

Going through the usual steps, I get: $ N(A)=t\begin{bmatrix} 2\\ 1\\ 1 \end{bmatrix}, \quad t \in \mathbb{R}$

But what surprises me is that this null space vector is a linear combination of the columns of $A$! $ \begin{bmatrix} 2\\ 1\\ 1 \end{bmatrix}=-2\begin{bmatrix} -2 \\ -1.5 \\ -1.5 \end{bmatrix}-2 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$

How is this possible? I thought that if a vector is in the null space, then it couldn't also be in the column space of $A$.
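A quick numerical check of the computations above (a sketch in numpy, using the matrix and vectors from the question):

```python
import numpy as np

A = np.array([[-2.0, 1.0, 3.0],
              [-1.5, 1.0, 2.0],
              [-1.5, 1.0, 2.0]])

# rank(A) = 2
print(np.linalg.matrix_rank(A))                     # 2

# n = (2, 1, 1) is in the null space of A
n = np.array([2.0, 1.0, 1.0])
print(np.allclose(A @ n, 0))                        # True

# ... and n is also a combination of the columns: n = -2*(col 1) - 2*(col 2)
print(np.allclose(-2 * A[:, 0] - 2 * A[:, 1], n))   # True
```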

3 Answers


Depending on the circumstances, this is either an "arithmetic coincidence" or has some intrinsic meaning.

To distinguish things properly, I first consider the matrix $A=[a_{ik}]$ as a linear map $A:\ X\to Y$, where $X$ and $Y$ are $\ {\it different}\ $ copies of ${\mathbb R}^n$. The columns of $A$ are the images of the standard basis vectors $e_k\in X$, expressed as linear combinations of the standard basis vectors $f_i\in Y$. Any linear combination of these columns therefore represents some vector $y\in Y$.

On the other hand, the kernel of $A$ is a one-dimensional subspace of $X$, and it is generated by some vector $n\in X$ (defined up to a nonzero scalar). This $n$ has certain coordinates $n_k$ with respect to the basis $(e_1,\ldots, e_n)$ of $X$, and it may happen that these $n_k$ can be represented numerically as a linear combination of the column vectors of the matrix $A$. When $X\ne Y$ this has no geometrical meaning whatsoever: the phenomenon disappears when the basis of $X$ or of $Y$ is changed.
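To see this concretely, here is a small numpy sketch (the invertible change-of-basis matrix $P$ below is an arbitrary choice of mine): after changing the basis of $Y$, i.e. replacing $A$ by $PA$, the kernel is unchanged, but its coordinates are no longer a combination of the new columns.

```python
import numpy as np

A = np.array([[-2.0, 1.0, 3.0],
              [-1.5, 1.0, 2.0],
              [-1.5, 1.0, 2.0]])
n = np.array([2.0, 1.0, 1.0])       # coordinates of the kernel vector in X

# change the basis of Y: the matrix becomes P @ A (P arbitrary invertible)
P = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
B = P @ A
print(np.allclose(B @ n, 0))        # True: n still generates ker(B) = ker(A)

# is n a combination of the columns of B? check the least-squares residual
x = np.linalg.lstsq(B, n, rcond=None)[0]
print(np.allclose(B @ x, n))        # False: the "coincidence" has disappeared
```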

If however the matrix $A$ represents a linear map $A:\ X\to X$, then the observation that the $n_k$ defining the kernel of $A$ can be written as a linear combination of the columns of $A$ signals a geometric fact about $A$, namely that ${\rm ker}(A)\subset{\rm im}(A)$. This feature of $A$ is not as simple to visualize geometrically as, e.g., a projection is. A very simple matrix of this kind is given by

$A\ :=\ \begin{bmatrix} 0&1&0 \\ 0&0&1 \\ 0&0&0 \end{bmatrix}$

where ${\rm ker}(A)=\langle e_1\rangle \subset\langle e_1, e_2\rangle={\rm im}(A)\ .$
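A direct numerical check of this containment (a numpy sketch of the example just given):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

print(np.allclose(A @ e1, 0))    # True: e1 generates ker(A)
print(np.allclose(A @ e2, e1))   # True: e1 = A(e2), so ker(A) lies inside im(A)
```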


It's a somewhat common mistake. Even though $\dim \ker A + \dim \operatorname{Im} A$ equals the dimension of the space, it doesn't at all mean that the two have trivial intersection: just think about the fact that some nonzero matrices satisfy $A^2=0$ (then $\operatorname{Im} A \subset \ker A$, and it's even possible to have equality).

The correct statement is: there is a subspace $F$ with $F \cap \ker A = 0$ (so $A$ is injective on $F$) such that $\operatorname{Im} A|_F = \operatorname{Im} A$. This is the key to proving the rank-nullity theorem.

  • I think the $A^2=0$ comment is key here (with $A\neq0$): then everything is sent to the null space by $A$.
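To make the $A^2=0$ remark concrete, here is a minimal sketch (the $2\times 2$ matrix is my own example) of a nonzero $A$ whose image lies inside its kernel:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # nonzero, yet A @ A = 0

print(np.allclose(A @ A, 0))               # True: A^2 = 0

# im(A) is spanned by A(e2) = e1, and A sends e1 to 0,
# so the whole image sits inside the kernel: im(A) ⊂ ker(A)
v = A @ np.array([0.0, 1.0])               # v = e1, a generator of im(A)
print(np.allclose(A @ v, 0))               # True
```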

Sure, $\operatorname{dom}A \cong \operatorname{ker}A \oplus \operatorname{im}A$, but this is not an equality; the actual equality is $\operatorname{dom}A = \operatorname{ker}A \oplus \operatorname{im}{}^t\!\!A.$ (I like to do things abstractly, but you can just pretend that in the following $V = \mathbb{R}^n$, $W = \mathbb{R}^m$, the inner products are simply Euclidean dot products, and work in terms of columns and rows.)

Indeed, consider $A: V \to W$, where $V$ and $W$ are finite dimensional, and fix two arbitrary inner products $\langle \cdot, \cdot \rangle_V$ and $\langle \cdot, \cdot \rangle_W$ on them. There is a naturally defined operator $A^*: W^* \to V^*, \quad \alpha \mapsto \alpha \circ A.$ Using the two inner products, you can define ${}^t\!\!A: W \to V$ by the formula $\langle {}^t\!\!Aw, \cdot \rangle_V = A^*\langle w, \cdot \rangle_W$ (you can check that in any orthonormal bases of $V$ and $W$ the matrix of ${}^t\!\!A$ has exactly the coefficients you would expect).

Now it is easy to see that $\operatorname{im}{}^t\!\!A = (\operatorname{ker} A)^\perp$. Indeed, for any $v \in \operatorname{ker}A$ and any $w \in W$ we have $\langle {}^t\!\!A w, v \rangle_V = (A^*\langle w, \cdot \rangle_W)v = \langle Av, w \rangle_W = 0,$ so $\operatorname{im}{}^t\!\!A \subset (\operatorname{ker} A)^\perp$, and a dimension count gives the equality.
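In coordinates (Euclidean dot products, so ${}^t\!\!A$ is the ordinary transpose), this decomposition can be checked numerically; here is a sketch using the matrix from the question:

```python
import numpy as np

A = np.array([[-2.0, 1.0, 3.0],
              [-1.5, 1.0, 2.0],
              [-1.5, 1.0, 2.0]])
n = np.array([2.0, 1.0, 1.0])     # generates ker(A)

# im(A^T) is the row space of A; A @ n = 0 says exactly that
# n is orthogonal to every row, i.e. ker(A) ⊥ im(A^T)
print(np.allclose(A @ n, 0))      # True

# and the dimensions add up: dim ker(A) + rank(A) = 3
print(1 + np.linalg.matrix_rank(A) == A.shape[1])   # True
```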