3

Given $ F(x) = \left(\begin{matrix} -x_2 \\ x_1+2x_2 \\ 3x_1 - 4x_2 \end{matrix} \right)$, where $x = \left(\begin{matrix} x_1 \\ x_2 \end{matrix} \right)$, prove whether $F$ is a linear function or not.

I've tried to prove it, but I'm not sure it's right:

$F(x) = Mx \Rightarrow M = \left(\begin{matrix} 0 & -1 \\ 1 & 2 \\ 3 & -4 \end{matrix}\right)$

Let $u, v \in \mathbb{R}^2$ and $\lambda \in \mathbb{R}$. Then:

$F(u+v) = M(u+v) = Mu + Mv = F(u) + F(v)$

and

$F(\lambda u) = M(\lambda u) = \lambda M u = \lambda F(u),$

so the function $F$ is linear.

Using the matrix $M$ was not shown in any of the examples I've seen; I came up with the technique myself, and I think it's nicer (though almost certainly I've reinvented the wheel). What's the technical term for $M$?

I'm sure there is a missing step in getting from the given function to the form $F(x) = Mx$; $M$ looks like the coefficient vectors arranged column-wise.

Is the proof formally correct?

  • 0
    If you have not seen this method in your course, presumably you are supposed to check directly from the definition of "linear" this time. Then, in a day or two, you will learn the matrix method of doing it, and you will appreciate it more at that time. – 2012-11-12

2 Answers

2

The main thing is that there is a matrix $M$ such that $F(x) = M\cdot x$ for all vectors $x$. In fact, for a mapping $F$ between finite-dimensional vector spaces with fixed bases, this is equivalent to being linear. That multiplying by a matrix is indeed a linear mapping is then straightforward: it depends only on the nice properties of matrix multiplication, and you proved it formally correctly.

So, what should be emphasized is that indeed $F(x)=M\cdot x$ for all $x$; this can easily be seen by expanding the product on the right-hand side.
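Concretely, that expansion is the missing step you asked about:

$$M\cdot x = \pmatrix{0 & -1\\ 1 & 2\\ 3 & -4}\pmatrix{x_1\\x_2} = \pmatrix{0\cdot x_1 - 1\cdot x_2\\ 1\cdot x_1 + 2\cdot x_2\\ 3\cdot x_1 - 4\cdot x_2} = \pmatrix{-x_2\\ x_1+2x_2\\ 3x_1-4x_2} = F(x).$$

So the identity $F(x)=Mx$ is verified, not assumed.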

$M$ is called the matrix of the linear map $F$, with respect to the standard bases.

Let $V$ and $W$ be vector spaces, with basis $\mathcal B:=(b_1,\dots,b_n)$ in $V$ and basis $\mathcal C:=(c_1,\dots,c_m)$ in $W$. If $F:V\to W$ is a linear mapping, then define its matrix with respect to the bases $\mathcal B$ and $\mathcal C$ as $$[F]^{\mathcal B}_{\mathcal C} := \left[\, [F(b_1)]_{\mathcal C}\ \dots\ [F(b_n)]_{\mathcal C} \,\right],$$ where $[w]_{\mathcal C}$ denotes the column vector $\in \Bbb R^m$ of coordinates of $w$ in basis $\mathcal C$, i.e. $$[w]_{\mathcal C}=\pmatrix{\alpha_1\\ \vdots \\ \alpha_m} \iff w=\alpha_1 c_1+\dots+\alpha_m c_m.$$ Then, by linearity, one can check that for all $v\in V$ we have $$[F]^{\mathcal B}_{\mathcal C}\cdot [v]_{\mathcal B} = [F(v)]_{\mathcal C}.$$ Prove it first for the basis vectors $v=b_i$.
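For your $F$, with the standard bases $\mathcal B = (e_1, e_2)$ of $\Bbb R^2$ and $\mathcal C$ the standard basis of $\Bbb R^3$, this recipe recovers your $M$ column by column:

$$F(e_1) = \pmatrix{0\\1\\3}, \qquad F(e_2) = \pmatrix{-1\\2\\-4}, \qquad\text{so}\qquad [F]^{\mathcal B}_{\mathcal C} = \pmatrix{0 & -1\\ 1 & 2\\ 3 & -4} = M.$$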

  • 0
    Thanks, this has cleared my mind :-) – 2012-11-12
1

You've shown that your map can be represented by a matrix and then shown that it must be linear based on the properties of matrix multiplication.

In fact, most mathematicians would view the problem the other way round: matrices are introduced as a neat way of encoding a linear map. So while it is certainly true that any map that can be represented by a matrix $A$ is linear, it is also true that any linear map can be represented by a matrix.

To see why this is, consider your example, where you are dealing with the map which takes $\left(\begin{matrix} x_1 \\ x_2 \end{matrix} \right)$ to $\left(\begin{matrix} -x_2 \\ x_1+2x_2 \\ 3x_1 - 4x_2 \end{matrix} \right)$. In mathematics, we say that this is an example of a map from $\mathbb{R}^2$ (vectors with two components) to $\mathbb{R}^3$ (vectors with three components). I will now show that every linear map from $\mathbb{R}^2$ to $\mathbb{R}^3$ can be represented by a matrix in this way. The general case for maps from $\mathbb{R}^m$ to $\mathbb{R}^n$ is treated in exactly the same way.

First note that we can write any vector $a=\left(\begin{matrix} a_1 \\ a_2 \end{matrix} \right)$ in $\mathbb{R}^2$ as $a_1\left( \begin{matrix} 1 \\ 0 \end{matrix} \right)+a_2\left( \begin{matrix} 0 \\ 1 \end{matrix} \right)=a_1e_1+a_2e_2$, and any vector $b= \left( \begin{matrix} b_1 \\ b_2 \\ b_3 \end{matrix} \right)$ in $\mathbb{R}^3$ as $b_1\left( \begin{matrix} 1 \\ 0 \\ 0 \end{matrix} \right) + b_2\left( \begin{matrix} 0 \\ 1 \\ 0 \end{matrix} \right) + b_3\left( \begin{matrix} 0 \\ 0 \\ 1 \end{matrix} \right)=b_1f_1+b_2f_2+b_3f_3$ (writing $f_i$ for the standard basis of $\mathbb{R}^3$, to avoid confusion). Now let $\alpha$ be a linear map from $\mathbb{R}^2$ to $\mathbb{R}^3$. Then, if $a\in \mathbb{R}^2$, by linearity $\alpha(a)=\alpha(a_1e_1+a_2e_2)=a_1\alpha(e_1)+a_2\alpha(e_2).$

Now, since $\alpha(e_i) \in \mathbb{R}^3$ for $i=1,2$, we can write $\alpha(e_i)=m_{1i}f_1+m_{2i}f_2+m_{3i}f_3$, where the $m_{ij}$ are the components of the images of the vectors $e_i$ under $\alpha$. Writing the $m_{ij}$ as a matrix $M=\left( \begin{matrix} m_{11} & m_{12} \\ m_{21} & m_{22} \\ m_{31} & m_{32} \end{matrix} \right),$

it is quick to check that applying $\alpha$ to the vector $a=\left( \begin{matrix} a_1 \\ a_2 \end{matrix}\right)$ is exactly equivalent to multiplying by the matrix $M$.
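Explicitly, the check is:

$$\alpha(a) = a_1\alpha(e_1) + a_2\alpha(e_2) = a_1\pmatrix{m_{11}\\ m_{21}\\ m_{31}} + a_2\pmatrix{m_{12}\\ m_{22}\\ m_{32}} = \pmatrix{m_{11}a_1 + m_{12}a_2 \\ m_{21}a_1 + m_{22}a_2 \\ m_{31}a_1 + m_{32}a_2} = M\pmatrix{a_1\\ a_2} = Ma.$$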

Your question asks the opposite: given a matrix $M$, does multiplication by $M$ always define a linear map? Your calculations (which are perfectly precise) show that the answer is yes, although it's such a basic result that I think showing that the map can be represented by a matrix is probably enough for this question.

Moral: for finite-dimensional vector spaces over the real numbers, linear maps are matrices, and matrices are linear maps. Finding a matrix representation is a great way to show that a map is linear.

I'm not sure what was intended for this question, but you could show that the map is linear directly (without inventing matrices). That might be a slightly nicer solution, as it doesn't assume properties of matrix multiplication which are non-obvious (though trivial to prove).
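For completeness, a sketch of that direct check (additivity; homogeneity is analogous): for $u, v \in \mathbb{R}^2$,

$$F(u+v) = \pmatrix{-(u_2+v_2)\\ (u_1+v_1)+2(u_2+v_2)\\ 3(u_1+v_1)-4(u_2+v_2)} = \pmatrix{-u_2\\ u_1+2u_2\\ 3u_1-4u_2} + \pmatrix{-v_2\\ v_1+2v_2\\ 3v_1-4v_2} = F(u)+F(v).$$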