
Let us say that there is a linear combination $c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + c_3\mathbf{x}_3 + c_4\mathbf{x}_4$,

where each $\mathbf{x}_k$ is an $n \times 1$ column vector.

How do I formally find a basis of the space generated by this linear combination (that is, the span of the $\mathbf{x}_k$)?

For example, if $\mathbf{x}_1 = \begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \mathbf{x}_2= \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1\end{bmatrix}, \mathbf{x}_3 = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \mathbf{x}_4 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$, how do I find a basis of the space spanned by these vectors?

  • 1
    You need to read a chapter from a book or something. Alternatively, since we live in digital times, you can watch Gilbert Strang's lectures on MIT OCW (search YouTube for: Gilbert Strang linear algebra). 2012-08-29

2 Answers

1

All you have to do is row reduce! Put the vectors into one matrix as its rows, and eliminate as many rows as you can.

Logically, a basis cannot contain the zero vector, so $X_1$ drops out immediately. Secondly, no vector in a basis can be a linear combination of the other vectors. In your example, $X_4 = X_3 + X_2$, so you can leave out $X_4$. You are left with your basis: $X_2, X_3$.

If you want to row reduce, as you would in a more complex question, stack the vectors as the rows of the following matrix: $\left[ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{array} \right]$

Move the zero row to the bottom for convenience.

$\left[ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right]$

Now subtract the sum of the first and second rows from the third row.

$\left[ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right]$

We cannot row reduce any further. Your answer is the rows that are not entirely zero: the first row, which corresponds to $X_2$, and the second row, which corresponds to $X_3$, form your basis. Remember, if you want a basis consisting of the original vectors, take the originals corresponding to the surviving rows, so the original $X_2$ and $X_3$ (coincidentally the rows are unchanged this time).

So your basis is $X_2, X_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1\end{bmatrix},\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}$
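If you want to check a row reduction like this by machine, here is a minimal sketch using SymPy (my addition; the argument above does not depend on any software). The nonzero rows of the reduced row echelon form give a basis of the span:

```python
# Minimal sketch (assumes SymPy is installed): find a basis of
# span{x1, x2, x3, x4} by row reducing the matrix whose rows are the vectors.
from sympy import Matrix

A = Matrix([
    [0, 0, 0, 0],   # x1
    [0, 0, 1, 1],   # x2
    [1, 1, 0, 0],   # x3
    [1, 1, 1, 1],   # x4
])

R, pivot_columns = A.rref()   # reduced row echelon form (exact arithmetic)

# The nonzero rows of R span the same space as the original rows.
basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(basis)          # [Matrix([[1, 1, 0, 0]]), Matrix([[0, 0, 1, 1]])]
print(len(basis))     # 2, so the span is 2-dimensional
```

Because SymPy works in exact arithmetic, there is no floating-point ambiguity about which rows have become zero.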

  • 1
    When you write "your basis is..." there's a possibility someone will take this to mean that's the only possible basis for the space in question. But in fact any two vectors $ax_2+bx_3,cx_2+dx_3$ will do, so long as $ad-bc\ne0$. I'm sure you know this, and I wasn't trying to say that what you wrote was wrong - I was just adding a little bit, lest anyone come away with the wrong impression. 2012-08-29
0

You mean a basis for the image of the linear transformation these vectors induce. The linear combination you mentioned induces a linear map $T:\mathbb{R}^n\to\mathbb{R}^m$ (in your example $m = n = 4$: four vectors, each with four entries). Then,

$ T(\mathbf y)=\mathbf{X}\cdot\mathbf{y} $

where $\mathbf{X}\in M_{m\times n}(\mathbb{R})$, $\mathbf{y}\in \mathbb{R}^n$, and

$ \mathbf X=\left[ {\begin{array}{cccc} \mathbf{x}_1&\mathbf{x}_2&\cdots&\mathbf{x}_n \end{array} } \right]. $

The space generated by these vectors ($\{\mathbf{x}_i\}_i$) is their span. Formally put, it is the image of $T$:

$ \operatorname{span}\{\mathbf{x}_i\}_{i=1,2,\ldots,n}=\operatorname{im}T $

Recall the definition of the image of $T$. It is:

$ \operatorname{im}T = \left\{ \mathbf z| \mathbf z=T(\mathbf y); \mathbf y\in \mathbb{R}^n \right\} $

which sometimes appears as $T(\mathbb{R}^n)$. The image of $T$ is a linear subspace of $\mathbb{R}^m$. The dimension of the image of $T$ equals the rank of $T$, that is, the maximum number of linearly independent columns of $\mathbf{X}$. The rank-nullity theorem, $\operatorname{dim}\operatorname{im}T+\operatorname{dim}\operatorname{ker}T=n$, is quite a useful result when working with dimensions of subspaces.
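To make this concrete, here is a minimal sketch with SymPy (my own assumption; the answer does not rely on any software). It builds $\mathbf{X}$ with the $\mathbf{x}_i$ as columns, so the image of $T$ is the column space of $\mathbf{X}$, and it checks the rank-nullity identity:

```python
# Minimal sketch (assumes SymPy is installed): the image of T(y) = X*y is the
# column space of X, and dim im T + dim ker T = n (the number of columns).
from sympy import Matrix

x1 = Matrix([0, 0, 0, 0])
x2 = Matrix([0, 0, 1, 1])
x3 = Matrix([1, 1, 0, 0])
x4 = Matrix([1, 1, 1, 1])

X = Matrix.hstack(x1, x2, x3, x4)   # X is m x n; here m = n = 4

image_basis = X.columnspace()       # basis of im T = span{x1, ..., x4}
kernel_basis = X.nullspace()        # basis of ker T

print(image_basis)                  # the pivot columns: x2 and x3
print(X.rank())                     # 2 = dim im T
assert X.rank() + len(kernel_basis) == X.cols   # rank-nullity
```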

  • 0
    @GerryMyerson You are right. I updated my reply to follow the same notation. 2012-08-29