
Definition 1: The vectors $v_1, v_2, \ldots, v_n$ are said to span $V$ if every element $w \in V$ can be expressed as a linear combination of the $v_i$.

Let $v_1, v_2, \ldots, v_n$ and $w$ be vectors in some space $V$. We say that $w$ is a linear combination of $v_1, v_2, \ldots, v_n$ if $w = \lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_n v_n$ for some scalars $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Definition 2: We say that vectors $v_1, v_2, \ldots, v_r$ are linearly independent if the only solution of $\lambda_1 v_1 + \lambda_2 v_2 + \cdots + \lambda_r v_r = 0$ is given by $\lambda_1 = \lambda_2 = \cdots = \lambda_r = 0$.

Definition 3: If $v_1, v_2, \ldots, v_n$ are linearly independent and span $V$, we say that they form a basis of $V$. The number $n$ is called the dimension of $V$.
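These three definitions can be checked numerically for concrete vectors in $\mathbb{R}^2$. A small sketch using NumPy (the example vectors are chosen purely for illustration): a set of vectors spans $\mathbb{R}^k$ when the rank of the matrix of column vectors equals $k$, and the set is linearly independent when the rank equals the number of vectors.

```python
import numpy as np

# Three vectors in R^2, stored as the columns of a matrix.
v1, v2, v3 = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
A = np.column_stack([v1, v2, v3])

# They span R^2: the rank equals the dimension of the space.
spans = np.linalg.matrix_rank(A) == 2               # True

# But they are not linearly independent: v3 = v1 + v2,
# so the rank (2) is smaller than the number of vectors (3).
independent = np.linalg.matrix_rank(A) == A.shape[1]  # False

# Dropping v3 leaves a set that both spans R^2 and is
# independent, i.e. a basis (Definition 3).
B = np.column_stack([v1, v2])
is_basis = (np.linalg.matrix_rank(B) == 2) and (B.shape[1] == 2)  # True
print(spans, independent, is_basis)
```

So $\{v_1, v_2, v_3\}$ spans but is not a basis, while $\{v_1, v_2\}$ is one.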

My question: Does Definition 3 tell us that every $\lambda_i$ has to be zero? Wouldn't that force $w = 0$?

4 Answers


I tend to think of spanning sets and linearly independent sets as being notions of bigness and smallness, respectively.

  • A spanning set is large enough so that we are able to represent every single vector in a vector space (as a linear combination of vectors in the spanning set).
  • A linearly independent set is small enough so that whenever a vector is representable (as a linear combination of vectors in the set) then this representation is unique.

A basis is the median between these two "extremes": It is large enough so that every vector can be represented by vectors in the set, but it is also small enough so that these representations are unique.


The idea behind those definitions is simple: if the $v_i$'s span $V$, then every element can be written as a linear combination of the $v_i$'s, i.e. $w = \lambda_1 v_1 + \dots + \lambda_n v_n$ for some $\lambda_i$'s. If the $v_i$'s are linearly independent, then this decomposition is unique, because $\lambda_1 v_1 + \dots + \lambda_n v_n = w = \mu_1 v_1 + \dots + \mu_n v_n$ implies that $(\lambda_1 - \mu_1) v_1 + \dots + (\lambda_n - \mu_n) v_n = w - w = 0$, and therefore that $\lambda_i - \mu_i = 0$, hence $\lambda_i = \mu_i$ for each $i$. In other words, when you have a basis, being given an element $w \in V$ is the same thing as being given its coefficients; they define the same vector.
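The existence-and-uniqueness argument above can be seen concretely: for a basis of $\mathbb{R}^2$, finding the coefficients of $w$ amounts to solving an invertible linear system. A sketch with an example basis chosen for illustration:

```python
import numpy as np

# Columns of B form an (assumed, illustrative) basis of R^2:
# (1, 1) and (1, -1) are independent and span R^2.
B = np.column_stack([[1.0, 1.0], [1.0, -1.0]])

w = np.array([3.0, 1.0])

# Because the columns of B span R^2 (solution exists) and are
# independent (solution is unique), B is invertible, and the
# coefficient vector lam with B @ lam = w is uniquely determined.
lam = np.linalg.solve(B, w)
print(lam)        # the unique coefficients of w in this basis
print(B @ lam)    # reconstructs w exactly
```

Here $w = 2\,(1,1) + 1\,(1,-1)$, and no other pair of coefficients works.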

Hope that helps,


Definition 3 of basis tells us that if $w = \lambda_1 v_1 + \ldots + \lambda_n v_n = 0$, then $\lambda_1 = \lambda_2 = \ldots = \lambda_n = 0$. It is easy to see if you think about it in terms of a 2D orthogonal basis: there is no way you can get the zero vector by adding non-zero multiples of $v_1$ and $v_2$ together.

  • To prove that a *particular* set of vectors $\{v_1, v_2, \ldots, v_n\}$ spans a vector space $V$ of dimension $k$, you have to show that any vector $w = (w_1, \ldots, w_k) \in V$ can be written as a linear combination of the $v_i$'s. Check out Example 5 & equation (1) of these notes: [PDF](http://www.math.sunysb.edu/~badger/notes/basis.pdf)

An equivalent way of stating Definition $3$: $\rm v_i$ form a basis of $\rm V$ iff for every $\rm v \in V$ there exists a unique solution $\rm c_i$ to $\rm v = c_1 v_1 + \cdots + c_n v_n$, i.e. vectors have unique coefficients $\rm c_i$ w.r.t. the $\rm v_i$.

Note that the existence of a solution for all $\rm v$ is equivalent to saying that the $\rm v_i$ span $\rm V$.

The uniqueness of the solution is equivalent to saying that the $\rm v_i$ are linearly independent. Indeed, uniqueness implies that the only representation of $0$ is the one with all $\rm c_i = 0$, hence the $\rm v_i$ are independent. Conversely, nonuniqueness implies some $\rm v$ has two different representations, so their difference yields a representation of $0$ without all $\rm c_i = 0$, hence the $\rm v_i$ are dependent.
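The converse direction of this argument can be illustrated numerically: for a dependent set, two different coefficient vectors can represent the same $\rm v$, and their difference is a nontrivial representation of $0$. A sketch with an assumed dependent set in $\mathbb{R}^2$:

```python
import numpy as np

# v1, v2, v3 are dependent in R^2, since v3 = v1 + v2.
A = np.column_stack([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Two different coefficient vectors representing the same v = (2, 3):
c1 = np.array([2.0, 3.0, 0.0])   # v = 2*v1 + 3*v2
c2 = np.array([1.0, 2.0, 1.0])   # v = 1*v1 + 2*v2 + 1*v3
v_from_c1 = A @ c1
v_from_c2 = A @ c2               # same vector as v_from_c1

# Their difference is a representation of 0 with not all
# coefficients zero, witnessing the dependence of v1, v2, v3.
diff = c1 - c2                   # not the zero coefficient vector
print(v_from_c1, v_from_c2, A @ diff)
```

This is exactly the "difference of two representations" step in the proof above, with concrete numbers.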