I know the converse is true, because if you can write $0$ in two ways, you can keep adding $0$ to get infinitely many linear combinations that sum to the same vector.
(Sorry, it's been years since I've had linear algebra.)
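To spell out that argument (a quick sketch; the symbols $c_i$, $d_i$, $t$ are mine, not the comment's): suppose $0=\sum_{i=1}^n c_iv_i$ with some $c_i\neq 0$, and suppose $w=\sum_{i=1}^n d_iv_i$. Then for every scalar $t$, $w=\sum_{i=1}^n d_iv_i+t\sum_{i=1}^n c_iv_i=\sum_{i=1}^n(d_i+tc_i)v_i,$ which is a different linear combination for each value of $t$ (so, over $\mathbb{R}$, infinitely many of them).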
Technically it's at most one; for example, a collection consisting of a single non-zero vector is linearly independent, but if the vector space $V$ we're working in has dimension $>1$, there will be vectors that are not linear combinations of that vector (since the only linear combinations of a single vector are its scalar multiples). If $n=\dim(V)$, then you are correct that it is actually exactly one, because $n$ linearly independent vectors in an $n$-dimensional space necessarily span it.
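For instance (a concrete case of the situation just described, not from the original answer): in $V=\mathbb{R}^2$ the single vector $(1,0)$ is linearly independent, but $(0,1)$ is not a scalar multiple of $(1,0)$, so $(0,1)$ has no representation at all; every vector still has at most one representation, just not necessarily exactly one.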
If $v_1,\ldots,v_n$ are linearly independent, then by definition $\sum_{i=1}^nc_iv_i=0\implies c_i=0\text{ for all }i,$ so if $w=\sum_{i=1}^nc_iv_i=\sum_{i=1}^nd_iv_i$ then $0=w-w=\sum_{i=1}^n(c_i-d_i)v_i,$ hence $c_i-d_i=0$ for all $i$, hence $c_i=d_i$ for all $i$. Thus, if the $v_i$ are linearly independent, there is at most one way of writing a given vector as a linear combination of them.
Conversely, if there is at most one way of writing a given vector as a linear combination of the $v_i$, that is if $\sum_{i=1}^nc_iv_i=\sum_{i=1}^nd_iv_i\implies c_i=d_i\text{ for all }i,$ then if $\sum_{i=1}^nc_iv_i=0,$ we have $\sum_{i=1}^nc_iv_i=\sum_{i=1}^n0v_i\implies c_i=0\text{ for all }i,$ so the $v_i$ are linearly independent.
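A concrete illustration of both directions (my example, not part of the answer): in $\mathbb{R}^2$, take $v_1=(1,0)$ and $v_2=(1,1)$, which are linearly independent. If $(a,b)=c_1v_1+c_2v_2=(c_1+c_2,\,c_2),$ then necessarily $c_2=b$ and $c_1=a-b$, so every vector of $\mathbb{R}^2$ has exactly one representation. By contrast, $(1,0)$ and $(2,0)$ are linearly dependent, since $2\cdot(1,0)-1\cdot(2,0)=0$, and correspondingly $(3,0)=3(1,0)+0(2,0)=1(1,0)+1(2,0)$ gives two different representations of the same vector.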
Yes. Any vector that can be expressed as a linear combination of linearly independent vectors can be expressed in one and only one way.
To see this, suppose that $\mathbf{v}$ can be expressed as a linear combination of $\mathbf{v}_1,\ldots,\mathbf{v}_n$, with scalars $\alpha_i$ and $\beta_i$: $\mathbf{v} = \alpha_1\mathbf{v}_1+\cdots + \alpha_n\mathbf{v}_n = \beta_1\mathbf{v}_1+\cdots + \beta_n\mathbf{v}_n.$ Then: \begin{align*} \mathbf{0} &= \mathbf{v}-\mathbf{v} \\ &=\bigl( \alpha_1\mathbf{v}_1+\cdots + \alpha_n\mathbf{v}_n\bigr)-\bigl( \beta_1\mathbf{v}_1+\cdots + \beta_n\mathbf{v}_n\bigr)\\ &= (\alpha_1-\beta_1)\mathbf{v}_1 + \cdots + (\alpha_n-\beta_n)\mathbf{v}_n. \end{align*} Since $\mathbf{v}_1,\ldots,\mathbf{v}_n$ are linearly independent, this means $\alpha_1-\beta_1=\alpha_2-\beta_2=\cdots=\alpha_n-\beta_n = 0$, so $\alpha_1=\beta_1,\ldots,\alpha_n=\beta_n$. That is: the two expressions are actually identical.
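Another way to package the same computation (a standard reformulation, not part of the original answer): if the $\mathbf{v}_i$ live in $\mathbb{R}^m$, collect them as the columns of the $m\times n$ matrix $A=\begin{pmatrix}\mathbf{v}_1&\cdots&\mathbf{v}_n\end{pmatrix}$, so that $\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n=A\boldsymbol{\alpha}$. The argument above says that $A\boldsymbol{\alpha}=A\boldsymbol{\beta}$ forces $\boldsymbol{\alpha}=\boldsymbol{\beta}$, i.e. the map $\boldsymbol{\alpha}\mapsto A\boldsymbol{\alpha}$ is injective; and that holds exactly when $A\mathbf{x}=\mathbf{0}$ has only the trivial solution, which is linear independence of the columns.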
One way to remember this is:
A set $S$ of vectors of $V$ spans $V$ if and only if every vector of $V$ can be written as a linear combination of vectors in $S$ in at least one way.

A set $I$ of vectors of $V$ is linearly independent if and only if every vector of $V$ can be written as a linear combination of vectors in $I$ in at most one way.

A set $B$ of vectors of $V$ is a basis of $V$ if and only if every vector of $V$ can be written as a linear combination of vectors in $B$ in exactly one way.
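In short (my condensation of the statements above): $\text{spans}\leftrightarrow\text{at least one},\quad\text{linearly independent}\leftrightarrow\text{at most one},\quad\text{basis}\leftrightarrow\text{exactly one}.$ Since "exactly one" means "at least one and at most one", this also makes it transparent that a basis is precisely a spanning set that is linearly independent.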