
I believe I understand both topics individually: when asked whether a set of vectors spans $\mathbb{R}^n$, the question is, "can any point in that space be reached?" A linear combination is formed by multiplying each vector by a scalar and adding the results; setting a linear combination equal to a target vector creates a linear system, which we solve to see whether there's a unique solution or a set of solutions.

My question is: how does finding a unique solution (or a set of solutions) via linear combinations tell you whether the vectors span that space? I'm having a hard time understanding how knowing that there's a set of numbers that solves all 3 equations means that the vectors span the space. If anyone can explain this part in layman's terms, that would be great.

  • I am not sure I understand what you are asking here. As you mentioned, we define a spanning set to be a set of vectors $u_1, u_2, \ldots, u_n$ such that, for any vector $u \in U$, we can write $u$ as a linear combination of $u_1, u_2, \ldots, u_n$. Can you clarify what you want to know? (2017-01-26)
  • I guess I'm not sure how to word it. When we solve a system of linear equations, we are finding a point in space that each linear equation touches. So one point that they all touch. I'm unsure how this proves that the system spans the entire space... (2017-01-26)

1 Answer


It seems like your question is about the geometry of the relation between the invertibility of a matrix and the linear independence of its columns. Invertibility means that one can always solve $Ax=b$ for $x$, no matter the $b$. Indeed, $A^{-1}$ exists iff the columns (or rows) are linearly independent. (Note that $Ax=b$ can still be solvable for particular choices of $b$ even when the columns of $A$ are linearly dependent and hence do not span the space.)
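As a small sketch of this (with hypothetical $2 \times 2$ matrices, not from the answer above): for a $2 \times 2$ matrix, the determinant is nonzero exactly when the columns are independent, i.e. when $A^{-1}$ exists and $Ax=b$ is solvable for every $b$.

```python
# Hypothetical 2x2 examples: the determinant test for invertibility,
# which for a square matrix coincides with "columns span the space".

def det2(A):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Columns (1, 0) and (0, 1) are independent: det != 0, so A is
# invertible and Ax = b is solvable for every b in R^2.
A = [[1, 0], [0, 1]]
assert det2(A) != 0

# Columns (1, 2) and (2, 4) are dependent (one is twice the other):
# det == 0, so the columns do not span R^2 and some b are unreachable.
B = [[1, 2], [2, 4]]
assert det2(B) == 0
```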

One can consider $A$ as a map: it takes in a vector $x$ and produces a vector $Ax$. The question of solving $Ax=b$ is the same as asking whether $b$ is in the column space (i.e. the span of the matrix's column vectors). This is by definition of matrix multiplication, which produces $Ax$ by taking a linear combination of the columns of $A$, with the entries of $x$ as the scalar weights. This is the connection between span and solving linear systems. We can only guarantee that the system is solvable for every $b$ if the span covers all of $\mathbb{R}^n$; if there are parts of the space that cannot be reached, then choosing a $b$ in those areas means we cannot solve the system. In terms of linear independence, if two of the $n$ columns are linearly dependent, then one of them gives no new information about the space (and thus the columns won't span it).
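To make the "matrix multiplication is a linear combination of columns" point concrete, here is a minimal sketch (with a made-up matrix) that computes $Ax$ column by column:

```python
def matvec(A, x):
    """Compute Ax as a linear combination of the columns of A:
    Ax = x[0]*col_0 + x[1]*col_1 + ...  (the definition used above)."""
    n_rows, n_cols = len(A), len(A[0])
    result = [0] * n_rows
    for j in range(n_cols):          # one column at a time
        for i in range(n_rows):
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2],
     [3, 4]]
# With x = (1, 1), Ax is simply column_0 + column_1 = (3, 7),
# so asking "is b = (3, 7) reachable?" is asking "is b in the span
# of the columns?" -- and here the answer is yes.
assert matvec(A, [1, 1]) == [3, 7]
```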

Another way to see this (albeit less layman-friendly) is the Rank–Nullity Theorem, which says $\text{rank}(A)+\text{nullity}(A)=n$. Only when the rank, the dimension of the span of the columns, equals the dimension of the whole space can we always solve the system; in that case the nullity is zero and the solution is unique.