
Vector spaces need to fulfill a set of axioms to be labeled as such. These include the requisite of the zero vector being included in the set.

However, the usual way to reason about matrices hinges upon the concept of column and row spaces.

Since the immediate intuition (and definition) is the span of the column (or row) vectors, the working idea is clear. Yet it would seem as though accepting that any old real-valued matrix necessarily carries with it a column space is tantamount to saying that any two vectors (or even a single vector) form a space.

So, is it true that any set of vectors packed into a real-valued matrix is necessarily the basis of a vector space? If so, how can we see that they meet the criteria (for example, the inclusion of a zero additive element)?

2 Answers

2

The columns (or rows) of a matrix do NOT form a vector space (unless their entries are all identically zero).

They span a vector space (and form a basis for it only when they are linearly independent). The spanning part is trivial, since the span of any finite set of vectors is a vector space.

You should prove to yourself that the span of a finite set of vectors forms a vector space. Recall that this is just the set of all linear combinations of vectors from that set.
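As a complement to that exercise, here is a minimal numerical sketch (not part of the original answer) using NumPy. It checks two of the key facts: an arbitrary linear combination of a spanning set stays in the span (the rank does not grow when we append it), and the zero vector is obtained by taking all coefficients equal to zero. The specific dimensions and coefficients are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A spanning set: three vectors in R^5, stored as the columns of S.
S = rng.standard_normal((5, 3))

# An arbitrary linear combination of the columns of S.
coeffs = np.array([2.0, -1.5, 0.5])
v = S @ coeffs

# v lies in span(S) iff appending it to S does not increase the rank.
assert np.linalg.matrix_rank(np.column_stack([S, v])) == np.linalg.matrix_rank(S)

# The zero vector is the combination with all coefficients 0.
zero = S @ np.zeros(3)
assert np.allclose(zero, np.zeros(5))
```

The rank test is a convenient numerical stand-in for "is this vector a linear combination of those vectors?".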

Comment: Note that translating such a subspace in a direction that is orthogonal to the subspace will produce a set which geometrically looks like the subspace but is not itself a subspace (translating a line away from the origin, or translating a plane away from the origin, for example). Such sets are not vector spaces (since they don't contain $0$), but they are objects called affine spaces.
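The subspace-versus-affine-space distinction in the comment can be illustrated numerically. The sketch below (my own illustration, with arbitrarily chosen vectors) builds a plane through the origin in $\mathbb{R}^3$, confirms it contains $0$, and then checks that translating it along its normal direction produces a set that misses the origin.

```python
import numpy as np

# Plane through the origin in R^3: the span of v1 and v2.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])

def in_span(A, x, tol=1e-10):
    # x is in the column span of A iff the least-squares residual vanishes.
    c, res, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.linalg.norm(A @ c - x) < tol

# The subspace contains the zero vector.
assert in_span(A, np.zeros(3))

# Translate the plane by a vector orthogonal to it.
n = np.cross(v1, v2)  # normal to the plane, nonzero

# The translated plane {a*v1 + b*v2 + n} contains 0 iff -n (equivalently n)
# is in the span -- and it is not, so the shifted plane misses the origin.
assert not in_span(A, n)
```

The translated plane is exactly the "affine space" of the comment: geometrically a plane, but not a subspace.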

Addendum: For example, suppose the spanning set is $S=\{v_1,v_2,\ldots,v_n\}$, let $\langle S \rangle$ denote the span of $S$, and suppose $x,y\in \langle S \rangle$. To see that $z\equiv ax+by\in \langle S \rangle$, just note that $$x\in\langle S \rangle \implies x = \sum_{i=1}^nx_iv_i$$ $$y\in\langle S \rangle \implies y = \sum_{i=1}^ny_iv_i$$ so $$z =ax+by = a\sum_{i=1}^nx_iv_i + b\sum_{i=1}^ny_iv_i=\sum_{i=1}^n(ax_i + by_i)v_i= \sum_{i=1}^nz_iv_i$$ where $z_i\equiv ax_i+by_i$ are scalars in the underlying field because that field is closed under multiplication and addition.

And so on.
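The closure computation in the addendum can be verified numerically as well. This sketch (an illustration, not from the original answer) picks arbitrary coordinate vectors $x_i$, $y_i$ and scalars $a$, $b$, and checks that $z = ax + by$ has coordinates $z_i = ax_i + by_i$ in the same spanning set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spanning set S = {v_1, ..., v_n} as the columns of V.
V = rng.standard_normal((4, 3))
x_coeffs = np.array([1.0, 2.0, 3.0])
y_coeffs = np.array([-1.0, 0.5, 4.0])
x = V @ x_coeffs
y = V @ y_coeffs

a, b = 2.0, -3.0
z = a * x + b * y

# The addendum's claim: z has coordinates z_i = a*x_i + b*y_i.
z_coeffs = a * x_coeffs + b * y_coeffs
assert np.allclose(z, V @ z_coeffs)
```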

  • The point I am stuck at is seeing that every finite set of vectors will include the zero vector. Unless the built-in assumption is that all vectors stretch from the origin, there can easily be a plane in $\mathbb R^3$ defined by two independent vectors that runs nowhere near $(0,0,0)$. (2017-01-20)
  • Just take the linear combination with all coefficients $0$. (2017-01-20)
  • @AntoniParellada : Yes, just note that $0=0v_1 + 0v_2 + \cdots + 0v_n$. (2017-01-20)
  • So when we talk about a plane in, say, $\mathbb R^3$ created by the span of two vectors, do we imply that the plane always goes through zero? (2017-01-20)
  • @AntoniParellada : For starters, note that the span of a single nonzero vector is a line through the origin in the direction of that vector. The span is just all "scaled" copies of that vector, which will "fill up" the line. (2017-01-20)
  • So the answer is yes? Two vectors will necessarily form a plane through the origin, correct? That is the underpinning geometric interpretation... (2017-01-20)
  • You don't just imply it, it is immediate from the definition of "span". Suppose the two vectors are $v_1$ and $v_2$; then $0 = 0v_1+0v_2$, so you have the origin in the span. (2017-01-20)
  • They don't "form" a plane through the origin, they "span" a plane through the origin. (2017-01-20)
  • In fact, *every* vector subspace of $\mathbb R^n$ contains the origin for exactly this reason. Vector spaces are "pinned" to the origin. You may be able to translate the space away from the origin to get a line/plane/etc. which does *not* pass through the origin, but the result is *not* a vector space. It is, however, something called an *affine space*, which basically looks like a vector space that might have been shifted away from the origin. (2017-01-20)
  • Thank you and +1. Even though I already accepted your answer, I wonder whether eliminating the opening would be warranted after I edited the OP. Up to you. (2017-01-20)
1

I think the point is that any set $S$ of vectors from a vector space $V$ spans a subspace of $V$. That subspace, sometimes denoted $\operatorname{span}(S)$, is the set of all (finite) linear combinations of vectors in $S$. It is standard and not particularly difficult to show that $\operatorname{span}(S)$ is a vector space. (Even the empty set of vectors is considered to span the zero space.)

In particular the set of rows (or the set of columns) of a matrix spans a vector space called the row space (or column space) of the matrix.
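To make this concrete, here is a short NumPy sketch (my own illustration, not from the original answer) that extracts an orthonormal basis for the column space of a matrix via the SVD and checks that every column of the matrix lies in that space. The example matrix is arbitrary and chosen to have linearly dependent columns.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# rank(A) = dim(column space) = dim(row space)
r = np.linalg.matrix_rank(A)  # 2 here: the columns are linearly dependent

# An orthonormal basis for the column space: the first r left singular vectors.
U, s, Vt = np.linalg.svd(A)
col_basis = U[:, :r]

# Projecting A onto the column space leaves it unchanged, so every
# column of A is a combination of the basis vectors.
proj = col_basis @ (col_basis.T @ A)
assert np.allclose(proj, A)
```

So the columns span the column space, but here only $r = 2$ of the three columns are needed for a basis.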

  • The zero vector actually seems very intuitive to see as a space, but, kindly, and probably naively, it is the "standard and not particularly difficult" part that I was after. (2017-01-20)
  • See for instance here: https://www.math.ucdavis.edu/~linear/old/notes17.pdf (2017-01-20)
  • Nice summary. But, see, they prove the obvious: that a couple of vectors form a plane, and that their span is a subspace because every linear combination falls in the plane. But they don't prove (probably it is obvious(?)) that zero is in the subspace... (2017-01-20)
  • Zero is in the subspace by taking a linear combination with all coefficients being zero. (2017-01-20)