9

Extending this question, page 447 of Gilbert Strang's Algebra book says

What does it mean for a vector to have infinitely many components? There are two different answers, both good:
1) The vector becomes $v = (v_1, v_2, v_3 ... )$
2) The vector becomes a function $f(x)$. It could be $\sin(x)$.

I don't quite see in what sense the function is "infinite dimensional". Is it because a function is continuous, and so represents infinitely many points? The best way I can explain it is:

  • 1D space has 1 DOF, so each "vector" takes you on "one trip"
  • 2D space has 2 DOF, so by following each component in a 2D (x,y) vector you end up going on "two trips"
  • ...
  • $\infty$D space has $\infty$ DOF, so each component in an $\infty$D vector takes you on "$\infty$ trips"

How does it ever end, then? 3D space has 3 components to travel along (x, y, z) to reach a destination point. If we have infinitely many components to travel along, how do we ever reach a destination point? It seems we would be resolving components against infinitely many axes and so never arrive at a final destination.

  • 0
    @bobobobo: Is the answer I posted satisfactory? In short, you can move along any of infinitely many basis vectors, but to get to any given point, you only need to move along some finite number of them. (2011-07-21)

3 Answers

8

One thing that might help is thinking about the vector spaces you already know as function spaces instead. Consider $\mathbb{R}^n$. Let $T_{n}=\{1,2,\ldots,n\}$ be a set of size $n$. Then $\mathbb{R}^{n}\cong\left\{ f:T_{n}\rightarrow\mathbb{R}\right\} $, where the set on the right-hand side is the space of all real-valued functions on $T_n$. It has a vector space structure, since we can multiply by scalars and add functions. The functions $f_i$ that satisfy $f_i(j)=\delta_{ij}$ form a basis.

So a finite-dimensional vector space is just the space of all functions on a finite set. When we look at the space of functions on an infinite set, we get an infinite-dimensional vector space.
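If it helps to see that identification concretely, here is a minimal Python sketch (purely illustrative; the names `as_function`, `add`, `scale` and `delta` are my own, not standard) of vectors in $\mathbb{R}^3$ as functions on $\{1,2,3\}$, with the pointwise operations and the basis functions $f_i$:

```python
# Illustrative sketch only: a vector in R^n as a function on {1, ..., n},
# stored as a dict {i: v_i}. The names here are made up for this example.

def as_function(components):
    """Turn a tuple (v_1, ..., v_n) into the function i -> v_i."""
    return {i + 1: c for i, c in enumerate(components)}

def add(f, g):
    """Pointwise addition: (f + g)(i) = f(i) + g(i)."""
    return {i: f[i] + g[i] for i in f}

def scale(c, f):
    """Pointwise scalar multiplication: (c f)(i) = c * f(i)."""
    return {i: c * f[i] for i in f}

def delta(i, n):
    """The basis function f_i with f_i(j) = delta_ij."""
    return {j: (1.0 if j == i else 0.0) for j in range(1, n + 1)}

v = as_function((2.0, -1.0, 5.0))                # the vector (2, -1, 5)
w = add(scale(2.0, delta(1, 3)), delta(3, 3))    # 2 f_1 + f_3 = (2, 0, 1)
print(v)  # {1: 2.0, 2: -1.0, 3: 5.0}
print(w)  # {1: 2.0, 2: 0.0, 3: 1.0}
```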

  • 0
    This is a nice answer, [but I still haven't found what I'm looking for](http://www.youtube.com/watch?v=GSv-lKwOQvE#at=1m03s). (2011-07-19)
6

I would also like to add to Eric's answer (it turned out that this was too long to be just a comment) that, in general, it's probably not a good idea to think of a vector as defined in terms of its components. Rather, one should probably think of a vector as an element of an abstract vector space, and then, once a basis is chosen, you can represent the vector in that basis by its components with respect to that basis. If the (algebraic) basis is finite, then you can write the coordinates as usual as $(v_1,\ldots ,v_n)$. Similarly, if the (algebraic) basis is countably infinite, the vector can be represented by its components as $(v_1,\ldots ,v_n,\ldots )$. In general, if the (algebraic) basis is indexed by an index set $I$, the components of a vector will be a function $f_v:I\rightarrow F$, where $F$ is the field you're working over.

In the second example you posted above, you can take $V$ to be the set of all bounded functions on $\mathbb{R}$ and you can take $F=\mathbb{R}$. Then, for each $x_0\in \mathbb{R}$, you may define the function $ \delta _{x_0}(x)=\begin{cases}1 & \text{if }x=x_0 \\ 0 &\text{otherwise}\end{cases} $ It turns out that the collection $\left\{ \delta _{x_0}\mid x_0\in \mathbb{R}\right\}$ forms an algebraic basis for $V$. This collection is naturally indexed by $\mathbb{R}$, and so by choosing this basis you can think of a function in $V$ as represented by a function from $\mathbb{R}$ (the indexing set) to $\mathbb{R}$ (the field). In this case, that function was $\sin (x)$, which, because of how we chose our basis, agrees with the element of $V$ it is trying to represent, namely the original function $\sin$.
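To picture the indexing set, here is a tiny Python sketch (again just an illustration, not part of the answer's argument) of the functions $\delta_{x_0}$, one for every real number $x_0$, together with a finite linear combination of them:

```python
import math

# Illustrative sketch: one "delta" function for every real number x0.
def delta(x0):
    """Return the function that is 1 at x0 and 0 everywhere else."""
    return lambda x: 1.0 if x == x0 else 0.0

# A finite linear combination such as 2*delta_1 + 3*delta_pi is again a function on R.
g = lambda x: 2.0 * delta(1.0)(x) + 3.0 * delta(math.pi)(x)

print(g(1.0), g(math.pi), g(0.5))   # 2.0 3.0 0.0
```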

Hope that helps!

P.S.: I use the term algebraic basis to distinguish it from a topological basis, which is often more useful in infinite-dimensional settings.

1

I won't say anything more than Theo and Eric have already said, but...

As Eric says, every $\mathbb{R}^n$ can be seen as a space of functions $f: T_n \longrightarrow \mathbb{R}$.

That is, the vector $v = (8.2 , \pi , 13) \in \mathbb{R}^3$ is the same as the function $v: \left\{ 1,2,3\right\} \longrightarrow \mathbb{R}$ such that $v(1) = 8.2, v(2) = \pi$ and $v(3) = 13$.

So, the coordinates of $v$ are the same as its values on the set $\left\{ 1,2,3\right\}$, aren't they? Indeed, the coordinates of $v$ are the coefficients that appear on the right-hand side of this equality:

$ (8.2, \pi , 13) = v(1) (1,0,0) + v(2) (0,1,0) + v(3) (0,0,1) \ . $

On the other hand, the coordinates of $v$ are its coordinates in the standard basis of $\mathbb{R}^3$: $e_1 = (1,0,0)$, $e_2 = (0,1,0)$ and $e_3 = (0,0,1)$. And we can look at these vectors of the standard basis as functions too, just like all vectors in $\mathbb{R}^3$. They are the following "functions":

$ e_i (j) = \begin{cases} 1 & \text{if}\quad i=j \\ 0 & \text{if}\quad i \neq j \end{cases} $

This is an odd way to look at old, reliable $\mathbb{R}^3$ and its standard basis, isn't it?

Well, the point of doing so is to set the stage for the following construction: let $X$ be any set (finite or infinite, countable or uncountable) and consider the set of all functions $f: X \longrightarrow \mathbb{R}$ (not necessarily continuous; in fact, since we didn't ask $X$ to be a topological space, it doesn't even make sense to talk about continuity). Call this set

$ \mathbb{R}^X \ . $

Now, you can make $\mathbb{R}^X $ into a real vector space by defining

$ (f + g)(x) = f(x) + g(x) \qquad \text{and} \qquad (\lambda f)(x) = \lambda f(x) $

for every $x \in X$, $f, g \in\mathbb{R}^X $ and $\lambda \in \mathbb{R}$.
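As a hedged Python sketch of exactly these pointwise operations (the class name `FunctionVector` is my own invention), take $X = \mathbb{R}$, so that $\sin$ itself is one of the "vectors":

```python
import math

# Sketch only: elements of R^X as wrapped Python functions, with the
# pointwise operations (f + g)(x) = f(x) + g(x) and (c f)(x) = c * f(x).
class FunctionVector:
    def __init__(self, f):
        self.f = f                        # a plain function X -> R

    def __call__(self, x):
        return self.f(x)

    def __add__(self, other):
        return FunctionVector(lambda x: self(x) + other(x))

    def __rmul__(self, scalar):
        return FunctionVector(lambda x: scalar * self(x))

f = FunctionVector(math.sin)              # sin is a "vector" in R^R
g = FunctionVector(lambda x: x ** 2)
h = 3.0 * f + g                           # another element of R^R

print(h(2.0), 3.0 * math.sin(2.0) + 2.0 ** 2)   # the same number twice
```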

And $\mathbb{R}^X$ has a "standard basis" too: the set of functions $e_x : X \longrightarrow \mathbb{R}$, one for each point $x \in X$:

$ e_x (y) = \begin{cases} 1 & \text{if}\quad x=y \\ 0 & \text{if}\quad x \neq y \end{cases} \ . $

So $\mathbb{R}^3$ can be seen as a particular example of a space of functions $\mathbb{R}^X$, if you think of the number $3$ as the set $\left\{ 1,2,3\right\}$: $\mathbb{R}^3 = \mathbb{R}^{\{1,2,3\}} = \mathbb{R}^{T_3}$, and the "coordinates" of a function $f\in \mathbb{R}^X$ are the same as its values $\left\{ f(x)\right\}_{x \in X}$.

(In fact, a function $f$ is the same thing as its collection of values over all points of $X$, isn't it? Just in the same way as you identify every vector with its coordinates in a given basis.)

Warning. I've been cheating a little bit here, because, in general, the set $\left\{ e_x\right\}_{x\in X}$ is not a basis for the vector space $\mathbb{R}^X$. If it were, every function $f\in \mathbb{R}^X$ could be written as a finite linear combination of those $e_x$. Indeed, you have

$ f = \sum_{x\in X} f(x) e_x \ , $

but the sum on the right need not be finite, for instance when $X$ is infinite.

One way to fix this: instead of $\mathbb{R}^X$, consider the subset $S \subset \mathbb{R}^X$ of functions $f: X \longrightarrow \mathbb{R}$ such that $f(x) \neq 0$ for only finitely many points $x\in X$. Then it is true that $\left\{ e_x\right\}_{x\in X}$ is a basis for $S$.

(In other words, $\mathbb{R}^X = \prod_{x\in X} \mathbb{R}_x$ and $S = \bigoplus_{x\in X} \mathbb{R}_x$, where $\mathbb{R}_x = \mathbb{R}$ for all $x\in X$.)
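Here is one last hedged Python sketch (the sparse-dict representation and the names `e`, `add`, `scale` are my own choices) of that subset $S$: a finitely supported function is stored by its finitely many nonzero values, and every such function really is a finite linear combination of the $e_x$:

```python
# Sketch only: finitely supported functions on X, stored sparsely as {x: f(x)}
# for the finitely many x with f(x) != 0. Missing keys mean the value 0.

def e(x):
    """The element e_x: 1 at x, 0 everywhere else."""
    return {x: 1.0}

def add(f, g):
    out = dict(f)
    for x, v in g.items():
        out[x] = out.get(x, 0.0) + v
    return {x: v for x, v in out.items() if v != 0.0}

def scale(c, f):
    return {x: c * v for x, v in f.items()} if c != 0.0 else {}

# f = 8.2*e_a + 13*e_c is a finite combination, no matter how large X is.
f = add(scale(8.2, e("a")), scale(13.0, e("c")))
print(f)   # {'a': 8.2, 'c': 13.0}
```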