A more poetic way to express this? I'll take a shot :) Below I'll consider both finite-dimensional and infinite-dimensional vector spaces over an arbitrary field $k$. You can take $k = \mathbb{R}$ if you like; it's not crucial.
Consider a vector space $V$ over the field $k$. There is a function $S \mapsto kS$ that takes each subset $S \subset V$ to its span $kS = \operatorname{span} S$ - the smallest subspace of $V$ containing $S$, that is, the set of all finite linear combinations of elements of $S$.
Interesting facts about this function:
- $S \subset kS$ for any $S$,
- if $T \subset S$, then $kT \subset kS$,
- $k(kS) = kS$ for any $S$.
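These axioms can even be checked mechanically in a finite toy model. Here's a sketch in $V = \mathbb{F}_2^n$ (the field of two elements, where a linear combination is just an XOR-sum); the helper name `span` and the sample vectors are my own choices for illustration:

```python
from itertools import combinations

def span(S, n):
    """kS in the toy model V = F_2^n: over F_2 every linear combination
    is an XOR-sum, so we close S under pairwise XOR."""
    out = {(0,) * n}  # the empty combination gives the zero vector
    changed = True
    while changed:
        changed = False
        for v in list(out):
            for s in S:
                w = tuple(a ^ b for a, b in zip(v, s))
                if w not in out:
                    out.add(w)
                    changed = True
    return frozenset(out)

# Verify the three closure-operator axioms on every subset of a small family.
n = 3
family = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
subsets = [frozenset(c) for r in range(len(family) + 1)
           for c in combinations(family, r)]
for S in subsets:
    assert S <= span(S, n)                    # S ⊂ kS
    assert span(span(S, n), n) == span(S, n)  # k(kS) = kS
    for T in subsets:
        if T <= S:
            assert span(T, n) <= span(S, n)   # T ⊂ S implies kT ⊂ kS
print("all three axioms hold on this family")
```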
These properties mean that this function is a closure operator. I admit I do not know much about closure operators except for topological ones, so this is just an aside in my answer :)
What's interesting about this function, though, is that it allows us to restate things about linear dependence in a more concise way.
Definition: a subset $S \subset V$ is called linearly dependent if there exists a proper subset $T \subsetneq S$ such that $kT = kS$. Otherwise we call it linearly independent.
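In the same $\mathbb{F}_2^n$ toy model this definition can be tested literally, proper subset by proper subset (again a sketch; `span` and `is_independent` are invented helper names, not anything canonical):

```python
from itertools import combinations

def span(S, n):
    """kS in V = F_2^n: close S under XOR (addition over F_2)."""
    out = {(0,) * n}
    changed = True
    while changed:
        changed = False
        for v in list(out):
            for s in S:
                w = tuple(a ^ b for a, b in zip(v, s))
                if w not in out:
                    out.add(w)
                    changed = True
    return frozenset(out)

def is_independent(S, n):
    """S is linearly dependent iff some proper subset T ⊊ S has kT = kS."""
    S = frozenset(S)
    full = span(S, n)
    return all(span(frozenset(T), n) != full
               for r in range(len(S))  # r < |S| enumerates proper subsets
               for T in combinations(S, r))

print(is_independent([(1, 0), (0, 1)], 2))          # an independent pair
print(is_independent([(1, 0), (0, 1), (1, 1)], 2))  # (1,1) = (1,0) + (0,1)
```

Note that this brute-force check never mentions coefficients or zero combinations, exactly mirroring the definition above.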
Proposition: if a subset $S \subset V$ is linearly independent, then all $T \subsetneq S$ are also linearly independent.
Proof: Suppose some $T \subsetneq S$ is linearly dependent, i.e. there is $Q \subsetneq T$ such that $kQ = kT$. Then $k(Q \cup (S \setminus T)) = k(kQ \cup k(S \setminus T)) = k(kT \cup k(S \setminus T)) = k(T \cup (S \setminus T)) = kS,$ and $Q \cup (S \setminus T)$ is a proper subset of $S$ (it misses every element of $T \setminus Q$), so $S$ is linearly dependent, which contradicts the premise.
Consider all linearly independent subsets of $V$ ordered by inclusion.
Proposition: There is at least one maximal element: a linearly independent subset that is not properly contained in any other linearly independent subset.
Proof: By Zorn's lemma it suffices to show that every chain $\{S_i\}_{i \in I}$ of linearly independent sets has an upper bound; we claim $S = \bigcup_i S_i$ works, i.e. $S$ is itself linearly independent. Suppose not: there is $T \subsetneq S$ with $kT = kS$. Pick $s \in S \setminus T$. Since $s \in kS = kT$, we can write $s$ as a finite linear combination of elements $t_1, \ldots, t_n \in T$. Because the $S_i$ form a chain and only finitely many elements are involved, some single $S_i$ contains all of $s, t_1, \ldots, t_n$. Each $t_j$ lies in $T$ and $s$ does not, so $t_j \neq s$ and hence $t_j \in S_i \setminus \{s\}$. Then $s \in k(S_i \setminus \{s\})$, so $k(S_i \setminus \{s\}) = kS_i$, and $S_i \setminus \{s\} \subsetneq S_i$ contradicts the linear independence of $S_i$. (The finiteness of linear combinations is exactly what makes this work.) So $S$ is indeed linearly independent, and Zorn's lemma completes the proof.
Definition: we call a maximal linearly independent subset $S \subset V$ a basis of $V$.
Proposition: if $S \subset V$ is a basis of $V$, then $kS = V$.
Proof: Suppose $v \in V \setminus kS$. Then $S \subsetneq S \cup \{v\}$, and $S \cup \{v\}$ is linearly independent, which contradicts the maximality of $S$. Indeed, suppose $T \subsetneq S \cup \{v\}$ satisfies $kT = k(S \cup \{v\})$. If $v \notin T$, then $T \subset S$ and $v \in kT \subset kS$, contradicting $v \notin kS$. If $v \in T$, write $T = T' \cup \{v\}$ with $T' \subsetneq S$ and pick $s \in S \setminus T'$; then $s = x + av$ for some $x \in kT'$ and $a \in k$. If $a = 0$, then $s \in k(S \setminus \{s\})$, so $k(S \setminus \{s\}) = kS$, contradicting the independence of $S$; if $a \neq 0$, then $v = a^{-1}(s - x) \in kS$, again a contradiction.
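For a finite family of vectors no Zorn is needed: a single greedy pass already produces a maximal independent subset. A sketch in the same $\mathbb{F}_2^n$ toy model (the helper names are mine; the result is maximal within the span of the input family):

```python
def span(S, n):
    """kS in V = F_2^n: close S under XOR."""
    out = {(0,) * n}
    changed = True
    while changed:
        changed = False
        for v in list(out):
            for s in S:
                w = tuple(a ^ b for a, b in zip(v, s))
                if w not in out:
                    out.add(w)
                    changed = True
    return frozenset(out)

def greedy_basis(vectors, n):
    """Keep a vector only if it is not already in the span of those kept:
    the kept set stays independent, and is maximal within span(vectors)."""
    basis = []
    for v in vectors:
        if v not in span(frozenset(basis), n):
            basis.append(v)
    return basis

family = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
print(greedy_basis(family, 3))  # (1,1,0) is dropped: it is already spanned
```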
Proposition: Let $S$ be a basis of $V$, and $U$ be a vector space over $k$. Then for every function $f: S \to U$ there is a unique linear operator $\varphi: V \to U$ such that for each $s \in S$ we have $\varphi(s) = f(s)$.
Proof: Every $v \in kS = V$ is a finite linear combination $v = a_1 s_1 + \ldots + a_n s_n$ with $s_i \in S$, and linearity forces $\varphi(v) = a_1 f(s_1) + \ldots + a_n f(s_n)$, so $\varphi$ is uniquely determined on $kS = V$. The same formula also defines $\varphi$: it is well defined because the representation of $v$ is essentially unique; if two representations differed, subtracting them would express some $s_j$ through $S \setminus \{s_j\}$, giving $k(S \setminus \{s_j\}) = kS$ and contradicting the linear independence of $S$.
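Over $\mathbb{F}_2$ this extension can be tabulated explicitly: every element of $kS$ is the XOR-sum of a unique subset of the basis, so the whole table of $\varphi$ can be built directly. A sketch (`extend_linearly` and the sample vectors are invented for illustration):

```python
from itertools import combinations

def extend_linearly(basis, f, m):
    """Build the table of the unique linear map phi on span(basis) in F_2
    with phi(s) = f[s]: independence makes each element of the span the
    XOR-sum of a unique subset of the basis, so phi is well defined."""
    zero_src = (0,) * len(basis[0])
    zero_tgt = (0,) * m
    phi = {}
    for r in range(len(basis) + 1):
        for sub in combinations(basis, r):
            v, w = zero_src, zero_tgt
            for s in sub:
                v = tuple(a ^ b for a, b in zip(v, s))
                w = tuple(a ^ b for a, b in zip(w, f[s]))
            phi[v] = w  # phi(sum of sub) = sum of the images f(s)
    return phi

# f is prescribed only on the basis of F_2^2; phi extends it linearly.
f = {(1, 0): (1, 1, 0), (0, 1): (0, 1, 1)}
phi = extend_linearly([(1, 0), (0, 1)], f, 3)
print(phi[(1, 1)])  # the image of (1,0) + (0,1), namely f(1,0) + f(0,1)
```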
UPD: it looks like the dimension theorem cannot easily be recast in these terms. Oh well :(
It gets even more poetic once we bring in category theory :)