This is a rewrite of the question. See the original here.
For a given probability space $(\Omega, A, P)$, a random variable defined on that space is a function (I'll use a real-valued random variable for simplicity): $X : \Omega \to \mathbb{R}$
As I understand it from the comments, this should be interpreted as answering the question: having observed a random outcome $\omega \in \Omega$, what are the consequences of that outcome?
To take an example given by Dilip Sarwate: if I toss a coin and heads comes up, I win \$1.00; if tails comes up, I lose \$0.50. In that case, the random variable would be $X(H) = 1.00,\ X(T) = -0.50$.
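To make the mapping concrete, here is a minimal Python sketch of that random variable as an ordinary function on the sample space $\{H, T\}$. The payoffs come from the example above; the fair-coin probabilities $P(H) = P(T) = 0.5$ are my own assumption, since the example only specifies the payoffs.

```python
import random

# Sample space for a single coin toss.
OMEGA = ["H", "T"]

# The random variable X: just a function from outcomes to real numbers,
# encoding the payoffs from the example (win $1.00 on heads, lose $0.50 on tails).
def X(omega: str) -> float:
    return {"H": 1.00, "T": -0.50}[omega]

# Observe a random outcome omega, then look at its consequence X(omega).
omega = random.choice(OMEGA)  # assumes a fair coin: P(H) = P(T) = 0.5
print(f"outcome: {omega}, payoff: {X(omega):+.2f}")

# Under the fair-coin assumption, the expected payoff is
# E[X] = 0.5 * 1.00 + 0.5 * (-0.50) = 0.25.
expected = 0.5 * X("H") + 0.5 * X("T")
print(f"expected payoff: {expected:+.2f}")
```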
If I got all of the above right, here's my question. Random variables (i.e., consequences of a random outcome) seem like a very high-level concept. Why are they necessary within an abstract mathematical theory such as probability theory?