Begin with a probability space $(\Omega,\mathcal{F},P)$, where $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$ and $P$ is a probability measure. The collection of all Borel sets of $\mathbb{R}$ is denoted by $\mathcal{B}(\mathbb{R})$. A mapping $X:\Omega \to \mathbb{R}$ is a (real-valued) random variable if it is $\mathcal{F}$-measurable, that is, if $\{ \omega \in \Omega: X(\omega) \in B \} \in \mathcal{F}$ for each $B \in \mathcal{B}(\mathbb{R})$. Write $P[\{ \omega \in \Omega: X(\omega) \in B \}]$ as $P[X \in B]$. As a function of $B$, this is a probability measure on $\mathcal{B}(\mathbb{R})$; it is denoted by $P_X$ and called the distribution of $X$.
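To make the pushforward construction concrete, here is a minimal sketch (not from the text) on a finite probability space: a fair die roll, with $X$ the parity indicator of the outcome. The sample space, the measure, and the function `X` below are hypothetical choices for illustration; the point is only that $P_X(B)$ is computed as $P[\{\omega : X(\omega)\in B\}]$.

```python
# Illustrative sketch: a finite probability space (Omega, P) and a random
# variable X, with the distribution P_X computed as a pushforward of P.
from fractions import Fraction

# Omega = outcomes of one fair die roll; P assigns mass 1/6 to each outcome.
Omega = [1, 2, 3, 4, 5, 6]
P = {omega: Fraction(1, 6) for omega in Omega}

# A random variable X: Omega -> R, here the parity indicator of the roll.
def X(omega):
    return omega % 2  # 1 if the roll is odd, 0 if even

def P_X(B):
    """Distribution of X: P_X(B) = P[{omega in Omega : X(omega) in B}]."""
    return sum(P[omega] for omega in Omega if X(omega) in B)

print(P_X({1}))     # P[X = 1] = 1/2
print(P_X({0, 1}))  # P[X in {0, 1}] = 1
```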
Now, to say that a random variable $X$ on $(\Omega,\mathcal{F},P)$ follows a uniform distribution on the interval $[0,1]$ simply means that $P_X$ is the measure on $\mathcal{B}(\mathbb{R})$ satisfying $P_X (\mathbb{R}-[0,1]) = 0$ and $P_X (I) = b-a$ for any interval $I \subset [0,1]$ with endpoints $a \le b$.

For the simplest example, take the probability space $(\Omega,\mathcal{F},P) = ([0,1],\mathcal{B}([0,1]),P)$, where $P$ is the restriction to $[0,1]$ of the measure just defined above, and define $X:\Omega \to \mathbb{R}$ by $X(\omega) = \omega$. Then, for any $B \in \mathcal{B}(\mathbb{R})$, $ P_X (B) = P[X \in B] = P[\{ \omega \in [0,1]: X(\omega) \in B\} ] = P[[0,1] \cap B], $ from which it follows that $X$ is uniform on $[0,1]$. (Note that it is not essential that $\Omega$ be the set $[0,1]$.)
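As a quick numerical illustration (a sketch under stated assumptions, not part of the argument): a standard uniform sampler stands in for drawing $\omega$ from $P$ on $[0,1]$, and for the identity map $X(\omega)=\omega$ the empirical frequency of $\{X \in [a,b]\}$ should be close to $b-a$. The interval endpoints and sample size below are arbitrary choices.

```python
# Illustrative sketch: Monte Carlo check that the identity map X(omega) = omega
# on ([0,1], B([0,1]), P) satisfies P_X([a, b]) approximately equal to b - a.
import random

random.seed(0)
n = 100_000
a, b = 0.25, 0.70  # hypothetical interval [a, b] inside [0, 1]

# Draw omega from P (uniform on [0,1]) and apply X(omega) = omega.
samples = (random.random() for _ in range(n))
hits = sum(1 for x in samples if a <= x <= b)

print(hits / n)  # empirical estimate of P_X([a, b])
print(b - a)     # target value, here 0.45
```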