The independence is in the "population" of all possible outcomes of the experiments, but only one outcome---not all possible outcomes---eventuated for each experiment.
Say we do two independent experiments. Each time, the possible outcomes are $1$, $2$, and $3$. Let's say the "population" of possible outcomes is distributed as follows, with rows for the first experiment's outcome and columns for the second's: $ \begin{array}{|c|c|c|c|c|} \hline & 1 & 2 & 3 & \\ \hline 1 & 1/12 & 2/12 & 1/12 & 4/12=1/3 \\ \hline 2 & 1/12 & 2/12 & 1/12 & 4/12=1/3 \\ \hline 3 & 1/12 & 2/12 & 1/12 & 4/12=1/3 \\ \hline & 3/12=1/4 & 6/12=1/2 & 3/12=1/4 & \\ \hline \end{array} $ (For example, $1/12$ of the time both experiments yield a $1$; the right and bottom margins give the two experiments' individual distributions.)
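One quick check that this table really describes independent experiments: every cell is the product of its row total and column total, for instance $ \Pr(\text{first}=1 \text{ and } \text{second}=2) = \tfrac13\cdot\tfrac12 = \tfrac2{12}. $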
The sum of the two random numbers is $ \begin{cases} 2 & \text{with probability }1/12 \\ 3 & \text{with probability }3/12 \\ 4 & \text{with probability }4/12 \\ 5 & \text{with probability }3/12 \\ 6 & \text{with probability }1/12 \end{cases} $
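Each of those probabilities comes from adding the table cells whose two outcomes give that total; for instance, the sum is $3$ when the outcomes are $(1,2)$ or $(2,1)$, so $ \Pr(\text{sum}=3) = \tfrac2{12}+\tfrac1{12} = \tfrac3{12}. $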
So compute a variance based on $1/3,1/3,1/3$ as the probabilities of $1,2,3$ respectively, compute a variance based on $1/4,1/2,1/4$ as the probabilities of $1,2,3$ respectively, and add the two together. Then compute a variance based on $1/12,3/12,4/12,3/12,1/12$ as the probabilities of $2,3,4,5,6$ respectively. This last one should equal the sum of the first two. That is what it means to say that the variance of the sum of two (or more) independent random variables is the sum of the variances. But if you do each experiment once and get, for example, $2$ the first time and $3$ the second time, what would it mean to say you have a variance for each one and you can add them together?
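(In case you want to check the arithmetic: both margins have mean $2$ and the sum has mean $4$, so the three variances are $ \tfrac13(1-2)^2+\tfrac13(2-2)^2+\tfrac13(3-2)^2=\tfrac23, \qquad \tfrac14(1-2)^2+\tfrac12(2-2)^2+\tfrac14(3-2)^2=\tfrac12, $ and $ \tfrac1{12}(2-4)^2+\tfrac3{12}(3-4)^2+\tfrac4{12}(4-4)^2+\tfrac3{12}(5-4)^2+\tfrac1{12}(6-4)^2=\tfrac{14}{12}=\tfrac76=\tfrac23+\tfrac12, $ exactly as claimed.)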