I am having trouble with the notion of a random variable. This is how I understand it for now:
When we talk about random variables, we usually work only with their CDFs or PDFs. We can do this because there is a theorem stating that for any function $F$ satisfying the properties of a CDF (nondecreasing, right-continuous, with limits $0$ at $-\infty$ and $1$ at $+\infty$), we can construct a probability space and define a random variable on it whose CDF is exactly $F$. Is that correct?
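If I understand correctly, this construction can even be made explicit via the quantile function (please correct me if this sketch is wrong): take $\Omega = (0,1)$, let $\mathcal{F}$ be the Borel sets, let $P$ be Lebesgue measure, and define

$$X(\omega) = F^{-1}(\omega) := \inf\{x \in \mathbb{R} : F(x) \ge \omega\}.$$

Then $X(\omega) \le x$ exactly when $\omega \le F(x)$, so $P(X \le x) = F(x)$, i.e. $X$ has CDF $F$.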
But what if we have some specific probability space $(\Omega, \mathcal{F}, P)$, maybe built from an experiment or something similar? It is not always possible to define a random variable with a *given* distribution on it, right? (For example, on a finite $\Omega$ no random variable can be normally distributed.)
So, the thing I don't understand is: what is more realistic, or what happens more often in practice? (a) We model an observed experiment as a probability space and then try to define a random variable on it; or (b) we observe some outcomes that seem to follow a pattern, declare them to be values of, say, a normally distributed random variable, and never care about its underlying probability space. Or is it something else entirely?
I know that I am being very unclear, but there is something in these notions that I can't capture and that is hard for me even to formulate.
Another question I have just thought of: is it possible to sum, for example, a normally distributed r.v. and a uniformly distributed r.v.? I am sure the answer is yes, but to do so we have to define both variables on a common probability space. How is that done?
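My current guess (a sketch of my understanding, not something I'm sure is the standard construction) is that one takes the product space $\Omega = (0,1)^2$ with the uniform measure, defines $X(\omega_1,\omega_2) = \Phi^{-1}(\omega_1)$ (standard normal via its quantile function) and $Y(\omega_1,\omega_2) = \omega_2$ (uniform on $(0,1)$), and then $Z = X + Y$ is just their pointwise sum on that common space. In Python (names like `sample_Z` are mine, just for illustration):

```python
import random
import statistics

# Common probability space: Omega = (0,1) x (0,1) with the uniform measure.
# X depends only on the first coordinate, Y only on the second,
# which (if I reason correctly) is exactly what makes X and Y independent.
normal_quantile = statistics.NormalDist().inv_cdf  # Phi^{-1}, stdlib since 3.8

def sample_Z(rng):
    w1, w2 = rng.random(), rng.random()   # one outcome omega in Omega
    x = normal_quantile(w1)               # X(omega): standard normal
    y = w2                                # Y(omega): Uniform(0, 1)
    return x + y                          # Z(omega) = X(omega) + Y(omega)

rng = random.Random(0)
samples = [sample_Z(rng) for _ in range(100_000)]

# Sanity check: E[Z] = E[X] + E[Y] = 0 + 1/2
print(sum(samples) / len(samples))
```

The empirical mean comes out close to $1/2$, and the variance close to $1 + 1/12$, which at least matches what I'd expect from independence.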
Thanks!