My professor was trying to explain something to me about confidence intervals, and I haven't been able to understand it.
There is a statement I think is true that she says is false. I can't understand why it is false.
The situation is that (107.8, 116.2) is a 95% confidence interval for a population mean μ.
The statement is:
There is a 95% probability that the interval from 107.8 to 116.2 contains μ
My statistics professor says that the statement is false because the probability is either 0 or 1.
However, I am fairly sure that the statement is true considering that the definition of probability is:
the extent to which an event is likely to occur, measured by the ratio of the favorable cases to the whole number of cases possible.
My statistics professor has tried to explain to me her point but I have not understood it yet.
We did both agree that the following statement is true:
This interval was constructed using a method that produces intervals that capture the true mean in 95% of all possible samples
I am fairly sure this statement says exactly the same thing as the first statement does. What am I missing?
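To make the question concrete, here is a quick simulation I put together myself (the true mean, sigma, and sample size are made-up values, not from the course). It repeatedly draws samples from a population with a known mean, builds a 95% interval from each, and counts how often the interval captures the mean. The 95% describes this long-run capture rate of the procedure; my professor's point, as I understand it, is that any one fixed interval either contains μ or it doesn't.

```python
import numpy as np

# Hypothetical population parameters chosen for illustration only.
rng = np.random.default_rng(0)
mu, sigma, n, trials = 112.0, 15.0, 50, 100_000

# Draw many independent samples and build a known-sigma 95% z-interval
# (mean ± 1.96 * sigma / sqrt(n)) from each one.
samples = rng.normal(mu, sigma, size=(trials, n))
means = samples.mean(axis=1)
half_width = 1.96 * sigma / np.sqrt(n)

# Fraction of intervals that happen to contain the true mean:
captured = (means - half_width <= mu) & (mu <= means + half_width)
print(captured.mean())  # close to 0.95
```

Each individual interval in the simulation either contains mu (captured is True) or it does not; only the proportion across many repetitions is near 0.95.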