I think I just need to be pointed to the right formula or algorithm... Imagine you've got a real die which is not a fair one, and you want to know the confidence interval for each face's probability. So you roll the die a number of times and get the following absolute frequencies as the result:
#eyes   #occurrences
--------------------
  1          10
  2          11
  3          24
  4          13
  5          14
  6          11
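For concreteness (and so answers can check against the same numbers), here is the data as I would set it up in Python; it just prints the observed relative frequencies next to the fair-die value of 1/6:

```python
counts = {1: 10, 2: 11, 3: 24, 4: 13, 5: 14, 6: 11}
n = sum(counts.values())  # 83 rolls in total

for eyes, k in counts.items():
    # observed relative frequency vs. the fair-die expectation 1/6 ~= 0.167
    print(f"{eyes} eyes: {k}/{n} = {k / n:.3f}")
```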
What I actually want to know is whether, e.g., those 24 occurrences of 3 eyes are just random fluctuation or whether that face really is more probable. And if so, how much more probable is it (with what certainty)? So I would like to calculate a 99% confidence interval for each probability.
How do I calculate this? I probably learned it in a statistics course at university but have since forgotten it... so you don't need to go into much detail. I just need the right formula or algorithm to look up...
Thanks for your help.
--- edit --- Just to make clear why I don't simply look up "Confidence interval" on Wikipedia: I would know how to calculate everything if there were only two possible outcomes (like a coin: 0 and 1). Then I could apply the formula directly, but I haven't used this kind of statistics for some years now and don't see how to reduce my problem to that case. My idea is to treat the result in question (e.g. 3 eyes) as "p" and all other results together as "not p"; does that work?
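If that reduction is valid, a minimal sketch of what I'd compute is below (Python; the normal-approximation/Wald formula and the 2.5758 quantile are my half-remembered assumptions, not something I'm sure is the right method):

```python
import math

counts = {1: 10, 2: 11, 3: 24, 4: 13, 5: 14, 6: 11}
n = sum(counts.values())  # 83 rolls in total

# assumed: 0.995 quantile of the standard normal, for a two-sided 99% interval
z = 2.5758

for eyes, k in counts.items():
    p_hat = k / n                            # "this face" vs "any other face" as a binomial
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # binomial (Wald) standard error
    print(f"{eyes} eyes: p ~= {p_hat:.3f}, "
          f"99% CI [{p_hat - z * se:.3f}, {p_hat + z * se:.3f}]")
```

For the 3-eyes case this gives roughly [0.16, 0.42], which still contains 1/6, so I'm not even sure the 24 is significant. Is this normal-approximation interval the right tool here, or is there something better for small counts?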