
Say I roll a die 600 times.

Theoretically, you should expect 100 sixes.

But say I only got 80. Would this be enough to suspect bias?

I'm looking for a generally accepted percentage off, or a formula for deciding when you should suspect the die is biased, but I'll happily accept anything else.

  • You have to be careful: if you are observing a very long series of tosses, and see an anomalously low number of $6$'s in the last $100$, you can't accuse the thrower of having switched to loaded dice. It is statistically wrong, and also may get you beat up. Drug companies play this game. They fund a large number of studies, and only announce the results of the good ones, meaning good for **them**. – 2012-02-23

1 Answer


A simple chi-square test is often used for this.

The sum $ \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}} $ means this: the "expected" number of times you see a "$1$" is $1/6$ of the number of times you throw the die; the "observed" number is how many times you actually get a $1$. See this article. There would be six terms in this sum.

If the die is unbiased, then this sum has approximately a chi-square distribution with $6-1=5$ degrees of freedom when the number of trials is large.

If this sum is so large that a chi-square random variable with $5$ degrees of freedom would rarely be that large, then you reject the null hypothesis that the die is unbiased. How rare is "rare" is essentially a subjective economic decision: it's how frequently you're willing to get "false positives", i.e. how frequently you'd reject the null hypothesis when the die is actually unbiased.

There's a dumb stereotypical value of $5\%$ that gets used in medical journals. I.e. one false positive out of $20$ is OK; anything more is not. Using $1\%$ might be more sensible.

  • @MichaelHardy Thanks for all your help :) – 2012-02-23