A simple chi-square test is often used for this.
The sum $ \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}} $ works like this: the "expected" number of times you see a "$1$" is $1/6$ of the number of times you throw the die, and the "observed" number is how many times you actually get a $1$. There is one such term for each face, so there are six terms in this sum.
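For concreteness, here is a minimal Python sketch that computes this sum; the counts are made up purely for illustration:

```python
# Hypothetical face counts from 600 throws of a die (made-up data).
observed = [90, 110, 105, 95, 102, 98]
n = sum(observed)
expected = n / 6  # under the fair-die hypothesis, each face appears n/6 times

# The chi-square statistic: one term per face.
chi_sq = sum((o - expected) ** 2 / expected for o in observed)
print(chi_sq)  # 2.58 for these counts
```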
If the die is unbiased, then this sum has approximately a chi-square distribution with $6-1=5$ degrees of freedom when the number of trials is large.
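You can check this approximation by simulation. The sketch below (assuming numpy and scipy are available, with an arbitrary seed) repeats the $600$-throw experiment many times with a fair die and compares the empirical $95$th percentile of the statistic with the chi-square one:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_rolls, n_trials = 600, 100_000

# Each row is one experiment: the six face counts from n_rolls fair throws.
counts = rng.multinomial(n_rolls, [1 / 6] * 6, size=n_trials)
expected = n_rolls / 6
stats = ((counts - expected) ** 2 / expected).sum(axis=1)

# The empirical 95th percentile should sit close to the chi-square(5) one.
print(np.quantile(stats, 0.95))  # ~11.07
print(chi2.ppf(0.95, df=5))      # 11.0705...
```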
If the observed value of the sum is so large that a chi-square random variable with $5$ degrees of freedom would rarely be that large, then you reject the null hypothesis that the die is unbiased. How rare counts as "rare" is essentially a subjective economic decision: it is the rate of "false positives", i.e. how often you would reject the null hypothesis when the die is actually unbiased.
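In practice, scipy bundles the whole computation. A sketch using the same hypothetical counts as above (with no expected frequencies given, `scipy.stats.chisquare` assumes all faces are equally likely, which is exactly the fair-die null):

```python
from scipy.stats import chisquare

observed = [90, 110, 105, 95, 102, 98]  # same hypothetical counts as above
result = chisquare(observed)
print(result.statistic, result.pvalue)  # 2.58, p ~ 0.76

alpha = 0.05  # the chosen false-positive rate
if result.pvalue < alpha:
    print("reject the null: the die looks biased")
else:
    print("no evidence of bias at this level")
```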
There's a dumb stereotypical value of $5\%$ that gets used in medical journals, i.e. one false positive out of $20$ is OK; anything more is not. Using $1\%$ might be more sensible.
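For reference, these are the cutoffs the statistic would have to exceed at those two levels, computed with scipy:

```python
from scipy.stats import chi2

# Critical values for 5 degrees of freedom: reject only if the
# statistic exceeds the cutoff for your chosen false-positive rate.
print(chi2.ppf(0.95, df=5))  # 11.07  (the stereotypical 5% level)
print(chi2.ppf(0.99, df=5))  # 15.09  (the stricter 1% level)
```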