Here is an account of what I understand to be the Bayesian approach, which I learned from Jaynes. Having read Jaynes, I am currently deeply suspicious of non-Bayesian approaches.
We'll restrict our attention to the slightly simpler case of a biased coin. Suppose the coin has some unknown probability $p$ of turning up heads, hence $1 - p$ of turning up tails. The first question is what your priors are regarding the distribution of possible values of $p$. For example, if you are already 100% confident that $p = \frac{1}{2}$, then no amount of evidence can shift this opinion. (This is why it is dangerous for Bayesians to be 100% confident of anything; see also 0 And 1 Are Not Probabilities.)
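To make this concrete, here is a minimal numerical sketch of that point. The discretized grid of candidate values for $p$ and the particular data (90 heads in 100 flips) are just illustrative choices:

```python
import numpy as np

# Toy discretization: a few candidate values of p, and a prior that puts
# 100% of its mass on p = 0.5.
p_grid = np.array([0.3, 0.5, 0.7])
prior = np.array([0.0, 1.0, 0.0])

# Suppose we observe 90 heads in 100 flips -- strong evidence against p = 0.5.
k, n = 90, 100
likelihood = p_grid**k * (1 - p_grid)**(n - k)

# Bayes' theorem: posterior is proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

print(posterior)  # [0. 1. 0.] -- the point-mass prior never moves
```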
For simplicity, we'll use a uniform prior: that is, we'll assume initially that $p$ is equally likely to be any real number in $[0, 1]$. Now suppose we flip the coin $n$ times and observe $k$ heads and $n - k$ tails. Then by Bayes' theorem, the posterior distribution of $p$ has probability density function
$\frac{ {n \choose k} x^k (1 - x)^{n-k} }{ \int_0^1 {n \choose k} x^k (1 - x)^{n-k} \, dx }.$
This is a Beta distribution, specifically $\text{Beta}(k+1, n-k+1)$ (the binomial coefficients cancel, and the denominator is the Beta function $B(k+1, n-k+1)$). From here, to obtain a maximum a posteriori estimate, which coincides with the maximum likelihood estimate because the prior is uniform, we need to find $x$ maximizing $x^k (1 - x)^{n-k}$. Taking logarithms and then derivatives, we find unsurprisingly that the maximum occurs at $x = \frac{k}{n}$. Note, however, that reporting only this point estimate throws away most of the information contained in the Beta distribution, e.g. its variance. By doing more computations with the Beta distribution you can write down, for example, the probability that $p$ is within a standard deviation of $\frac{k}{n}$.
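For concreteness, here is a short sketch of these computations using `scipy.stats.beta` (assuming SciPy is available; the particular data, 7 heads in 10 flips, are an arbitrary illustration):

```python
from scipy.stats import beta

# Posterior after k heads in n flips with a uniform prior: Beta(k + 1, n - k + 1).
k, n = 7, 10
posterior = beta(k + 1, n - k + 1)

map_estimate = k / n          # mode of the posterior, matching the calculus above
mean, sd = posterior.mean(), posterior.std()

# Posterior probability that p lies within one standard deviation of k/n.
prob_within_sd = posterior.cdf(map_estimate + sd) - posterior.cdf(map_estimate - sd)

print(f"MAP = {map_estimate}, mean = {mean:.3f}, sd = {sd:.3f}")
print(f"P(|p - k/n| <= sd) = {prob_within_sd:.3f}")
```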
In practice, the uniform prior is also unlikely to be an accurate description of your actual prior knowledge.
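As an illustration (and not something discussed above), one could instead use a Beta prior concentrated near $\frac{1}{2}$, encoding a strong belief that the coin is close to fair; since Beta priors are conjugate to the binomial likelihood, the posterior is again a Beta distribution and the update is easy to carry out:

```python
from scipy.stats import beta

# An illustrative alternative to the uniform prior: Beta(50, 50), concentrated
# near p = 1/2. Conjugacy means the posterior is Beta(a0 + k, b0 + n - k).
a0, b0 = 50, 50
k, n = 7, 10

prior = beta(a0, b0)
posterior = beta(a0 + k, b0 + n - k)

print(f"prior mean = {prior.mean():.3f}, posterior mean = {posterior.mean():.3f}")
```

With this prior, 7 heads in 10 flips barely moves the posterior mean away from $\frac{1}{2}$ (it ends up at $\frac{57}{110} \approx 0.52$), whereas starting from the uniform prior the same data would put the posterior mean at $\frac{8}{12} \approx 0.67$.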