Suppose we are given a prior distribution $\pi(\theta)$ for an unknown parameter $\theta$, together with the likelihood $f(x_{1}, \dots, x_n \mid \theta)$. We want to find the posterior $\pi(\theta \mid x_1, \dots, x_n)$. By Bayes' theorem, $\pi(\theta \mid x_1, \dots, x_n) = \frac{f(x_{1}, \dots, x_n \mid \theta) \, \pi(\theta)}{\int_{\Theta} f(x_{1}, \dots, x_n \mid \theta) \, \pi(\theta) \, d\theta}$.
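As a concrete (and deliberately easy) illustration of my own choosing, consider a conjugate case where the denominator does have a closed form: a $\mathrm{Beta}(a,b)$ prior with Bernoulli observations,

$$
\pi(\theta)=\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}, \qquad
f(x_1,\dots,x_n\mid\theta)=\theta^{s}(1-\theta)^{n-s}, \quad s=\sum_{i=1}^{n}x_i,
$$
$$
\int_{0}^{1} f(x_1,\dots,x_n\mid\theta)\,\pi(\theta)\,d\theta=\frac{B(a+s,\;b+n-s)}{B(a,b)}
\;\;\Longrightarrow\;\;
\theta\mid x_1,\dots,x_n \sim \mathrm{Beta}(a+s,\;b+n-s).
$$

Outside such conjugate pairs the integral rarely has a closed form, which is what motivates sampling methods in the first place.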
The denominator is, in general, hard to compute. But why do many people use Gibbs sampling to sample from the posterior without computing the denominator, as opposed to the Metropolis or Metropolis-Hastings algorithm, where the acceptance probability $\alpha \neq 1$?
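For reference, here is a minimal Metropolis sketch in Python (with a Normal prior and Normal likelihood chosen purely for illustration; they are not part of the question). It shows that the acceptance probability $\alpha$ only involves a ratio of unnormalized posteriors, so the denominator cancels and never has to be evaluated:

```python
import numpy as np

# Illustrative setup (my assumption, not from the question):
# a 1-D parameter theta with a Normal(0, 1) prior and Normal(theta, 1) likelihood.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=50)

def log_unnormalized_posterior(theta):
    # log f(x_1,...,x_n | theta) + log pi(theta); the normalizing integral never appears.
    log_lik = -0.5 * np.sum((data - theta) ** 2)
    log_prior = -0.5 * theta ** 2
    return log_lik + log_prior

def metropolis(n_iter=5000, step=0.3):
    theta = 0.0
    samples = []
    for _ in range(n_iter):
        proposal = theta + step * rng.normal()  # symmetric random-walk proposal
        # alpha uses only the *ratio* of unnormalized posteriors,
        # so the denominator integral cancels out.
        log_alpha = log_unnormalized_posterior(proposal) - log_unnormalized_posterior(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta = proposal
        samples.append(theta)
    return np.array(samples)

posterior_draws = metropolis()
print(posterior_draws.mean())  # approximates the posterior mean of theta
```

Gibbs sampling likewise never touches the denominator; it instead draws each coordinate from its full conditional distribution, which amounts to an acceptance probability of $\alpha = 1$ at every step.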