"How many observations are needed" is something that will vary from trial to trial. The best we can do is to determine a probability distribution.
If the averages are based on distinct samples (the "one at a time", "two at a time", "three at a time" batches don't overlap), the averages $\overline{X}_n$ are independent. The probability that you need more than $m$ observations (i.e. the "one at a time", "two at a time", ..., "$m$ at a time" averages all miss $\mu$ by at least $\epsilon$) is then the product
$$\prod_{n=1}^m \mathbb P\left(\left|\overline{X}_n - \mu\right| \ge \epsilon\right)$$
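As a sanity check, here is a quick Monte Carlo sketch of this product formula for normally distributed observations. The parameter values (`mu`, `sigma`, `eps`, `m`) are illustrative choices, not anything given in the question:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps, m, trials = 0.0, 1.0, 0.5, 5, 100_000

# For each trial, draw m independent batches of sizes 1..m (disjoint samples)
# and record whether every batch mean misses mu by at least eps,
# i.e. whether more than m observations would be needed.
need_more = np.ones(trials, dtype=bool)
for n in range(1, m + 1):
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    need_more &= np.abs(means - mu) >= eps

est = need_more.mean()

# Product formula, using 2*Phi(-x) = erfc(x / sqrt(2))
theory = math.prod(
    math.erfc(eps * math.sqrt(n) / (sigma * math.sqrt(2)))
    for n in range(1, m + 1)
)
print(f"simulated: {est:.4f}   product formula: {theory:.4f}")
```

With these parameters both numbers come out near $0.01$, and they agree to within Monte Carlo error.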
If the observations are normally distributed with mean $\mu$ and variance $\sigma^2$, then $\overline{X}_n - \mu$ is normal with mean $0$ and variance $\sigma^2/n$. Thus $$\mathbb P\left(\left|\overline{X}_n - \mu\right| \ge \epsilon\right) = 2 \Phi(-\epsilon \sqrt{n}/\sigma) \sim \frac{\sqrt{2} \sigma}{\sqrt{n\pi} \epsilon} e^{-\epsilon^2 n/(2 \sigma^2)}\ \text{as}\ n \to \infty$$
where $\Phi$ is the standard normal CDF.
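A short numerical sketch of how fast the asymptotic form converges to the exact tail probability (again with illustrative values of `eps` and `sigma`), using $2\Phi(-x) = \operatorname{erfc}(x/\sqrt 2)$:

```python
import math

sigma, eps = 1.0, 0.5

def exact(n):
    # 2 * Phi(-eps*sqrt(n)/sigma), written via the complementary error function
    return math.erfc(eps * math.sqrt(n) / (sigma * math.sqrt(2)))

def asymptotic(n):
    # sqrt(2)*sigma / (sqrt(n*pi)*eps) * exp(-eps^2 n / (2 sigma^2))
    return (math.sqrt(2) * sigma / (math.sqrt(n * math.pi) * eps)
            * math.exp(-eps**2 * n / (2 * sigma**2)))

for n in (10, 100, 1000):
    print(n, exact(n), asymptotic(n), asymptotic(n) / exact(n))
```

The ratio drops toward $1$ as $n$ grows (roughly like $1 + \sigma^2/(\epsilon^2 n)$, the next term of the Mills-ratio expansion).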
For other distributions the individual factors $\mathbb P\left(\left|\overline{X}_n - \mu\right| \ge \epsilon\right)$ are more complicated, but the product formula above still applies.
Note: it is tempting to try the Central Limit Theorem here, but that temptation should be avoided: the threshold $\epsilon$ corresponds to $\epsilon\sqrt{n}/\sigma$ standard deviations of $\overline{X}_n$, which grows with $n$, so we are in the tail regime where the CLT approximation is not reliable. Large Deviations theory is the appropriate tool instead.
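To sketch what Large Deviations theory gives here (for i.i.d. observations with a finite moment generating function), Cramér's theorem says
$$\mathbb P\left(\overline{X}_n - \mu \ge \epsilon\right) = e^{-n I(\mu+\epsilon) + o(n)},$$
where $I$ is the Legendre transform of the log-moment-generating function. For normal observations $I(\mu+\epsilon) = \epsilon^2/(2\sigma^2)$, which recovers exactly the exponential rate in the asymptotic formula above.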