You have an upper bound on the variance. Each person polled is either watching the show or not, so each observation is a $0$ or a $1$. Suppose it is $1$ with probability $p$ and $0$ with probability $1-p$, so you're seeking a confidence interval for $100\%\times p.$ The standard deviation for a single observation is then $\sqrt{p(1-p)}.$ For a sample of size $n$, the standard deviation of the sum of all those $0$s and $1$s is $\sqrt{np(1-p)}.$ Dividing that sum by $n$ gives the sample proportion, whose standard deviation is
$$
\frac{\sqrt{np(1-p)}} n = \frac{\sqrt{p(1-p)}} {\sqrt n}. \tag 1
$$
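Formula $(1)$ can be checked by simulation. The sketch below (with illustrative values $p=0.3$, $n=400$, chosen here for the example, not taken from the question) repeats the poll many times and compares the empirical standard deviation of the sample proportion with $\sqrt{p(1-p)}/\sqrt n$:

```python
import math
import random

random.seed(0)
p, n, reps = 0.3, 400, 2000  # illustrative values

# Simulate many polls and record the sample proportion from each
means = []
for _ in range(reps):
    hits = sum(1 for _ in range(n) if random.random() < p)
    means.append(hits / n)

# Empirical standard deviation of the sample proportion
mu = sum(means) / reps
empirical_sd = math.sqrt(sum((m - mu) ** 2 for m in means) / (reps - 1))

# Theoretical value from formula (1)
theoretical_sd = math.sqrt(p * (1 - p)) / math.sqrt(n)

print(empirical_sd, theoretical_sd)
```

The two printed values should agree to within a few percent.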
Now observe that
$$
\sqrt{p(1-p)} = \sqrt{\frac 1 4 - \left(p-\frac 1 2 \right)^2} \le \sqrt{\frac 1 4} = \frac 1 2.
$$
Hence the expression in $(1)$ above is $\le\dfrac 1 {2\sqrt n}.$
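This worst-case bound is what makes a sample-size calculation possible without knowing $p$. As a sketch (the critical value $z=1.96$ for a $95\%$ interval and the $\pm 3$ percentage-point margin are example inputs, not part of the question), the smallest $n$ with $z/(2\sqrt n)$ at most the desired margin is:

```python
import math

def min_sample_size(margin, z=1.96):
    """Smallest n with z / (2*sqrt(n)) <= margin, using the
    worst-case bound sqrt(p(1-p)) <= 1/2."""
    return math.ceil((z / (2 * margin)) ** 2)

# A 95% interval no wider than +/- 3 percentage points
print(min_sample_size(0.03))  # -> 1068
```

Because the bound is attained at $p=1/2$, this $n$ is sufficient for every value of $p$.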
Rather than "the minimal number of samples," the correct terminology is "the minimum size of the sample."