Both limits are $\frac12$ (but not with the same meaning).
As in your other question, introducing a random variable $X_K$ with binomial $(K,\theta)$ distribution, one first rewrites $P(K;\theta)$ as $ P(K;\theta)=\mathrm P(X_K\geqslant\tfrac12K)-\frac1{1+t_K(\theta)},\qquad t_K(\theta)=\left(\frac{1-\theta}\theta\right)^K. $ For every $\theta\leqslant\frac12$, $\mathrm P(X_K\geqslant\frac12K)\leqslant\frac12$, hence $P(K;\theta)\lt\frac12$.
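If you want to see the bound numerically, here is a small sketch that evaluates the rewritten form above directly (the function names are mine, not from your question):

```python
from math import comb

def P(K, theta):
    """P(K; theta) computed from the rewritten form:
    P(X_K >= K/2) - 1/(1 + t_K(theta)), with X_K ~ Binomial(K, theta)."""
    m = -(-K // 2)  # ceil(K/2): smallest integer j with j >= K/2
    tail = sum(comb(K, j) * theta**j * (1 - theta)**(K - j)
               for j in range(m, K + 1))
    t = ((1 - theta) / theta) ** K
    return tail - 1 / (1 + t)

# The strict bound P(K; theta) < 1/2 holds for every theta <= 1/2:
for K in (5, 20, 50):
    for theta in (0.1, 0.3, 0.5):
        assert P(K, theta) < 0.5
```

At $\theta=\frac12$ one has $t_K=1$, so the correction term is exactly $\frac12$ and $P(K;\frac12)=\mathrm P(X_K\geqslant\frac12K)-\frac12$, which is small but positive for even $K$.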
On the other hand, assume that $\theta=\theta_K(x)$ where $\theta_K(x)=\frac12\left(1-\frac{x}{\sqrt{K}}\right)$, for some fixed positive $x$. Then $\frac12K=\mathrm E(X_K)+x_K\cdot\sigma(X_K)$ with $x_K=x/\sqrt{4\theta_K(x)(1-\theta_K(x))}\sim x$. The central limit theorem implies that $\mathrm P(X_K\geqslant\frac12K)\to\gamma(x)$ when $K\to\infty$, where $\gamma(x)$ is the probability that a standard normal random variable is $\geqslant x$. Since $t_K(\theta_K(x))\to\infty$ when $K\to\infty$, the correction term $\frac1{1+t_K(\theta_K(x))}$ vanishes, and one gets $ \liminf\limits_{K\to\infty}\left(\max\limits_{\theta\leqslant1/2}P(K;\theta)\right)\geqslant\lim\limits_{K\to\infty}P(K;\theta_K(x))=\gamma(x). $ Since $\gamma(x)\to\frac12$ when $x\to0^+$, this proves the claim.
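One can watch this convergence happen numerically. The sketch below (again, the helper names are mine) evaluates $P(K;\theta_K(x))$ along the curve $\theta_K(x)=\frac12(1-x/\sqrt K)$ and compares it with $\gamma(x)$; the binomial tail is summed in log space so it stays accurate for $K$ in the thousands:

```python
from math import lgamma, log, exp, erfc, sqrt

def binom_tail(K, theta, m):
    """P(X_K >= m) for X_K ~ Binomial(K, theta), summed in log space
    to avoid underflow of theta**j for large K."""
    return sum(exp(lgamma(K + 1) - lgamma(j + 1) - lgamma(K - j + 1)
                   + j * log(theta) + (K - j) * log(1 - theta))
               for j in range(m, K + 1))

def P(K, theta):
    """The rewritten form of P(K; theta) from above."""
    return binom_tail(K, theta, -(-K // 2)) - 1 / (1 + ((1 - theta) / theta) ** K)

def gamma_tail(x):
    """gamma(x) = P(Z >= x) for a standard normal Z."""
    return erfc(x / sqrt(2)) / 2

# Along theta = theta_K(x), P(K; theta) approaches gamma(x):
x = 0.5
for K in (100, 1000, 10000):
    print(K, P(K, 0.5 * (1 - x / sqrt(K))))
print("gamma(0.5) =", gamma_tail(0.5))
```

For $x=\frac12$ the printed values drift toward $\gamma(\frac12)\approx0.3085$, while the term $\frac1{1+t_K}$ is already astronomically small at $K=10000$.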
Edit: About the fact that $\lim\limits_{K\to\infty}\left(\max\limits_{\theta\leqslant1/2}P(K;\theta)\right)\ne0$ although $\lim\limits_{K\to\infty}P(K;\theta)=0$ for every $\theta\leqslant1/2$, which seems to bother the OP, here is a simpler, analogous situation: for every $0\leqslant\theta\leqslant1$, consider $ Q(K;\theta)=K\cdot\theta^K\cdot(1-\theta). $ Then $\lim\limits_{K\to\infty}\left(\max\limits_{0\leqslant\theta\leqslant1}Q(K;\theta)\right)=1/\mathrm e$ although $\lim\limits_{K\to\infty}Q(K;\theta)=0$ for every $\theta$ in $[0,1]$ (and in this case, all the computations can be made explicit). The graph of the functions $Q$ is quite similar to the one you drew for $P(K;\theta)$, where one distinctly sees the argument of the maximum of each function $P(K;\ )$ accumulate at $\theta=1$ when $K\to\infty$.
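To spell out the explicit computations for $Q$: elementary calculus gives $\partial_\theta Q(K;\theta)=K\theta^{K-1}\bigl(K-(K+1)\theta\bigr)$, so the maximum is at $\theta^*=K/(K+1)\to1$, with value $Q(K;\theta^*)=\bigl(K/(K+1)\bigr)^{K+1}\to1/\mathrm e$. A short numerical check:

```python
from math import e

def Q(K, theta):
    return K * theta**K * (1 - theta)

# The derivative K * theta**(K-1) * (K - (K+1)*theta) vanishes at
# theta* = K/(K+1), and Q(K; theta*) = (K/(K+1))**(K+1) -> 1/e.
for K in (10, 100, 1000):
    theta_star = K / (K + 1)
    print(K, theta_star, Q(K, theta_star))
print("1/e =", 1 / e)
```

The printed maxima increase toward $1/\mathrm e\approx0.3679$ while the maximizers $\theta^*$ drift to $1$, exactly the "moving bump" behaviour that reconciles the pointwise limit $0$ with the nonzero limit of the maxima.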