The goal is to maximize the following function: \begin{align} K_p(q) = q\log \frac{q}{p} + (1-q)\log \frac{1-q}{1-p} \end{align} where \begin{align} 0 \leq q \leq 1 \end{align} and $p \in (0,0.5)$ is some constant. In other words, I'm maximizing the KL divergence of a Bernoulli variable with success parameter $q$ from another Bernoulli with success parameter $p$. The way I've set it up, I can see without formal calculations that it's maximized at $q=1$.
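As a quick numerical sanity check of that claim, here is a minimal Python sketch (the value $p = 0.3$ is chosen arbitrarily from $(0, 0.5)$):

```python
import math

def kl_bernoulli(q, p):
    """K_p(q): KL divergence of Bernoulli(q) from Bernoulli(p)."""
    # Use the convention 0*log(0) = 0 at the endpoints q = 0 and q = 1.
    total = 0.0
    if q > 0:
        total += q * math.log(q / p)
    if q < 1:
        total += (1 - q) * math.log((1 - q) / (1 - p))
    return total

p = 0.3  # arbitrary constant in (0, 0.5)
qs = [i / 1000 for i in range(1001)]          # grid over [0, 1]
best_q = max(qs, key=lambda q: kl_bernoulli(q, p))
print(best_q)  # → 1.0
```

The grid maximum sits at $q=1$, with value $K_p(1) = \log(1/p)$, consistent with the informal argument above.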
However, I'd like to use the KKT conditions to solve this problem formally and I'm running into problems doing that.
First, I set up the Lagrangian for the equivalent minimization problem, together with the KKT conditions: \begin{align} \mathcal{L}(q,\mu,\lambda) & = -K_p(q) - \mu(1-q) - \lambda q \newline \textrm{where} \quad & \mu, \lambda \geq 0 \newline & 0 \leq q \leq 1 \newline & \mu(1-q) = 0 \newline & \lambda q = 0 \end{align} I then set the derivative with respect to $q$ to zero, plug the result back in, and get the following dual form: \begin{align} D(\mu,\lambda) & = -\mu + \log(1-p+pe^{\mu-\lambda}) \newline & = \log(e^{-\mu}-pe^{-\mu}+pe^{-\lambda}) \end{align} which I'm supposed to maximize.
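For completeness, the stationarity step that produces this dual is: \begin{align} \frac{\partial \mathcal{L}}{\partial q} & = -\log\frac{q}{p} + \log\frac{1-q}{1-p} + \mu - \lambda = 0 \newline \Longrightarrow \quad q & = \frac{pe^{\mu-\lambda}}{1-p+pe^{\mu-\lambda}}, \qquad 1-q = \frac{1-p}{1-p+pe^{\mu-\lambda}} \end{align} and substituting this $q$ back into $\mathcal{L}$ gives $D(\mu,\lambda) = -\mu + \log(1-p+pe^{\mu-\lambda})$.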
But again, I run into the problem that the maximum of the dual is attained at $\mu=\lambda=0$ (and again I found these values by inspection, because setting the derivatives to zero was infeasible). By complementary slackness, that would mean $q$ is neither zero nor one. But that can't be right, because the maximum of $K_p$ is attained at $q=1$.
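A small Python sketch confirming this observation about the dual numerically (same arbitrary $p = 0.3$ as before; the grid bound of $5$ is an arbitrary cutoff):

```python
import math

p = 0.3  # arbitrary constant in (0, 0.5)

def dual(mu, lam, p):
    """D(mu, lam) = log((1-p) e^{-mu} + p e^{-lam})."""
    return math.log((1 - p) * math.exp(-mu) + p * math.exp(-lam))

# Grid search over mu, lam >= 0: the maximum sits at (0, 0), value 0.
grid = [i / 10 for i in range(51)]
best = max((dual(m, l, p), m, l) for m in grid for l in grid)
print(best)  # maximum value is (essentially) 0, attained at mu = lam = 0
```

So the dual's maximum value is $D(0,0)=\log(1-p+p)=0$, while the primal maximum is $K_p(1)=\log(1/p)>0$.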
So my obvious question is: what am I doing wrong, and how is this done correctly?