Well, it wasn't too tricky in the end. Let's write $S(p,k)$ for the probability of successfully transmitting a bit when a repetition code of length $2k+1$ is used. As concluded by zyx (and myself), the question is interesting only when $0<p<\frac12$.
The claim is that for all $p$ in this range and all positive integers $k$ we have the inequality
$$
S(p,k+1)>S(p,k).
$$
To prove this, assume that a repetition code of length $2k+3$ is used once. W.l.o.g. we can assume that a $0$ was transmitted, so in the received bitstring of length $2k+3$ each bit equals $0$ with probability $1-p$ and equals $1$ with the (cross-over) probability $p$.
The transmission is successful iff the majority of the bits are zeros. Let us look at the substring $s$ consisting of the first $2k+1$ bits. A majority of those are $0$s with probability
$S(p,k)$.
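As a sanity check, $S(p,k)$ can be computed directly as the probability that at least $k+1$ of the $2k+1$ bits arrive correctly (a hypothetical helper, not part of the argument itself):

```python
from math import comb

def S(p: float, k: int) -> float:
    """Probability that a majority of the 2k+1 bits is received
    correctly, each bit flipping independently with probability p."""
    n = 2 * k + 1
    # at least k+1 of the n bits must be correct (probability 1-p each)
    return sum(comb(n, j) * (1 - p) ** j * p ** (n - j)
               for j in range(k + 1, n + 1))

print(S(0.1, 1))  # ≈ 0.972
print(S(0.1, 2))  # ≈ 0.99144, larger, as the claim predicts
```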
In order to analyze the effect of the two additional bits we need to define two events:
Call $s$ a nearly successful string (NS) if it has exactly $k$ $0$s and consequently exactly $k+1$ $1$s; in other words, the correct message lost by a single vote. Similarly, call $s$ a nearly failed string (NF) if the correct message won by a single vote, i.e. there were exactly $k+1$ $0$s and $k$ $1$s in $s$.
When does the receiver, seeing all $2k+3$ bits, make the correct decision about the transmitted bit? The decision will be the same as it would be with just the first $2k+1$ bits, i.e. with $s$, except in two cases: the last two bits are both $1$ and $s$ is nearly failed (the two extra votes turn a win into a loss), or the last two bits are both $0$ and $s$ is nearly successful (the two extra votes rescue the correct decision). So we get the formula
$$
S(p,k+1)=S(p,k)+(1-p)^2 P(NS)-p^2P(NF).
$$
Here the binomial distribution tells us that
$$
P(NS)={2k+1\choose k}p^{k+1}(1-p)^k,\qquad P(NF)={2k+1\choose k+1}p^k(1-p)^{k+1}.
$$
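A quick numeric check of the recurrence together with these two probabilities (the helper `S` recomputes the success probability from the binomial sum; it is an illustration, not part of the proof):

```python
from math import comb, isclose

def S(p, k):
    # probability that a majority of the 2k+1 bits is correct
    n = 2 * k + 1
    return sum(comb(n, j) * (1 - p) ** j * p ** (n - j)
               for j in range(k + 1, n + 1))

for p in (0.05, 0.2, 0.4):
    for k in (1, 2, 5):
        # P(NS): exactly k correct bits; P(NF): exactly k+1 correct bits
        ns = comb(2*k + 1, k) * p**(k + 1) * (1 - p)**k
        nf = comb(2*k + 1, k + 1) * p**k * (1 - p)**(k + 1)
        assert isclose(S(p, k + 1), S(p, k) + (1 - p)**2 * ns - p**2 * nf)
```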
Therefore (the two binomial coefficients are equal)
$$
(1-p)^2P(NS)-p^2P(NF)={2k+1\choose k}p^{k+1}(1-p)^{k+1}(1-2p)>0.
$$
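The factored form can likewise be checked numerically for a few values of $p<\frac12$ (again just a sanity check, using the equality of the two binomial coefficients):

```python
from math import comb, isclose

for p in (0.1, 0.3, 0.45):
    for k in (1, 3, 7):
        ns = comb(2*k + 1, k) * p**(k + 1) * (1 - p)**k
        nf = comb(2*k + 1, k + 1) * p**k * (1 - p)**(k + 1)
        lhs = (1 - p)**2 * ns - p**2 * nf
        rhs = comb(2*k + 1, k) * p**(k + 1) * (1 - p)**(k + 1) * (1 - 2*p)
        assert isclose(lhs, rhs) and rhs > 0  # positive whenever p < 1/2
```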
Hence $S(p,k+1)>S(p,k)$ as claimed.
– 2011-08-13