Need some help with this exercise (Exercise 4.1 in *Probability Theory* by E. T. Jaynes):
Suppose that we have sets of events $\{H_1,...,H_n\}$ and $\{D_1,...,D_m\}$ which satisfy:
(1) $P(H_i H_j)=0$ for any $i\neq j$ and $\sum_iP(H_i)=1$
(2) $P(D_1D_2...D_m|H_i)=\prod_jP(D_j|H_i)$, for all $1\leq i\leq n$
(3) $P(D_1D_2...D_m|\overline{H_i})=\prod_jP(D_j|\overline{H_i})$, for all $1\leq i\leq n$
where $\overline{X}$ means the negation of $X$.
Claim: If $n>2$, then for each $1\leq i\leq n$, at most one of the fractions
$\frac{P(D_1|H_i)}{P(D_1|\overline{H_i})},\frac{P(D_2|H_i)}{P(D_2|\overline{H_i})},...,\frac{P(D_m|H_i)}{P(D_m|\overline{H_i})}$ can differ from unity.
Are conditions (1)-(3) sufficient to establish the claim? If so, how? Is there an intuitive explanation of why it has to be true?
Edit: It may be helpful to explain the motivation a bit. Think of $\{H_1,...,H_n\}$ as a set of exhaustive and mutually exclusive candidate hypotheses (which is what (1) expresses) that we want to test by some experiment that generates data $\{D_1,...,D_m\}$.
Define $O(H_i|D_1D_2...D_m)\equiv \frac{P(H_i|D_1D_2...D_m)}{P(\overline{H_i}|D_1D_2...D_m)}$ as the odds that $H_i$ is true versus false, given data $D_1$ through $D_m$.
By this definition and Bayes' theorem, $O(H_i|D_1D_2...D_m)=O(H_i)\frac{P(D_1D_2...D_m|H_i)}{P(D_1D_2...D_m|\overline{H_i})}$, where $O(H_i)\equiv P(H_i)/P(\overline{H_i})$ is the prior odds.
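(Spelling out that step, since the rest of the argument rests on it: apply Bayes' theorem to the numerator and the denominator of the definition, and the marginal $P(D_1D_2...D_m)$ cancels:
$$O(H_i|D_1D_2...D_m)=\frac{P(D_1D_2...D_m|H_i)\,P(H_i)}{P(D_1D_2...D_m|\overline{H_i})\,P(\overline{H_i})}=O(H_i)\,\frac{P(D_1D_2...D_m|H_i)}{P(D_1D_2...D_m|\overline{H_i})}.)$$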
Now in such tests it is common that you can design the experiment so that the data $D_j$ are mutually independent given $H_i$, so (2) is true. (To be technically strict, (2) is a weaker condition: it is implied by, but does not imply, mutual conditional independence; see the example below.)
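For instance, with $m=3$, mutual independence given $H_i$ would also require the pairwise factorizations such as
$$P(D_1D_2|H_i)=P(D_1|H_i)\,P(D_2|H_i),$$
which do not follow from the single full-product identity in (2).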
If the claim is true and you have more than two hypotheses to test, the experiment will serve its purpose only if (3) is false. To see why, suppose instead that (3) holds, so the data are also independent given the negation of $H_i$. Substituting (2) and (3) into the odds formula above, we have
$O(H_i|D_1D_2...D_m)=O(H_i) \prod_j\frac{P(D_j|H_i)}{P(D_j|\overline{H_i})}$ (*)
But by the claim, at most one of the fractions in the product can differ from $1$, which means at most one datum can be useful for improving upon the prior odds $O(H_i)$ of a hypothesis.
The lesson is that, given (1) and (2), even if the $D_j$'s are physically or causally independent, (3) remains a strong ad hoc assumption: it either reduces the information in additional data to triviality (if true) or makes (*) give an incorrect result (if false).
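To make that last point concrete, here is a small numerical sketch (the numbers are my own illustration, not from Jaynes): take $n=3$ hypotheses and $m=2$ binary data, make the $D_j$ independent given each $H_i$ so that (1) and (2) hold by construction, and then check whether (3) holds for $\overline{H_1}$.

```python
import itertools

# Illustrative joint distribution over (H, D1, D2): n = 3 hypotheses, m = 2 binary data.
# The D_j are independent GIVEN each H_i by construction, so (2) holds; the H_i are
# exclusive and the priors sum to 1, so (1) holds.  The numbers are arbitrary.
prior = {1: 0.5, 2: 0.3, 3: 0.2}   # P(H_i)
p_d1  = {1: 0.9, 2: 0.2, 3: 0.4}   # P(D_1 | H_i)
p_d2  = {1: 0.8, 2: 0.3, 3: 0.6}   # P(D_2 | H_i)

def joint(h, d1, d2):
    """P(H_h, D1 = d1, D2 = d2) with d1, d2 in {0, 1}."""
    q1 = p_d1[h] if d1 else 1 - p_d1[h]
    q2 = p_d2[h] if d2 else 1 - p_d2[h]
    return prior[h] * q1 * q2      # conditional independence given H_h

def prob_given_not(h, event):
    """P(event | not H_h) for a predicate event(d1, d2)."""
    num = sum(joint(k, d1, d2)
              for k in prior if k != h
              for d1, d2 in itertools.product((0, 1), repeat=2)
              if event(d1, d2))
    return num / (1 - prior[h])

lhs = prob_given_not(1, lambda d1, d2: d1 and d2)      # P(D1 D2 | not H_1)
rhs = (prob_given_not(1, lambda d1, d2: d1) *
       prob_given_not(1, lambda d1, d2: d2))           # P(D1 | not H_1) P(D2 | not H_1)
print(round(lhs, 4), round(rhs, 4))   # 0.132 vs 0.1176 -> not equal, so (3) fails for H_1
```

So even though the data are independent given every individual hypothesis, they are not independent given $\overline{H_1}$ in this example: (3) is an extra assumption, not a consequence of (1) and (2).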