
I created a certain algorithm and now I am trying to prove its convergence. Here is the essence of the algorithm:

Given are finite sets $A_i$, positive constants $\alpha_a$, and a sequence $\vec{\delta}^{(k)} \in \mathbb{R}^n$:

$A_1,\; \ldots,\; A_n \subset \mathcal{A}$

$B_a = \{i : a \in A_i\}$

$\sum_{a\in \mathcal{A}}\alpha_a = n, \qquad 0 < \alpha_a < |B_a|$

$\delta_i^{(0)} = 1, \qquad i = 1, 2, \ldots, n$

$\beta_a^{(k)} = \sum_{i \in B_a} \delta_i^{(k)}$

$\delta_i^{(k+1)} = \left(\sum_{a\in A_i} \frac{\alpha_a}{\beta_a^{(k)}}\right)^{-1}$

Prove that $\vec{\delta}^{(k)}$ converges.
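
For concreteness, here is a minimal sketch of the iteration in Python (the example instance, the uniform choice of $\alpha_a$, and the stopping tolerance are my own illustrative assumptions, not part of the problem):

```python
from itertools import chain

# Illustrative instance (an assumption): n = 3 sets over the ground set {0, 1, 2, 3}.
A = [{0, 1}, {1, 2}, {2, 3}]                       # the sets A_i
ground = set(chain.from_iterable(A))               # the ground set \mathcal{A}
B = {a: [i for i, Ai in enumerate(A) if a in Ai]   # B_a = {i : a in A_i}
     for a in ground}

n = len(A)
# alpha must satisfy sum_a alpha_a = n and 0 < alpha_a < |B_a|;
# spreading the total mass n uniformly happens to satisfy both here.
alpha = {a: n / len(ground) for a in ground}

delta = [1.0] * n                                  # delta_i^(0) = 1
for _ in range(1000):
    beta = {a: sum(delta[i] for i in B[a]) for a in ground}        # beta_a^(k)
    new = [1.0 / sum(alpha[a] / beta[a] for a in Ai) for Ai in A]  # delta^(k+1)
    if max(abs(x - y) for x, y in zip(new, delta)) < 1e-12:        # crude stopping rule
        delta = new
        break
    delta = new

print(delta)
```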

I did some numerical experiments, but I am not sure that the bounds on $\alpha_a$ are sufficient. You can make stronger assumptions if you want; I'm just looking for an idea of how to prove this.

Edit: This may help: in experiments, the following function always increases:

$\sum_{a \in \mathcal{A}} \alpha_a \log\frac{\alpha_a}{\beta_a} - \sum^n_{i=1} \log \sum_{a\in A_i} \frac{\alpha_a}{\beta_a}$
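
A sketch of how this quantity can be tracked numerically, reusing `A`, `B`, `ground`, `alpha`, and `delta` from the snippet above:

```python
import math

def potential(delta):
    # The quantity from the Edit, evaluated at the current weights delta.
    beta = {a: sum(delta[i] for i in B[a]) for a in ground}
    term1 = sum(alpha[a] * math.log(alpha[a] / beta[a]) for a in ground)
    term2 = sum(math.log(sum(alpha[a] / beta[a]) for a in Ai)) for Ai in A)
    return term1 - term2
```

Evaluating `potential(delta)` before and after each update step should show a monotone increase, matching the experimental observation.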

Edit 2: You can assume $|A_i| \geq 2$.

  • It is a modified version of Expectation-Maximization or, if you prefer, a Minorization-Maximization algorithm. I designed it to maximize the function given in the Edit, but I'm not sure it is doing the right job. (2011-02-17)

1 Answer


It's not quite clear what you mean by "$\vec{\delta}^{(k)}$ converges to a unique point": since you've specified initial values, if $\vec{\delta}^{(k)}$ converges, it converges by definition to a unique point. I assume you mean that it converges to a point which is independent of the initial values? If so, that is not the case: multiplying the initial values by a constant factor multiplies the entire sequence by that factor, so the limit can only be "unique" in this sense up to a constant factor.

It seems that a useful way to approach these equations is to eliminate $\beta_a^{(k)}$ by substituting its definition into the update for $\delta_i^{(k+1)}$, and then bringing the reciprocal on the right-hand side over to the left-hand side, yielding

$\sum_{a\in A_i} \frac{\alpha_a \delta_i^{(k+1)}}{\sum_{j\in B_a} \delta_j^{(k)}} = 1\;.$

If we drop the superscripts labeling the iteration steps, we get the equation that determines the fixed point(s). Note (a) that the sum of these equations over all $i$ is automatically satisfied (it reduces to $\sum_{a\in\mathcal{A}} \alpha_a = n$), and (b) that all $n$ equations would be satisfied if $\sum_{j\in B_a} \delta_j$ were independent of $a$. So the iteration can be viewed as solving the problem of choosing weights $\delta_i$ for the sets such that for each point $a$ the sum of the weights of the sets containing it is the same. This also makes sense of the fact that a constant factor in the weights is irrelevant.
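
As a quick numerical sanity check of this reading, one can test both properties at a computed limit (a sketch reusing the names from the question's snippet; whether a balanced limit exists depends on the instance):

```python
# At a fixed point, every residual below should equal 1 (the fixed-point
# equations above, one per set A_i).
residuals = [
    sum(alpha[a] * delta[i] / sum(delta[j] for j in B[a]) for a in Ai)
    for i, Ai in enumerate(A)
]
print(residuals)   # each entry ~ 1.0 once the iteration has converged

# If the limit is "balanced", these per-point weight sums coincide across a;
# in general they need not, but the fixed-point equations still hold.
print({a: sum(delta[j] for j in B[a]) for a in ground})
```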

  • Indeed. I want it to converge. I hope it converges. (2011-02-17)