
I want to solve linear equations of the following form over the integers $\mathbb{Z}$:

$$x_1 + \cdots + x_n = 0.$$

I came across the sumset, which is defined as follows:

$$S + T = \{ x + y \mid x \in S, y \in T \}.$$

Now assume we know that $x_i \in S_i^0$, where each $S_i^0$ is a finite integer set. One can define the following mapping which sends $(S_1^k,\dots,S_n^k)$ to $(S_1^{k+1},\dots,S_n^{k+1})$:

$$S_i^{k+1} = (- \sum_{j \neq i} S_j^k) \cap S_i^k.$$

How many iterations $l$ does it take until the mapping reaches a fixed point, i.e. what is the smallest $l$ such that $(S_1^l,\dots,S_n^l) = (S_1^{l+1},\dots,S_n^{l+1})$? Is there a more direct way to compute the fixed point?
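For concreteness, here is a small sketch of this iteration in Python (the helper names `sumset`, `refine`, `fixed_point` and the example sets are my own, not from the question):

```python
def sumset(A, B):
    """Minkowski sum: S + T = {x + y : x in S, y in T}."""
    return {a + b for a in A for b in B}

def refine(sets):
    """One step of the mapping: S_i <- (-sum_{j != i} S_j) intersect S_i."""
    new_sets = []
    for i, S in enumerate(sets):
        others = {0}  # neutral element for the empty sum
        for j, T in enumerate(sets):
            if j != i:
                others = sumset(others, T)
        new_sets.append({-x for x in others} & S)
    return new_sets

def fixed_point(sets):
    """Iterate until no set changes; return (sets, number of iterations)."""
    k = 0
    while True:
        nxt = refine(sets)
        if nxt == sets:
            return sets, k
        sets, k = nxt, k + 1

# Example: the only solution of x1 + x2 + x3 = 0 here is (1, 0, -1).
S0 = [{-2, 1, 3}, {0, 1, 2}, {-1, 4}]
fp, iters = fixed_point(S0)
```

On this example the iteration stabilizes at `[{1}, {0}, {-1}]` after a single step.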

Bye

  • In the case of $n=2$ a simple geometric argument shows that $l \le 1$ suffices, right? But for $n>2$, is there also a fixed bound? (2012-07-04)
  • It looks like the problem can be reduced to $y_1 + \cdots + y_n + b = 0$ by using $y_i = a_i \cdot x_i$. We can also find sets $T_i$ with $y_i \in T_i$ via $T_i = a_i \cdot S_i$, and then ask for the fixed point of $(T_1,\dots,T_n)$. (2012-07-06)
  • Remark for the interested reader: if the lemma holds for sets $S_i$, it also holds for integer intervals $I_i$, since an interval is just a special case of a set. (2012-07-08)

2 Answers


The fixed point is reached after at most one iteration.

Indeed $S_i^1=\{x_i\in S_i^0: \exists(x_j)_{j\ne i}\in\prod_{j\ne i} S_j^0: x_1+\dots+x_n=0\}$.

Therefore all such $x_j$ will remain in $S_j^1$, so we have $(x_j)_{j\ne i}\in\prod_{j\ne i} S_j^1$ and therefore $S_i^2=S_i^1$.

Now of course I don't know if that's what you want: such a reduction won't give you a way to find a particular solution, unless you apply your method recursively. For example, you could have $S_i^1=S_i^0$ for all $i$, even though, with $|S_i^0|=k$ for all $i$, there might be only $k$ solutions among the $k^n$ candidate tuples.
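The characterization of $S_i^1$ above also suggests a direct way to compute the fixed point: keep exactly those $x_i$ that occur in some full solution. A sketch in Python (the name `one_step` and the example sets are mine):

```python
from itertools import product

def one_step(sets):
    """Keep x_i in S_i iff it extends to a full solution x_1 + ... + x_n = 0.

    This computes S_i^1 = {x_i in S_i^0 : exists (x_j)_{j != i} with sum = 0}
    by brute-force enumeration of all candidate tuples.
    """
    result = []
    for i in range(len(sets)):
        survivors = {xs[i] for xs in product(*sets) if sum(xs) == 0}
        result.append(survivors)
    return result

S0 = [{-2, 1, 3}, {0, 1, 2}, {-1, 4}]
S1 = one_step(S0)
S2 = one_step(S1)  # applying the map again changes nothing (idempotence)
```

Note that this enumeration is exponential in $n$; it only illustrates that a single application of the map already yields the fixed point.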

  • I was also hypothesizing that the fixed point is reached after at most one iteration, but did not yet have time to produce a counterexample. What exactly is the argument behind $S_i^2 = S_i^1$ in the general case? (2012-07-08)
  • OK, I guess the argument runs along the following lines: for an $x_i \in S_i^0 \setminus S_i^1$ there was no witness $(x_j)_{j \ne i}$ anyway, so "deleting" $x_i$ from $S_i^0$ will not reduce any of the $S_j^0$ for $j \ne i$. (2012-07-08)
  • Maybe an alternate proof could use monotonicity of the Minkowski sum, i.e. $A \subseteq B$ implies $A + C \subseteq B + C$. Not sure. But I am already happy to see the question settled. (2012-07-08)
  • @CookieMonster: It does prove that the mapping is monotonic, but I'm not sure how a proof would use that to prove idempotence, since in general $\sum_{j\ne i} S_j^1 \subsetneq \sum_{j\ne i} S_j^0$. (2012-07-08)

A trivial upper bound is $\sum\limits_{i=1}^n |S_i^0|$, since each $S_i^{k+1}\subseteq S_i^{k}$ and so each iteration which is not a fixed point reduces $\sum\limits_{i=1}^n |S_i^k|$ by at least $1$. For sufficiently large $n$ and large sets $S_i^0$ this bound may not be optimal.

  • I thought it was the other way around by construction of the mapping, i.e. $S_i^{k+1} \subseteq S_i^k$. But otherwise I agree, this is the worst case. Can it actually happen, though? (2012-07-04)
  • @CookieMonster Sorry, that was a typo. (2012-07-04)