Let $ \{ A,B \} $ be a partition of $ \mathbb{N} $ into two infinite subsets. For every $ r \in \mathbb{R}_{> 0} $, can we find an increasing sequence $ (a_{n})_{n \in \mathbb{N}} $ in $ A $ and an increasing sequence $ (b_{n})_{n \in \mathbb{N}} $ in $ B $ such that $ \displaystyle \lim_{n \rightarrow \infty} \frac{a_{n}}{b_{n}} = r $?
Is every positive real number the limit of a sequence of ratios obtained from a partition of $ \mathbb{N} $ into two infinite sets?
-
You could try to look at the first-quadrant grid points and colour the points of the form $(b, a)$, $a\in A$, $b\in B$. Then see if you can guarantee coloured points in an arbitrarily thin cone around any line through the origin with positive finite slope. If that is possible, it would lead to the desired sequences, which would be increasing, although not necessarily strictly. This is just a rephrasing of the question, but it might yield some geometric intuition. – 2012-10-13
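As a rough illustration of this picture (not a proof), here is a small numerical sketch. The partition used below ($A$ = even numbers, $B$ = odd numbers) and the slope $r = \sqrt{2}$ are hypothetical choices made only for the demonstration; the sketch simply counts coloured grid points inside a thin cone around the line of slope $r$.

```python
import math

# A hypothetical illustration of the "thin cone" picture, not a proof.
# The partition (A = even numbers, B = odd numbers) and the slope r = sqrt(2)
# are arbitrary choices for the demonstration.
def in_A(k):
    return k % 2 == 0

def in_B(k):
    return k % 2 == 1

r, limit = math.sqrt(2), 5_000
for delta in (0.1, 0.01, 0.001):
    count, furthest = 0, None
    for b in range(1, limit):
        if not in_B(b):
            continue
        # coloured points (b, a) with a in A strictly inside the cone
        # b*(r - delta) < a < b*(r + delta)
        for a in range(math.floor(b * (r - delta)) + 1, math.ceil(b * (r + delta))):
            if in_A(a):
                count += 1
                furthest = (b, a)
    print(f"delta={delta}: {count} coloured points in the cone, furthest: {furthest}")
```

Narrowing the cone still appears to leave coloured points arbitrarily far out, which is exactly what the answers below prove.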
2 Answers
For $r=1$, note that since $A$ and $B$ are both infinite, there are infinitely many pairs of adjacent integers with one element in $A$ and the other in $B$, and the ratios of these pairs tend to $1$.
For $r\ne1$, without loss of generality let $r\gt1$. Form sequences $a_n$, $b_n$ such that $|a_n/b_n-r|\lt\frac rn$, and for the sake of contradiction assume that at some point this is no longer possible. That is, assume that for some $k,n\in\mathbb N$ there is no pair $a\in A$, $b\in B$ such that $a,b\ge k$ and $|a/b-r|\lt\frac rn$.
Pick some $b\in B$ with $b\ge k$. If necessary, increase $n$ to ensure $r(1-\frac1n)\gt1$. Then none of the natural numbers strictly between $br(1-\frac1n)$ and $br(1+\frac1n)$ is in $A$. Therefore they are all in $B$, and we can apply the same argument to them. Thus for all $m\in\mathbb N$ none of the numbers between $br^m(1-\frac1n)^m$ and $br^m(1+\frac1n)^m$ is in $A$. But at some point $r^{m+1}(1-\frac1n)^{m+1}\lt r^m(1+\frac1n)^m$, so from that point on consecutive intervals overlap and together cover the rest of $\mathbb N$, contradicting the fact that $A$ is infinite.
The contradiction shows that, contrary to the assumption, the sequences can be extended indefinitely.
(I glossed over some rounding issues, but I think it's clear enough that by going to sufficiently high numbers we can ensure that the intervals grow by a constant factor even when restricted to the natural numbers they contain.)
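To make the overlap step concrete, here is a small numerical check (an illustration only, with arbitrarily chosen values of $r$ and $n$): it finds the first index $m$ at which the interval for $m+1$ already begins before the interval for $m$ ends; the factor $b$ cancels from the comparison.

```python
# Numerical check (illustration only) of the interval-overlap step: find the
# first m with r^(m+1)*(1 - 1/n)^(m+1) < r^m*(1 + 1/n)^m, i.e. the first index
# from which consecutive intervals overlap. The values of r and n are arbitrary.
def first_overlap(r, n):
    m = 0
    while r ** (m + 1) * (1 - 1 / n) ** (m + 1) >= r ** m * (1 + 1 / n) ** m:
        m += 1
    return m

for r, n in [(1.5, 5), (3.0, 10), (10.0, 100)]:
    print(f"r={r}, n={n}: consecutive intervals overlap from m={first_overlap(r, n)} on")
```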
Many thanks go to Barto and Joriki for their valuable insights. What follows is simply a reorganization of the solutions of these two gentlemen. All credit, therefore, goes to them and to them only.
Let $ r \in \mathbb{R}_{>1} $, and choose an $ \epsilon \in (0,r - 1) $, so that $ r - \epsilon > 1 $.
Claim. For every $ N \in \mathbb{N} $, there exists $ (a,b) \in (A \cap \mathbb{N}_{>N}) \times (B \cap \mathbb{N}_{>N}) $ such that $ \displaystyle \left| r - \frac{a}{b} \right| < \epsilon $.
Proof of claim. By way of contradiction, assume that the claim is false. Then there exists an $ N \in \mathbb{N} $ such that the inequality $ \displaystyle \left| r - \frac{a}{b} \right| < \epsilon $ has no solution with $ (a,b) \in (A \cap \mathbb{N}_{>N}) \times (B \cap \mathbb{N}_{>N}) $. As this inequality is equivalent to $ b(r - \epsilon) < a < b(r + \epsilon) $, and as every natural number in the interval $ (b(r - \epsilon),b(r + \epsilon)) $ exceeds $ N $ (because $ b > N $ and $ r - \epsilon > 1 $), it follows that $ (b(r - \epsilon),b(r + \epsilon)) \cap \mathbb{N} \subseteq B $ for all $ b \in B \cap \mathbb{N}_{>N} $.
Now, the gist of Barto's and Joriki's observation is that for a fixed $ b \in B \cap \mathbb{N}_{>N} $, the following proposition is true: \begin{equation} \forall k \in \mathbb{N}: \quad (b(r - \epsilon)^{k},b(r + \epsilon)^{k}) \cap \mathbb{N} \subseteq B. \end{equation} We prove this by induction. When $ k = 1 $, the proposition is true by the previous paragraph. Next, suppose that the proposition is true for $ k = l $. Pick any $ n \in (b(r - \epsilon)^{l},b(r + \epsilon)^{l}) \cap \mathbb{N} $. As $ n \in B $ by the induction hypothesis, and as $ n > b(r - \epsilon)^{l} > b > N $ (recall that $ r - \epsilon > 1 $), we see that $ n \in B \cap \mathbb{N}_{>N} $. We may thus apply the argument of the previous paragraph to deduce that $ (n(r - \epsilon),n(r + \epsilon)) \cap \mathbb{N} \subseteq B $. This holds for every $ n \in (b(r - \epsilon)^{l},b(r + \epsilon)^{l}) \cap \mathbb{N} $, and the intervals $ (n(r - \epsilon),n(r + \epsilon)) $ for consecutive values of $ n $ overlap once $ n > \frac{r - \epsilon}{2\epsilon} $ (which we may ensure by enlarging $ N $), so their union covers $ (b(r - \epsilon)^{l+1},b(r + \epsilon)^{l+1}) $ up to the minor rounding issues at the endpoints that Joriki already mentioned. We conclude that $ (b(r - \epsilon)^{l+1},b(r + \epsilon)^{l+1}) \cap \mathbb{N} \subseteq B $. The proposition is therefore true for $ k = l + 1 $, and by induction, it is true for all $ k \in \mathbb{N} $.
Finally, observe that the collection of intervals $ \{ (b(r - \epsilon)^{k},b(r + \epsilon)^{k}) \}_{k \in \mathbb{N}} $ must cover $ \mathbb{R} $ from some point on. To prove this, it suffices to show that the left endpoint of the interval $ (b(r - \epsilon)^{k+1},b(r + \epsilon)^{k+1}) $ is less than the right endpoint of the interval $ (b(r - \epsilon)^{k},b(r + \epsilon)^{k}) $ for all $ k \in \mathbb{N} $ large enough. This holds because \begin{equation} \lim_{k \rightarrow \infty} \frac{b(r - \epsilon)^{k+1}}{b(r + \epsilon)^{k}} = \lim_{k \rightarrow \infty} \left( \frac{r - \epsilon}{r + \epsilon} \right)^{k} (r - \epsilon) = 0, \end{equation} so the ratio of these two endpoints is eventually less than $ 1 $. We thus see that $ (b(r - \epsilon)^{k},\infty) \cap \mathbb{N} \subseteq B $ for all $ k \in \mathbb{N} $ large enough, which contradicts the hypothesis that $ A $ is infinite. The assumption is therefore false, and the claim is proven. Q.E.D.
The claim now settles the case $ r > 1 $: applying it repeatedly, with $ \epsilon $ replaced by a sequence $ \epsilon_{n} \rightarrow 0 $ and with $ N $ chosen each time to exceed all previously selected terms, produces increasing sequences $ (a_{n}) $ in $ A $ and $ (b_{n}) $ in $ B $ with $ a_{n}/b_{n} \rightarrow r $. The case $ 0 < r < 1 $ follows by exchanging the roles of $ A $ and $ B $ and applying the claim to $ 1/r $. The case $ r = 1 $ is settled by taking pairs of adjacent numbers in $ A $ and $ B $, as Joriki has mentioned.
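For concreteness, here is a small sketch of how the claim can be applied repeatedly to produce the sequences. The partition below ($ A $ = multiples of $ 3 $, $ B $ = the rest) and the target $ r = 2.5 $ are hypothetical choices; the tolerances $ \epsilon_{j} = (r-1)/(j+1) $ shrink to $ 0 $, and each application uses a threshold $ N $ larger than every previously chosen term, so both sequences are increasing.

```python
import math

# Sketch (illustration only) of building the sequences by repeated use of the claim.
# Hypothetical choices: A = multiples of 3, B = everything else, r = 2.5.
def in_A(k):
    return k % 3 == 0

def in_B(k):
    return k % 3 != 0

def pair_above(N, r, eps):
    """Return (a, b) with a in A, b in B, a, b > N and |a/b - r| < eps."""
    b = N + 1
    while True:
        if in_B(b):
            # integers a strictly inside (b*(r - eps), b*(r + eps))
            for a in range(math.floor(b * (r - eps)) + 1, math.ceil(b * (r + eps))):
                if a > N and in_A(a):
                    return a, b
        b += 1

r, N = 2.5, 0
for j in range(1, 9):
    eps_j = (r - 1) / (j + 1)          # eps_j lies in (0, r - 1) and tends to 0
    a, b = pair_above(N, r, eps_j)
    N = max(a, b)                      # the next pair must exceed everything so far
    print(j, a, b, a / b)              # the ratios a/b approach r
# For 0 < r < 1, exchange the roles of A and B and apply the same procedure to 1/r.
```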
-
Hi Joriki. Yes, I finally got your point. Well, I've made some changes, so the argument should be correct this time. It turns out to be very similar to your argument, so I'm half-thinking of deleting this post anyway. Thanks! – 2012-10-14