
$f: (0, \infty) \rightarrow \mathbb{R}$ has a finite limit at infinity if and only if:

$$ \forall \epsilon > 0 \quad \exists M \quad \forall x, y \in (M, \infty) \quad |f(x) - f(y)| < \epsilon $$

How do I prove it? I can prove the implication from the limit definition to the statement above fairly easily using the triangle inequality, but I don't know how to prove the converse, from this statement to the limit definition.
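As a numerical sanity check (not a proof) of the criterion itself: for a function with a finite limit at infinity, the oscillation of $f$ on a far tail becomes small, while for a function with no limit it does not. The helper `max_oscillation` and the sampling grid below are illustrative choices, not part of the question.

```python
import math

# Sample f on a grid in (M, M + window) and measure how far apart
# two sampled values can be; this approximates sup |f(x) - f(y)|
# over the tail beyond M.
def max_oscillation(f, M, window=100.0, step=0.1):
    vals = [f(M + k * step) for k in range(int(window / step))]
    return max(vals) - min(vals)

# f(x) = e^{-x} has limit 0 at infinity: tail oscillation is tiny.
has_limit_osc = max_oscillation(lambda x: math.exp(-x), 10.0)

# f(x) = sin(x) has no limit at infinity: tail oscillation stays near 2.
no_limit_osc = max_oscillation(math.sin, 10.0)

print(has_limit_osc < 0.01)  # True
print(no_limit_osc < 0.01)   # False
```

This only probes a finite grid, so it can never certify the criterion; it merely shows the qualitative difference between the two cases.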

  • Take $x = M$. Then for all $y > x$, the value $f(y)$ cannot be further than distance $\epsilon$ from $f(x)$. Therefore $f(y)$ can neither exceed $f(M) + \epsilon$ nor fall below $f(M) - \epsilon$. That means all values of $f$ on $(M, \infty)$ lie in the interval $[f(M) - \epsilon, f(M) + \epsilon]$.
  • @avs I feel like being picky: you should take $x = M + 1$ (or anything greater than $M$), following the notation in the question and the strict inequality ($x$ and $y$ greater than $M$) in the statement.
  • @avs: how does that help in showing that $f(x)$ tends to a limit as $x \to \infty$?

2 Answers


Given an $\epsilon$ and $M$ from the statement above, I know that $f(x)$ lies in the interval $[f(M) - \epsilon, f(M) + \epsilon]$ for all $x > M$. If I now take an $\epsilon$ half as small and get a larger $M_2$, I get a smaller interval; since $f(x)$ with $x > M_2$ must lie in both intervals, it lies in their intersection.

I can repeat this with smaller and smaller epsilons, and I get a descending chain of intervals: a sequence of closed intervals, each containing the next. There is an important theorem (the nested interval theorem) stating that a descending chain of closed intervals whose lengths shrink to zero, like this one, has exactly one point that lies in every interval in the chain.

Convince yourself that this theorem is true, and that the common point is the limit value from the standard definition of a limit.

Now that you know this point exists, prove that it has the defining property of a limit: that $f(x)$ approaches it in the $\epsilon$–$M$ sense as $x \to \infty$.
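The nested-interval construction above can be sketched numerically. This is a sketch under specific assumptions, not a proof: I take the concrete function $f(x) = 1/x$ (limit $0$ at infinity), for which $M = 1/\epsilon$ happens to satisfy the Cauchy condition, and I halve $\epsilon$ at each step.

```python
# Numerical sketch of the nested-interval argument for f(x) = 1/x.
# For this particular f, choosing M = 1/eps guarantees
# |f(x) - f(y)| < eps for all x, y > M, since 0 < 1/x, 1/y < eps there.

def f(x):
    return 1.0 / x

lo, hi = float("-inf"), float("inf")  # running intersection of the intervals
for n in range(1, 11):
    eps = 2.0 ** (-n)   # halve epsilon each step
    M = 1.0 / eps       # an M that works for this eps (specific to f)
    c = f(M + 1)        # a sample point strictly beyond M, per the comments
    # every value f(x) with x > M lies in [c - eps, c + eps],
    # so the tail values lie in the intersection of all such intervals
    lo = max(lo, c - eps)
    hi = min(hi, c + eps)

print(lo <= 0.0 <= hi)  # True: the limit 0 lies in every interval
print(hi - lo < 0.01)   # True: the intersection has shrunk
```

Running more iterations shrinks `[lo, hi]` further, pinning down the single common point that the nested interval theorem guarantees.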


The result mentioned in the question is called Cauchy's Principle of Convergence and is mostly used as a theoretical tool. It is slightly better than the definition of a limit because, unlike that definition, it does not require you to guess the value of the limit beforehand. The proof which follows is very standard and can be found in almost any textbook on real analysis.


First of all note that by choosing $\epsilon = 1$ we can ensure that there is an $M > 0$ such that $$|f(x) - f(y)| < 1$$ for all $x, y$ greater than $M$. Choosing $y = M + 1$ we can see that $$f(M + 1) - 1 < f(x) < f(M + 1) + 1$$ for all $x > M$. Thus $f$ is bounded as $x \to \infty$.

Hence $A = \limsup_{x \to \infty}f(x), B = \liminf_{x \to \infty}f(x)$ exist as real numbers with $A \geq B$. If we prove that $A = B$ then $\lim_{x \to \infty}f(x) = A = B$. Assume, on the contrary, that $A > B$ and choose $\epsilon = (A - B)/2 > 0$. Now by the definition of $A, B$ we can see that given any $N > 0$ there are an $x > N$ and a $y > N$ such that $|f(x) - A| < \epsilon/2$ and $|f(y) - B| < \epsilon/2$. Then we can see that $$f(y) < B + \frac{\epsilon}{2} < A - \frac{\epsilon}{2} < f(x)$$ and hence $$|f(x) - f(y)| > A - \frac{\epsilon}{2} - B - \frac{\epsilon}{2} = \epsilon,$$ and this holds however large a value of $N$ is chosen. Applying the assumption about $f$ with this $\epsilon$ gives an $M$ such that $|f(x) - f(y)| < \epsilon$ for all $x, y > M$; taking $N = M$ yields a contradiction. Hence we must have $A = B$ and we are done.
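The mechanism of this proof, that the tail supremum and tail infimum are squeezed together by the Cauchy condition, can be illustrated numerically. A sketch under stated assumptions: I use $f(x) = \sin(x)/x$ (which satisfies the Cauchy condition with limit $0$), and approximate the sup and inf over a tail by sampling a long finite grid, which is only a proxy for the true $\limsup$ and $\liminf$.

```python
import math

# f(x) = sin(x)/x satisfies the Cauchy condition at infinity (limit 0).
def f(x):
    return math.sin(x) / x

# Approximate sup f and inf f over the tail (N, N + length) on a grid;
# these stand in for limsup / liminf beyond N (a finite-grid proxy only).
def tail_bounds(N, length=1000.0, step=0.01):
    vals = [f(N + k * step) for k in range(int(length / step))]
    return max(vals), min(vals)

A1, B1 = tail_bounds(10.0)     # bounds over an early tail
A2, B2 = tail_bounds(1000.0)   # bounds over a much later tail

# The gap (tail sup) - (tail inf) shrinks, mirroring A - B = 0 in the proof.
print(A2 - B2 < A1 - B1)  # True
```

Here the shrinking gap is exactly what the proof formalizes: if the gap $A - B$ stayed positive, one could always find far-out points $x, y$ with $|f(x) - f(y)|$ bounded below, violating the Cauchy condition.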