11

I came across this question

Does there exist a function $f: \mathbb{R}^+ \to \mathbb{R}^+$ such that $f(x+y)>f(x)(1+yf(x))$ for all $x,y\in \mathbb{R}^+$?

and I didn't know how to begin on it.

  • 0
    try $y=0$ in the equation...2011-08-01
  • 0
    @marwalix, please read the question carefully.2011-08-01
  • 4
    @picakhu, I think marwalix's point is that if you substitute y=0 into the inequality it becomes f(x) > f(x) which is a contradiction, so that there can be no such function...2011-08-01
  • 1
    you can try $y=0$ because $y$ does not need to belong to the domain of $f$: $x+0$ belongs to the domain whenever $x$ does, and on the right-hand side you only see $f(x)$. I read the question very carefully2011-08-01
  • 0
    @marwalix, I apologize, I misquoted the question. I will edit it. Thanks.2011-08-01
  • 0
    I know this isn't going to make a whole lot of sense, but I think this explains why such a function might not exist. Taking @marwalix's suggestion to its limit (!), write $\lim_{y \to 0} \frac{f(x+y)-f(x)}{y} \geq f(x)^2$. This gives us $f'(x) \geq f(x)^2$. Solving this as an equation (ignoring the inequality), I get the solution $f(x)=\frac{1}{A-x}$ where $A$ is the constant of integration. It is clear that any such function must blow up at that $A$, so such a function cannot exist. I wonder if all this can be actually turned into a respectable proof. :-) (A numeric sketch of this heuristic appears after these comments.)2011-08-01
  • 1
    @Srivatsan: it seems all you can show is that if such $f$ exists, it cannot be differentiable, right? We can tell that, at least within each integer unit, $f$ is increasing: $f(x+1)>f(x)(1+f(x))$, so that $f(x+1)-f(x)>f(x)^2>0$, and $f$ is then increasing on $[n,n+1]$, for all $n$2011-08-01
  • 0
    @gary Well, I meant it as some handwavy explanation that can perhaps guide one towards the solution. (For instance, I interpret the comment as saying that one should expect the function to "blow up" at some large, but finite, real $A$.) I am not even sure if the above comment could be made to work directly even assuming differentiability, for instance.2011-08-01
  • 0
    At least it shows that no function can solve the equality case of $f'(x) \ge f(x)^2$ on all of $\mathbb{R}^+$, since for $x$ large enough the value of $f$ would be negative, which is excluded.2011-08-01
  • 0
    Actually, since we can show that $f$ is monotone, by $f(x+y)-f(x)>yf(x)^2>0$, it must be a.e. differentiable, and so a.e. continuous.2011-08-01
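
To make the heuristic in the comments concrete, here is a minimal Python sketch (an editorial addition, not part of the original thread): it iterates the equality case of the inequality on a fine grid, starting from the arbitrary value $f(0^+) = 1$, and the values blow up near $x = 1$, exactly where the heuristic solution $f(x) = \frac{1}{A-x}$ with $A = 1$ predicts.

    # Minimal numeric sketch (editorial addition): iterate the extremal update
    # f(x + h) = f(x) * (1 + h * f(x)) on a fine grid and watch the value blow
    # up, matching the heuristic solution f(x) = 1/(A - x).
    h = 1e-4            # grid spacing; small enough to mimic f'(x) = f(x)^2
    f0 = 1.0            # hypothetical starting value f(0+) = 1, so A = 1/f0 = 1
    f, x = f0, 0.0
    while f < 1e9 and x < 2.0:
        f *= 1 + h * f  # equality case of f(x + y) >= f(x) * (1 + y * f(x))
        x += h
    print(f"blow-up near x = {x:.3f}; heuristic predicts A = {1 / f0}")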

3 Answers

9

Here's a slightly different approach.

Suppose for contradiction that such an $f$ exists. In fact we will reach the same contradiction even if $f$ only satisfies $f(x+y) \geq f(x)(1+yf(x))$ for $x,y>0$.

Define a sequence $x_0 < x_1 < x_2 < \ldots$ via the recurrence $$x_0 := 2011 \text{ and } x_{n+1} := x_n + 1/f(x_n) \text{ for } n \geq 0.$$

Making repeated use of the bound $f(x + 1/f(x)) \geq f(x)(1+f(x)/f(x)) = 2f(x)$, we find that $f(x_n) \geq 2^n f(x_0)$ for all $n$.

Taking the reciprocal of the latter inequality and summing a geometric series enables us to establish $$ x_n = x_0 + \sum_{i=0}^{n-1} (x_{i+1} - x_i) = x_0 + \sum_{i=0}^{n-1} 1/f(x_i) \leq x_0 + \sum_{i=0}^{n-1} 1/(2^i f(x_0)) < x_0 + 2/f(x_0) =: M.$$

But now, the sequence $x_0,x_1,\ldots$ is bounded (by $M$) while the sequence $f(x_0),f(x_1),\ldots$ diverges to infinity. It follows that there is some integer $N$ such that $f(x_N) > f(M)$ even though $x_N < M$. However, as observed in the comments, such an $f$ must be strictly increasing, so we have a contradiction.
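
To see the mechanics of this proof on a concrete candidate, here is a small Python sketch (an editorial addition; the candidate $f(x) = e^x$, the starting point $x_0 = 1$, and the helper name are my choices, not Mike's). If $f$ satisfied the hypothesis, the doubling bound would hold forever while $x_n$ stayed below $M$; for any concrete monotone candidate the doubling must therefore break, and the sketch reports where.

    import math

    def first_doubling_failure(f, x0=1.0, steps=60):
        """Walk x_{n+1} = x_n + 1/f(x_n) and report the first n at which
        the doubling bound f(x_n) >= 2**n * f(x_0) fails, together with
        the cap M = x_0 + 2/f(x_0) from the proof."""
        x, fx0 = x0, f(x0)
        M = x0 + 2 / fx0
        for n in range(1, steps + 1):
            x += 1 / f(x)               # one step of the recurrence
            if f(x) < 2 ** n * fx0:     # doubling bound broken at step n
                return n, x, M
        return None

    # Hypothetical candidate (our choice): f(x) = e^x fails already at n = 1,
    # certifying that it violates the hypothesis somewhere below M.
    print(first_doubling_failure(math.exp))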

  • 0
    Much more succinct, nice! (I wish I could upvote this; apparently I exhausted my votes for today :-()2011-08-01
  • 2
    @Mike, how does one come up with such an elegant solution? What is the motivation to try something like $x_{n+1}=x_n+1/f(x_n)$?2011-08-01
  • 0
    @picakhu Indeed. While my proof divides up the interval into $\epsilon$-sized intervals (so that one can approximate a finite Riemann sum), the idea of using adaptive increments $x \leftarrow x+1/f(x)$ has vastly improved the proof. It makes sense in retrospect, but it's a beautiful idea.2011-08-01
  • 0
    @Srivatsan, your proof seems to be utilizing some traditional ideas. i.e. consider small $\epsilon$, and then you bump into a recursion which you manage to play with and tweak. However to realize that one can find a series $x_0,x_1,...$ such that $f(x_i)\geq g(i) f(x_0)$ has to be the crux. How one gets to that eludes me. Also, note that it's a nice telescope that Mike saw would be useful.2011-08-01
  • 0
    @picakhu: I'm glad you're happy with it! It occurred to me late at night (a mosquito had woken me up >:[) that the inequality could be interpreted as saying "increasing $x$ by $y$ results in multiplying the value of $f$ at $x$ by at least $1+yf(x)$". Then I just worked out what $y$ would be needed if one wished to at least double the value, and the rest was easy!2011-08-02
5

There exists no such $f$. My solution is formally not based on analysis, but it is inspired by my analysis "solution" in the comments. Hopefully there's no bug here :-).

Pick a sufficiently small $\epsilon > 0$. Then, for $n \in \mathbb N_{0}$, we have: $$ f(1+(n+1)\epsilon) - f(1+n\epsilon) > \epsilon f(1+ n\epsilon)^2. $$ I find it easier to work with the related sequence $u_n$ defined by $u_n = \epsilon f(1+n\epsilon)$. Plugging this into the above inequality, we get the pleasant recursive inequality $u_{n+1} > u_n + u_n^2$, with the base condition $u_0 = \epsilon f(1)$.

Proposition. If $u_k \geq \eta$, then $u_{k + \lceil 1/\eta \rceil} \geq 2 \eta$.

Proof. We have $u_{k+1} - u_k > \eta^2$, $u_{k+2}-u_{k+1} > \eta^2$, and so on. (I am implicitly using the fact that $u_n$ is monotonic.) Adding all these $\lceil 1/\eta \rceil$ inequalities, we get the claim. $\Box$

Number of iterations before reaching $1$. For $\eta \leq 1$, we have the simplification $\lceil 1/\eta \rceil \leq 2/\eta$. Also I find it convenient to use the informal language "$u$-value at iteration $i$" to denote $u_i$. The above observation says that if the $u$-value at the current iteration is at least $\eta \leq 1$, then after at most $2/\eta$ iterations, the $u$-value becomes at least $2\eta$. Now, if $2\eta \geq 1$, then we stop; else, after an additional $2/(2\eta)$ iterations, the $u$-value is at least $2^2\eta$. Using this argument repeatedly, the number of iterations before the $u$-value becomes at least $1$ is at most $$ \frac{2}{\eta} + \frac{2}{2\eta} + \frac{2}{2^2 \eta} + \ldots $$ (this is actually a finite sum with $\approx \log(1/\eta)$ terms), which is smaller than the sum of the corresponding infinite geometric series, namely $4/\eta$.
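
As a sanity check on this counting (an editorial addition, with arbitrary sample values of $\eta$), one can run the extremal recursion and compare the actual number of iterations against the $4/\eta$ bound:

    def steps_to_one(eta):
        """Count iterations of u -> u + u^2 (the equality case of
        u_{n+1} > u_n + u_n^2) until the value reaches 1."""
        u, n = eta, 0
        while u < 1:
            u += u * u
            n += 1
        return n

    for eta in (0.5, 0.1, 0.01, 0.001):
        print(eta, steps_to_one(eta), "bound:", 4 / eta)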

Final contradiction. Now, $u_0 = \epsilon f(1) \stackrel{def}{=} \eta$, so after at most $N := \lceil 4/(\epsilon f(1)) \rceil$ iterations the $u$-value reaches $1$. Using $N \leq 4/(\epsilon f(1)) + 1$ and the monotonicity of $f$, for $\epsilon \leq 1$ we get $$ 1 \leq u_{N} = \epsilon \cdot f\left(1+\epsilon N \right) \leq \epsilon \cdot f\left(1+\frac{4}{f(1)} + 1 \right). $$ This is a contradiction since the $\epsilon \in (0,1]$ in the above argument was arbitrary, whereas the second factor is a fixed positive quantity depending only on $f$.

Note. While no $f$ exists satisfying the requirements of the problem, we can do better if we restrict the domain to some interval of the form $(0,A)$. In this case, the function $f(x) = \frac{1}{A-x}$ works.
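
A quick randomized check of this partial example (an editorial addition; $A = 1$ and the sampling margins, which keep $y$ away from $0$ to dodge floating-point rounding, are arbitrary choices):

    import random

    A = 1.0
    f = lambda x: 1 / (A - x)
    for _ in range(10_000):
        x = random.uniform(0.0, 0.9 * A)
        y = random.uniform(0.01, 0.99 * (A - x))  # keep x + y inside (0, A)
        assert f(x + y) > f(x) * (1 + y * f(x)), (x, y)
    print("f(x) = 1/(A - x) passes on all samples")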

  • 0
    Looks good, I was working on something similar. I think you're missing an $n$ in your first inequality?2011-08-01
  • 0
    @Mike corrected, thanks!2011-08-01
4

Another approach: Rewrite the inequality as $(*): f(x+y)-f(x)>yf(x)^2$. So we see that $f$ is a strictly increasing function. Fixing $x$ and letting $y\rightarrow+\infty$, we also see that $f$ is unbounded. Hence we can pick a number $a$ such that $f(a)>1$.

Now, for any $x\ge 0$, define $g(x)=f(x+a)-f(a)$. Clearly $g(0)=0$ and $g$ is strictly increasing. Also, it is easy to show that $(**): g(x+y) - g(x)>yg(x)^2$ whenever $x\ge 0$ and $y>0$. So, the inequalities for $f$ and $g$ are similar, except that we allow $x=0$ in the inequality for $g$. However, the function $g$ has a nicer property: putting $x=a$ in $(*)$, we have $g(y)>y$ for all $y>0$.

Let $b>0$ and $h=b/n$. By $(**)$, we get $g(ih)-g((i-1)h)>hg((i-1)h)^2$. Summing up for $i=1,2,\ldots,n$ and letting $h\rightarrow 0$, we see that $g(b) \geq \int_0^b g(u)^2 du$. As $g$ is an increasing function, the Riemann integral exists. Now, applying the inequality $g(u)>u$ to the integrand, we get $$g(b) \geq \int_0^b g(u)^2 du > \int_0^b u^2 du = \frac{b^3}{3}.$$ Applying this to the integrand in turn, we get $$g(b) \geq \int_0^b g(u)^2 du > \int_0^b \left(\frac{u^3}{3}\right)^2 du = \frac{b^7}{3^2 \times 7}.$$ Continuing in this manner, we get $$ g(b) > \frac{b^3}{3},\ \frac{b^7}{3^2 \times 7},\ \frac{b^{15}}{3^4 \times 7^2 \times 15}, \ldots $$ The exponents obey $e \mapsto 2e+1$ and the denominators $d \mapsto d^2(2e+1)$, so the $n$-th bound is $b^{2^{n+1}-1}/d_n$ with $\log_2 d_n = \Theta(2^n)$. Therefore, when $b$ is sufficiently large, these lower bounds on $g(b)$ blow up, i.e. $f(a+b)$ does not exist.
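
The blow-up can be watched numerically. The following sketch (an editorial addition, with $b = 3$ and $b = 2$ as sample values) generates the successive lower bounds exactly, using the recursions $e \mapsto 2e+1$ for the exponents and $d \mapsto d^2(2e+1)$ for the denominators:

    from fractions import Fraction

    def lower_bounds(b, n=6):
        """Iterate g(b) > integral from 0 to b of (u**e / d)**2 du
        = b**(2e+1) / (d**2 * (2e+1)), starting from g(u) > u."""
        e, d = 1, 1
        out = []
        for _ in range(n):
            e, d = 2 * e + 1, d * d * (2 * e + 1)  # integrate (u**e / d)**2
            out.append(float(Fraction(b) ** e / d))
        return out

    print(lower_bounds(3))  # grows without bound: f(a + 3) cannot exist
    print(lower_bounds(2))  # these particular bounds do not force blow-up at b = 2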

The final step above suggests that when the domain of $f$ is restricted to a small neighbourhood of zero, $g$ may not blow up [edit: or even the neighbourhood is so small that $a$, and hence $g$, is not defined] and hence a solution for $f$ can exist. For example, if $f$ is only defined on some $(0, b)$, we may take $f(x)=e^{\alpha x}$ as a solution, provided that $b$ is small enough that some $\alpha$ satisfies $\alpha > e^{\alpha b}$.
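
And a matching randomized check for the exponential example (an editorial addition; the parameter values $\alpha = 3$, $b = 0.1$ and the sampling margins are mine):

    import math
    import random

    alpha, b = 3.0, 0.1
    assert alpha > math.exp(alpha * b)  # the answer's condition holds here
    f = lambda x: math.exp(alpha * x)
    for _ in range(10_000):
        x = random.uniform(1e-3, b - 2e-3)
        y = random.uniform(1e-3, b - x - 1e-3)  # keep x + y inside (0, b)
        assert f(x + y) > f(x) * (1 + y * f(x)), (x, y)
    print("f(x) = exp(alpha * x) passes on all samples")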

  • 1
    A nice proof! Besides you seem to have uncovered another $f(x)$ that works for $(0, b)$ for small $b$. Wonder where this function is hiding in the other proofs.2011-08-01
  • 0
    I'd noticed something somewhat relevant. If $f$ satisfies the given condition, then for $x,y >0$ and $n$ a positive integer we have $f(x+y)/f(x)=\prod_{k=0}^{n-1}f(x+\frac{(k+1)y}{n})/f(x+\frac{ky}{n})>\prod_{k=0}^{n-1} (1+\frac{y}{n}f(x+\frac{ky}{n}))\geq(1+\frac{yf(x)}{n})^n$. Taking $n \to \infty$ gives $f(x+y)\geq f(x)e^{f(x)y}$.2011-08-02
  • 0
    Supposing $f$ is continuous at $y$ and taking $x\to 0$ from above shows $f(y) \geq A e^{Ay}$ where $A := \lim_{x \to 0^+} f(x)$. We can drop the assumption that $f$ is continuous at $y$ by considering an increasing sequence of points $y_n$ at which $f$ is continuous with $y_n \to y$. This is of course possible since $f$ is monotone and so has at most countably many discontinuities. The conclusion is that $f(x) \geq Ae^{Ax}$ holds for all $x>0$, so the "partial examples" in your answer are minimal in some sense. (A numeric check of this bound appears below.)2011-08-02
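
As a closing check of this bound against the partial example $f(x) = \frac{1}{A-x}$ (an editorial addition; $A = 1$ and the sampling margins are arbitrary choices):

    import math
    import random

    A = 1.0
    f = lambda x: 1 / (A - x)
    for _ in range(10_000):
        x = random.uniform(1e-3, A - 2e-3)
        y = random.uniform(1e-3, A - x - 1e-3)  # keep x + y inside (0, A)
        assert f(x + y) >= f(x) * math.exp(f(x) * y), (x, y)
    print("exponential lower bound f(x+y) >= f(x)*exp(f(x)*y) holds on all samples")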