5
$\begingroup$

I was playing around with some integrals and decided that $$ a\int_a^b t^{n-2}\alpha(t) dt \le \int_a^b t^{n-1}\alpha(t) dt $$ should hold for $a,b>0$ and $\alpha \ge 0$, with $\alpha$ nice enough that both integrals are defined.

The idea is that the little $a$ on the left-hand side compensates when $t<1$ (i.e. $a<1$). One can produce a case-by-case "brute force" proof from this observation.
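As a quick sanity check (not a proof), here is a minimal numeric sketch; the particular $\alpha$, $a$, $b$, $n$ below are arbitrary illustrative choices, not from the question:

```python
# Numeric check of  a * int_a^b t^(n-2) alpha(t) dt  <=  int_a^b t^(n-1) alpha(t) dt
# for one illustrative nonnegative alpha; a, b, n are arbitrary sample values.
from scipy.integrate import quad

def sides(alpha, a, b, n):
    lhs = a * quad(lambda t: t**(n - 2) * alpha(t), a, b)[0]
    rhs = quad(lambda t: t**(n - 1) * alpha(t), a, b)[0]
    return lhs, rhs

# a < 1, so the "little a" really is doing some compensating here.
lhs, rhs = sides(lambda t: (t - 1.0) ** 2, a=0.5, b=2.0, n=3)
print(lhs <= rhs, lhs, rhs)   # expect True
```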

But, I wanted to ask the following: is the above inequality sharp? Can the power of $t$ on the LHS be played with? Can the constant $a$ be improved (perhaps depending on additional properties of $\alpha$)?

And most importantly: does this little guy have a name?

  • 2
Doesn't this follow from $\int fg\ge\min(f)\int g$? (2011-05-23)
  • 0
I might be overlooking something, but doesn't the above inequality follow from $a\le t$ for $t\in \langle a,b \rangle$? (2011-05-23)
  • 0
@Gerry, @Martin Yes... the proof is not really in question :). That's kind of what I was hinting at above. I'm more interested in 'sharpness' under various assumptions on $\alpha$. (2011-05-23)
  • 0
@Martin Ping! (extra chars) (2011-05-23)
  • 0
So your question can be "approximately" rephrased as: What can be said about $F(n)=\inf\frac{\int_a^b t^{n-1}\alpha(t) dt}{\int_a^b t^{n-2}\alpha(t) dt}$, where the infimum is taken over some class of functions $\alpha$ (depending on the chosen class of $\alpha$'s). We can say for sure that $F(n)\ge a$. Is this, more or less, about what you have in mind? (2011-05-23)
  • 0
BTW my guess is that for almost all reasonable classes of $\alpha$'s the infimum $F(n)$ will be strictly greater than $a$. Note that as soon as $\alpha>0$ holds on some subinterval $(t-\varepsilon,t+\varepsilon)$ of $[a,b]$, the inequality will be strict. (2011-05-23)
  • 0
@Martin Exactly! (2011-05-23)
  • 0
I guess I should have added calculus of variations to the tag list. (2011-05-23)

3 Answers

3

Expanding and correcting my comment (although I am not sure to what extent this answers your question).

[Maybe this is closer to a comment than to an answer, but it is too long for one. More or less, this is still an attempt to get a more precise formulation of the question by exhibiting some special cases.]

First let us have a look at the situation when a single function $\alpha$ is considered.

If the function $\alpha$ has the property that there exists a subinterval $(x,y)\subseteq \langle a,b \rangle$ on which $\alpha$ is bounded away from zero, i.e., there exists $\varepsilon>0$ such that $\alpha(t)>\varepsilon$ for each $t\in(x,y)$, then the above inequality is strict ($a$ is not the best possible constant):

$$\int_a^b t^{n-1}\alpha(t) dt-a\int_a^b t^{n-2}\alpha(t) dt= \int_a^b (t-a)t^{n-2}\alpha(t) dt \ge $$

$$\int_x^y (t-a)t^{n-2}\alpha(t) dt \ge \varepsilon \int_x^y (t-a)t^{n-2} dt >0.$$

On the other hand, if we consider a class of functions such that for each $\varepsilon>0$ it contains some $\alpha$ with $\alpha(t)\le 1$ for each $t$ and $\alpha(t)=0$ for $t>a+\varepsilon$, then

$$\int_a^b (t-a)t^{n-2}\alpha(t) dt \le \int_a^{a+\varepsilon} \left(t^{n-1}-at^{n-2}\right) dt =$$

$$\left[\frac{t^n}n-a\frac{t^{n-1}}{n-1} \right]_a^{a+\varepsilon}= \frac{(a+\varepsilon)^n-a^n}n - a\frac{(a+\varepsilon)^{n-1}-a^{n-1}}{n-1}.$$

Both fractions in the last expression tend to 0 as $\varepsilon\to 0^+$, so in this case, $a$ is the best possible constant. (More precisely: since $\alpha$ vanishes past $a+\varepsilon$, we also have $\int_a^b t^{n-1}\alpha(t) dt \le (a+\varepsilon)\int_a^b t^{n-2}\alpha(t) dt$, so no constant larger than $a$ can work for the whole class.)

(I believe that similar reasoning would work not only for $\alpha(t)=0$ for $t>a+\varepsilon$, but also if it is sufficiently small for $t>a+\varepsilon$.)
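If it helps, here is a hedged numeric illustration of this limiting family (my own choice of sample values, not part of the answer): the ratio of the two integrals approaches $a$ as $\varepsilon\to 0^+$.

```python
# Ratio  int t^(n-1) alpha dt / int t^(n-2) alpha dt  for alpha = 1 on [a, a+eps]
# and 0 beyond; since alpha vanishes past a+eps, we integrate over [a, a+eps] only.
from scipy.integrate import quad

a, n = 0.5, 3
for eps in (0.5, 0.1, 0.01, 0.001):
    num = quad(lambda t: t**(n - 1), a, a + eps)[0]
    den = quad(lambda t: t**(n - 2), a, a + eps)[0]
    print(eps, num / den)   # approaches a = 0.5, so a cannot be improved here
```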

A class of functions which I did not address in this post, and which might be interesting, is the class of continuous functions satisfying $\int_a^b \alpha(t) dt=1$.

EDIT: I now realize that the above approach also works for functions satisfying $\int_a^b \alpha(t) dt=1$.

Notice that for the step function

$$\alpha(t)= \begin{cases} \frac1\varepsilon & t\in [a,a+\varepsilon], \\ 0 & \text{otherwise}. \end{cases} $$

we have $\int_a^b \alpha(t) dt=1$ and

$$\int_a^b (t-a)t^{n-2}\alpha(t) dt = \frac1\varepsilon \int_a^{a+\varepsilon} \left(t^{n-1}-at^{n-2}\right) dt =$$

$$\frac1\varepsilon \left[\frac{t^n}n-a\frac{t^{n-1}}{n-1} \right]_a^{a+\varepsilon}= \frac 1n \frac{(a+\varepsilon)^n-a^n}\varepsilon - a \frac1{n-1} \frac{(a+\varepsilon)^{n-1}-a^{n-1}}{\varepsilon}.$$

If we notice that $\lim\limits_{\varepsilon\to0^+} \frac{(a+\varepsilon)^n-a^n}\varepsilon = na^{n-1}$ (the derivative of $x^n$ at $x=a$), and likewise that the second difference quotient tends to $(n-1)a^{n-2}$, then the last expression converges to $a^{n-1}-a^{n-1}=0$.

Step functions can be approximated by continuous functions, so with some effort the function $\alpha$ can be made continuous.
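Again purely as a sketch (the values of $a$ and $n$ below are my own illustrative choices), one can watch the difference of the two integrals go to $0$ for this normalized step function:

```python
# Difference  int t^(n-1) alpha dt - a int t^(n-2) alpha dt  for alpha = 1/eps on
# [a, a+eps]; here int alpha = 1, and the difference should tend to 0 as eps -> 0+.
from scipy.integrate import quad

a, n = 0.5, 3
for eps in (0.5, 0.1, 0.01, 0.001):
    # alpha vanishes outside [a, a+eps], so integrate there and scale by 1/eps.
    diff = quad(lambda t: (t - a) * t**(n - 2) / eps, a, a + eps)[0]
    print(eps, diff)   # tends to 0
```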

  • 0
Interesting. I wonder about the class $\alpha\in H^2$, say with $\int |\alpha''|^2 > 1$. (2011-05-23)
3

Another approach, and in some sense an answer to your second question: this can be shown by an application of the reverse Hölder inequality in the limiting case.

For $f,g$ with $g\neq 0$ almost everywhere we have (whenever the right-hand side exists) $$\|f\|_1 = \big\| |fg|\,|g|^{-1} \big\|_1 \leq \|fg\|_1\,\big\||g|^{-1}\big\|_\infty = \frac{\|fg\|_1}{\inf|g|},$$ i.e. $\inf|g|\cdot\|f\|_1 \le \|fg\|_1$ (a more general case is treated here). Taking $g(t)=t$ and $f(t)=t^{n-2}\alpha(t)$ on $[a,b]$ gives exactly the inequality in the question, with the constant $a=\inf g$.

However, I do not know about sharpness or optimal constants here, although it seems that there is literature on this.
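A minimal numeric sketch of that rearranged form, $\inf|g|\cdot\|f\|_1\le\|fg\|_1$ with $g(t)=t$ (my own sample $\alpha$, $a$, $b$, $n$, chosen only for illustration):

```python
# Check  inf|g| * ||f||_1 <= ||f g||_1  with g(t) = t on [a, b] (so inf|g| = a)
# and f(t) = t^(n-2) alpha(t) for an illustrative nonnegative alpha.
from scipy.integrate import quad

a, b, n = 0.5, 2.0, 3
alpha = lambda t: 1.0 + 0.5 * (t - 1.0) ** 2
f = lambda t: t**(n - 2) * alpha(t)
lhs = a * quad(lambda t: abs(f(t)), a, b)[0]        # inf|g| * ||f||_1
rhs = quad(lambda t: abs(f(t) * t), a, b)[0]        # ||f g||_1
print(lhs <= rhs, lhs, rhs)   # expect True
```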

  • 0
Yes I am vaguely aware of the families of reverse Hölder-type inequalities. I thought they might be overkill here, but you could very well be on to something; the extremals are well-studied and should yield *something* about best constants. (2011-05-23)
2

I imagine you would want the inequality to hold for all $b>a$. Suppose $f(n)$ is a candidate to replace the $n-2$ in the LHS, and $A=A(a,b,\alpha,n)^{\ddagger}$ is a candidate to replace the $a$ out front: $$A\leq\frac{\int_a^b t^{n-1}\alpha(t)\,dt}{\int_a^b t^{f(n)}\alpha(t)\,dt}.$$ Since the inequality is to hold for all $b>a$, we can take the limit as $b\rightarrow a^+$ and use L'Hôpital's Rule:

$$A\leq\lim_{b\rightarrow a^+}\frac{b^{n-1}\alpha(b)}{b^{f(n)}\alpha(b)}$$

$$A\leq \lim_{b\rightarrow a^+}b^{n-1-f(n)}=a^{n-1-f(n)}$$

So the best scenario would be to have $A=a^{n-1-f(n)}$, since we'd like to improve on the LHS constant by making it as large as possible. Now if the inequality holds with this $A$ replacing $a$, then we can rearrange the inequality to read $$\int_a^b\Big(t^{n-1}-a^{n-1-f(n)}t^{f(n)}\Big)\alpha(t)\,dt\geq0.$$ This is supposed to hold for all $[a,b]$ with $a>0$. This implies that for all $a$, the quantity in the big parentheses is non-negative for $t$ within some $\epsilon$ above $a$, or else we could find an $[a,b]$ that would make the whole integral negative. So for all $a>0$, for all $t$ slightly above $a$, $$t^{n-1}-a^{n-1-f(n)}t^{f(n)}\geq0$$ $$\Longrightarrow t^{n-1-f(n)}-a^{n-1-f(n)}\geq0$$ $$\Longrightarrow n-1-f(n)\geq0$$ $$\Longrightarrow f(n)\leq n-1$$
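For what it's worth, a hedged numeric illustration of that limit (the specific $a$, $n$, $f(n)$, and $\alpha$ below are my own choices): as $b\rightarrow a^+$ the ratio of the two integrals drops toward $a^{n-1-f(n)}$, so no $b$-independent constant larger than that can work.

```python
# As b -> a+, the ratio  int t^(n-1) alpha dt / int t^f(n) alpha dt  tends
# to a^(n-1-f(n)).
from scipy.integrate import quad

a, n = 0.5, 3
fn = n - 3                        # illustrative f(n) <= n-1
alpha = lambda t: 1.0 + t         # arbitrary positive example
for b in (1.0, 0.6, 0.51, 0.501):
    num = quad(lambda t: t**(n - 1) * alpha(t), a, b)[0]
    den = quad(lambda t: t**fn * alpha(t), a, b)[0]
    print(b, num / den)           # approaches a**(n - 1 - fn) = 0.25
```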

At this point we'd like to make $f(n)$ as big as possible$^{\dagger}$, since that makes the exponent in the LHS big. But then a big $f(n)$ makes for a small $A$! So there is a trade-off, and it depends what is more important: having a large $A$ or having a large $f(n)$. Whatever $f(n)$ is (as long as it's $\leq n-1$), the inequality $$a^{n-1-f(n)}\int_a^b t^{f(n)}\alpha(t)\,dt\leq\int_a^b t^{n-1}\alpha(t)\,dt$$ holds for the same reason the original inequality holds: $$\int_a^b t^{n-1}\alpha(t)\,dt=\int_a^b t^{n-1-f(n)}t^{f(n)}\alpha(t)\,dt\geq a^{n-1-f(n)}\int_a^b t^{f(n)}\alpha(t)\,dt$$

So if there are no mistakes here, the following are each correct, sharp (in a balanced sense) inequalities:

$$a^{2}\int_a^b t^{n-3}\alpha(t)\,dt\leq\int_a^b t^{n-1}\alpha(t)\,dt$$

$$a^{n^2}\int_a^b t^{n-1-n^2}\alpha(t)\,dt\leq\int_a^b t^{n-1}\alpha(t)\,dt$$

$$a^{\ln(n)}\int_a^b t^{n-1-\ln(n)}\alpha(t)\,dt\leq\int_a^b t^{n-1}\alpha(t)\,dt$$

$$a\int_a^b t^{n-2}\alpha(t)\,dt\leq\int_a^b t^{n-1}\alpha(t)\,dt$$
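As a sketch only, here is a quick numeric spot check of these four (my own arbitrary $a$, $b$, $n$, $\alpha$; not a substitute for the argument above):

```python
# Spot-check  a^(n-1-f) int t^f alpha dt  <=  int t^(n-1) alpha dt  for the
# exponents listed above.
from math import log
from scipy.integrate import quad

a, b, n = 0.5, 2.0, 4.0
alpha = lambda t: (t - 1.0) ** 2             # arbitrary nonnegative example
rhs = quad(lambda t: t**(n - 1) * alpha(t), a, b)[0]
for f in (n - 3, n - 1 - n**2, n - 1 - log(n), n - 2):
    lhs = a**(n - 1 - f) * quad(lambda t: t**f * alpha(t), a, b)[0]
    print(round(f, 3), lhs <= rhs)           # expect True for each exponent
```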

CORRECTION

At the $^{\dagger}$, the desire to make $f(n)$ big really only applies if $a\geq1$, since that would make the integral larger. If the entire $[a,b]$ is contained within $[0,1]$, then it would be more desirable to make $f(n)$ small. However, since we are now assuming that $a<1$, that would still make $A$ small (still not desirable). Again there is a trade-off, and the above inequalities are still true and (balanced) sharp. Lastly, in the case where $[a,b]$ straddles $1$, we could break the integral into two parts, and the inequalities would still hold.

At the $^{\ddagger}$, the argument that follows only holds for $A=A(a,\alpha,n)$, not $A=A(a,b,\alpha,n)$. So any conclusions still leave open the possibility for improvement when $A$ depends partially on $b$.

  • 0
The inequality actually is true (trivially, but still) for $a \ge b$. I like your little discussion, but have to disagree about the last few inequalities being "sharp". The sense I have in mind is that for certain classes of $\alpha$ the constant $a^p$ on the left hand side can be replaced by a larger constant. Your reasoning about the "best possible $f$" uses the original naive proof, so it won't detect any finer information about $\alpha$. Thanks for the answer! (2011-05-23)
  • 0
@Glen: I didn't even consider $a \ge b$; I read the question as "for all $b$ such that $b>a$", as opposed to $b$ bounded above $a$ by some amount (like 1, say). My argument requires the inequality to hold for $b$ within $\epsilon$ of $a$, so that the limit computation is valid. Comment response 2 to follow $\downarrow$ (2011-05-23)
  • 0
@Glen: I miss things frequently, so I wouldn't be surprised to find out that is happening here. But regardless of the details of the function $\alpha$, as long as it's a positive function (which is stronger than what you asked for: $\alpha(t)\geq0$, I know), I feel like the first part of the argument establishes that the constant out front could be no bigger than $a^{n-1-f(n)}$. The only way that $\alpha$'s properties could enhance this is if $\alpha$ is such that the L'Hôpital calculation is invalid, like if it keeps taking value 0. What am I missing? (2011-05-23)
  • 0
Aha. One thing that I am missing: I started the argument proposing that $A$ could depend on $b$ too somehow, but then I promptly forgot about that when taking the limit as $b\rightarrow a^+$. So I'm open to better constants, but only if they involve both $a$ and $b$. (2011-05-23)
  • 0
Right. That's one point. (Well, actually two points.) The other is that we may be interested in an inequality which holds only on a fixed interval. Your idea still works, in a way, since then we can continue to perform a "limiting" argument through scaling. Then it will depend on the scaling of $A$ and $\alpha$. This is kind of what I was getting at. (2011-05-24)