3

Suppose we have a function, $\phi:[a,b]\to\mathbb{R}_{+}$. I am trying to prove that the function:

$g(\alpha)=\int^{b}_{a}(x-\alpha)^{2}\phi(x)dx$

attains its minimum value on $(a,b)$, and to find the point at which it attains that value (I believe this point can eventually be expressed in terms of the function $\phi$).

The results I've obtained so far are not very promising. First off, the only way I know to prove the claim is to find an $\alpha_{0}$ such that $g^{\prime}(\alpha_{0})=0$ and then show that $g^{\prime\prime}(\alpha_{0})>0$.

As for the first part, let's define a bivariate function: $h(\alpha,x)=(x-\alpha)^{2}\phi(x)$. We have:

$g^{\prime}(\alpha)=\int^{b}_{a}\frac{\partial}{\partial\alpha}h(\alpha,x)\,dx=-2\int^{b}_{a}x\phi(x)\,dx+2\alpha\int^{b}_{a}\phi(x)\,dx.$

I have no idea how to deal with the expression $\int^{b}_{a}x\phi(x)\,dx$. Is there a way we can somehow evaluate it and transform it into a more elementary form? If not, then by setting the derivative equal to $0$, we arrive at:

$\alpha=\frac{\int^{b}_{a}x\phi(x)\,dx}{\int^{b}_{a}\phi(x)\,dx}$

How can we interpret the above expression, or at least prove that it belongs to the interval $(a,b)$? I would be very thankful for some ideas as to where I should head with this.

EDIT: we also assume that $\phi$ is continuous.
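As a numerical sanity check of the question (not part of the original post), one can pick a sample continuous positive $\phi$ — here $\phi(x)=e^{x}$ on $[0,1]$, an illustrative assumption — minimize $g$ by grid search, and compare the minimizer with the candidate $\alpha=\int x\phi\,/\int\phi$:

```python
import numpy as np

# Illustrative choice of interval and positive continuous weight function.
a, b = 0.0, 1.0
x = np.linspace(a, b, 20001)
dx = x[1] - x[0]
phi = np.exp(x)

def integral(vals):
    # trapezoidal rule on the fixed grid
    return (vals[:-1] + vals[1:]).sum() * dx / 2

# Candidate minimizer from setting g'(alpha) = 0.
alpha_star = integral(x * phi) / integral(phi)

def g(alpha):
    return integral((x - alpha) ** 2 * phi)

# Brute-force grid search over alpha in [a, b].
alphas = np.linspace(a, b, 2001)
alpha_num = alphas[np.argmin([g(al) for al in alphas])]
# both are close to 1/(e-1) ≈ 0.582, strictly inside (0, 1)
```

The two values agree to grid resolution, consistent with the formula derived below.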

  • 0
Can't you just work with the sign of the derivative? $\phi$ is $>0$ on the whole interval, which means its integral is positive... – 2012-09-29

4 Answers

1

Let $h(\alpha)=g'(\alpha)$. Then $h$ is continuous everywhere (in fact it is a linear function) with $h(a)=2\int_a^b(a-x)\phi(x)dx<0$ and $h(b)=2\int_a^b(b-x)\phi(x)dx>0$ by positivity of $\phi$. It follows that there is a zero of $g'$ in $(a,b)$, which the second derivative test shows to be a minimum of $g$. By linearity, $g'$ has a unique zero, the location of which (given in terms of integrals of $\phi$) you have already determined.
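A quick numerical sketch of this sign argument (with $\phi(x)=1+x^{2}$ on $[-1,2]$ as an illustrative positive choice): $g'(\alpha)=2\int_a^b(\alpha-x)\phi(x)\,dx$ is affine in $\alpha$, negative at $\alpha=a$ and positive at $\alpha=b$, so it has exactly one zero in $(a,b)$.

```python
import numpy as np

a, b = -1.0, 2.0
x = np.linspace(a, b, 100001)
dx = x[1] - x[0]
phi = 1.0 + x ** 2          # strictly positive on [a, b]

def g_prime(alpha):
    # g'(alpha) = 2 * integral of (alpha - x) * phi(x) over [a, b]
    vals = 2.0 * (alpha - x) * phi
    return (vals[:-1] + vals[1:]).sum() * dx / 2

print(g_prime(a) < 0, g_prime(b) > 0)   # True True
```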

2

If the function $\phi$ is continuous, $\alpha$ could be interpreted, from a mechanical point of view, as the center of mass abscissa of a material line (the segment $[a,b]$) of linear mass density $\phi(x)$.

Since $\phi$ is positive, another interpretation of $\alpha$ is as the mean value of the probability density

$ \psi(x)=\frac{1}{N}\phi(x),\qquad N=\int_a^b\phi(x)dx. $

Obviously, both interpretations lead to $a\leq \alpha\leq b$, which can also be proved directly.
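The probabilistic reading above can be sketched numerically ($\phi(x)=x+1$ on $[0,2]$ is an illustrative assumption): normalize $\phi$ into a density $\psi$ and check that its mean — the center-of-mass abscissa — is the $\alpha$ from the question.

```python
import numpy as np

a, b = 0.0, 2.0
x = np.linspace(a, b, 100001)
dx = x[1] - x[0]
phi = x + 1.0               # illustrative linear mass density

def integral(vals):
    # trapezoidal rule on the fixed grid
    return (vals[:-1] + vals[1:]).sum() * dx / 2

N = integral(phi)           # total mass, here exactly 4
psi = phi / N               # probability density: integrates to 1
alpha = integral(x * psi)   # mean of psi = center of mass, here 7/6
```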

  • 0
Alright, I see it now. Thanks! – 2012-09-29
2

Since $\phi$ is continuous, $c:=\min\{\phi(x)\mid a\le x\le b\}$ is attained at some point in $[a,b]$, and since $\phi$ is strictly positive, we conclude $c>0$. We have $(x-a)\phi(x)\ge c(x-a)$ and $(b-x)\phi(x)\ge c(b-x)$ for all $x\in[a,b]$, hence $\int_a^b x \phi(x)\,dx\ge \int_a^b a \phi(x)\,dx+\int_a^b c(x-a)\,dx=a\int_a^b\phi(x)\,dx+\frac c2(b-a)^2.$ Similarly, $\int_a^b x \phi(x)\,dx\le \int_a^b b \phi(x)\,dx-\int_a^b c(b-x)\,dx=b\int_a^b\phi(x)\,dx-\frac c2(b-a)^2.$ Because $\frac c2(b-a)^2>0$ and $\int_a^b\phi(x)\,dx>0$, we conclude

$ a< \frac{\int_a^b x \phi(x) dx}{\int_a^b \phi(x) dx}< b.$

Remark: Even if we only assume $\phi(x)\ge0$ for $x\in[a,b]$ and only $\phi(x_0)\ne0$ for some $x_0\in [a,b]$, continuity of $\phi$ allows us to find a subinterval $[a',b']$ around $x_0$ where $\phi$ is strictly bigger than the positive number $c':=\frac12\phi(x_0)$. Then we still have strict inequalities because we may replace the expression $\frac c2(b-a)^2$ with $\frac {c'}2(b'-a')^2$ in the above argument.


Once you have established $a<\alpha<b$ this way, you of course immediately have $g''(\alpha)=2\int_a^b\phi(x)\,dx>0$, i.e. $g$ has a local minimum at $\alpha$. This is also the global minimum on $[a,b]$, because a minimum at the boundary (i.e. at $\alpha=a$ or $\alpha=b$) would require a local maximum in between.
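The bounding argument can be sketched numerically ($\phi(x)=2+\sin x$ on $[0,3]$ is an illustrative continuous positive function): with $c=\min\phi>0$, the two inequalities squeeze $\int x\phi$ strictly between $a\int\phi+\frac c2(b-a)^2$ and $b\int\phi-\frac c2(b-a)^2$, which forces $a<\alpha<b$.

```python
import numpy as np

a, b = 0.0, 3.0
x = np.linspace(a, b, 100001)
dx = x[1] - x[0]
phi = 2.0 + np.sin(x)       # positive and continuous on [a, b]
c = phi.min()               # c = min phi > 0, here 2 (at x = 0)

def integral(vals):
    # trapezoidal rule on the fixed grid
    return (vals[:-1] + vals[1:]).sum() * dx / 2

I_phi, I_xphi = integral(phi), integral(x * phi)
lower = a * I_phi + 0.5 * c * (b - a) ** 2
upper = b * I_phi - 0.5 * c * (b - a) ** 2
print(lower < I_xphi < upper, a < I_xphi / I_phi < b)   # True True
```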

  • 0
Thank you very much for a thorough explanation! – 2012-09-29
1

It is widely known that for random variables $X$ for which the expected value $E|X|$ is finite, the value of $\alpha$ that minimizes $E((X-\alpha)^2)$ is $\alpha=E(X)$.

So in case $\varphi$ is a probability density function, that answers the question. If $\varphi$ is merely a non-negative function whose integral is finite and positive, just divide $\varphi$ by that integral to obtain a probability density, and again that answers the question.

Here's a proof: Consider $ \alpha\mapsto E((X-\alpha)^2) = E(X^2) -2\alpha E(X) + \alpha^2. $ Now you're just minimizing a quadratic polynomial in $\alpha$.
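This expansion is easy to check numerically ($X$ sampled uniformly on $[0,1]$ is an illustrative assumption): $E[(X-\alpha)^2]=E[X^2]-2\alpha E[X]+\alpha^2$ is an upward-opening quadratic in $\alpha$, so its vertex $-b/(2a)=E[X]$ is the minimizer.

```python
import numpy as np

# Sample an illustrative random variable X ~ Uniform(0, 1).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 1_000_000)
EX, EX2 = X.mean(), (X ** 2).mean()

def mse(alpha):
    # the quadratic E[X^2] - 2*alpha*E[X] + alpha^2 = E[(X - alpha)^2]
    return EX2 - 2.0 * alpha * EX + alpha ** 2

# Grid search finds the vertex, which sits at alpha = E[X] ≈ 0.5.
alphas = np.linspace(0.0, 1.0, 1001)
best = alphas[np.argmin(mse(alphas))]
```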

  • 0
Yes, I see it. Thanks! – 2012-09-29