
I want to minimize the function $f(x) = e^{ax^2} + e^{b(1-x)}$ with respect to $x$ (where $a$ and $b$ are constants), subject to the constraint that $0 \leq x \leq 1$.

I know that $x = 0$ and $x = 1$ are both critical points, but I am interested in finding the potential minimum which lies between the boundary values.

Surely the first step is to set the derivative equal to $0$, yielding $0 = f'(x) = 2axe^{ax^2} - be^{b(1-x)}$. The minimizing $x$ must solve $2axe^{ax^2} = be^{b(1-x)}$, but I'm not sure how to "use" the constraint $0 \leq x \leq 1$ to do so...
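
For what it's worth, the stationary points can at least be located numerically. Below is a minimal sketch (Python with NumPy/SciPy, and illustrative constants $a = -10$, $b = -0.5$ chosen only so that interior roots exist) that brackets sign changes of $f'$ on a grid and refines each bracket with Brent's method.

```python
import numpy as np
from scipy.optimize import brentq

a, b = -10.0, -0.5   # illustrative constants only

def fprime(x):
    # f'(x) = 2ax e^{ax^2} - b e^{b(1-x)}
    return 2 * a * x * np.exp(a * x**2) - b * np.exp(b * (1 - x))

# Bracket sign changes of f' on a fine grid, then refine each bracket with Brent's method.
grid = np.linspace(0.0, 1.0, 2001)
vals = fprime(grid)
roots = [brentq(fprime, lo, hi)
         for lo, hi, vlo, vhi in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if vlo * vhi < 0]
print("interior critical points:", roots)
```

Any roots found this way still have to be compared against $f(0)$ and $f(1)$ to pick the actual minimizer.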

Note: In my specific problem, I have $a = -\frac{N}{16}$ and $b = -\frac{N}{4} (\frac{1}{2} - p)^2$, where $N$ is a constant positive integer, and $p$ is a constant with $0 \leq p < \frac{1}{2}$. Ideally, we will be able to express a minimizing $x$ in terms of $N$ and $p$.
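
If a closed form stays out of reach, a bounded numerical minimization is a fallback. The sketch below (again Python/SciPy; $N = 40$ and $p = 0.25$ are hypothetical choices, used only to have concrete numbers) builds $a$ and $b$ from $N$ and $p$ as above, searches the interior of $[0, 1]$, and compares the result against the endpoints.

```python
import numpy as np
from scipy.optimize import minimize_scalar

N, p = 40, 0.25                     # hypothetical values: any positive integer N, 0 <= p < 1/2
a = -N / 16
b = -(N / 4) * (0.5 - p) ** 2

def f(x):
    return np.exp(a * x**2) + np.exp(b * (1 - x))

# Bounded scalar minimization on [0, 1], then compare against the endpoints.
res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
candidates = {0.0: f(0.0), 1.0: f(1.0), res.x: res.fun}
x_min = min(candidates, key=candidates.get)
print("interior search :", res.x, res.fun)
print("minimizer on [0,1]:", x_min, candidates[x_min])
```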

  • 0
    What am I supposed to do with the bounty if no one has yet answered the question? `:)`2012-04-08

2 Answers

6

If $b \neq 0$, if $2ax + b \geq 0$ for all $0 \leq x \leq 1$ (so that $ax^2 + bx - b$ is increasing on $0 \leq x \leq 1$), and if $\frac{2a}{b} e^a \leq 1$, then

$ \frac{2ax}{b}e^{ax^2 + bx - b} < 1 $

for $0 \leq x < 1$. Dividing $f'(x) = 0$ through by $be^{b(1-x)}$ shows that $f'(x) = 0$ is equivalent to the left-hand side equalling $1$, so this means that $f'(x) \neq 0$ there. Thus you may conclude that your minimum occurs at one of the endpoints.
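
As a quick sanity check of this criterion, the snippet below (illustrative values $a = 0.3$, $b = 1$, which satisfy the hypotheses above) verifies numerically that the left-hand side stays below $1$ on $[0,1)$ and that $f'$ keeps a constant sign there.

```python
import numpy as np

a, b = 0.3, 1.0  # illustrative values satisfying the hypotheses above

x = np.linspace(0.0, 1.0, 100001)[:-1]   # a fine grid over [0, 1)
lhs = (2 * a * x / b) * np.exp(a * x**2 + b * x - b)
fprime = 2 * a * x * np.exp(a * x**2) - b * np.exp(b * (1 - x))

assert np.all(2 * a * x + b >= 0)        # the exponent ax^2 + bx - b is increasing
assert (2 * a / b) * np.exp(a) <= 1      # the endpoint bound holds
print("max of LHS on [0,1):", lhs.max())                              # stays below 1
print("f' has constant sign:", np.all(fprime < 0) or np.all(fprime > 0))
```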

What restrictions are there on $a$ and $b$? Perhaps similar arguments exist for other specific cases.

Edit: As I will probably not have enough time to work on this in the next 7 days, I will sketch here what I was thinking about, in the hope that it will benefit others. My goal was to put a lower bound on the minimum of the function $f(x) = e^{ax^2} + e^{b(1-x)}$ using the constraints on $a$ and $b$ in the note at the end of the question.

For $y > 0$, replacing

$ e^{ax^2} \hspace{0.5cm} \text{with} \hspace{0.5cm} \frac{1}{y+e^{-ax^2}} $

or

$ e^{b(1-x)} \hspace{0.5cm} \text{with} \hspace{0.5cm} \frac{1}{y+e^{-b(1-x)}} $

in $f(x)$ will yield a function $f_y(x)$ which is less than $f(x)$. Depending on which choice is made, $f_y'(x)$ will be either nonnegative or nonpositive for $y$ large enough.

To determine how large it needs to be, we start by solving $f_y'(x) = 0$ for $y$, which I believe will always reduce to a quadratic equation. The two roots of $f_y'(x) = 0$ are functions of $x$ which I think can always be bounded above by a sum of two exponentials. This sum is bounded (since $x$ is bounded), so we would like to find an upper bound (to eliminate the $x$ dependence). Here I would have tried to use the inequality $e^x \leq 1/(1-x)$, which holds for $x<1$, to bound this sum of exponentials above by a sum of rational functions. Thus we end up wanting to solve something like

$ y > \frac{1}{1-p(x)} + \frac{1}{1-q(x)} $

where $p(x)$ and $q(x)$ are quartic (or less) and $0 \leq x \leq 1$. An upper bound for the right-hand side is theoretically possible to write down. Taking $y$ larger than this will make $f_y(x)$ monotonic, and hence the minimum of $f(x)$ will be at least the smaller of the two endpoint values of $f_y(x)$, which can (theoretically) be calculated.
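
The snippet below is a rough numerical illustration of this sketch, not a proof: it uses the note's constants with hypothetical $N = 40$, $p = 0.25$ and a hand-picked $y = 20$, checks that $f_y \leq f$ and that $f_y$ is monotone for that $y$, and reads off the resulting lower bound $\min f \geq f_y(0)$.

```python
import numpy as np

N, p = 40, 0.25                      # hypothetical values, just to have numbers
a = -N / 16                          # a = -2.5
b = -(N / 4) * (0.5 - p) ** 2        # b = -0.625
y = 20.0                             # hand-picked; large enough to make f_y monotone here

def f(x):
    return np.exp(a * x**2) + np.exp(b * (1 - x))

def fy(x):
    # replace e^{a x^2} by 1/(y + e^{-a x^2}); for y > 0 this lies below f
    return 1.0 / (y + np.exp(-a * x**2)) + np.exp(b * (1 - x))

grid = np.linspace(0.0, 1.0, 10001)
assert np.all(fy(grid) <= f(grid))       # f_y really is a lower bound on f
assert np.all(np.diff(fy(grid)) >= 0)    # f_y is nondecreasing for this y
print("lower bound on min f:", fy(0.0))  # so min f >= f_y(0)
print("numerical min of f  :", f(grid).min())
```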

  • 2
    @jamaicanworm There is definitely some tricky stuff happening there. I am able to bound $f(x)$ from above with those constants but not explicitly maximize or minimize it so far.2012-04-01
3

Always plug in the boundary values. Optima occur among critical points, and the boundary values are, by definition, critical. Here $f(0) = 1 + e^b$, while $f(1) = 1 + e^a$. To find your other critical points, try dividing one side of $2axe^{ax^2} = be^{b(1-x)}$ by the other.

  • 3
    I would try completing the square in the exponent, so you want to solve $(py+q)\exp(y^2) = 1$. Still a mess, but a simpler one. Maybe there is a generalization of the W function with $x \exp(x^2) = 1$.2012-04-01
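
For completeness, here is a sketch of the reduction the comment alludes to, assuming $a > 0$ and $b \neq 0$ (for $a < 0$ the same substitution produces $e^{-y^2}$ instead). Dividing $f'(x) = 0$ through by $be^{b(1-x)}$ and completing the square in the exponent gives

$ \frac{2ax}{b}e^{ax^2+bx-b} = 1, \qquad ax^2+bx-b = a\left(x+\frac{b}{2a}\right)^2 - \frac{b^2}{4a} - b, $

so the substitution $y = \sqrt{a}\left(x + \frac{b}{2a}\right)$ turns the condition into

$ (py+q)e^{y^2} = 1, \qquad p = \frac{2\sqrt{a}}{b}\,e^{-\frac{b^2}{4a}-b}, \qquad q = -e^{-\frac{b^2}{4a}-b}. $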