4

How could we prove that if the sum of the squares of two numbers is a constant, then the sum of the numbers would have its maximum value when the numbers are equal?

This result also holds for more than two numbers. I tested it by trying various values in a brute-force-ish way; however, I am interested in a formal proof of the same.

This is probably a mild extension of this problem. I encountered this result while searching for an easy solution to it.
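The brute-force check I mentioned was along these lines (a sketch; the function name is mine, and it grid-searches $x$ with $y$ determined by the constraint):

```python
import math

def max_sum_on_circle(c, steps=100_000):
    """Grid-search the maximum of x + y subject to x^2 + y^2 = c, x, y >= 0.

    Parametrize x over [0, sqrt(c)] and set y = sqrt(c - x^2).
    Returns (best_x, best_y, best_sum).
    """
    best = (0.0, math.sqrt(c), math.sqrt(c))
    for i in range(steps + 1):
        x = math.sqrt(c) * i / steps
        y = math.sqrt(max(c - x * x, 0.0))  # clamp tiny negatives from rounding
        if x + y > best[2]:
            best = (x, y, x + y)
    return best

x, y, s = max_sum_on_circle(2.0)
# The maximizer comes out at approximately x = y = 1, with sum close to 2.
```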

8 Answers

12

The Cauchy-Schwarz inequality gives $\sum_{i=1}^nx_i \leq \sqrt{n\sum_{i=1}^n x_i^2}.$ The upper bound is attained exactly when all the $x_i$ are equal and nonnegative.

  • Duly noted :-) (2011-08-23)
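A quick numeric sanity check of this inequality (a sketch; the helper name is mine):

```python
import math
import random

random.seed(0)

def cauchy_schwarz_bound(xs):
    """Upper bound sqrt(n * sum of squares) from Cauchy-Schwarz."""
    return math.sqrt(len(xs) * sum(x * x for x in xs))

# The bound holds for arbitrary reals...
for _ in range(1000):
    xs = [random.uniform(-10, 10) for _ in range(5)]
    assert sum(xs) <= cauchy_schwarz_bound(xs) + 1e-9

# ...and is attained when all entries are equal and nonnegative.
xs = [3.0] * 5
assert math.isclose(sum(xs), cauchy_schwarz_bound(xs))
```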
11

This is a special case of the "Purkiss Principle". See the following excellent exposition by Wm. Waterhouse, *Do Symmetric Problems Have Symmetric Solutions?* I recall thinking that this was one of the most beautiful Monthly articles I ever read as an undergraduate. Apparently others felt similarly, since it won a prestigious Lester R. Ford award for expository excellence.

  • There's a nice summary of Waterhouse's article as the accepted answer to my question "[When does symmetry in an optimization problem imply all variables are equal at optimality?](http://mathoverflow.net/questions/58721/when-does-symmetry-in-an-optimization-problem-imply-that-all-variables-are-equal)" (2011-08-26)
5

Here's the pedestrian, Calculus I method:

Let $x$ and $y$ be the two numbers. The condition "the sum of the squares is a constant" means that there is a fixed number $c$ such that $x^2+y^2=c$. That means that if you know one of the two numbers, say, $x$, then you can figure out the absolute value of the other one: $y^2 = c-x^2$, so $|y|=\sqrt{c-x^2}$.

Now, since you want to find the maximum of the sum, you can restrict yourself to nonnegative values of $x$ and $y$ (the condition on the sum of squares doesn't restrict the sign). So we may assume $x\geq 0$, $y\geq 0$, and $y=\sqrt{c-x^2}$. And now you want to find the maximum of $x+y = x+\sqrt{c-x^2}$.

Thus, this reduces to finding the maximum value of $S(x) = x+\sqrt{c-x^2}$ on the interval $[0,\sqrt{c}]$.

By the Extreme Value Theorem, the maximum will be achieved either at an endpoint or at a critical point of $S(x)$. At $0$ we get $S(0) = \sqrt{c}$; at $\sqrt{c}$ we get $S(\sqrt{c}) = \sqrt{c}$. As for critical points, $S'(x) = 1 - \frac{x}{\sqrt{c-x^2}}$. The critical points are $x=\sqrt{c}$ (where $S'(x)$ is not defined), and the point where $x=\sqrt{c-x^2}$, or $2x^2=c$; hence $x^2=\frac{c}{2}$ (which means $y^2=\frac{c}{2}$ as well). At $x=\sqrt{\frac{c}{2}}$, $S(x) = 2\sqrt{\frac{c}{2}} = \sqrt{2c}$. This is clearly the maximum.

For the problem with $k$ variables, $k\gt 2$, the "pedestrian method" involves multivariable calculus and Lagrange multipliers.
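The endpoint-versus-critical-point comparison above can be confirmed numerically (a sketch; `S` mirrors the function defined in the answer):

```python
import math

def S(x, c):
    """S(x) = x + sqrt(c - x^2) on [0, sqrt(c)]."""
    return x + math.sqrt(max(c - x * x, 0.0))  # clamp rounding error at x = sqrt(c)

c = 8.0
critical = math.sqrt(c / 2)           # interior critical point x = sqrt(c/2)
endpoints = [0.0, math.sqrt(c)]

# S at the critical point equals sqrt(2c) and beats both endpoint values sqrt(c).
assert math.isclose(S(critical, c), math.sqrt(2 * c))
assert all(S(critical, c) > S(e, c) for e in endpoints)
```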

3

My answer in your other question still applies... and in any number of variables.

You can use the method of Lagrange multipliers to maximize/minimize $f(a_1,\ldots,a_n)=a_1+\cdots+a_n$ given $g(a_1,\ldots,a_n)=a_1^2+\cdots+a_n^2=k$, where $k$ is a constant. We need the gradient of $f$ to be a multiple of the gradient of $g$, i.e., $1=\lambda 2a_1,\quad 1=\lambda 2a_2,\ \ldots \quad 1=\lambda 2a_n,$ where $\lambda$ is some real number. Hence: $\lambda = \frac{1}{2a_1}=\frac{1}{2a_2}=\cdots=\frac{1}{2a_n}$ and we must have $a_1=\cdots=a_n=a$. This yields $g(a,\ldots,a)=na^2=k$, so that $a_1=\cdots=a_n=a=\sqrt{k/n}$ or $a_1=\cdots=a_n=-\sqrt{k/n}$. We clearly have a maximum at $a_1=\cdots=a_n=\sqrt{k/n}$ and the maximum value of $f$ is $n\sqrt{k/n}$.
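A numeric cross-check of the Lagrange-multiplier conclusion (a sketch; the projection helper is mine): random points on the sphere $\sum a_i^2 = k$ never beat the symmetric point.

```python
import math
import random

random.seed(1)

def project_to_sphere(v, k):
    """Scale v so that its sum of squares equals k (assumes v != 0)."""
    r = math.sqrt(k / sum(x * x for x in v))
    return [r * x for x in v]

n, k = 4, 9.0
best = n * math.sqrt(k / n)   # value of f at a_1 = ... = a_n = sqrt(k/n)
for _ in range(2000):
    v = project_to_sphere([random.gauss(0, 1) for _ in range(n)], k)
    assert sum(v) <= best + 1e-9   # no sampled point exceeds the symmetric value
```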

3

You can see this quite readily in the theorem of Thales: An angle inscribed in a semicircle is a right angle.

*(figure: a right angle inscribed in a semicircle)*

The constant in your problem is the square of the hypotenuse, and the sum of the other two sides is maximized when the altitude of the inscribed triangle is maximized.
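Parametrizing the inscribed right triangle by the angle at the hypotenuse makes this easy to check numerically (a sketch; the legs are $h\cos\theta$ and $h\sin\theta$):

```python
import math

def leg_sum(theta, h):
    """Sum of the legs of a right triangle with hypotenuse h,
    inscribed in a semicircle, parametrized by the angle theta."""
    return h * (math.cos(theta) + math.sin(theta))

h = 5.0
thetas = [i * (math.pi / 2) / 1000 for i in range(1001)]
best = max(thetas, key=lambda t: leg_sum(t, h))

# The sum is maximized near theta = pi/4, i.e. when the legs are equal,
# and the maximum value is h * sqrt(2).
assert abs(best - math.pi / 4) < 0.01
assert math.isclose(leg_sum(math.pi / 4, h), h * math.sqrt(2))
```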

2

As was already mentioned in my answer to this other post of yours (but maybe you skipped it), there is an algebraic proof. Let us start with two numbers $a$ and $b$ and use the relation $ (a+b)^2=2(a^2+b^2)-(a-b)^2. $ This implies that the square of the sum is at most twice the sum of the squares and also that, if one fixes the sum of the squares, the sum $a+b$ achieves this maximum value if and only if $(a-b)^2$ is zero, that is, if and only if $a=b$.

The argument can be adapted to $n$ numbers, using the relation $ (a_1+\cdots+a_n)^2=n(a_1^2+\cdots+a_n^2)-\sum\limits_{i<j}(a_i-a_j)^2. $ The conclusion is the same: the square of the sum is at most $n$ times the sum of the squares and, if one fixes the sum of the squares, the sum $a_1+\cdots+a_n$ achieves this maximum value if and only if the sum of the $(a_i-a_j)^2$ is zero, that is, if and only if $a_i=a_j$ for every $i\ne j$.
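The $n$-variable identity, with the off-diagonal sum taken over unordered pairs $i<j$ (over ordered pairs $i\ne j$ it picks up a factor $\tfrac12$), can be checked numerically (a sketch; the helper name is mine):

```python
import random

random.seed(2)

def identity_gap(a):
    """Difference between the two sides of
    (a_1 + ... + a_n)^2 = n * (a_1^2 + ... + a_n^2) - sum_{i<j} (a_i - a_j)^2."""
    n = len(a)
    lhs = sum(a) ** 2
    rhs = n * sum(x * x for x in a) - sum(
        (a[i] - a[j]) ** 2 for i in range(n) for j in range(i + 1, n)
    )
    return lhs - rhs

# The gap should vanish (up to rounding) for arbitrary real inputs.
for _ in range(100):
    a = [random.uniform(-5, 5) for _ in range(6)]
    assert abs(identity_gap(a)) < 1e-8
```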

1

Start from $x^2 \geq 0$ and set $x = a - b$:
$ \begin{align} & \Rightarrow (a-b)^2 \geq 0 \\ & \Leftrightarrow a^2 + b^2 \geq 2ab \\ & \Leftrightarrow 2(a^2 + b^2) \geq (a+b)^2. \end{align} $

With $a^2 + b^2 = c$ (constant), this gives $2c \geq (a+b)^2$, hence $a+b \leq \sqrt{2c}$.

So $a+b$ attains its maximum when $(a-b)^2 = 0$, i.e. when $a = b$.

0

We have $x+y=c$, where $x$ and $y$ each denote a square and $c$ a constant. Now take $\max(x, y)$.

Case 1: If $\max(x, y)=x$, then rewrite $y$ as $x\cdot(y/x)$ (just one time). The sum of the squares $x+y$ becomes $x+x\cdot(y/x) = x\cdot(1+y/x)$. Since $x\geq y$ by hypothesis, and $x$ and $y$ are both positive, $y/x$ belongs to $(0, 1]$, so $1+y/x$ belongs to $(1, 2]$. For $x\cdot z$ with $z$ in $(1, 2]$ and $x$ positive, $x\cdot z$ has its greatest value when $z=2$, so $x\cdot(1+y/x)$ has its greatest value when $1+y/x=2$, which implies $x=y$.

The other case works similarly, with $x$ rewritten as $y\cdot(x/y)$.