Maximise $L^q$ norm of a vector, for fixed $L^1$ and fixed $L^p$ norms

Consider a vector $x \in \mathbb R_+^n$ and $p, q \in \mathbb R$ such that $1 < p < q$. We fix $\sum \limits_{i=1}^{n}|x_i| = 1$ and $\left(\sum \limits_{i=1}^{n}|x_i|^p \right)^\frac{1}{p} = \theta < 1$. In other words, $x$ is a probability mass function with a fixed $p$-norm. The question is to find the $x$ which maximises the $q$-norm, i.e. maximises $\left( \sum \limits_{i=1}^{n}|x_i|^q \right)^\frac{1}{q}$.

Some observations: if $q = \infty$, the problem reduces to finding the $x$ with the largest value of $\sup \limits_i x_i$, leading to a distribution of the form $\left( \alpha, \frac{1-\alpha}{n-1}, \frac{1-\alpha}{n-1}, \ldots, \frac{1-\alpha}{n-1} \right)$. Using perturbation arguments, it can be shown that a maximising $x$ takes at most $2$ distinct values in its co-ordinates (or one of the co-ordinates is $0$, which just reduces the dimension of the problem). My guess is that the $x$ which works for $q = \infty$ should also work for all $q > p$.
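For reference, here is a sketch of the Lagrange-multiplier computation behind the "at most $2$ distinct values" claim (my reconstruction of the perturbation argument, stated for finite $q$): at a maximiser with all $x_i > 0$ there exist multipliers $\lambda, \mu$ with

$$ q\,x_i^{q-1} \;=\; \lambda + \mu\,p\,x_i^{p-1} \qquad \text{for every } i. $$

The function $g(t) = q\,t^{q-1} - \mu\,p\,t^{p-1}$ has $g'(t) = t^{p-2}\bigl(q(q-1)\,t^{q-p} - \mu\,p(p-1)\bigr)$, which vanishes for at most one $t > 0$ (since $q > p > 1$), so the equation $g(t) = \lambda$ has at most two positive solutions; every nonzero co-ordinate of the maximiser must be one of those solutions.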
inequality
optimization
- With only two distinct nonzero values, you're reduced to a finite number of cases: if there are to be $k$ co-ordinates equal to some value $x$ and $m$ equal to some value $y$ (both wlog positive), you want to have $k x + m y = 1$ and $k x^p + m y^p = \theta^p$, and there are at most two solutions of that system. – 2011-12-16
- I agree. The picture seems to be that for each $q$, one of these finitely many cases is optimal, until $q$ is large enough that the optimiser for $q = \infty$ takes over. Could it be that the optimiser for $q = \infty$ is optimal throughout? – 2011-12-16
- I'm still stuck at the same point, but at least MATLAB calculations haven't yielded any counter-examples (i.e. the optimiser at $q = \infty$ is optimal throughout). It seems cute enough to permit an analytical solution. – 2011-12-17
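(The MATLAB script mentioned above is not included in the thread; the following is a minimal Python re-creation of that kind of check, assuming `numpy` and `scipy` are available. The helper names `two_value_candidate` and `max_q_norm`, and the sample parameters, are illustrative choices of mine, not from the thread.)

```python
import numpy as np
from scipy.optimize import minimize, brentq

def two_value_candidate(n, p, theta):
    """Conjectured maximiser (alpha, beta, ..., beta): choose alpha so that
    the co-ordinates sum to 1 and the p-norm equals theta."""
    def residual(alpha):
        beta = (1.0 - alpha) / (n - 1)
        return alpha**p + (n - 1) * beta**p - theta**p
    alpha = brentq(residual, 1.0 / n + 1e-12, 1.0 - 1e-12)
    beta = (1.0 - alpha) / (n - 1)
    return np.array([alpha] + [beta] * (n - 1))

def max_q_norm(n, p, q, theta, trials=20):
    """Numerically maximise the q-norm over the constraint set from many
    random starting points (SLSQP with the two equality constraints)."""
    cons = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'eq', 'fun': lambda x: np.sum(np.abs(x)**p) - theta**p}]
    best = -np.inf
    for _ in range(trials):
        x0 = np.random.dirichlet(np.ones(n))      # random point on the simplex
        res = minimize(lambda x: -np.sum(np.abs(x)**q), x0, method='SLSQP',
                       bounds=[(0.0, 1.0)] * n, constraints=cons)
        if res.success:
            best = max(best, np.sum(np.abs(res.x)**q)**(1.0 / q))
    return best

n, p, q, theta = 5, 1.5, 3.0, 0.7      # theta must lie in (n**(1/p - 1), 1)
cand = two_value_candidate(n, p, theta)
print("conjectured q-norm:", np.sum(cand**q)**(1.0 / q))
print("numerical maximum :", max_q_norm(n, p, q, theta))
```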
1 Answer
After searching around and asking on MathOverflow, it turns out there is a theorem tailored to exactly this type of norm-constrained problem. It's called the Equal Variable Theorem and is proved here: http://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf
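As a complement (not part of the original answer), the finite-case reduction discussed in the comments can be carried out directly for concrete parameters: for each split with $k$ co-ordinates at a larger value and $n - k$ at a smaller one, solve the two constraint equations and compare $q$-norms. A minimal Python sketch; the helper name `two_value_q_norms` and the parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def two_value_q_norms(n, p, q, theta):
    """For each k, look for x > y >= 0 with k*x + (n-k)*y = 1 and
    k*x**p + (n-k)*y**p = theta**p, and record the resulting q-norm."""
    results = {}
    for k in range(1, n):
        m = n - k
        def h(x):                      # p-norm constraint as a function of x
            y = (1.0 - k * x) / m
            return k * x**p + m * y**p - theta**p
        lo, hi = 1.0 / n + 1e-12, 1.0 / k - 1e-12
        if h(lo) * h(hi) > 0:          # no feasible two-value split for this k
            continue
        x = brentq(h, lo, hi)
        y = (1.0 - k * x) / m
        results[k] = (k * x**q + m * y**q) ** (1.0 / q)
    return results

# The question's conjecture corresponds to k = 1 (one large co-ordinate)
# giving the largest q-norm among the feasible splits.
print(two_value_q_norms(n=5, p=1.5, q=3.0, theta=0.7))
```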