The $3$-norm on $\mathbb R^n$ is defined as
$\|x\|_3 := \sqrt[3]{|x_1|^3+\dots+|x_n|^3}$
The natural matrix norm it induces on $\mathbb R^{n \times n}$ is
$\|A\|_3 = \max_{\|x\|_3=1} \|Ax\|_3$
For $y \in \mathbb R$, let
$A_y = \left(\begin{matrix} 1 & y \\ 0 & 1 \end{matrix}\right)$
Give a table showing $\|A_y\|_3$ for $y = 1, \dots, 9$.
Matrix $3$-norm
-
It's an optimization problem with constraints... Hint: You can replace the constraint $\|x\|_3=1$ by $\|x\|_3\leq 1$, which makes it a convex constraint... – 2012-04-28
2 Answers
Well, I've gone through it. I would say the reason this is given to you in $\mathbb R^2$ is so that you can see some actual pictures. First, since I like $x,y$ for the coordinates, let us call the matrix $A_w$ for $w = 1, 2, \ldots, 9.$

The "superellipse" given by $ |x|^3 + |y|^3 = 1, $ appearance discussed HERE, can be parametrized in the first quadrant by $ x = (\cos t)^{2/3}, \; \; y = (\sin t)^{2/3}, \; \; 0 \leq t \leq \pi/2, $ and in the second quadrant by $ x = - |\cos t|^{2/3}, \; \; y = (\sin t)^{2/3}, \; \; \pi / 2 \leq t \leq \pi. $ If you have (or write) the function traditionally called "signum" (which is $1$ if the number is positive, $-1$ if the number is negative, and $0$ if the number is itself $0$), you can write the parametrization for the entire superellipse.

Anyway, the matrix $A_w$ takes such a column vector with entries $x,y$ to $(x+wy,\,y).$ You can simply have the computer report the value of the 3-norm at these points $(x+wy,y)$ for a fairly fine subdivision of $t.$ Once you have the values of $t$ where the 3-norm is largest, restrict to that region and subdivide the $t$ values 10 times more finely. The "sheared" superellipse is $ x = (\cos t)^{2/3} + w (\sin t)^{2/3}, \; \; y = (\sin t)^{2/3}, \; \; 0 \leq t \leq \pi/2. $ The linear transformation you were given is called a "shear," following physics tradition.
Meanwhile, it is of course true that this can be done with Lagrange multipliers, but the calculation is not elegant and I do not think that is what the instructor wants. Program this, output some pictures, keep subdividing $t$ to get better accuracy, learn something gritty and hands-on.
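The scan-and-subdivide scheme above can be sketched in a few lines of plain Python; `matrix_3_norm` and its sampling parameters are my own names and choices, not part of the original answer:

```python
import math

def norm3(v):
    """The vector 3-norm of a 2-vector."""
    return (abs(v[0]) ** 3 + abs(v[1]) ** 3) ** (1.0 / 3.0)

def matrix_3_norm(w, samples=2000, refinements=6):
    """Estimate ||A_w||_3 by scanning the unit superellipse
    |x|^3 + |y|^3 = 1 (parametrized by t) and repeatedly zooming
    in on the region where the image's 3-norm is largest."""
    def norm_at(t):
        # signed parametrization of the superellipse via copysign ("signum")
        x = math.copysign(abs(math.cos(t)) ** (2.0 / 3.0), math.cos(t))
        y = math.copysign(abs(math.sin(t)) ** (2.0 / 3.0), math.sin(t))
        # 3-norm of the sheared point (x + w y, y)
        return norm3((x + w * y, y))

    # Since ||A(-v)|| = ||A v||, scanning half the curve suffices.
    lo, hi = 0.0, math.pi
    for _ in range(refinements):
        ts = [lo + (hi - lo) * i / samples for i in range(samples + 1)]
        best = max(range(samples + 1), key=lambda i: norm_at(ts[i]))
        # restrict to the neighborhood of the current maximizer
        lo = ts[max(best - 1, 0)]
        hi = ts[min(best + 1, samples)]
    return norm_at((lo + hi) / 2.0)

for w in range(1, 10):
    print(f"||A_{w}||_3 ~ {matrix_3_norm(w):.6f}")
```

As a sanity check, plugging in the unit vector $(0,1)$ gives the lower bound $\|A_w\|_3 \geq (w^3+1)^{1/3}$, and the values printed by the loop should sit between that and the crude upper bound $1+w$.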
We have $A_y x = \begin{pmatrix} x_1 + y x_2 \\ x_2 \end{pmatrix} .$ You want to maximize $|x_1 + yx_2|^3 + |x_2|^3$ subject to the constraint $|x_1|^3 + |x_2|^3 = 1.$ In other words (dropping the absolute values on the quadrant where the maximum is attained), you want to find the saddle point of the Lagrangian $L(x,\lambda) = (x_1 + yx_2)^3 + x_2^3 + \lambda (x_1^3 + x_2^3 - 1).$ This is very simple to do in any CAS or even by hand.
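For concreteness, here is the stationary-point system this Lagrangian produces, under the assumption that the maximizer satisfies $x_1, x_2 > 0$ (so the absolute values can be dropped):

$$\frac{\partial L}{\partial x_1} = 3(x_1 + yx_2)^2 + 3\lambda x_1^2 = 0, \qquad \frac{\partial L}{\partial x_2} = 3y(x_1 + yx_2)^2 + 3x_2^2 + 3\lambda x_2^2 = 0, \qquad \frac{\partial L}{\partial \lambda} = x_1^3 + x_2^3 - 1 = 0.$$

The first equation gives $\lambda = -\left(\frac{x_1 + yx_2}{x_1}\right)^2$; substituting into the second and using the constraint leaves a system in $x_1, x_2$ alone, which is the part best left to a CAS.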
-
Didn't you forget to take the absolute values before cubing? – 2018-05-11