As I've mentioned in this answer to an MO question, Nick Higham has described a numerical method, which he attributes to Boyd, for estimating the $p$-norm of a matrix; it is essentially an approximate maximization scheme similar in flavor to the usual power method for computing the dominant eigenvalue. Higham's book gives a few details and his article a few more; see also this MATLAB implementation of the algorithm.
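Concretely, the quantity being estimated is the induced norm

```latex
\|A\|_p \;=\; \max_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p},
\qquad
\|x\|_p = \Bigl(\sum_i |x_i|^p\Bigr)^{1/p} .
```

The method hill-climbs over unit $p$-norm vectors $x$ toward a (possibly only local) maximizer of $\|Ax\|_p$, so the value it returns is always a lower bound on the true norm.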
Code? Why sure! Here's a Mathematica translation of the MATLAB routine pnorm():
dualVector[vec_?VectorQ, p_] :=
  Module[{q = If[p == 1, Infinity, 1/(1 - 1/p)], n = Length[vec]},
    If[Norm[vec, Infinity] == 0, Return[vec]];
    Switch[p,
      1, 2 UnitStep[vec] - 1,
      Infinity, (Sign[Extract[vec, #]] UnitVector[n, #]) &[
        First[Flatten[Position[Abs[vec], Max[Abs[vec]]]]]],
      _, Normalize[(2 UnitStep[vec] - 1)
           Abs[Normalize[vec, Norm[#, Infinity] &]]^(p - 1),
           Norm[#, q] &]]] /; p >= 1

Options[pnorm] = {"Samples" -> 9, Tolerance -> Automatic};

pnorm[mat_?MatrixQ, p_, opts___] :=
  Module[{q = If[p == 1, Infinity, 1/(1 - 1/p)], m, n, A, tol, sm,
          x, y, c, s, W, fo, f, c1, s1, est, eo, z, k, th},
    {m, n} = Dimensions[mat];
    A = If[Precision[mat] === Infinity, N[mat], mat];
    {sm, tol} = {"Samples", Tolerance} /. {opts} /. Options[pnorm];
    If[tol === Automatic, tol = 10^(-Precision[A]/3)];
    y = Table[0, {m}]; x = Table[0, {n}];
    Do[
      If[k == 1, {c, s} = {1, 0},
        W = Transpose[{A[[All, k]], y}]; fo = 0;
        Do[
          {c1, s1} = Normalize[Through[{Cos, Sin}[th]], Norm[#, p] &];
          f = Norm[W.{c1, s1}, p];
          If[f > fo, fo = f; {c, s} = {c1, s1}];,
          {th, 0, Pi, Pi/(sm - 1)}]];
      x[[k]] = c; y = c A[[All, k]] + s y;
      If[k > 1, x = Join[s Take[x, k - 1], Drop[x, k - 1]]];,
      {k, n}];
    est = Norm[y, p];
    For[k = 1, True, k++,
      y = A.x; eo = est; est = Norm[y, p];
      z = Transpose[A].dualVector[y, p];
      If[(Norm[z, q] < z.x || Abs[est - eo] <= tol est) && k > 1, Break[]];
      x = dualVector[z, q];];
    est] /; p >= 1
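For readers who'd like the gist without the Mathematica boilerplate, here is a stripped-down NumPy sketch of the same dual-vector iteration. It's a simplification, not a faithful port: it assumes $1 < p < \infty$ (no special-casing of $p = 1, \infty$), skips the careful column-by-column starting-vector construction done in the `Do` loop above, and the names `dual_vector`/`pnorm_estimate` are mine:

```python
import numpy as np

def dual_vector(y, p):
    """Return z with ||z||_q = 1 and z @ y = ||y||_p, where 1/p + 1/q = 1.
    Assumes 1 < p < inf (the Mathematica code also handles p = 1, Infinity)."""
    norm = np.linalg.norm(y, p)
    if norm == 0:
        return y
    return np.sign(y) * np.abs(y / norm) ** (p - 1)

def pnorm_estimate(A, p, tol=1e-10, max_iter=100):
    """Lower-bound estimate of ||A||_p by Boyd/Higham-style power iteration."""
    q = p / (p - 1)                      # Hoelder conjugate of p
    n = A.shape[1]
    x = np.ones(n) / np.linalg.norm(np.ones(n), p)   # unit p-norm start
    est_old = 0.0
    for _ in range(max_iter):
        y = A @ x
        est = np.linalg.norm(y, p)       # current estimate ||A x||_p
        z = A.T @ dual_vector(y, p)      # gradient-like dual direction
        # Stop when x is (numerically) a stationary point, or est stagnates
        if np.linalg.norm(z, q) <= z @ x or abs(est - est_old) <= tol * est:
            break
        x = dual_vector(z, q)            # new unit p-norm iterate
        est_old = est
    return est
```

For $p = 2$ this reduces to the classical power method on $A^T A$, so it converges to the spectral norm; for other $p$ it can in principle stall at a local maximum, which is why the full routine above bothers with its elaborate starting vector.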
Now, let's use the OP's matrix as an example:
ma = N[{{2, 1, 4}, {3, 0, -1}, {1, 1, 2}}];
and check how good the estimator is in known cases:
(pnorm[ma, #] - Norm[ma, #]) & /@ {1, 2, Infinity} // InputForm
   {0., -2.2045227554556845*^-6, 0.}
(i.e., the estimate for the 2-norm is good to ~5 digits; adjusting either "Samples", Tolerance, or both would give better results).
Let's compare with Robert's example:
pnorm[ma, 4, Tolerance -> 10^-9] // InputForm
   5.695759123950937
Pretty close!
Finally, here is a plot of the $p$-norm of the OP's matrix with varying $p$:

[plot of pnorm[ma, p] as $p$ varies]