
Let $A = [A_{ij}]$ be an $n\times n$ square matrix with complex entries, and let $\sigma_k$, $k=1,\ldots, n$ be its singular values. Suppose that the squared Frobenius norm satisfies $$ \mathrm{Tr}(A^\dagger A) = \sum_{i,j=1}^{n}|A_{ij}|^2 = \sum_{k=1}^n\sigma^2_k=1 \>, $$ where $A^\dagger$ is the conjugate transpose of $A$.

Is the vector given by the absolute values squared of the entries, $(|A_{ij}|^2)_{ij}$, majorized by the vector $(\sigma_k^2)_k$? (As usual when discussing majorization, with proper padding of 0's, so that both vectors have $n^2$ elements.)

Consider vectors $(a_i)$ and $(b_i)$ of length $m$ such that $\sum_{i=1}^m a_i = \sum_{i=1}^m b_i$. Then, we say that $a$ majorizes $b$ if $\sum_{i=1}^k a_{(i)} \geq \sum_{i=1}^k b_{(i)}$ for each $1 \leq k \leq m$, where $a_{(i)}$ denotes the $i$th largest element of $(a_i)$ and likewise for $b_{(i)}$ with respect to $(b_i)$.

For a more detailed definition of majorization, please see http://en.wikipedia.org/wiki/Majorization .
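For concreteness, the definition above can be turned into a small numerical check. This is a sketch; the helper name `majorizes` is illustrative, and a small tolerance is used since the partial sums are compared in floating point:

```python
import numpy as np

def majorizes(a, b, tol=1e-12):
    """Return True if vector a majorizes vector b.

    Both vectors are zero-padded to a common length and sorted in
    descending order; a majorizes b if every partial sum of a
    dominates the corresponding partial sum of b (equal totals
    are assumed, as in the definition above).
    """
    m = max(len(a), len(b))
    a = np.sort(np.pad(np.asarray(a, dtype=float), (0, m - len(a))))[::-1]
    b = np.sort(np.pad(np.asarray(b, dtype=float), (0, m - len(b))))[::-1]
    return bool(np.all(np.cumsum(a) >= np.cumsum(b) - tol))

# (1, 0) majorizes (1/2, 1/2), but not conversely.
print(majorizes([1, 0], [0.5, 0.5]))  # True
print(majorizes([0.5, 0.5], [1, 0]))  # False
```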

I looked numerically for a counterexample and found none. If the statement is true, I would suppose it is well known, and in that case I would appreciate a reference as precise as possible.
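The numerical search mentioned above can be sketched as follows (a minimal illustration, not the code actually used: random complex matrices are normalized to unit Frobenius norm, and the partial sums of the sorted squared singular values are compared against those of the sorted squared moduli of the entries):

```python
import numpy as np

rng = np.random.default_rng(0)

def check_conjecture(n, trials=200, tol=1e-10):
    """Randomly test whether (sigma_k^2), zero-padded to length n^2,
    majorizes (|A_ij|^2) for complex n x n matrices with
    Tr(A^dagger A) = 1. Returns False if a counterexample is found."""
    for _ in range(trials):
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        A /= np.linalg.norm(A)  # Frobenius normalization: Tr(A^dagger A) = 1
        s2 = np.linalg.svd(A, compute_uv=False) ** 2       # n squared singular values
        e2 = np.sort(np.abs(A).ravel() ** 2)[::-1]         # n^2 squared entry moduli
        s2 = np.pad(np.sort(s2)[::-1], (0, n * n - n))     # pad with 0's to length n^2
        if not np.all(np.cumsum(s2) >= np.cumsum(e2) - tol):
            return False
    return True

print(check_conjecture(4))  # True: no counterexample found in this sample
```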

Thank you!

  • I presume the $\sigma_k$ are arranged in descending order? (2011-04-15)
  • @joriki: the notion of majorization involves ordering the entries of the vectors in descending order; for more details please see the first lines of the Wikipedia entry. @cardinal: thanks, but yours is not a counterexample. For one thing, the matrix you propose does not satisfy the hypothesis $\mathrm{Tr}(A^\dagger A)=1$. If properly rescaled by multiplying it by $1/\sqrt{2}$, it satisfies the conjecture, as it has only one singular value, equal to 1. (2011-04-15)
  • Sorry to delete my comment. My example was related to a symmetric idempotent matrix. I misunderstood the question, as I suspected, and was deleting the comment to cut down on noise. In the end, it appears I've added to it. (2011-04-15)
  • @cardinal: no problem :) Thanks for your consideration of the question! (2011-04-15)
  • I've proposed an edit to your question to hopefully make the constraint a little more visible. As it was formatted, I actually missed it the first time around. After my edit is peer-reviewed, be sure to check that no errors have been introduced. (2011-04-15)
  • Two possible attacks on the problem come to mind. One might be via some variant of the [Gerschgorin circle theorem](http://en.wikipedia.org/wiki/Gershgorin_circle_theorem), and the second is to look at the proof in L. Mirsky, [A trace inequality of John von Neumann](http://www.springerlink.com/content/gm0jtq8471535450/), 1975, which, if I recall, uses a majorization(-like) argument. (2011-04-15)
  • @cardinal: thanks! Looks good to me. (2011-04-15)
  • @Marco, I believe I may have a proof, at least for *real* $A$. I'm checking it, and if it goes through, I'll post it a bit later. I think it should also hold for complex-valued $A$ with only minor modifications to the proof. (2011-04-15)
  • @cardinal: thanks for the suggestions and for the hope you give me :) I did not have much time to skim through Horn and Johnson today, but I am hoping the answer is there (somewhere :) ) (2011-04-15)
  • In Horn & Johnson, "Topics in Matrix Analysis", problem 21 of chapter 3.3, a related majorization relation is proven (Eq. (3.3.36)), but it involves only the diagonal elements of the matrix rather than all of its entries. What I ask seems much stronger... (2011-04-15)
  • A trivial observation: the conjecture is obviously true for $2\times 2$ matrices. Indeed, as soon as we sum the squares of the two singular values we have already reached the maximum. On the other hand, by definition the square of the largest singular value is at least the largest absolute value squared of any entry of the matrix. (2011-04-15)
  • OK, I think I have the proof :) I will post it as soon as possible. (2011-04-15)

2 Answers