
I have a linear system $Ax=b$, where

  • $A$ is symmetric, positive semidefinite, and entrywise positive ($A$ is a variance-covariance matrix).
  • the vector $b$ satisfies $b_1>0$ and $b_i<0$ for all $i \in \{2, \dots, N\}$.

Prove that the first component of the solution is positive, i.e., $x_1>0$.

Does anybody have any idea?
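In case it helps, here is the numerical setup I am experimenting with — a minimal numpy sketch; the random Gram-matrix construction and all names are just for illustration, not part of the problem statement:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# A Gram matrix of vectors with positive components is symmetric,
# positive semidefinite, and entrywise positive.
V = rng.uniform(0.1, 1.0, size=(N, N))
A = V @ V.T

# Right-hand side with b_1 > 0 and b_i < 0 for i >= 2.
b = -rng.uniform(0.1, 1.0, size=N)
b[0] = abs(b[0])

x = np.linalg.solve(A, b)
print("x_1 =", x[0])
```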

  • Thanks. What about if $A$ is positive definite? Is the first component positive in that case? I can also "normalize" the diagonal elements to equal $1$, since they are variances, and the remaining elements to lie in the interval $[0,1]$, which would then be correlation coefficients. (2012-07-03)

2 Answers


I don't think $x_1$ must be positive.

A counterexample might be the positive definite matrix $A = \begin{pmatrix} 1 & -0.2 \\ -0.2 & 1 \end{pmatrix}$: its inverse has $(A^{-1})_{11}, (A^{-1})_{12} > 0$, so taking $b_2$ sufficiently negative makes $x_1 = (A^{-1})_{11} b_1 + (A^{-1})_{12} b_2 < 0$.

Edit: Sorry, that matrix has a negative off-diagonal entry, so it is not entrywise positive. A counterexample might instead be the normalized covariance matrix

$ A= \left( \begin{array}{cccc} 1 & 0.6292 & 0.6747 & 0.7208 \\ 0.6292 & 1 & 0.3914 & 0.0315 \\ 0.6747 & 0.3914 & 1 & 0.6387 \\ 0.7208 & 0.0315 & 0.6387 & 1 \end{array} \right) $.
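For anyone who wants to check this numerically, here is a small numpy sketch (the specific right-hand side it builds is my own choice, not part of the answer): it inspects the first row of $A^{-1}$ and, if some off-diagonal entry there is positive, constructs a $b$ with $b_1>0$ and $b_i<0$ for which $x_1<0$.

```python
import numpy as np

# The claimed counterexample: a normalized covariance (correlation) matrix.
A = np.array([
    [1.0,    0.6292, 0.6747, 0.7208],
    [0.6292, 1.0,    0.3914, 0.0315],
    [0.6747, 0.3914, 1.0,    0.6387],
    [0.7208, 0.0315, 0.6387, 1.0],
])
print(np.linalg.eigvalsh(A))   # eigenvalues; all positive iff A is positive definite

Ainv = np.linalg.inv(A)
print(Ainv[0])                 # first row of A^{-1}

# If (A^{-1})_{1j} > 0 for some j >= 2, pushing b_j far below zero makes
# x_1 = sum_j (A^{-1})_{1j} b_j negative while b keeps the required signs.
j = 1 + int(np.argmax(Ainv[0, 1:]))
if Ainv[0, j] > 0:
    b = np.array([1.0, -1.0, -1.0, -1.0])
    b[j] = -100.0
    print("x_1 =", np.linalg.solve(A, b)[0])
```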

  • It seems that $A$ is not only restricted to be positive definite, but must also have positive elements. (2012-07-05)

Someone bumped this old question up today. For the sake of having an answer, we will see that the problem statement arises from a classical property of $M$-matrices.

Let $D=\operatorname{diag}(1,-1,\ldots,-1),\ M=DAD+tI,\ y=Dx$ and $q=Db+ty$; then, since $D^2=I$, we have $My=DAx+ty=Db+ty=q$. Note that $y_1$ has the same sign as $x_1$ and that $DAD$ is positive semidefinite. Pick a sufficiently small $t>0$. Then $q$ is positive (the sign pattern of $b$ makes $Db$ entrywise positive) and $M$ is positive definite (hence nonsingular). Moreover, all off-diagonal entries of $M=DAD+tI$ are negative, making $M$ an $M$-matrix. (A numerical sanity check of this construction appears after the proof below.) Now the problem boils down to the following known property of $M$-matrices:

Suppose $M$ is a matrix whose eigenvalues have positive real parts and whose off-diagonal entries are negative. Then $M^{-1}>0$ (entrywise). Consequently, if $My=q>0$, then $y>0$.

Proof. See e.g. Horn and Johnson's Topics in Matrix Analysis. The usual proof is very easy. By the given assumptions on $M$, when $\alpha>0$ is sufficiently large, $P=\alpha I - M$ is (entrywise) positive with spectral radius $\rho(P)<\alpha$, and hence $ M^{-1}=\frac1\alpha\left(I-\frac1\alpha P\right)^{-1} =\frac1\alpha\left(I+\frac1\alpha P+\frac1{\alpha^2} P^2+\ldots\right)>0. $
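To make the two steps concrete, here is a minimal numpy sketch; the $2\times 2$ matrix, the vector $b$, and the values of $t$ and $\alpha$ are my own sample choices, not part of the answer. It verifies the identity $My=q$ for the construction above, and then approximates $M^{-1}$ by the Neumann series from the proof, confirming that it is entrywise positive.

```python
import numpy as np

# --- Step 1: the construction M = DAD + tI (sample 2x2 data of my own) ---
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # symmetric PD, entrywise positive
b = np.array([1.0, -0.3])               # b_1 > 0, b_2 < 0
x = np.linalg.solve(A, b)

t = 1e-3                                # a "sufficiently small" t > 0
D = np.diag([1.0, -1.0])
M = D @ A @ D + t * np.eye(2)
y = D @ x
q = D @ b + t * y

print(np.allclose(M @ y, q))            # the identity My = q holds
print((q > 0).all())                    # q is entrywise positive
print(M[0, 1] < 0 and M[1, 0] < 0)      # off-diagonal entries are negative

# --- Step 2: the Neumann series from the proof ---
alpha = 10.0                            # large enough that P = alpha*I - M >= 0
P = alpha * np.eye(2) - M
assert (P >= 0).all()

# Partial sums of (1/alpha) * sum_k (P/alpha)^k converge to M^{-1}.
S, term = np.zeros_like(M), np.eye(2)
for _ in range(500):
    S += term
    term = term @ (P / alpha)

print(np.allclose(S / alpha, np.linalg.inv(M)))   # series converges to M^{-1}
print((np.linalg.inv(M) > 0).all())               # M^{-1} is entrywise positive
```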