How to show that $$\det A= \det \begin{pmatrix}\cos\frac{\pi}{n}&-\frac{\cos\theta_1}{2}&0&0&\cdots&0&-\frac{\cos\theta_n}{2} \\-\frac{\cos\theta_1}{2}&\cos\frac{\pi}{n}&-\frac{\cos\theta_2}{2}&0&0&\cdots&0\\ 0&-\frac{\cos\theta_2}{2}&\cos\frac{\pi}{n}&-\frac{\cos\theta_3}{2}&0&\cdots&0\\ \vdots&\vdots&-\frac{\cos\theta_3}{2}&\ddots&\ddots&\vdots&\vdots\\\vdots&\vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\ \vdots&\vdots&\vdots&\vdots&\ddots&\ddots&-\frac{\cos\theta_{n-1}}{2}\\ -\frac{\cos\theta_n}{2}&0&\cdots&0&0&-\frac{\cos\theta_{n-1}}{2}&\cos\frac{\pi}{n} \end{pmatrix} \geq 0, $$ where $\sum\limits_{i=1}^n\theta_i=\pi$, $0<\theta_i<\frac{\pi}{2}, i=1,\dots,n$, and $n\ge 3$?
How to show determinant of a specific matrix is nonnegative
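Before hunting for a proof, the claim itself is easy to probe numerically. Here is a quick sketch (NumPy assumed; the helper `build_matrix` is just illustrative, not part of the question) that samples random admissible angles and checks that the determinant never dips below zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_matrix(thetas):
    """Cyclic tridiagonal matrix from the question: cos(pi/n) on the
    diagonal, -cos(theta_i)/2 on the sub/super-diagonals and corners."""
    n = len(thetas)
    A = np.diag(np.full(n, np.cos(np.pi / n)))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -np.cos(thetas[i]) / 2
    A[0, n - 1] = A[n - 1, 0] = -np.cos(thetas[-1]) / 2
    return A

# Random angles in (0, pi/2) summing to pi, for several sizes n.
for n in range(3, 9):
    for _ in range(200):
        t = rng.uniform(0.1, 1.0, n)
        t *= np.pi / t.sum()           # rescale so the angles sum to pi
        if np.all(t < np.pi / 2):      # keep only admissible samples
            assert np.linalg.det(build_matrix(t)) >= -1e-12
```

As noted in the comments below, equality (determinant exactly zero) occurs e.g. for $n=3$ and whenever all $\theta_i = \pi/n$.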
-
0Well, if the product of the eigenvalues of a matrix is greater than $0$, then so is the determinant. – 2011-05-13
-
0I don't think the eigenvalues of $A$ are obvious. – 2011-05-13
-
1It might be helpful to tell us where the matrix comes from. – 2011-05-13
-
0@Sunni: Yes thats true! – 2011-05-13
-
0@user9325: It is reformulated from a nonnegative definite polynomial. ...The remaining task is to show the determinant is nonnegative. – 2011-05-13
-
0It actually seems to be a positive definite matrix (stronger than just having a nonnegative determinant). But not trivial to show. – 2011-05-13
-
1@leonbloy: it is only positive definite for $n>3$. For $n=3$ the determinant is 0. – 2011-05-13
-
0@Sunni: And what makes you think that *it is true*, or that *the reformulation is simpler* than the original question? – 2011-05-13
-
0Probably Induction? – 2011-05-14
-
0@Fabian: for $n=3$, the determinant seems to be $0$ for all choices of $\theta_j$, but I don't see why. Do you have a proof? – 2011-05-14
-
0@mac: Yes. But it involves some calculation... (nothing nice). By the way, the determinant is also zero for $n$ larger than 3, if all the $\theta$s are 0 except for one which is $\pi$. – 2011-05-14
-
0I don't know about you guys, but I get a great deal of intuition from seeing $\pi/n$ as the average value of the $\theta$'s. I think it makes it a little easier to guess why the matrix should be positive definite. P.S.: Fabian, it is supposed that all angles are in the interval $]0, \pi/2[$, read more carefully. – 2011-05-14
-
0@Fabian: I see now why the determinant is zero for $n=3$. Writing $c_j=\cos(\theta_j)$ and $s_j=\sin(\theta_j)$, you get $\det(2A)=1-(c_1^2+c_2^2+c_3^2+2c_1c_2c_3)$ and then you use the fact that the $\theta_j$ sum to $\pi$ to see that $c_3=s_1s_2-c_1c_2$, and since $s_j^2=1-c_j^2$, the rest is algebra. For $n>3$, another special case where the determinant is zero is if all of the $\theta_j$'s are $\pi/n$, because the rows then sum to zero. – 2011-05-14
-
0@user9325: This formulation is an equivalent one, even if one can prove it using other methods. What I want to see is whether there is a matricial proof, i.e., one that shows directly that the determinant is nonnegative. – 2011-05-14
-
2@Sunni: The context can be quite helpful even for finding a "matricial" proof, in particular for finding the proper generalization. It is also helpful to know how you *know* that it is true before trying to prove it. But of course, you are not obligated to be helpful. – 2011-05-14
-
0@user9325: I will email you a copy of the background if you wish (I don't know your email); it is not easy to post an article here. – 2011-05-14
-
2The condition of positive definiteness (if true) would be equivalent to the following: $\sum x_i^2 \cos(\pi/N) \ge \sum x_i x_{i+1} \cos(\omega_i)$ for any $x_i$, any $\omega_i$ in the first quadrant with sum $\pi$, $N \ge 3$ - and the $x_{i+1}$ is to be understood mod $N$. – 2011-05-15
3 Answers
It was pointed out in the comments that you can interpret the matrix as a bilinear form and obtain the inequality
$$ x^T A x = \cos(\pi/N) \sum_{i=1}^N x_i^2 - \sum_{i=1}^N x_i x_{i+1} \cos\theta_i \geq 0 .$$
So, if this bilinear form is positive semidefinite, $x^T A x \geq 0$, then all eigenvalues are nonnegative and the determinant must also be nonnegative.
To show that, we rewrite the inequality by shuffling the terms a bit and adding $\sum x_i^2$ on both sides:
$$ \sum \left((x_i^2+x_{i+1}^2)/2 - x_i x_{i+1} \cos\theta_i \right) \geq (1-\cos (\pi/N)) \sum x_i^2 .$$
The left-hand side looks suspiciously like the cosine formula for the side of a triangle. In particular, this looks like a set of vectors in the complex plane that emanate from the origin and are separated by angles $\theta_1,\theta_2,\dots$ and so on. This makes a lot of sense since the angles are meant to add up to $\pi$.
We can make this geometric insight explicit by defining complex numbers $z_j = x_j e^{i\sum_{k=1}^{j-1} \theta_k}$ and $z_{N+1} = -z_1$. The inequality becomes
$$ 1/2 \sum |z_{i+1}-z_i|^2 \geq (1-\cos(\pi/N)) \sum |z_i|^2 .$$
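This change of variables is easy to verify numerically: with $z_j = x_j e^{i\sum_{k<j}\theta_k}$ and $z_{N+1}=-z_1$, the half-sum of squared gaps reproduces $\sum x_i^2 - \sum x_i x_{i+1}\cos\theta_i$, which is exactly the left-hand side of the previous inequality. A quick sketch (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
x = rng.normal(size=N)
theta = rng.uniform(0.1, 1.0, N)
theta *= np.pi / theta.sum()     # angles sum to pi

# z_j = x_j * exp(i * (theta_1 + ... + theta_{j-1})), and z_{N+1} = -z_1
phases = np.concatenate(([0.0], np.cumsum(theta[:-1])))
z = x * np.exp(1j * phases)
z_ext = np.append(z, -z[0])

lhs = 0.5 * np.sum(np.abs(np.diff(z_ext)) ** 2)
rhs = np.sum(x**2) - np.sum(x * np.roll(x, -1) * np.cos(theta))
assert np.isclose(lhs, rhs)
```

The last gap $|z_{N+1}-z_N|^2$ works out correctly precisely because the angles sum to $\pi$.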
This looks a lot easier, because the angles are finally gone. That said, we are now dealing with complex Hilbert spaces, but they tend to be easier anyway.
In particular, let $P$ be the "permutation operator"
$$ P (z_1,z_2,\dots,z_N)^T := (z_2,z_3,\dots,z_N,-z_1)^T .$$
It is easy to check that this is a unitary operator, $P^\dagger=P^{-1}$. The inequality can be written as
$$ ||(P-1)z||^2 \geq (2-2\cos(\pi/N)) ||z||^2 .$$
We can prove this by showing that the eigenvalues $\lambda_k$ of the operator $P-1$ are bounded from below by $|\lambda_k|^2 \geq 2-2\cos(\pi/N)$. After all, the operator $P-1$ is normal, so we can apply the spectral theorem.
Calculating the eigenvalues of the operator $P$ is easy: repeated application of $P$ obviously yields $P^N=-1$, so $P$ satisfies the polynomial $\lambda^N+1$, and this must already be its minimal polynomial. Hence, the eigenvalues of $P-1$ are
$$ \lambda_k = e^{i\pi k/N}-1, \quad 1 \leq k \leq 2N-1, \ k \text{ odd} .$$
Clearly, the case $k=1$ has the smallest magnitude and we have
$$ |\lambda_1|^2 = \sin^2(\pi/N) + (1-\cos(\pi/N))^2 = 2-2\cos(\pi/N) .$$
This proves the inequality. $\square$
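As a sanity check on the spectral argument, one can build $P$ explicitly as a matrix and verify its unitarity, the relation $P^N=-1$, and the eigenvalue bound numerically (NumPy assumed; this is only a check, not part of the proof):

```python
import numpy as np

N = 7
# P (z_1,...,z_N)^T = (z_2,...,z_N,-z_1)^T: a cyclic shift with a sign flip
P = np.zeros((N, N))
P[np.arange(N - 1), np.arange(1, N)] = 1.0
P[N - 1, 0] = -1.0

assert np.allclose(P @ P.T, np.eye(N))                        # P is unitary
assert np.allclose(np.linalg.matrix_power(P, N), -np.eye(N))  # P^N = -1

eig = np.linalg.eigvals(P)
# the spectrum should be {e^{i pi k / N} : k odd}
for k in range(1, 2 * N, 2):
    assert np.min(np.abs(eig - np.exp(1j * np.pi * k / N))) < 1e-9
# smallest |lambda - 1|^2 over the spectrum equals 2 - 2 cos(pi/N)
assert np.isclose(np.min(np.abs(eig - 1) ** 2), 2 - 2 * np.cos(np.pi / N))
```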
-
0Wait, nevermind, that proof looks sound. Finally! – 2011-07-03
-
0That is great. Is it possible to find a matricial proof? – 2011-07-03
-
0@Sunni: What do you mean with "matricial" proof? The operator $P$ is given by a matrix (which I didn't write explicitly) and the key part of the proof is to estimate its eigenvalues. – 2011-07-04
-
1Looks promising. But please remind me, why are we allowed to draw the conclusion $||(P-1)z||^2\ge|\lambda_1|^2||z||^2$ ? If $P-1$ were a Hermitian operator, this would be ok, but that doesn't seem to be the case here. I want to rule out the possibility like $$A=\left(\begin{array}{cc}1&a\\0&1\end{array}\right),$$ where the eigenvalues are both $1$ irrespective of the value of $a$, but $z=(a,-1)^T$ behaves badly: $Az=(0,-1)^T$ has norm 1, so with a choice of $a$ we can make $||Az||/||z||$ as small as we want. – 2011-07-05
-
0Never mind. I figured it out. The eigenvectors belonging to different eigenvalues are orthogonal, and that is surely enough. That is automatic for Hermitian matrices, but can be verified here easily enough. The inner product of two eigenvectors belonging to distinct eigenvalues is a sum of $N^{th}$ roots of unity. – 2011-07-05
-
0@Jyrki Lahtonen: Actually, that's a very good point, I forgot about this. We want that $P-1$ is a *normal* operator, i.e. commutes with its hermitian conjugate. This is true because $P$ is actually unitary, $P^\dagger P = 1$. – 2011-07-05
-
0I didn't look into it in detail, but one question: is the condition $0< \theta_i < \pi/2$ used/required? – 2011-07-05
-
0Nice! @leonbloy: I think the $\theta_i$ can be arbitrary, provided they sum to $\pi$ (since this is what makes $z_{N+1}=-z_1$ work). – 2011-07-05
(too long for a comment, but hopefully somebody more skilled than me can flesh this out as a full answer.)
The matrix in question is a symmetric, cyclic tridiagonal (a.k.a. periodic tridiagonal) matrix, and can be decomposed as
$$\mathbf A=\begin{pmatrix}\cos\frac{\pi}{n}&-\frac12\cos\theta_1&&\\-\frac12\cos\theta_1&\cos\frac{\pi}{n}&\ddots&\\&\ddots&\ddots&-\frac12\cos\theta_{n-1}\\&&-\frac12\cos\theta_{n-1}&\cos\frac{\pi}{n}\end{pmatrix}+\begin{pmatrix}1&0\\0&\vdots\\\vdots&0\\0&1\end{pmatrix}\cdot\begin{pmatrix}0&\cdots&0&-\frac12\cos\theta_n\\-\frac12\cos\theta_n&0&\cdots&0\end{pmatrix}$$
or in abbreviated form, $\mathbf A=\mathbf T+\mathbf U\mathbf V^T$. Using the Sherman-Morrison-Woodbury formula for determinants, we have
$$\det\mathbf A=(\det\mathbf T)\det(\mathbf I+\mathbf V^T\mathbf T^{-1}\mathbf U)$$
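The decomposition and the determinant formula can be checked numerically before doing any algebra; a sketch (NumPy assumed, random admissible angles):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
theta = rng.uniform(0.1, 1.0, n)
theta *= np.pi / theta.sum()     # angles sum to pi
c = np.cos(theta) / 2

# Full cyclic tridiagonal matrix A
A = np.diag(np.full(n, np.cos(np.pi / n)))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -c[i]
A[0, -1] = A[-1, 0] = -c[-1]

# T: A without the corner entries; U, V give the rank-2 correction U V^T
T = A.copy()
T[0, -1] = T[-1, 0] = 0.0
U = np.zeros((n, 2)); U[0, 0] = U[-1, 1] = 1.0
V = np.zeros((n, 2)); V[-1, 0] = V[0, 1] = -c[-1]

assert np.allclose(A, T + U @ V.T)
lhs = np.linalg.det(A)
rhs = np.linalg.det(T) * np.linalg.det(np.eye(2) + V.T @ np.linalg.solve(T, U))
assert np.isclose(lhs, rhs)
```

The formula requires $\mathbf T$ to be invertible, which holds generically for random angles.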
Now, a three-term recurrence can be set up to compute $\det\mathbf T$; letting $p_0=1$ and $p_k$ be the determinant of the $k\times k$ leading submatrix of $\mathbf T$, we can establish the recurrence
$$p_k=p_{k-1}\cos\frac{\pi}{n}-\frac{\cos^2 \theta_{k-1}}{4}p_{k-2}$$
and then $\det\mathbf T=p_n$. There is also an explicit formula for the first and last columns of the inverse of a tridiagonal matrix (the result of $\mathbf T^{-1}\mathbf U$) which I will have to look up (unless somebody beats me to it), and proving the nonnegativity of $\det\mathbf A$ boils down to proving that either $p_n$ and $\det(\mathbf I+\mathbf V^T\mathbf T^{-1}\mathbf U)$ have the same sign, or that one of the two is zero.
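The three-term recurrence is easy to validate against a direct determinant computation; a minimal sketch (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
theta = rng.uniform(0.1, 1.0, n)
theta *= np.pi / theta.sum()
d = np.cos(np.pi / n)            # common diagonal entry
c = np.cos(theta) / 2            # off-diagonal magnitudes

# Tridiagonal part T (corner entries dropped)
T = np.diag(np.full(n, d))
for i in range(n - 1):
    T[i, i + 1] = T[i + 1, i] = -c[i]

# Three-term recurrence for the leading principal minors p_k:
#   p_k = d * p_{k-1} - (cos(theta_{k-1})/2)^2 * p_{k-2}
p_prev, p = 1.0, d               # p_0 = 1, p_1 = cos(pi/n)
for k in range(2, n + 1):
    p_prev, p = p, d * p - c[k - 2] ** 2 * p_prev

assert np.isclose(p, np.linalg.det(T))
```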
-
0I guess your $\det\mathbf A=(\det\mathbf T)\det(\mathbf I+\mathbf V^T\mathbf A^{-1}\mathbf U)$ should be $\det\mathbf A=(\det\mathbf T)\det(\mathbf I+\mathbf V^T\mathbf T^{-1}\mathbf U)$. The Sherman-Morrison-Woodbury formula for determinants is usually stated for $U,V$ of rank one, http://www.nd.edu/~jeff/Teaching/CBE60478/ShermanMorrison.pdf . Can you show me a reference for your formula $\det\mathbf A=(\det\mathbf T)\det(\mathbf I+\mathbf V^T\mathbf T^{-1}\mathbf U)$? – 2011-05-18
-
0Yes @Sunni; I'll correct that typo. Faddeev/Faddeeva or Aitken's old books should have it, but if you're knowledgeable about Schur complements, it shouldn't be too hard to reckon it yourself. – 2011-06-02
This is not an answer, but was too long for a comment. I suspect that for $N>3$ the matrix is positive (semi)definite, a stronger property than what is asked. As I pointed out in a comment, to prove this is equivalent to proving the following:
$$\cos(\pi/N) \sum_{i=1}^N x_i^2 \ge \sum_{i=1}^N x_i x_{i+1} \cos(\omega_i)$$
for any $N$ angles $0<\omega_i<\pi/2$ that sum up to $\pi$, and any $N$ real numbers $x_i$, $i=1,\dots,N$ (the above formula assumes the convention $x_{N+1}\equiv x_1$).
We can clearly assume $x_i>0$. Let's fix $N=4$ (only for illustration). Imagine now $N=4$ complex numbers in the upper half-plane, $(z_1, z_2, z_3, z_4)$ with increasing arguments, each having modulus $x_i$, and $z_1=x_1$ (real). If we further assume that the consecutive angle differences do not exceed $\pi/2$ (you can imagine also an extra $z_{5}= -z_1$), the RHS can be written as
$$ \sum_{i=1}^N x_i x_{i+1} \cos(\omega_i) = \operatorname{Re}\left( z_2 \, \bar{z_1} + z_3 \, \bar{z_2} + z_4 \, \bar{z_3} - z_1 \, \bar{z_4} \right)$$
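This rewriting can be checked numerically for $N=4$; a quick sketch (NumPy assumed, random moduli and angles):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
x = rng.uniform(0.5, 2.0, N)              # moduli x_i > 0
w = rng.uniform(0.1, 1.0, N)
w *= np.pi / w.sum()                      # angles omega_i summing to pi

# z_i has modulus x_i and argument omega_1 + ... + omega_{i-1}
args = np.concatenate(([0.0], np.cumsum(w[:-1])))
z = x * np.exp(1j * args)

direct = np.sum(x * np.roll(x, -1) * np.cos(w))
via_z = (z[1] * z[0].conj() + z[2] * z[1].conj()
         + z[3] * z[2].conj() - z[0] * z[3].conj()).real
assert np.isclose(direct, via_z)
```

The final term works out because $\arg z_4 = \pi - \omega_4$, so $-z_1\bar{z_4}$ contributes $x_1 x_4 \cos\omega_4$.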
Notice, BTW, that the terms (all real and imaginary parts) are positive.
We know, applying the Cauchy–Schwarz inequality, that
$$ \sum x_i^2 \ge | \left( z_2 \; \bar{z_1} + z_3 \bar{z_2} + z_4 \; \bar{z_3} - z_1 \; \bar{z_4} \right) | $$
We are not there yet - we'd need to show that the complex sum has, say, a substantial imaginary component, so that taking the real part gives only a fraction of the modulus, and that compensates for the missing factor $\cos(\pi/N)$ - I'm not sure if it can be done.
-
0The special case for $N=3$, in which the equality applies, is casually mentioned here: http://math.stackexchange.com/questions/49145/cos-hatabc-a-cos-hatbc-ab-cos-hatc-frac-a2-b2-c22 – 2011-07-03