17

Consider the matrix $A$ whose elements are $A_{ij} = a^{|i-j|}$ for $-1 < a < 1$ and $i,j=1,\dots,n$.

e.g. for $n=4$ the matrix is

$A = \left[ \begin{matrix} 1 & a & a^2 & a^3 \\ a & 1 & a & a^2 \\ a^2 & a & 1 & a \\ a^3 & a^2 & a & 1 \end{matrix} \right]$

Is this matrix always positive definite? If so, what is the simplest way to see that?

I strongly suspect that the matrix is positive definite for all $n$ and $a$, but am having trouble coming up with a proof.

Extra credit: The eigenvalues seem to lie within an interval $[\lambda_{\rm min}, \lambda_{\rm max}]$ which is a function of $a$ but not of $n$. For example, for $a=1/2$ all eigenvalues lie in $[1/3, 3]$.

Is it true that, for general $a$, the eigenvalues lie in $[\lambda(a)^{-1}, \lambda(a)]$? If so, what is $\lambda(a)$?

Also, the eigenvectors seem to have a particularly regular form. In particular, they look like they could be expressed as simple combinations of trigonometric functions. Is this the case?
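(A minimal numerical check of both observations, assuming NumPy; the values of $n$ and $a$ below, and the candidate interval $[(1-a)/(1+a),\,(1+a)/(1-a)]$ suggested by the $a=1/2$ case, are illustrative choices, not part of the question.)

```python
import numpy as np

a = 0.5
for n in (2, 5, 20, 100):
    idx = np.arange(n)
    A = a ** np.abs(np.subtract.outer(idx, idx))  # A[i,j] = a^|i-j|
    eig = np.linalg.eigvalsh(A)                   # symmetric, so real spectrum
    print(n,
          eig.min() > 0,                          # positive definite?
          eig.min() > (1 - a) / (1 + a),          # bounded below by (1-a)/(1+a)?
          eig.max() < (1 + a) / (1 - a))          # bounded above by (1+a)/(1-a)?
```

For $a=1/2$ this prints `True` in every column, consistent with the observed interval $[1/3, 3]$.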

  • 2
    The matrix is a symmetric Toeplitz matrix and you can find several results with these buzzwords. Probably this helps. (2011-11-28)

5 Answers

14

It is positive-definite for all $n$.

The way I see it is by noting that it has Cholesky decomposition $A = LL^\top$ where $L = \begin{bmatrix} 1 & & & & \\ a & \sqrt{1-a^2} & & & \\ a^2 & a\sqrt{1-a^2} & \sqrt{1-a^2} & & \\ a^3 & a^2\sqrt{1-a^2} & a\sqrt{1-a^2} & \sqrt{1-a^2} & \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}.$

It's not hard (though not "obvious", perhaps) to see that this decomposition is correct: the dot product of a row with itself telescopes to 1, and of two different rows, to the appropriate power of $a$.
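A quick numerical sanity check of this factorization (a sketch, assuming NumPy; the closed form used for $L$ simply encodes the pattern displayed above):

```python
import numpy as np

n, a = 8, 0.7
idx = np.arange(n)
A = a ** np.abs(np.subtract.outer(idx, idx))   # A[i,j] = a^|i-j|

# Pattern from the answer (0-based indices): L[i,0] = a^i,
# and L[i,j] = a^(i-j) * sqrt(1-a^2) for 1 <= j <= i.
L = np.zeros((n, n))
L[:, 0] = a ** idx
for j in range(1, n):
    L[j:, j] = a ** (idx[j:] - j) * np.sqrt(1 - a**2)

print(np.allclose(L @ L.T, A))  # True
```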

  • 3
    It's easy to compute the first few terms, from which you can then look for a pattern. Clearly $L_{1,1}^2 = A_{1,1}$. Then $L_{r,1} L_{1,1} = A_{r,1}$, which gives you the whole first column of $L$. From there you can start on the second column: $L_{2,1}^2+L_{2,2}^2 = A_{2,2}$, then $L_{r,1} L_{2,1} + L_{r,2} L_{2,2} = A_{r,2}$ gives you the rest of the second column, etc. (2011-11-28)
11

Is this matrix always positive definite? If so, what is the simplest way to see that?

It is. A simple way to prove it is to note that this is (a submatrix of) the covariance matrix of any process $(x_n)$ such that, for every integer $n$, $x_n=ax_{n-1}+z_n$, where $(z_n)$ is i.i.d. with variance $1-a^2$. An alternative formulation of this recursion is that, for every $n$,
$$x_n=\sum_{k\geqslant0}a^kz_{n-k}.$$
When one computes the covariance of $x_n$ and $x_{n+m}$ (for $m\geqslant0$), every product $z_iz_j$ with $i\ne j$ disappears because the $z_k$ are independent, and every product $z_i^2$ yields $1-a^2$, hence
$$\mathrm{Cov}(x_n,x_{n+m})=\sum_{k,\ell\geqslant0}(1-a^2)a^{k+\ell}\cdot[n-k=n+m-\ell],$$
that is,
$$\mathrm{Cov}(x_n,x_{n+m})=(1-a^2)\sum_{k,\ell\geqslant0}a^{k+\ell}\cdot[\ell=m+k]=(1-a^2)\sum_{k\geqslant0}a^{m+2k}=a^m.$$

In other words, the infinite dimensional vectors $\mathbb x=(x_n)$ and $\mathbb z=(z_n)$ are such that $\mathbb x=\mathbb T\cdot \mathbb z$, where $\mathbb T$ is the lower triangular infinite dimensional matrix defined by $T_{n,n+k}=0$ if $k\geqslant1$ and $T_{n,n-k}=a^k$ if $k\geqslant0$. The infinite dimensional matrix $\mathbb A$, of which every finite dimensional matrix $A$ is a submatrix, is
$$\mathbb A=\mathrm{Cov}(\mathbb T\mathbb z,\mathbb T\mathbb z)=\mathbb T\cdot\mathrm{Cov}(\mathbb z,\mathbb z)\cdot \mathbb T^*=\mathbb T\cdot \mathbb T^*.$$
In particular, for every vector $y$ of size $n$, extending $y$ to an infinite dimensional vector $\mathbb y$ by adding zeroes, one gets
$$y^*\cdot A\cdot y=\mathbb y^*\cdot \mathbb A\cdot \mathbb y=\mathbb y^*\cdot \mathbb T\cdot \mathbb T^*\cdot \mathbb y=\|\mathbb T^*\cdot \mathbb y\|^2\geqslant0.$$
Moreover, the inequality is strict when $y\ne0$: if $r$ is the last index with $y_r\ne0$, then $(\mathbb T^*\cdot\mathbb y)_r=y_r\ne0$, so $A$ is in fact positive definite.
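A small numerical illustration of this construction (a sketch, assuming NumPy; the truncation depth $K$ of the expansion $x_n=\sum_k a^k z_{n-k}$ is an arbitrary choice): a truncated version of $\mathbb T\cdot\mathbb T^*$ reproduces $A$ up to an error of order $a^{2K}$.

```python
import numpy as np

n, a, K = 6, 0.7, 200           # K: truncation depth of x_i = sum_k a^k z_{i-k}
# Column j stands for z_{j-K}; row i holds the truncated expansion of x_i.
T = np.zeros((n, n + K))
for i in range(n):
    for j in range(i + K + 1):  # only lags k = i + K - j >= 0 contribute
        T[i, j] = a ** (i + K - j)

cov = (1 - a**2) * T @ T.T      # Var(z_k) = 1 - a^2
idx = np.arange(n)
A = a ** np.abs(np.subtract.outer(idx, idx))
print(np.abs(cov - A).max())    # ~ a^(2K), i.e. numerically zero
```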

  • 0
    Nice Solution... (2016-08-03)
6

Quick checks in Maple show that for the first few cases, the determinant of $A$ is $(1-a^2)^{n-1}$, which is positive. I think this can be proved recursively. Since every leading principal submatrix of $A$ has the same form, all leading principal minors are positive, which by Sylvester's criterion means that $A$ is positive definite.
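(A quick numerical check of the determinant formula, assuming NumPy; the value of $a$ is arbitrary:)

```python
import numpy as np

a = 0.3
for n in range(1, 9):
    idx = np.arange(n)
    A = a ** np.abs(np.subtract.outer(idx, idx))
    # det(A) should equal (1 - a^2)^(n-1)
    print(n, np.isclose(np.linalg.det(A), (1 - a**2) ** (n - 1)))
```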

  • 0
    Indeed, if $D_n=\det A$, then it is easily shown that $D_n=(1-a^2)D_{n-1}$ for $n\ge2$ by the operations $R_i'=R_i-aR_{i+1}$, $i=1,2,\dots,n-1$, which transform the matrix to lower triangular form. (2017-03-22)
5

(Too long for a comment.)

What you have is in fact a family of well studied matrices: the Kac-Murdock-Szegő (KMS) matrices. They are generated for instance by the MATLAB command gallery('kms',n,a).

In Grenander and Szegő, and in William Trench's note, there are explicit expressions for the eigenvalues of the KMS matrix $\mathrm{KMS}_{i,j}(a)=a^{|i-j|}$:

$\lambda_k=\frac{1-a^2}{1+a^2-2a\chi_k}$

where the $\chi_k$ are the $n$ roots of the polynomial

$U_n(x)-2a U_{n-1}(x)+a^2 U_{n-2}(x)$

and $U_n(x)$ is the Chebyshev polynomial of the second kind. (I have observed that Chebyshev polynomials do tend to figure prominently in the theory of Toeplitz matrices.) It is also noted in those references that the inverse of a KMS matrix is tridiagonal.

The $\chi_k$ are shown to satisfy the inequality $-1 < \chi_k < 1$ (they're cosines of angles); it can then be shown from this that the $\lambda_k$ are always positive.
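A numerical illustration of these expressions (a sketch, assuming NumPy; $n$ and $a$ are arbitrary choices): build the $U_k$ by the recurrence $U_{k+1}(x)=2xU_k(x)-U_{k-1}(x)$, solve for the $\chi_k$, and compare with a direct eigendecomposition.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

n, a = 7, 0.6
# Chebyshev polynomials of the second kind:
# U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}.
U = [P([1]), P([0, 2])]
for _ in range(n):
    U.append(P([0, 2]) * U[-1] - U[-2])

chi = (U[n] - 2 * a * U[n - 1] + a**2 * U[n - 2]).roots()
lam = (1 - a**2) / (1 + a**2 - 2 * a * chi.real)

idx = np.arange(n)
A = a ** np.abs(np.subtract.outer(idx, idx))
print(np.allclose(np.sort(lam), np.linalg.eigvalsh(A)))  # True
```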

2

Gershgorin circles give you an upper bound on the eigenvalues, which is $1+a+a^2+\dots+a^{n-1}$.

Using the Cholesky decomposition user7530 provided, one finds the inverse of $A$ in closed form: it is tridiagonal, with
$$ A^{-1} = \frac{1}{1-a^2}\begin{bmatrix} 1 & -a & & & \\ -a & 1+a^2 & -a & & \\ & -a & 1+a^2 & \ddots & \\ & & \ddots & \ddots & -a \\ & & & -a & 1 \end{bmatrix}. $$
Now, Gershgorin disks give you an upper bound on the largest eigenvalue of $A^{-1}$, namely $\frac{1+a^2+2a}{1-a^2}=\frac{1+a}{1-a}$, and hence a lower bound $\frac{1-a}{1+a}$ on the smallest eigenvalue of $A$. (For $a=1/2$ this gives the lower bound $1/3$ observed in the question.)
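(A quick check of the tridiagonal inverse and the resulting bound, assuming NumPy; $n$ and $a$ are arbitrary:)

```python
import numpy as np

n, a = 10, 0.5
idx = np.arange(n)
A = a ** np.abs(np.subtract.outer(idx, idx))

# Tridiagonal inverse: 1 at the two diagonal corners, 1 + a^2 in the
# interior of the diagonal, -a on the off-diagonals, all over 1 - a^2.
d = np.full(n, 1 + a**2)
d[0] = d[-1] = 1.0
Ainv = (np.diag(d)
        + np.diag(np.full(n - 1, -a), 1)
        + np.diag(np.full(n - 1, -a), -1)) / (1 - a**2)

print(np.allclose(Ainv, np.linalg.inv(A)))               # True
print(np.linalg.eigvalsh(A).min() > (1 - a) / (1 + a))   # True
```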

  • 0
    Since the Gershgorin disc theorem is that every eigenvalue lies within *at least one* of the Gershgorin discs, wouldn't the upper bound be $1 + 2(a + a^2 + \cdots + a^{(n-1)/2})$? For the case of $a=1/2$ you'd then have $\lambda_{\rm max} = 1 + 2(.5 + .25 + \cdots)$, which tends to 3 as $n\to\infty$, as I observed in the question? (2011-11-28)