
Let $A = \begin{pmatrix}0 & a\\ b&c\\ \end{pmatrix}$ with $a, b, c \in \mathbb{R}$. My question is: how can we compute $A^n$ for any $n\in \mathbb{N}$?

In fact, one has $A^2 - cA - abI_2 = 0$, i.e. $A^2 = cA + abI_2$. From this I obtained \begin{eqnarray} A^2 &=& cA + abI_2\\ A^3 &=& (c^2 +ab)A + abcI_2\\ &\vdots& \end{eqnarray}

  • Hint: eigenvalue decomposition. (2017-02-08)
  • @mathJuan: Hint to the first hint: **diagonalize**. (2017-02-08)
  • Diagonalise, and then you can raise to any power by raising the diagonal matrix, because the basis-change matrices cancel out. (2017-02-08)
  • Well, diagonal form *or* Jordan normal form. You'll need to work through the various cases if no constraints are given on the coefficients. It's not an enormous job, but probably too big for a proper answer here. And very, very standard. (2017-02-08)
  • @Moo: Yes, I knew that, but I hope someone can give a form of $A^n$ as a linear combination of $A$ and $I_2$, because diagonalisation involves many cases. (2017-02-08)
  • @HaraldHanche-Olsen: Thank you, Sir, for your interest and your hints. (2017-02-08)
  • @mathJuan: The diagonalization is pretty straightforward in this case - no cases needed. (2017-02-08)
  • Yes, but you need to consider the conditions on the coefficients $a, b, c$. (2017-02-08)

2 Answers


The characteristic polynomial of this matrix is $\lambda^2-\lambda c-ab$, so you know that $A^2=cA+abI$. From there, you can use the Binomial Theorem to express $A^n$ as a linear combination of $A$ and $I$.
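As a small sketch (with assumed example values $a=2$, $b=3$, $c=1$): writing $A^n = p_nA + q_nI$ and multiplying by $A$, the relation $A^2 = cA + abI$ gives the recurrence $p_{n+1} = cp_n + q_n$, $q_{n+1} = abp_n$, which a few lines of Python can iterate and cross-check:

```python
# Sketch with assumed values a=2, b=3, c=1. Writing A^n = p_n*A + q_n*I and
# multiplying by A, the relation A^2 = c*A + ab*I gives the recurrence
# p_{n+1} = c*p_n + q_n,  q_{n+1} = ab*p_n.

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power_coeffs(a, b, c, n):
    """Return (p, q) with A^n = p*A + q*I, for n >= 1."""
    p, q = 1, 0                          # A^1 = 1*A + 0*I
    for _ in range(n - 1):
        p, q = c * p + q, a * b * p
    return p, q

a, b, c = 2, 3, 1
A = [[0, a], [b, c]]

# n = 3 reproduces the pattern from the question: A^3 = (c^2 + ab)A + abc*I.
assert power_coeffs(a, b, c, 3) == (c**2 + a * b, a * b * c)

# Cross-check against direct multiplication for n = 5.
p, q = power_coeffs(a, b, c, 5)
direct = A
for _ in range(4):
    direct = mat_mul(direct, A)
assert direct == [[p * A[i][j] + q * (i == j) for j in range(2)]
                  for i in range(2)]
```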


Update: This involves repeated expansion of terms of the form $(cA+abI)^k$, which can get pretty tedious for even moderately-sized $k$. Fortunately, for $2\times2$ matrices, there are only three (really four) cases to consider. Note that this analysis holds for $2\times2$ matrices in general, not just those that have the specific form in the question.

Case 1: $c^2+4ab\gt0$ (real distinct roots $\lambda_1$ and $\lambda_2$). The matrix can be diagonalized as $A=B\Lambda B^{-1}$, where $\Lambda=\operatorname{diag}(\lambda_1,\lambda_2)$, and so $A^n=B\Lambda^nB^{-1}=B\operatorname{diag}(\lambda_1^n,\lambda_2^n)\,B^{-1}$. In practice, there’s no need to diagonalize the matrix, however. Define $$P_1={A-\lambda_2I\over\lambda_1-\lambda_2}, P_2={A-\lambda_1I\over\lambda_2-\lambda_1}$$ so that $A=\lambda_1P_1+\lambda_2P_2$. It’s not hard to show that these two matrices are idempotent (in fact, they are projections onto the respective eigenspaces), and that they are complementary, in that $P_1P_2=P_2P_1=0$, so $$A^n = (\lambda_1P_1+\lambda_2P_2)^n = \lambda_1^nP_1^n+\lambda_2^nP_2^n = \lambda_1^nP_1+\lambda_2^nP_2$$ since all terms that involve both $P_1$ and $P_2$ vanish.
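A numerical sanity check of Case 1, with assumed values $a=2$, $b=3$, $c=1$ (so $c^2+4ab=25>0$ and the eigenvalues are $3$ and $-2$), using exact rationals to avoid floating-point noise:

```python
# Sanity check of Case 1 with assumed values a=2, b=3, c=1, so c^2+4ab = 25 > 0
# and the eigenvalues are 3 and -2. Exact rational arithmetic via Fraction.
from fractions import Fraction

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(s, X, t, Y):
    """Return s*X + t*Y for 2x2 matrices."""
    return [[s * X[i][j] + t * Y[i][j] for j in range(2)] for i in range(2)]

a, b, c = 2, 3, 1
lam1, lam2 = 3, -2                   # roots of x^2 - c*x - ab = x^2 - x - 6
A = [[0, a], [b, c]]
I = [[1, 0], [0, 1]]

# P1 = (A - lam2*I)/(lam1 - lam2),  P2 = (A - lam1*I)/(lam2 - lam1)
P1 = lin(Fraction(1, lam1 - lam2), A, Fraction(-lam2, lam1 - lam2), I)
P2 = lin(Fraction(1, lam2 - lam1), A, Fraction(-lam1, lam2 - lam1), I)

# Idempotent and complementary, as claimed:
assert mat_mul(P1, P1) == P1 and mat_mul(P2, P2) == P2
assert mat_mul(P1, P2) == [[0, 0], [0, 0]]

# Spectral formula vs. direct multiplication for n = 4:
spectral = lin(Fraction(lam1**4), P1, Fraction(lam2**4), P2)
direct = A
for _ in range(3):
    direct = mat_mul(direct, A)
assert spectral == direct
```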

Case 2: $c^2+4ab=0$ (repeated real root $\lambda$). This is actually two cases. If $A=\lambda I$, then $A^n=\lambda^nI$. Otherwise, the matrix will have Jordan normal form $\pmatrix{\lambda&1\\0&\lambda}$. This can be written as $\lambda I+\pmatrix{0&1\\0&0}$. The second matrix is nilpotent of degree 2, so $$\pmatrix{\lambda&1\\0&\lambda}^n=\left(\lambda I+\pmatrix{0&1\\0&0}\right)^n=\lambda^nI+n\lambda^{n-1}\pmatrix{0&1\\0&0}=\pmatrix{\lambda^n&n\lambda^{n-1}\\0&\lambda^n}$$ since the other terms in the binomial expansion vanish. So if $A=B\pmatrix{\lambda&1\\0&\lambda}B^{-1}$ then $A^n=B\pmatrix{\lambda^n&n\lambda^{n-1}\\0&\lambda^n}B^{-1}$. As in case 1, however, there’s no need in practice to compute a Jordan basis $B$ for the matrix, because just as we decomposed the Jordan matrix above, we can write $A$ as the sum of a multiple of the identity and a nilpotent matrix. Set $N=A-\lambda I$. By the Cayley-Hamilton Theorem, $N^2=(A-\lambda I)^2=0$, so $$A^n=(\lambda I+N)^n=\lambda^nI+n\lambda^{n-1}N$$ because terms involving $N^2$ or higher vanish.
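A quick check of the repeated-root case, with assumed values $a=1$, $b=-1$, $c=2$ (so $c^2+4ab=0$ and $\lambda=1$):

```python
# Check of Case 2 with assumed values a=1, b=-1, c=2, so c^2+4ab = 0 and the
# double eigenvalue is lam = c/2 = 1. Then N = A - lam*I satisfies N^2 = 0
# and A^n = lam^n * I + n * lam^(n-1) * N.

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c = 1, -1, 2
lam = c // 2                                 # double root of x^2 - c*x - ab
A = [[0, a], [b, c]]
N = [[A[i][j] - lam * (i == j) for j in range(2)] for i in range(2)]
assert mat_mul(N, N) == [[0, 0], [0, 0]]     # N is nilpotent of degree 2

n = 5
formula = [[lam**n * (i == j) + n * lam**(n - 1) * N[i][j]
            for j in range(2)] for i in range(2)]
direct = A
for _ in range(n - 1):
    direct = mat_mul(direct, A)
assert formula == direct
```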

Case 3: $c^2+4ab\lt0$ (complex roots). Since the characteristic polynomial has real coefficients, the two eigenvalues are of the form $\alpha\pm i\beta$. $A$ is then similar to the conformal matrix $C=\pmatrix{\alpha&-\beta\\\beta&\alpha}$, i.e., $A=BCB^{-1}$ and so $A^n=BC^nB^{-1}$. The matrix $C$ corresponds to the complex number $\alpha+i\beta$, so $C^n$ can be computed via de Moivre’s theorem, or by writing $C=\alpha I+\beta J$, where $J^2=-I$, expanding via the binomial theorem and collecting terms.

As with the other two cases, there’s no need in practice to compute the above decomposition of $A$. We can instead write $A$ as the sum of a multiple of the identity and another matrix with nice properties. Let $G=A-\alpha I=\beta H$, where $H^2=-I$. (By the Cayley-Hamilton theorem, $(A-\alpha I)^2+\beta^2I=0$, so $G^2=-\beta^2I$). Then $$\begin{align}A^n&=(\alpha I+\beta H)^n=\binom n0\alpha^nI+\binom n1\alpha^{n-1}\beta H+\binom n2\alpha^{n-2}\beta^2H^2+\cdots \\ &=\left(\binom n0\alpha^n-\binom n2\alpha^{n-2}\beta^2+\cdots\right)I+\left(\binom n1\alpha^{n-1}\beta-\binom n3\alpha^{n-3}\beta^3+\cdots\right)H.\end{align}$$
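A quick check of the complex case, with assumed values $a=2$, $b=-1$, $c=2$ (eigenvalues $1\pm i$, so $\alpha=\beta=1$). Since the algebra generated by $I$ and $H$ behaves like $\mathbb{C}$, the two binomial sums above are just the real and imaginary parts of $(\alpha+i\beta)^n$:

```python
# Check of Case 3 with assumed values a=2, b=-1, c=2: c^2+4ab = -4 < 0,
# eigenvalues 1 +/- i, so alpha = beta = 1. With A = alpha*I + beta*H and
# H^2 = -I, we get A^n = Re((alpha+i*beta)^n)*I + Im((alpha+i*beta)^n)*H,
# which collects the two binomial sums in one step.

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c = 2, -1, 2
alpha, beta = 1, 1                           # eigenvalues are alpha +/- i*beta
A = [[0, a], [b, c]]
H = [[(A[i][j] - alpha * (i == j)) / beta for j in range(2)] for i in range(2)]
assert mat_mul(H, H) == [[-1, 0], [0, -1]]   # H^2 = -I, by Cayley-Hamilton

n = 3
z = complex(alpha, beta) ** n
formula = [[z.real * (i == j) + z.imag * H[i][j] for j in range(2)]
           for i in range(2)]
direct = A
for _ in range(n - 1):
    direct = mat_mul(direct, A)
assert formula == direct
```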


For small matrices like this one, you can generally do it directly by identification.

Let us set $A_n=A^n=\pmatrix{\alpha_n & \gamma_n\\\beta_n & \delta_n\\}$, with $\alpha_0=\delta_0=1$ and $\beta_0=\gamma_0=0$ so that $A_0$ is the identity matrix $I=A^0$.

$A_{n+1}=A_nA=\pmatrix{\alpha_n & \gamma_n\\\beta_n & \delta_n\\}\pmatrix{0 & a\\b & c\\}=\pmatrix{b\gamma_n & a\alpha_n+c\gamma_n\\b\delta_n & a\beta_n+c\delta_n\\}$

We get the system of coefficients by identification $\begin{cases} \alpha_{n+1}=b\gamma_n \\ \beta_{n+1}=b\delta_n \\ \gamma_{n+1}=a\alpha_n+c\gamma_n \\ \delta_{n+1}=a\beta_n+c\delta_n \end{cases}$

In particular we can solve directly $\begin{cases} \gamma_{n+1}=ab\gamma_{n-1}+c\gamma_n\qquad\gamma_0=0,\gamma_1=a \\ \delta_{n+1}=ab\delta_{n-1}+c\delta_n\qquad\delta_0=1,\delta_1=c \end{cases}$

Both recurrences have the same characteristic equation $x^2-cx-ab=0$.

Note: no surprise here, this is also $\det(A-xI)$, so its roots are the eigenvalues, which we will use.

Let $\lambda_1,\lambda_2$ be the eigenvalues (i.e. the roots of this quadratic equation), assumed distinct for now.

We know that $\gamma_n$ and $\delta_n$ are of the form $u{\lambda_1}^n+v{\lambda_2}^n$. I skip the details of calculating $u,v$ from the initial conditions and go directly to the result (hoping I made no error...).

$\begin{cases} \alpha_{n}=b\gamma_{n-1} \\ \beta_{n}=b\delta_{n-1} \\ \gamma_{n}=\frac{a}{\lambda_1-\lambda_2}({\lambda_1}^n-{\lambda_2}^n) \\ \delta_{n}=\frac{c-\lambda_2}{\lambda_1-\lambda_2}{\lambda_1}^n+\frac{\lambda_1-c}{\lambda_1-\lambda_2}{\lambda_2}^n \end{cases}$
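A sanity check of this closed form with assumed values $a=2$, $b=3$, $c=1$ (so $\lambda_1=3$, $\lambda_2=-2$), using exact rationals:

```python
# Check of the closed form with assumed values a=2, b=3, c=1, whose
# characteristic equation x^2 - x - 6 = 0 has roots l1 = 3, l2 = -2.
from fractions import Fraction

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c = 2, 3, 1
l1, l2 = 3, -2

def gamma(n):
    return Fraction(a, l1 - l2) * (l1**n - l2**n)

def delta(n):
    return Fraction(c - l2, l1 - l2) * l1**n + Fraction(l1 - c, l1 - l2) * l2**n

def An_closed(n):
    """A^n assembled from the four closed-form coefficients."""
    return [[b * gamma(n - 1), gamma(n)], [b * delta(n - 1), delta(n)]]

A = [[0, a], [b, c]]
direct = A
for n in range(2, 7):
    direct = mat_mul(direct, A)
    assert An_closed(n) == direct
```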

Remark:

And then you pray for the eigenvalues to be real, otherwise this gets a bit annoying for practical calculation... They are complex conjugates, though, if complex, so you can alternatively look for a form $\rho^n(u\cos(n\theta)+v\sin(n\theta))$.

Also, I haven't treated the case of a double root; in that case, look for a form $(u+vn){\lambda}^n$. The rest of the process is identical.
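A quick check of this double-root recipe with assumed values $a=1$, $b=-1$, $c=2$ (double root $\lambda=1$); `seq_closed` below is a hypothetical helper that solves for $u,v$ from the first two terms:

```python
# Sketch of the double-root case with assumed values a=1, b=-1, c=2:
# x^2 - 2x + 1 = 0 has the double root lam = 1, so we look for terms of
# the form (u + v*n) * lam^n, with u, v fixed by the initial conditions.

a, b, c = 1, -1, 2
lam = 1                        # double root of x^2 - c*x - ab

def seq_closed(x0, x1, n):
    """Hypothetical helper: fit (u + v*n)*lam^n to x_0 and x_1."""
    u = x0
    v = x1 / lam - u           # from x_1 = (u + v) * lam
    return (u + v * n) * lam**n

def seq_recurrence(x0, x1, n):
    """Iterate the recurrence x_{k+1} = ab*x_{k-1} + c*x_k."""
    prev, cur = x0, x1
    for _ in range(n - 1):
        prev, cur = cur, a * b * prev + c * cur
    return cur

# The closed form matches the recurrence for both gamma_n and delta_n.
for n in range(2, 8):
    assert seq_closed(0, a, n) == seq_recurrence(0, a, n)   # gamma_n
    assert seq_closed(1, c, n) == seq_recurrence(1, c, n)   # delta_n
```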