My professor found the roots of the determinant of a $3\times3$ matrix, which is a cubic in $a$, by doing the following. I don't understand how step 2 came about, or why in step 4 he applied the same kind of operation to row 1 instead of row 2.
Step 1:
$\begin{vmatrix} a & 2 & 2 \\ 2 & a & 2 \\ 2 & 2 & a \end{vmatrix}$
Step 2:
$=\begin{vmatrix} a & 2 & 2 \\ 2 & a & 2 \\ 0 & 2-a & a-2 \end{vmatrix}$
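My best guess for step 2 is that row 2 was subtracted from row 3 (a row operation that does not change a determinant), since

$(2,\; 2,\; a) - (2,\; a,\; 2) = (0,\; 2-a,\; a-2),$

but I'm not sure whether that is what he actually did, or why he would choose that operation.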
Step 3:
$=(a-2) \begin{vmatrix} a & 2 & 2 \\ 2 & a & 2 \\ 0 & -1 & 1 \end{vmatrix}$
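If I'm reading step 3 correctly, it uses the fact that a factor common to an entire row can be pulled out in front of the determinant, because here

$(0,\; 2-a,\; a-2) = (a-2)\,(0,\; -1,\; 1).$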
Step 4:
$=(a-2)^2\begin{vmatrix} 1 & -1 & 0 \\ 2 & a & 2 \\ 0 & -1 & 1 \end{vmatrix}$
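Step 4 looks like the same idea applied to row 1: subtract row 2 from row 1 and then pull out the common factor,

$(a,\; 2,\; 2) - (2,\; a,\; 2) = (a-2,\; 2-a,\; 0) = (a-2)\,(1,\; -1,\; 0),$

but I don't see why row 1 was used here instead of row 2.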
He then computed the determinant from the remaining terms and solved for its roots. The transition from step 1 to step 2 is what is confusing me the most at the moment. I was under the impression that there was no easy way to find the roots of a cubic that comes from a matrix like this.
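For what it's worth, if I expand that last determinant along the first row myself, I get

$(a-2)^2 \begin{vmatrix} 1 & -1 & 0 \\ 2 & a & 2 \\ 0 & -1 & 1 \end{vmatrix} = (a-2)^2\left[(a+2) + 2\right] = (a-2)^2(a+4),$

so the roots would be $a = 2$ (twice) and $a = -4$, but I would still like to understand the reasoning behind steps 2 and 4.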