In state-space (SS) theory, the concept of algebraic equivalence means that, for a matrix $A\in\mathbb{R}^{n\times n}$, a column vector $B\in \mathbb{R}^{n\times 1}$ and a row vector $C\in \mathbb{R}^{1\times n}$, there is an invertible matrix $T$ such that \begin{align} \bar{A}&=TAT^{-1}\\ \bar{B}&=TB \tag{1}\label{eqtrue}\\ \bar{C}&=CT^{-1} \end{align} A method of solving this problem is given as follows:
Solve $\begin{bmatrix}\bar{B} & \bar{A}\bar{B} & \dots & \bar{A}^{n-1}\bar{B}\end{bmatrix} = T \begin{bmatrix}B & AB & \dots & A^{n-1}B\end{bmatrix}$ for $T$ (under the assumption that the controllability matrices $\begin{bmatrix}B & \dots & A^{n-1}B\end{bmatrix}$ and $\begin{bmatrix}\bar{B} & \dots & \bar{A}^{n-1}\bar{B}\end{bmatrix}$ are both full rank). Then verify the relations above using the $T$ obtained, i.e., check that one system can indeed be transformed into the other using this $T$.
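As a sanity check, this recipe can be sketched numerically. The system matrices and the $T$ below are made-up illustrations, not taken from any text:

```python
import numpy as np

# Hypothetical controllable system (A, B, C) and an invertible T,
# chosen only for illustration.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# The algebraically equivalent system (Abar, Bbar, Cbar) per eq. (1)
Tinv = np.linalg.inv(T)
Abar, Bbar, Cbar = T @ A @ Tinv, T @ B, C @ Tinv

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Solve ctrb(Abar, Bbar) = T_rec @ ctrb(A, B) for T_rec
# (both controllability matrices are full rank in this example).
T_rec = ctrb(Abar, Bbar) @ np.linalg.inv(ctrb(A, B))
assert np.allclose(T_rec, T)  # recovers the T we started from
```

Since $\bar{A}^k\bar{B} = TA^kB$, the two controllability matrices differ exactly by a left factor of $T$, which is why the solve recovers it.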
However, seeing as the next chapter discusses similarity transforms to Jordan form, I had an itching feeling that maybe this could also be done using eigenvalues. I conjectured that maybe the following also implies algebraic equivalence:
\begin{equation} \mathrm{eig}\left(\left[\begin{array}{c|c} A & B\\\hline C & 0 \end{array} \right]\right) = \mathrm{eig}\left(\left[\begin{array}{c|c} \bar{A} & \bar{B}\\\hline \bar{C} & 0 \end{array} \right]\right)\tag{2}\label{eq1} \end{equation}
I.e., the set of eigenvalues of the square matrix composed of $A,B,C$ (let's call it $SS$) equals the set of eigenvalues of the matrix composed of $\bar{A},\bar{B},\bar{C}$ (let's call that one $\bar{SS}$). For the examples I could find, this held up, but I fear that this may just be due to how these examples are 'cooked up'; and indeed, if this method worked, I would expect that we would be taught it. I would love to find out how I could (dis)prove this, but I'm not sure how. I'm thinking along the following lines:
Construct a matrix $T^*$ as follows: \begin{align}T^*&=\left[\begin{array}{c|c} T & 0\\\hline 0 & 1 \end{array} \right]& \to (T^*)^{-1}&=\left[\begin{array}{c|c} T^{-1} & 0\\\hline 0 & 1 \end{array} \right]\end{align}
Then, we get \begin{equation} T^*\left[\begin{array}{c|c} A & B\\\hline C & 0 \end{array} \right](T^*)^{-1} = \left[\begin{array}{c|c} TAT^{-1} & TB\\\hline CT^{-1} & 0 \end{array} \right] \overset{?}= \left[\begin{array}{c|c} \bar{A} & \bar{B}\\\hline \bar{C} & 0 \end{array} \right] \end{equation}
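This $T^*$ construction can be checked numerically for a pair that is known to be equivalent by design. All matrices below are made-up illustrations:

```python
import numpy as np

# Made-up system and invertible T; (Abar, Bbar, Cbar) is equivalent by construction.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 2.0], [0.0, 1.0]])
Tinv = np.linalg.inv(T)
Abar, Bbar, Cbar = T @ A @ Tinv, T @ B, C @ Tinv

# The block matrices SS and SSbar from eq. (2)
SS = np.block([[A, B], [C, np.zeros((1, 1))]])
SSbar = np.block([[Abar, Bbar], [Cbar, np.zeros((1, 1))]])

# T* = blkdiag(T, 1); its inverse is blkdiag(T^{-1}, 1)
T_star = np.block([[T, np.zeros((2, 1))],
                   [np.zeros((1, 2)), np.eye(1)]])

# T* conjugates SS into SSbar, confirming the block computation above
assert np.allclose(T_star @ SS @ np.linalg.inv(T_star), SSbar)
```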
Now, we need to see if there is a $T^*$ such that the above holds. From here on I'm speculating: I figured that there would be such a $T^*$ iff both matrices $SS$ and $\bar{SS}$ could be brought into the same diagonal form (assuming both are diagonalizable), in other words, iff \eqref{eq1} holds.
Furthermore, the transformation matrix should have the same structure as $T^*$. So, if $M$ is the matrix bringing $SS$ into diagonal form and $\bar{M}$ the matrix bringing $\bar{SS}$ into diagonal form, then $M\bar{M}^{-1}$ should have zeroes in its last row and column, except for a final $1$ on the diagonal. But here I'm stuck: how could I prove such a thing? Or do I even need to prove this structure, or is condition \eqref{eq1} already sufficient, with the desired structure following naturally?
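For what it's worth, this check can at least be explored numerically. One caveat worth keeping in mind: `np.linalg.eig` returns eigenvectors only up to scaling (and in an arbitrary order), so the change-of-basis product below does conjugate $SS$ into $\bar{SS}$, but that scaling freedom means it is not forced to have the $T^*$ block structure. The matrices are made-up illustrations:

```python
import numpy as np

# Made-up equivalent pair, as before.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 2.0], [0.0, 1.0]])
Tinv = np.linalg.inv(T)
Abar, Bbar, Cbar = T @ A @ Tinv, T @ B, C @ Tinv

SS = np.block([[A, B], [C, np.zeros((1, 1))]])
SSbar = np.block([[Abar, Bbar], [Cbar, np.zeros((1, 1))]])

# np.linalg.eig returns M with SS @ M = M @ diag(w)
w, M = np.linalg.eig(SS)
wbar, Mbar = np.linalg.eig(SSbar)

# Align the eigenvalue ordering (assumes distinct eigenvalues;
# complex values sort lexicographically by real, then imaginary part)
M = M[:, np.argsort(w)]
Mbar = Mbar[:, np.argsort(wbar)]

# This candidate conjugates SS into SSbar, but inspect it: the eigenvector
# scaling freedom means its last row/column need not be (0, ..., 0, 1).
cand = Mbar @ np.linalg.inv(M)
assert np.allclose(cand @ SS @ np.linalg.inv(cand), SSbar)
```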
How can I (dis)prove that the above relation \eqref{eq1} implies algebraic equivalence as in \eqref{eqtrue}?
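In the meantime, the "easy" direction, that \eqref{eqtrue} implies \eqref{eq1} via the $T^*$ similarity, is at least straightforward to confirm numerically (made-up matrices again, for illustration only):

```python
import numpy as np

# Made-up equivalent pair.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 2.0], [0.0, 1.0]])
Tinv = np.linalg.inv(T)
Abar, Bbar, Cbar = T @ A @ Tinv, T @ B, C @ Tinv

SS = np.block([[A, B], [C, np.zeros((1, 1))]])
SSbar = np.block([[Abar, Bbar], [Cbar, np.zeros((1, 1))]])

# Compare spectra up to ordering; np.sort_complex sorts by real part,
# then imaginary part, giving a consistent order for both sets.
eig1 = np.sort_complex(np.linalg.eigvals(SS))
eig2 = np.sort_complex(np.linalg.eigvals(SSbar))
assert np.allclose(eig1, eig2)
```

Of course, this only confirms the forward direction; the question above is about the converse.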