I am reading an article at the moment, and there is one step in a proof that I am having trouble understanding.
The article proves that if $P$ and $Q$ are commuting ordinary differential operators, then there is a non-trivial polynomial $f(s,t)$ such that $f(P,Q)=0$.
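To fix ideas, here is the simplest instance of this statement that I know (my own example, not taken from the article): for
$$P=\frac{d^2}{dx^2},\qquad Q=\frac{d^3}{dx^3}$$
we have $[P,Q]=0$ and $P^3=Q^2=\frac{d^6}{dx^6}$, so $f(s,t)=s^3-t^2$ works.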
To prove this, the article considers the eigenvalue problem $Py=Ey$ for an arbitrary number $E$. By the basic existence and uniqueness theorem for linear ODEs, this problem has $n$ linearly independent solutions, where $n$ is the order of $P$. These solutions span a space $V_E$. As a basis of this space the article takes the solutions $y_i$ normalized by $\frac{d^j y_i}{dx^j}(0)=\delta_{ij}$.
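For instance, continuing my example above with $P=\frac{d^2}{dx^2}$, the normalized basis of $V_E$ is
$$y_0(x)=\cosh(\sqrt{E}\,x)=\sum_{k\ge 0}\frac{E^k x^{2k}}{(2k)!},\qquad y_1(x)=\frac{\sinh(\sqrt{E}\,x)}{\sqrt{E}}=\sum_{k\ge 0}\frac{E^k x^{2k+1}}{(2k+1)!},$$
since $y_0(0)=y_1'(0)=1$ and $y_0'(0)=y_1(0)=0$.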
Since $Q$ commutes with $P$, it maps $V_E$ into itself. The article now claims, and this is the step I do not understand, that the matrix elements of $Q$ on $V_E$ with respect to this basis are polynomials in $E$.
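To make the claim precise (this is my reading of what "matrix elements" means here): writing $Qy_j=\sum_i c_{ij}(E)\,y_i$ and using the normalization of the basis, the matrix elements are
$$c_{ij}(E)=\frac{d^i}{dx^i}\bigl(Qy_j\bigr)(0),$$
and the claim is that each function $E\mapsto c_{ij}(E)$ is a polynomial.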
Anyone who can explain this will get my gratitude.
Edit: The article I am reading is "Methods of algebraic geometry in the theory of non-linear equations" by Krichever, Russian Math. Surveys 32 (1977). The operators the article studies are ordinary differential operators acting on $C^\infty(\mathbb{R},\mathbb{C})$. Operators are assumed to have constant leading coefficient, so $P=\sum_{i=0}^n a_i(x)\frac{d^i}{dx^i}$ where $a_n$ is a non-zero constant.
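(For instance, the Schrödinger operator $P=\frac{d^2}{dx^2}+u(x)$, with $u$ a smooth function, is of this form, with $n=2$ and leading coefficient $1$.)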
The theorem is Theorem 2.1 on page 9 of the pdf. It is specifically the second sentence of the proof that I have problems with.
I know that such a polynomial actually exists, as claimed in the article, since I know algebraic proofs of this fact. However, I am trying to understand the analytic proof given by Krichever.