
I have a linear operator $T_1$ which acts on the vector space of polynomials in this way: $$T_1(p(x))=p'(x).$$

How can I find its eigenvalues and how can I know whether it is diagonalizable or not?

  • 0
    I think you must first specify the dimension of your vector space, and after that find the matrix representation of your operator. *Then* you can find the eigenvalues...2011-03-16
  • 0
    Terminology note: The polynomials do not form a field. They are an integral domain if you include multiplication, but in this problem they are only being considered as a vector space.2011-03-16
  • 0
    @arturo and jonas: Thank you for the corrections. I'll know better next time.2011-03-16
  • 3
    @Jose: You don't need a matrix representation to find eigenvalues (though when the space is finite dimensional, this provides an algorithm to find them).2011-03-16

3 Answers

9

In the finite dimensional case, finding the eigenvalues can be done by considering the matrix of the operator, computing the characteristic polynomial, and finding the roots. This is not possible in the infinite dimensional case (as occurs in the case of the vector space of all polynomials with coefficients in $F$), because there is no matrix for the operator and no characteristic polynomial.

Instead, you have to go back to the definitions. An eigenvalue of $T$ is a scalar $\lambda$ for which there exists a nonzero vector $\mathbf{x}$ with $T(\mathbf{x}) = \lambda\mathbf{x}$.

Suppose $\mathbf{x}$ is an eigenvector with eigenvalue $\lambda$. What can we say about $\mathbf{x}$ and $\lambda$? As usual in this kind of situation, we write down what everything means and see what it implies. Often, we can gain enough information to figure out what $\mathbf{x}$ and $\lambda$ have to be.

Let's write $\mathbf{x} = a_nx^n + \cdots + a_0$, with $a_n\neq 0$. (We know at least one coefficient has to be nonzero for $\mathbf{x}$ to be nonzero, a precondition for being an eigenvector, so we may as well write it only up to the highest nonzero coefficient: we write $x^2+0x+1$, not $0x^4+0x^3+x^2+0x+1$, in order to make our life easier.)

Then the equation $T(\mathbf{x}) = \lambda\mathbf{x}$ becomes $$ na_nx^{n-1}+\cdots + a_1 = \lambda a_nx^n + \lambda a_{n-1}x^{n-1}+\cdots + \lambda a_0.$$ This gives you a system of equations \begin{align*} \lambda a_n &= 0\\ \lambda a_{n-1} - na_n &= 0\\ \lambda a_{n-2} - (n-1)a_{n-1} &= 0\\ &\vdots\\ \lambda a_1 - 2a_2 &= 0\\ \lambda a_0 - a_1 &=0. \end{align*} Since we are assuming $a_n\neq 0$, it should be an easy matter to determine all eigenvalues, and all corresponding eigenvectors from this.
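Concretely (working over $\mathbb{R}$, say, so that $na_n\neq 0$ whenever $a_n\neq 0$ and $n\geq 1$): since $a_n\neq 0$, the first equation forces $\lambda = 0$. The remaining equations then read $$na_n = 0,\quad (n-1)a_{n-1} = 0,\quad \ldots,\quad 2a_2 = 0,\quad a_1 = 0,$$ and if $n\geq 1$ the first of these contradicts $a_n\neq 0$; hence $n = 0$. So the only eigenvalue is $\lambda = 0$, and its eigenvectors are exactly the nonzero constant polynomials.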

Now, a linear transformation is diagonalizable if and only if there is a basis for the vector space that consists entirely of eigenvectors. Since you know by now what all the eigenvectors of $T$ are, you can figure out whether you can find a linearly independent set of eigenvectors that spans the vector space of all polynomials. If you can, then $T$ is diagonalizable. If you cannot, then $T$ is not diagonalizable.
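As a sanity check, here is a minimal SymPy sketch of the finite-dimensional picture: restrict $T$ to the polynomials of degree at most $3$ and work with its $4\times 4$ matrix in the basis $\{1,x,x^2,x^3\}$. The characteristic polynomial comes out as $\lambda^4$, the only eigenvalue is $0$, and its eigenspace is one-dimensional (the constants), so even this truncation has too few eigenvectors to be diagonalizable.

```python
# Matrix of differentiation on polynomials of degree <= 3, in the ordered
# basis {1, x, x^2, x^3}.  Column j holds the coordinates of d/dx applied to
# the j-th basis vector: d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x, d/dx(x^3) = 3x^2.
from sympy import Matrix, symbols

lam = symbols('lambda')

D = Matrix([
    [0, 1, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
])

print(D.charpoly(lam))  # characteristic polynomial: lambda**4
print(D.eigenvals())    # {0: 4} -- zero is the only eigenvalue
print(D.eigenvects())   # a single eigenvector, (1, 0, 0, 0)^T, i.e. the constants
```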

  • 0
    There is a matrix indexed by pairs of nonnegative integers with respect to the basis $\{1,x,x^2,\ldots\}$, having the sequence $(1,2,3,4,\ldots)$ on the superdiagonal and $0$ elsewhere. Similarly, a diagonal operator on a countable dimensional space has a diagonal matrix representation with respect to a basis of eigenvectors. The trouble is that the matrix approach in infinite dimensions is much more limited in its usefulness, e.g. for the lack of determinant or trace making sense in general. In this case, the lack of a characteristic polynomial is an important point.2011-03-16
  • 0
    @Jonas: Of course; assuming AC, any linear transformation on any vector space (finite or infinite dimensional, countable or uncountable) can be represented by a "generalized column-finite matrix" and a lot of things can be done with that.2011-03-16
  • 0
    Still having a hard time seeing why $a_n$ is different from $0$. You can have a polynomial that is nonzero with $a_n=0$...2011-03-16
  • 0
    @Nir Avnon: I explicitly explained: you know **some** coefficient is nonzero, so write out the polynomial going up **only** to the largest nonzero coefficient. You can always stop at the last nonzero coefficient, so we do that to make our life simple. There is absolutely no point in writing something like $$0x^7 + 0x^6 + 0x^5 + 0x^4 + 0x^3 + 0x^2 + 7x + 1$$ when you mean "$7x+1$". So don't write it out the silly way; write it out as "$7x + 1$". Just stop at the highest nonzero term and be done with it.2011-03-16
  • 0
    Ok now I get you. Thanks again.2011-03-16
4

Take the derivative of $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ (with $a_n\neq 0$), and set it equal to $\lambda a_nx^n+\cdots+\lambda a_0$. Look particularly at the equality of the coefficients of $x^n$ to determine what $\lambda$ must be. Once you know what the eigenvalues are, consider which possible diagonalized linear transformations have that eigenvalue set, and whether such linear transformations can be similar to differentiation.

  • 0
    Since the vector space of polynomials is infinite dimensional, talking about matrices might not be the right way to go.2011-03-16
  • 0
    @Arturo: You're probably right. However, "diagonalizability" and "matrices" are both referring to choosing basis representations of a linear transformation. I'll edit to make it more carefully worded.2011-03-16
  • 0
    @jonas: I still can't figure out how to find the eigenvalues after looking at the coefficients. Sorry, I'm new to this.2011-03-16
  • 0
    @Nir: What do you get after following the instructions in the first sentence of my answer? Remember that 2 polynomials are equal if and only if their coefficients are equal.2011-03-16
  • 0
    @jonas: Sure, I know this. I get $\lambda a_n=0$, so $\lambda=0$. Why does $a_n$ have to be different from $0$? Is $0$ the only eigenvalue? What should I do with this information? Thank you for the patience.2011-03-16
  • 0
    @Nir: **eigenvectors** have to be different from $0$ (by definition); but $0$ is a permissible eigen**value**.2011-03-16
  • 0
    @Nir: The assumption $a_n\neq 0$ is equivalent to assuming that the polynomial is not zero. Eigenvectors are nonzero, and so you can just take $n$ to be the degree of the polynomial, which is always a well-defined nonnegative integer for a nonzero polynomial. So good, you can conclude that the only eigenvalue is $0$. Now what does a diagonal linear transformation look like if $0$ is its only eigenvalue?2011-03-16
  • 0
    Hmm... that the kernel is $0$? An injective mapping? I'm not sure...2011-03-16
  • 0
    @Nir: If $\mathbf{v}_1$ is an eigenvector corresponding to $\lambda=0$, then $\mathbf{v}_1\neq\mathbf{0}$ (it's an eigenvector), and $T(\mathbf{v}_1) = \lambda\mathbf{v}_1 = 0\mathbf{v}_1 = \mathbf{0}$. Is the kernel of $T$ equal to $0$? (Did you try to work it out before posting your follow-up question, or did you just try to see if you could figure it out off the top of your head?)2011-03-16
  • 0
    No, the kernel is not $0$. I think that you would do well to review what diagonalizable means as a prerequisite to solving a problem that makes essential use of the definition. A diagonalizable linear transformation has a basis of eigenvectors. Having even a single eigenvector $v$ with eigenvalue $0$ tells you that $Tv=0$, implying that $T$ is not injective. Having a basis of eigenvectors with eigenvalue $0$ tells you much more.2011-03-16
  • 0
    Sorry guys, I made a mess with all the definitions. I'll review everything now and hope I'll get it all right. Thanks.2011-03-16
3

Differentiating lowers the degree, so the only case where you get out a scalar multiple of what you put in is when you differentiate a constant.
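In symbols (say over $\mathbb{R}$): if $\deg p = n\geq 1$, then $\deg p' = n-1 < n$, so $p'$ cannot equal $\lambda p$ for $\lambda\neq 0$ (wrong degree), nor for $\lambda = 0$ (then $p'$ would be zero, but it has degree $n-1$). So an eigenvector must be a nonzero constant, with eigenvalue $0$. Since the constants do not span the space of all polynomials, there is no basis of eigenvectors, and $T_1$ is not diagonalizable.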