
My book gives the following definition:

Let $A$ be an $(n\times n)$-matrix with real components. A nonzero vector $v\in\mathbb R^n$ is an eigenvector of $A$ if there exists $\lambda\in\mathbb R$ such that $A\cdot v=\lambda v$. This value $\lambda$ is the eigenvalue that belongs to the eigenvector $v$.

The way I interpret this definition is that we are only given an eigenvalue when we are given an eigenvector. But does a scalar being an eigenvalue imply that there is an eigenvector?

How I see the definition: $$ v=\text{eigenvector}\implies\exists\lambda\in\mathbb R:A\cdot v=\lambda v. $$ But does the definition really also say the following? $$ \lambda\in\mathbb R\text{ is an eigenvalue}\implies\exists v\in\mathbb R^n\setminus\{0\}:A\cdot v=\lambda v. $$ I was thinking of using the contrapositive: assume that $A\cdot v\neq \lambda v$ for every nonzero $v\in\mathbb R^n$. Then no vector is an eigenvector with eigenvalue $\lambda$, and therefore $\lambda$ is not an eigenvalue.

Am I correct that I have to use this contrapositive? Or is the definition lacking?
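As a concrete sanity check of the second implication (a minimal numerical sketch, not part of the question itself; the matrix and the use of the SVD to extract a null-space vector are my own illustrative choices): if $\lambda$ is an eigenvalue, then $A-\lambda I$ is singular, so its null space contains a nonzero $v$ with $A\cdot v=\lambda v$ — the eigenvector the definition promises.

```python
import numpy as np

# Example matrix (symmetric, chosen for illustration; eigenvalues are 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam = 3.0  # a root of det(A - lam*I) = (2 - lam)^2 - 1

# lam is an eigenvalue precisely when A - lam*I is singular.
assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-12

# A nonzero null-space vector of (A - lam*I) is then an eigenvector for lam.
# SVD trick: singular values are sorted in descending order, so the last row
# of Vt is a unit right-singular vector for the (zero) smallest singular value.
_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
v = Vt[-1]

assert np.linalg.norm(v) > 0          # v is nonzero, as the definition requires
assert np.allclose(A @ v, lam * v)    # and A v = lam v holds
```

So, numerically as well as logically: once $\det(A-\lambda I)=0$ singles out $\lambda$ as an eigenvalue, an eigenvector can always be produced from the null space.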

  • According to your definition, if $\lambda$ is an eigenvalue, then it has to be associated to some eigenvector $v$. In other words, to say that $\lambda$ is an eigenvalue, you need to know that there exists an eigenvector associated to that number. (2017-02-27)
  • The definition should be thought of as simultaneously defining eigenvalues and eigenvectors, which are naturally paired up. (2017-02-27)
