My book gives the following definition:
Let $A$ be an $(n\times n)$-matrix with real components. A nonzero vector $v\in\mathbb R^n$ is an eigenvector of $A$ if there exists $\lambda\in\mathbb R$ such that $A\cdot v=\lambda v$. This value $\lambda$ is the eigenvalue that belongs to the eigenvector $v$.
The way I interpret this definition is that an eigenvalue is only produced once we are given an eigenvector. But does the converse hold: does a scalar being an eigenvalue imply that there is a corresponding eigenvector?
How I see the definition: $$ v=\text{eigenvector}\implies\exists\lambda\in\mathbb R:A\cdot v=\lambda v. $$ But does the definition really say the following? $$ \lambda\in\mathbb R\text{ is an eigenvalue}\implies\exists v\in\mathbb R^n\setminus\{0\}:A\cdot v=\lambda v. $$ I was thinking of using the contrapositive: assume that $A\cdot v\neq\lambda v$ for every nonzero $v\in\mathbb R^n$. Then no $v$ is an eigenvector with eigenvalue $\lambda$, so $\lambda$ does not belong to any eigenvector, and therefore $\lambda$ is not an eigenvalue at all.
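Not part of the question as such, but here is a quick numerical sketch of the direction I am asking about, using NumPy and a matrix and candidate eigenvalue I picked myself for illustration: given a $\lambda$ with $\det(A-\lambda I)=0$, a nonzero vector in the null space of $A-\lambda I$ is an eigenvector for $\lambda$.

```python
import numpy as np

# Example matrix and candidate eigenvalue (chosen for illustration;
# the eigenvalues of this upper-triangular A are its diagonal entries 2 and 3).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 2.0

M = A - lam * np.eye(2)
# An eigenvector for lam is a nonzero vector in the null space of M.
# The right singular vector for the smallest singular value spans that
# null space when lam really is an eigenvalue (smallest singular value = 0).
U, s, Vt = np.linalg.svd(M)
v = Vt[-1]

print(np.allclose(A @ v, lam * v))  # True: v is a nonzero vector with A v = lam v
```

So, at least computationally, an eigenvalue never comes without an eigenvector: the singularity of $A-\lambda I$ produces one.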
Am I correct that I have to use this contrapositive? Or is the definition lacking?