For the $\Rightarrow$ direction, if $A$ is diagonalizable, then it's easy to finish off your argument: Let $x_1, \ldots, x_m$ be a basis of $V$ consisting of eigenvectors for $A$. Then, since $C$ is nonzero, some $Cx_i$ must be nonzero, and your argument shows that the corresponding eigenvalue $\lambda_i$ is an eigenvalue of $B$.
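Explicitly, if $Ax_i = \lambda_i x_i$ and $Cx_i \neq 0$, then
$$B(Cx_i) = (BC)x_i = (CA)x_i = C(\lambda_i x_i) = \lambda_i (Cx_i),$$
so $Cx_i$ is an eigenvector of $B$ with eigenvalue $\lambda_i$.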
If $A$ is not diagonalizable then it's trickier. One way is to use generalized eigenvectors: A nonzero vector $v \in V$ is a generalized eigenvector of $A$ if for some positive integer $k$ and some scalar $\lambda$ we have $(A-\lambda I)^k v = 0$. The scalar $\lambda$ is then necessarily an eigenvalue of $A$. The key fact about generalized eigenvectors is that for every matrix $A$ (over an algebraically closed field such as $\mathbb{C}$), there is a basis for $V$ consisting of generalized eigenvectors of $A$.
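To see why $\lambda$ must be an eigenvalue of $A$: take the smallest $k$ with $(A-\lambda I)^k v = 0$. Then $w = (A-\lambda I)^{k-1} v$ is nonzero and
$$(A - \lambda I)w = (A-\lambda I)^k v = 0,$$
so $w$ is an honest eigenvector of $A$ for $\lambda$.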
Take a basis $x_1, \ldots, x_m$ of generalized eigenvectors of $A$, with corresponding eigenvalues $\lambda_1, \ldots, \lambda_m$. Then, as before, some $Cx_i$ is nonzero, and a short computation using the condition $CA=BC$ shows that $(B-\lambda_i I)^k C = C (A-\lambda_i I)^k$. Using this, we conclude that $Cx_i$ is a generalized eigenvector of $B$ and therefore $\lambda_i$ is an eigenvalue of $B$.
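Here is that computation: since $BC = CA$,
$$(B - \lambda_i I)C = BC - \lambda_i C = CA - \lambda_i C = C(A - \lambda_i I),$$
and iterating gives $(B-\lambda_i I)^k C = C(A-\lambda_i I)^k$. Applying both sides to $x_i$, where $(A-\lambda_i I)^k x_i = 0$, yields $(B - \lambda_i I)^k (Cx_i) = 0$ with $Cx_i \neq 0$.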
For the $\Leftarrow$ direction, again the diagonalizable case is easy: Take a common eigenvalue $\lambda$ and map an eigenvector of $A$ to an eigenvector of $B$, just as you did above. Send the rest of the basis of eigenvectors of $A$ to $0$. You can check that $CA=BC$ holds on this basis of eigenvectors.
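Concretely, say $Ax_1 = \lambda x_1$ and $Bw = \lambda w$ with $w \neq 0$, and define $C$ on the eigenbasis by $Cx_1 = w$ and $Cx_j = 0$ for $j \geq 2$. Then, writing $\lambda_j$ for the eigenvalue of $x_j$,
$$CAx_1 = \lambda Cx_1 = \lambda w = Bw = BCx_1, \qquad CAx_j = \lambda_j Cx_j = 0 = BCx_j \quad (j \geq 2),$$
so $CA = BC$ and $C \neq 0$.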
The general case is done by using generalized eigenvectors. Take a common eigenvalue $\lambda$. Now we have to be careful: we can't always map an eigenvector of $A$ to an eigenvector of $B$. (Let $A$ be $\left( \begin{matrix} 1 & 1\cr 0 & 1 \end{matrix} \right)$ and $B = I$; show that every matrix $C$ satisfying $CA=BC$ must take the unique eigenspace of $A$ to zero.) Consider the generalized eigenspaces $V_{\mu} = \{v \in V : (A - \mu I)^k v = 0 \mbox{ for some positive integer } k\}.$ The space $V$ is the direct sum of all the $V_{\mu}$. For $\mu \not = \lambda$, you send $V_{\mu}$ to $0$. The tricky part is what to do with $V_{\lambda}$ itself.
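To verify the parenthetical example: with $B = I$, the condition $CA = BC$ says $C(A - I) = 0$. Since
$$A - I = \left( \begin{matrix} 0 & 1\cr 0 & 0 \end{matrix} \right),$$
the second column of $C(A-I)$ is $Ce_1$, so $Ce_1 = 0$; that is, $C$ kills $e_1$, which spans the unique eigenspace of $A$.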
Since we're just worrying about $V_{\lambda}$ now, we can replace $V$ by $V_\lambda$ (which $A$ preserves); thus we may assume that $\lambda$ is the only eigenvalue of $A$. Let $k$ be the smallest positive integer such that $(A-\lambda I)^k v = 0$ for all $v \in V$. For lack of a better term, let's call $k$ the index of $A$ for $V$. We proceed by induction on $k$. If $k=1$ we are in the diagonalizable case.
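Indeed, if $k=1$ then $(A-\lambda I)v = 0$ for all $v \in V$, i.e. $A = \lambda I$ on $V$, which is diagonalizable; since $\lambda$ is the common eigenvalue, the construction from the diagonalizable case produces the required nonzero $C$.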
If $k>1$, let $v$ be an eigenvector of $A$ for $\lambda$. Then $A$ preserves the line $\langle v \rangle$ and therefore acts on the quotient $\overline{V} = V/\langle v \rangle$; call the induced map $\overline{A}$. Now we can see that $(A-\lambda I)^{k-1}$ kills $\overline{V}$ (since otherwise $(A - \lambda I)^k$ would not kill $V$). Hence the index of $A$ in $\overline{V}$ is less than $k$. Inductively, we have a nonzero map $\overline{C} : \overline{V} \to W$ such that $\overline{C}\,\overline{A} = B \overline{C}$, and by composing with the projection from $V$ to $\overline{V}$, we get a nonzero map $C : V \to W$ satisfying $CA = BC$.
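Writing $\pi : V \to \overline{V}$ for the projection and $C = \overline{C} \circ \pi$: since $\pi A = \overline{A}\, \pi$ by the definition of the induced map,
$$CA = \overline{C}\, \pi A = \overline{C}\, \overline{A}\, \pi = B \overline{C}\, \pi = BC,$$
and $C \neq 0$ because $\pi$ is surjective and $\overline{C} \neq 0$.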
EDIT: I don't think the fact that the index of $A$ in $\overline{V}$ is smaller than $k$ is as trivial as I made it sound above. You need to look at the Jordan blocks of $A$. Or, rather than inducting on $k$, just note that $\dim \overline{V} = \dim V - 1 < \dim V$ and use induction on $\dim V$ instead.
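To spell out the dimension induction: if $\dim V = 1$ then $A = \lambda I$ and we are in the diagonalizable case. If $\dim V > 1$ then $\overline{V} = V/\langle v \rangle$ is nonzero, $\lambda$ is still the only eigenvalue of $\overline{A}$ (the characteristic polynomial of $\overline{A}$ divides that of $A$), and $\dim \overline{V} = \dim V - 1$, so the inductive hypothesis produces the nonzero $\overline{C}$ used above.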