I'll assume that the method we're using is Euler's method in this answer, but the same ideas apply to other methods as well.
Perhaps the easiest of these ideas to understand is "consistency".
Even if $z_n$ were exact, so that $z_n = y(t_n)$, $z_{n+1}$ would still only be an approximation of $y(t_{n+1})$. A certain amount of error would be introduced in that single step.
We hope that this error introduced in a single step is small. And not just small, but small compared to the step size $h$. In other words, if $e(h)$ is defined by
\begin{equation} y(t_{n+1}) = y(t_n) + h f(t_n,y(t_n)) + e(h), \end{equation} then we hope that $e$ is $o(h)$ as $h \to 0$: \begin{equation} \frac{e(h)}{h} \to 0 \text{ as } h \to 0. \end{equation} If this property holds for all $t_n$, the method is said to be "consistent". People often use Taylor series to show that a method is consistent.
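For example, to see that Euler's method itself is consistent, suppose the solution $y$ is twice continuously differentiable (a mild assumption I'm adding here). Taylor's theorem gives, for some $\xi_n \in (t_n, t_{n+1})$, \begin{equation} y(t_{n+1}) = y(t_n) + h\, y'(t_n) + \frac{h^2}{2} y''(\xi_n) = y(t_n) + h\, f(t_n, y(t_n)) + \frac{h^2}{2} y''(\xi_n), \end{equation} where the second equality uses the ODE $y' = f(t,y)$. Comparing with the definition above, $e(h) = \frac{h^2}{2} y''(\xi_n) = O(h^2)$, which is certainly $o(h)$.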
Here is some intuition: if we integrate over an interval of length $1$, say, then with a step size of $h$ the total number of steps we take is about $\frac{1}{h}$. At each step, the error may increase a little, but we can hope the amount that the error increases is just $o(h)$. If so, then when we add up all the increases in the error, we see that the final error is $\frac{1}{h} \cdot o(h) = \frac{o(h)}{h}$, which approaches $0$ as $h \to 0$. (If the error introduced in a single step were even smaller, say $o(h^3)$, then the situation would be even better: what I'm calling the "final error" would then be $o(h^2)$.)
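To make this accounting concrete, here is a minimal numerical sketch. The test problem $y' = y$, $y(0) = 1$ on $[0,1]$, with exact solution $e^t$, is my choice for illustration; since Euler's single-step error is in fact $O(h^2)$, the final error should shrink roughly in proportion to $h$.

```python
import numpy as np

def euler(f, t0, y0, T, h):
    """Approximate y(T) by Euler's method with step size h, given y(t0) = y0."""
    n_steps = int(round((T - t0) / h))
    t, z = t0, y0
    for _ in range(n_steps):
        z = z + h * f(t, z)   # z_{n+1} = z_n + h f(t_n, z_n)
        t = t + h
    return z

# Test problem (chosen for illustration): y' = y, y(0) = 1, exact solution e^t.
f = lambda t, y: y
exact = np.exp(1.0)

for h in [0.1, 0.05, 0.025, 0.0125]:
    err = abs(euler(f, 0.0, 1.0, 1.0, h) - exact)
    print(f"h = {h:<7} final error = {err:.2e}")
```

Each halving of $h$ should roughly halve the printed error, which is exactly the $\frac{1}{h} \cdot O(h^2) = O(h)$ behavior described above.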
What can go wrong with a consistent method? Yes, it is true that if $z_n$ were exact, then $z_{n+1}$ would differ from $y(t_{n+1})$ by just $o(h)$, which is nice. But $z_n$ is not exact; it already carries a certain amount of error. And if that error is somehow magnified a lot from one step to the next, then $z_{n+1}$ may differ from $y(t_{n+1})$ by a lot. This is why a method must be "stable" as well as "consistent" in order for the final approximation to converge to the correct value as $h \to 0$. This intuitive idea of stability can be made precise (different definitions are useful in different situations), and then we can prove theorems to the effect that "consistency + stability implies convergence".
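To see this failure mode in action, here is a sketch using a classical textbook example (my choice, not something from the question): the two-step method $z_{n+2} = -4 z_{n+1} + 5 z_n + h \left( 4 f(t_{n+1}, z_{n+1}) + 2 f(t_n, z_n) \right)$ is consistent (its single-step error is even $O(h^4)$), but it is not stable: the root $-5$ of the associated polynomial $\zeta^2 + 4\zeta - 5 = (\zeta - 1)(\zeta + 5)$ amplifies whatever error is already present by a factor of about $5$ per step, so the computed values blow up even though each individual step is very accurate.

```python
import numpy as np

def unstable_two_step(f, t0, y0, y1, T, h):
    """A consistent but unstable two-step method:
    z_{n+2} = -4 z_{n+1} + 5 z_n + h (4 f(t_{n+1}, z_{n+1}) + 2 f(t_n, z_n))."""
    n_steps = int(round((T - t0) / h))
    t, z_prev, z = t0, y0, y1          # z_prev ~ y(t), z ~ y(t + h)
    for _ in range(n_steps - 1):
        z_next = -4.0 * z + 5.0 * z_prev + h * (4.0 * f(t + h, z) + 2.0 * f(t, z_prev))
        t, z_prev, z = t + h, z, z_next
    return z

# Test problem (chosen for illustration): y' = -y, y(0) = 1, exact solution e^{-t}.
f = lambda t, y: -y
for h in [0.1, 0.05, 0.025]:
    # Even starting from the *exact* value at t = h, the small errors made
    # along the way are amplified catastrophically.
    approx = unstable_two_step(f, 0.0, 1.0, np.exp(-h), 1.0, h)
    print(f"h = {h:<6} computed y(1) = {approx:.3e}   exact = {np.exp(-1.0):.3e}")
```

Refining $h$ makes things worse, not better, because there are more steps over which the factor-of-$5$ amplification acts. Euler's method, by contrast, is stable (errors are amplified by at most $1 + hL$ per step when $f$ is Lipschitz in $y$ with constant $L$), and that is what rescues the error accounting in the previous paragraph.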