It's actually best, in my opinion at least, to use the usual polynomial expansion for $p$. Note that taking powers of a diagonalizable matrix is easy: $A^k = S\Lambda^k S^{-1}$ (by induction). Thus you can write $p(A) = \sum_{k=0}^n a_k A^k = \sum_{k=0}^n a_k S \Lambda^k S^{-1}=S\left(\sum_{k=0}^n a_k\Lambda^k\right)S^{-1}=S\,p(\Lambda)S^{-1}$. Finally, polynomials of diagonal matrices are easy: $p(\Lambda)=\mathrm{diag}(p(\lambda_1),p(\lambda_2),\dots,p(\lambda_n))$. But the eigenvalues of $A$ are precisely the zeros of $p(\cdot)$, so $p(\Lambda)=0$, and hence $p(A)=0$.
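As a quick numerical sanity check of this route (a sketch only; the $2\times 2$ matrix below is my own example, not from the problem), one can expand the characteristic polynomial $p(x)=x^2-(\lambda_1+\lambda_2)x+\lambda_1\lambda_2$ and evaluate it through $S\,p(\Lambda)\,S^{-1}$:

```python
import numpy as np

# Example symmetric (hence diagonalizable) matrix; eigenvalues are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Diagonalize: A = S @ Lam @ inv(S).
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)
Sinv = np.linalg.inv(S)

# Characteristic polynomial p(x) = x^2 - (l1 + l2) x + l1 l2,
# i.e. coefficients from the trace and determinant.
tr, det = eigvals.sum(), eigvals.prod()

# p(A) = S p(Lam) S^{-1}; p(Lam) applies p to each diagonal entry,
# and each eigenvalue is a zero of p, so this should vanish.
pLam = Lam @ Lam - tr * Lam + det * np.eye(2)
pA = S @ pLam @ Sinv

print(np.allclose(pA, np.zeros((2, 2))))  # True: p(A) = 0
```

Since $p(\lambda_1)=p(\lambda_2)=0$, the diagonal matrix `pLam` is exactly zero (up to rounding), and conjugating by $S$ keeps it zero.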
EDIT: If the problem specifically asks you to substitute the diagonalization into the product, then that's what you should do. The idea behind that route is that each factor $(A-\lambda_iI)$ can be rewritten as $S(\Lambda-\lambda_iI)S^{-1}$; the inner $S^{-1}S$ pairs cancel, making $p(A)=S(\Lambda-\lambda_1I)(\Lambda-\lambda_2I)\cdots(\Lambda-\lambda_nI)S^{-1}$. Ignoring the outside factors, the product of diagonal matrices is just the entrywise product of their diagonal entries. Thus, using the product expression for $p(\cdot)$ in each diagonal entry, we find that $p(A)$ comes to $S\,\mathrm{diag}(p(\lambda_1),\dots,p(\lambda_n))\,S^{-1}$, which is of course just $0$.
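The product route can be checked numerically the same way (again a sketch with a made-up example matrix): each diagonal factor $\Lambda-\lambda_iI$ has a zero in its $i$-th diagonal entry, so the running product ends up zero everywhere.

```python
import numpy as np
from functools import reduce

# Example symmetric (hence diagonalizable) 3x3 matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)
n = len(eigvals)

# Product form: p(A) = S (Lam - l1 I)(Lam - l2 I)...(Lam - ln I) S^{-1}.
# The i-th factor zeroes the i-th diagonal entry, so the product vanishes.
prod = reduce(lambda M, lam: M @ (Lam - lam * np.eye(n)),
              eigvals, np.eye(n))
pA = S @ prod @ np.linalg.inv(S)

print(np.allclose(pA, np.zeros((n, n))))  # True
```

Note that the inner product of diagonal factors, `prod`, is already the zero matrix before conjugating by $S$, which is exactly the observation in the argument above.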