Here's another solution, which is obviously not as elegant as, and a lot more complicated than, the one in the other answers, but perhaps it serves as a good example that a) in mathematics you can often do things in quite different ways and (sometimes somewhat miraculously) end up with the same result, and b) which of these different ways you choose can make a big difference in how much work it takes to get that result.
We can model the dealing as a Markov process. The state is given by the number $i$ of hands not yet seen. With each deal, in state $i$ the state is left unchanged with probability $1-i/n$ and a transition from state $i$ to state $i-1$ occurs with probability $i/n$, where again $n$ is the number of possible hands. So for example for $n=5$ we'd have the following transition matrix:
$\left(\begin{array}{cccccc} 1&\frac15&0&0&0&0\\ 0&\frac45&\frac25&0&0&0\\ 0&0&\frac35&\frac35&0&0\\ 0&0&0&\frac25&\frac45&0\\ 0&0&0&0&\frac15&1\\ 0&0&0&0&0&0\\ \end{array}\right)\;.$
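(As a side check, here's a minimal numerical sketch of this matrix, assuming NumPy; the variable names are mine, not part of the derivation:

```python
import numpy as np

n = 5
M = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    M[i, i] = 1 - i / n      # an already-seen hand is dealt: stay in state i
    if i > 0:
        M[i - 1, i] = i / n  # a new hand is dealt: move from state i to i - 1

print(M)                 # matches the matrix above for n = 5
print(M.sum(axis=0))     # every column sums to 1 (column-stochastic)
```

The columns are the "from" states, so a deal maps a state vector $v$ to $Mv$.)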
If we number the rows and columns starting with $0$, this matrix has eigenvectors $x_k$ with components $(x_k)_i=(-1)^i\binom ki$ for $k=0,\dotsc,n$ and corresponding eigenvalues $\lambda_k=1-k/n$ (where the binomial coefficient is zero if the bottom argument is greater than the top one). Our initial state has probability $1$ in state $n$ and $0$ in all other states, and this has the expansion
$\delta_{in}=\sum_{k=0}^n(-1)^k\binom nk(x_k)_i=\sum_{k=0}^n(-1)^{k+i}\binom nk\binom ki$
in terms of the eigenvectors of the transition matrix. The final state has $i=0$, so the probability of having seen all hands at time $t$ is the corresponding component of the state vector at time $t$. Each eigenvector is multiplied by its eigenvalue in each deal, so the probability of being in state $i=0$ at time $t$ is given by
$p_t=\sum_{k=0}^n(-1)^k\binom nk\lambda_k^t=\sum_{k=0}^n(-1)^k\binom nk\left(1-\frac kn\right)^t\;.$
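Both the eigen-decomposition and this closed form are easy to spot-check numerically; here's a sketch (again just mine, assuming NumPy and `math.comb`, with $M$ rebuilt as above):

```python
from math import comb
import numpy as np

n = 5
M = np.diag([1 - i / n for i in range(n + 1)]) \
  + np.diag([i / n for i in range(1, n + 1)], k=1)

# Row k of x holds the eigenvector x_k with components (x_k)_i = (-1)^i C(k, i).
x = np.array([[(-1) ** i * comb(k, i) for i in range(n + 1)]
              for k in range(n + 1)], dtype=float)
for k in range(n + 1):
    assert np.allclose(M @ x[k], (1 - k / n) * x[k])   # eigenvalue 1 - k/n

# The initial state (1 in component n, 0 elsewhere) has the claimed expansion ...
start = sum((-1) ** k * comb(n, k) * x[k] for k in range(n + 1))
assert np.allclose(start, np.eye(n + 1)[n])

# ... and the closed form for p_t matches simply iterating the chain.
state = np.eye(n + 1)[n]
for t in range(30):
    p_t = sum((-1) ** k * comb(n, k) * (1 - k / n) ** t for k in range(n + 1))
    assert np.isclose(state[0], p_t)   # component i = 0 of M^t applied to start
    state = M @ state                  # one more deal
```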
The probability of not being done yet at time $t$ is
$1-p_t=-\sum_{k=1}^n(-1)^k\binom nk\left(1-\frac kn\right)^t\;,$
and, since a non-negative integer random variable $T$ satisfies $E[T]=\sum_{t=0}^\infty\Pr[T>t]$, the average waiting time is the sum of all these probabilities:
$ \begin{eqnarray} T &=& \sum_{t=0}^\infty(1-p_t) \\ &=& \sum_{t=0}^\infty\left(-\sum_{k=1}^n(-1)^k\binom nk\left(1-\frac kn\right)^t\right) \\ &=& -\sum_{k=1}^n(-1)^k\binom nk\sum_{t=0}^\infty\left(1-\frac kn\right)^t \\ &=& -\sum_{k=1}^n(-1)^k\binom nk\frac nk\;. \end{eqnarray} $
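For $n=5$ this alternating sum can be checked against a truncated version of the sum over $t$; a small sketch (truncation at $t=2000$ is my choice, the geometric terms decay quickly):

```python
from math import comb

n = 5
T_alt = -sum((-1) ** k * comb(n, k) * n / k for k in range(1, n + 1))

# Truncate the infinite sum of tail probabilities 1 - p_t over t.
T_tail = sum(
    -sum((-1) ** k * comb(n, k) * (1 - k / n) ** t for k in range(1, n + 1))
    for t in range(2000)
)
print(T_alt, T_tail)   # both are about 11.4167 for n = 5
```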
To evaluate this, consider
$\sigma(q)=\sum_{k=1}^n(q-1)^k\binom nk\frac1k\;.$
Differentiating with respect to $q$ yields
\begin{eqnarray} \sigma'(q) &=& \sum_{k=1}^n(q-1)^{k-1}\binom nk \\ &=& \frac1{q-1}\left(\sum_{k=0}^n(q-1)^k\binom nk-1\right) \\ &=& \frac{(q-1+1)^n-1}{q-1} \\ &=& \frac{q^n-1}{q-1} \\ &=& \sum_{i=0}^{n-1}q^i \;, \end{eqnarray}
and so
$\sigma(q)=\sum_{i=1}^{n}\frac{q^i}i+C\;.$
The integration constant $C$ can be recovered from $\sigma(1)=0$ (every term in the sum defining $\sigma$ contains a factor $(q-1)^k$ with $k\ge1$):
$\sigma(q)=\sum_{i=1}^{n}\frac{q^i}i-\sum_{i=1}^{n}\frac1i\;,$
and thus we obtain
$T=-n\sigma(0)=n\sum_{i=1}^{n}\frac1i\;,$
in agreement with the other answers.
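Finally, everything can be sanity-checked end to end. The following sketch (mine, standard library only) confirms $\sigma(0)=-\sum_{i=1}^n\frac1i$ numerically and compares $T=n\sum_{i=1}^n\frac1i$ with a Monte Carlo estimate for $n=5$, where the exact value is $5\cdot\frac{137}{60}=\frac{137}{12}\approx11.417$:

```python
import random
from math import comb, isclose

n = 5
H_n = sum(1 / i for i in range(1, n + 1))

# sigma(0) from the defining sum vs. the closed form -H_n
sigma0 = sum((-1) ** k * comb(n, k) / k for k in range(1, n + 1))
assert isclose(sigma0, -H_n)

def deals_until_all_seen(n):
    """Deal uniformly random hands until each of the n hands has appeared."""
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        t += 1
    return t

trials = 200_000
avg = sum(deals_until_all_seen(n) for _ in range(trials)) / trials
print(n * H_n, avg)   # ~11.4167 vs. a Monte Carlo estimate close to it
```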