My professor gave $a_n=\frac{1}{n\log n}$ for $n \geq 2$ as such an example, but I can't understand why this is true. I find it more plausible that the series $\sum_{n=2}^{\infty}\frac{1}{n\log n}$ is actually convergent rather than divergent. Here is my reasoning:
Note that $\sum_{n=2}^{10}\frac{1}{n\log n}$ is some finite number.
Note that $\sum_{n=2}^\infty \frac{1}{n\log n} = \sum_{n=2}^{10}\frac{1}{n\log n} + \sum_{n=11}^{\infty}\frac{1}{n\log n}$.
But $n\log n > n$ for $n>10$, so I can write $n\log n = n^p$ for some $p>1$. Then, by the $p$-series test, $\sum_{n=11}^{\infty}\frac{1}{n\log n} = \sum_{n=11}^\infty \frac{1}{n^p}$ converges to some finite number.
Therefore, $\sum_{n=2}^\infty \frac{1}{n\log n} = \sum_{n=2}^{10}\frac{1}{n\log n} + \sum_{n=11}^\infty \frac{1}{n\log n}$ must converge. $\blacksquare$
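Out of curiosity, I also looked at the partial sums numerically (a quick Python sketch of my own; the integral-test lower bound $\ln\ln(N+1)-\ln\ln 2$ is unbounded, which I assume is behind my professor's claim of divergence):

```python
import math

# Partial sums S_N = sum_{n=2}^{N} 1/(n log n), alongside the
# integral-test lower bound  S_N >= log(log(N+1)) - log(log 2)
# (valid since 1/(x log x) is decreasing for x >= 2), which
# grows without bound, but extremely slowly.
def partial_sum(N):
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

def lower_bound(N):
    return math.log(math.log(N + 1)) - math.log(math.log(2))

for N in (10, 100, 1000, 10**5):
    print(N, partial_sum(N), lower_bound(N))
```

The partial sums do keep creeping upward, but so slowly that, numerically, the series looks like it converges, which is exactly what confuses me.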
My professor said this example was meant to give us the intuition that the $p$-series test works for any $p>1$, even $p=1.0000000000000001$, i.e., even when $p$ exceeds $1$ by a very, very small amount.
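For reference, here is the integral-test bound as I understand it, which is why the $p$-series converges for every fixed $p>1$, no matter how close $p$ is to $1$:

```latex
\sum_{n=2}^{\infty}\frac{1}{n^{p}}
\;\le\; \int_{1}^{\infty}\frac{dx}{x^{p}}
\;=\; \frac{1}{p-1} \;<\; \infty
\qquad (p>1 \text{ fixed})
```

The bound $\frac{1}{p-1}$ blows up as $p \to 1^{+}$, but it is finite for every fixed $p>1$.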