To extend Bill's answer: as I mentioned here, one can use the Jordan decomposition instead of the eigendecomposition when computing the sine of a matrix (or the cosine, or any matrix function, really). One thus needs a method for computing the sine of scalars and of Jordan blocks; for the Jordan block
$\mathbf J=\begin{pmatrix}\lambda&1&&\\&\lambda&\ddots&\\&&\ddots&1\\&&&\lambda\end{pmatrix}$
with (algebraic) multiplicity $k$ (i.e., $\mathbf J$ is a $k\times k$ matrix), the applicable formula is
$f(\mathbf J)=\begin{pmatrix}f(\lambda)&f^\prime(\lambda)&\cdots&\frac{f^{(k-1)}(\lambda)}{(k-1)!}\\&f(\lambda)&\ddots&\vdots\\&&\ddots&f^\prime(\lambda)\\&&&f(\lambda)\end{pmatrix}$
(see here for a proof). Since we have the neat chain of derivatives
$\sin^{(p)}(x)=\begin{cases}\sin\,x&\text{if }p\bmod 4=0\\\cos\,x&\text{if }p\bmod 4=1\\-\sin\,x&\text{if }p\bmod 4=2\\-\cos\,x&\text{if }p\bmod 4=3\end{cases}$
or more simply, $\sin^{(p)}(x)=\sin\left(x+p\frac{\pi}{2}\right)$, it is quite easy to compute the sine of a Jordan block.
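To make the block formula concrete, here is a small NumPy sketch (the helper name `sin_jordan_block` and the cross-check via $\sin\mathbf J=\frac{e^{i\mathbf J}-e^{-i\mathbf J}}{2i}$ with `expm` are my own illustration, not something taken from the references):

```python
import math

import numpy as np
from scipy.linalg import expm

def sin_jordan_block(lam, k):
    # Sine of a k x k Jordan block with eigenvalue lam, via the f(J)
    # formula above: the p-th superdiagonal holds sin^(p)(lam) / p!,
    # and sin^(p)(x) = sin(x + p*pi/2).
    F = np.zeros((k, k))
    for p in range(k):
        val = np.sin(lam + p * np.pi / 2) / math.factorial(p)
        F += np.diag(np.full(k - p, val), p)
    return F

# Cross-check against sin(J) = (e^{iJ} - e^{-iJ}) / (2i); expm copes
# with the defective J, unlike an eigendecomposition-based approach.
lam, k = 0.7, 4
J = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)
S = ((expm(1j * J) - expm(-1j * J)) / 2j).real
print(np.allclose(sin_jordan_block(lam, k), S))  # expect True
```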
In inexact arithmetic, however, the Jordan decomposition is exceedingly difficult to compute stably, so one has to employ different methods. One way is to replace the Jordan decomposition with a Schur decomposition; there is a method due to Beresford Parlett for computing functions of general triangular matrices, but I won't discuss that further, and will instead concentrate on a different evaluation method. The key is that the sine and cosine satisfy neat duplication formulae:
$\begin{align*}\sin\,2x&=2\sin\,x\cos\,x\\\cos\,2x&=2\cos^2 x-1\end{align*}$
Thus, just as in the scalar case, one can employ argument reduction: halve the matrix enough times that its norm becomes small (remembering the number of halvings $m$), evaluate a good approximation of the sine and cosine at that scaled matrix (either a truncated Maclaurin series or a Padé approximant does nicely, with a slight edge for Padé), and finally apply the two duplication formulae in tandem $m$ times to recover the sine and cosine of the original matrix. This has more details.
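Here is a rough NumPy sketch of that scaling-and-doubling procedure, assuming a truncated Maclaurin series in place of a properly tuned Padé approximant and a crude 1-norm test for the scaling; the function name `sin_cos_matrix` is just for illustration:

```python
import numpy as np

def sin_cos_matrix(A, terms=12):
    # Scaling-and-doubling sketch: scale A by 2^-m so its 1-norm is at
    # most 1, approximate sin/cos of the scaled matrix by a truncated
    # Maclaurin series, then apply the duplication formulae m times.
    A = np.asarray(A, dtype=float)
    I = np.eye(A.shape[0])

    nrm = np.linalg.norm(A, 1)
    m = max(0, int(np.ceil(np.log2(nrm)))) if nrm > 1 else 0
    B = A / 2.0**m
    B2 = B @ B

    # Truncated Maclaurin series for sin(B) and cos(B).
    S, C = B.copy(), I.copy()
    term_s, term_c = B.copy(), I.copy()
    for j in range(1, terms):
        term_s = -term_s @ B2 / ((2 * j) * (2 * j + 1))
        term_c = -term_c @ B2 / ((2 * j - 1) * (2 * j))
        S, C = S + term_s, C + term_c

    # Undo the scaling: sin 2x = 2 sin x cos x, cos 2x = 2 cos^2 x - 1.
    for _ in range(m):
        S, C = 2.0 * S @ C, 2.0 * C @ C - I
    return S, C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
S, C = sin_cos_matrix(A)
print(np.allclose(S @ S + C @ C, np.eye(2)))  # sin^2 + cos^2 = I
```

The final check relies on $\sin^2\mathbf A+\cos^2\mathbf A=\mathbf I$, which holds for matrices because both sine and cosine of $\mathbf A$ are power series in $\mathbf A$ and therefore commute.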
Nick Higham discusses these methods at length in his book; see the chapter I linked to and the references therein.