Here is the intuitive story behind Robert Israel's answer:
The following general principle goes back to Newton: any reasonable given or unknown function $x\mapsto f(x)$ defined in a neighborhood of $x=0$, in particular any "analytical expression" such as $e^x$, can be expanded into a power series of the form $\sum_{k=0}^\infty a_k x^k$, where the $a_k$ are real (or complex) constants. This means that there is a $\rho>0$ such that $f(x)=\sum_{k=0}^\infty a_kx^k\qquad\bigl(|x|<\rho\bigr)\ .$ Such power series behave in a simple way under addition and multiplication (in particular by polynomials in $x$), and above all under differentiation. Even composition ("plugging" one series into another) can be handled, though the computations become more involved in that case.
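For concreteness, the two rules used most often below are termwise differentiation and the Cauchy product: if $f(x)=\sum_{k=0}^\infty a_kx^k$ and $g(x)=\sum_{k=0}^\infty b_kx^k$ for $|x|<\rho$, then $f'(x)=\sum_{k=0}^\infty (k+1)a_{k+1}x^k$ and $f(x)\,g(x)=\sum_{k=0}^\infty\Bigl(\sum_{j=0}^k a_jb_{k-j}\Bigr)x^k$ on the same interval.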
When an initial value problem $y'(x)=F\bigl(x,y(x)\bigr),\quad y(0)=y_0\qquad(*)$ is given, where the right-hand side is some "analytical expression" $F(x,y)$ in the two variables $x$ and $y$, then it is only in rare cases possible to guess, and subsequently verify, some expression $x\mapsto y(x)$ that solves this problem. But one is always allowed to expand everything in sight into a power series in $x$, in particular the unknown solution $y(x)$ with undetermined coefficients $a_k$, and in the majority of cases one then obtains a recursion formula for the $a_k$, with starting value $a_0=y_0$.
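As a simple illustration (not the problem treated in the answer above), take $y'=y$, $y(0)=1$. Writing $y(x)=\sum_{k=0}^\infty a_kx^k$ and comparing coefficients of $x^k$ in $\sum_{k=0}^\infty(k+1)a_{k+1}x^k=\sum_{k=0}^\infty a_kx^k$ gives the recursion $a_{k+1}=\frac{a_k}{k+1}$ with $a_0=1$, hence $a_k=\frac1{k!}$ and $y(x)=e^x$, convergent for all $x$.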
In the end one has the solution of $(*)$ in the form $y(x)=\sum_{k=0}^\infty a_kx^k$, and this series is hopefully convergent for all $x$ with $|x|<\rho$ for some $\rho>0$. In this way one obtains an at least numerically usable handle on the solution, even though no "elementary" expression for it may be available.
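To indicate what "numerically feasible" means in practice, here is a minimal sketch (in Python) for the illustrative right-hand side $F(x,y)=x+y^2$, which is an assumption chosen only to show the mechanics, not taken from the question: the recursion for the $a_k$ comes from the Cauchy product for $y^2$ and termwise comparison of coefficients, and the truncated series is then evaluated near $x=0$.

```python
# Sketch: coefficient recursion for the illustrative IVP
#   y'(x) = x + y(x)^2,   y(0) = 1.
# Substituting y(x) = sum_k a_k x^k and comparing the coefficient of x^k gives
#   (k+1) a_{k+1} = c_k + [k == 1],   c_k = sum_{j=0}^{k} a_j a_{k-j},
# where c_k is the Cauchy-product coefficient of x^k in y^2.

def series_coefficients(y0, n_terms):
    """Compute a_0, ..., a_{n_terms-1} by the recursion above."""
    a = [float(y0)]
    for k in range(n_terms - 1):
        c_k = sum(a[j] * a[k - j] for j in range(k + 1))  # coefficient of x^k in y^2
        rhs_k = c_k + (1.0 if k == 1 else 0.0)            # the term 'x' contributes only at k = 1
        a.append(rhs_k / (k + 1))
    return a

def partial_sum(a, x):
    """Evaluate the truncated series sum_k a_k x^k by Horner's scheme."""
    s = 0.0
    for coeff in reversed(a):
        s = s * x + coeff
    return s

a = series_coefficients(1.0, 25)
print(a[:5])                # 1.0, 1.0, 1.5, ... : the first few coefficients
print(partial_sum(a, 0.1))  # value of the truncated series near x = 0
```

Of course only finitely many coefficients can be computed, and the evaluation is meaningful only for $|x|$ inside the radius of convergence of the series.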