I don't really know the general parameters of your problem at hand, but possibly my method using Pari/GP allows rather flexible work with up to 128 (or, why not, 160) terms of a series; my standard precision is 200 decimal digits. I've built a set of routines to express functional iteration as matrix operations, plus flexible handling of such iterations (sometimes even fractional ones) by diagonalization, where applicable. If I encounter series with coefficients of order 1e100 but alternating signs, I can, in the same environment, use Euler summation to arrive at results anyway.
In principle, what I'm doing is to construct a vector $\small A_1$ containing the leading, say 128, coefficients of the power series of the function under consideration. In the same way I create the vectors $\small A_2, A_3, \ldots $ holding the coefficients of the corresponding powers of the function; this is easily done by a standard call in Pari/GP, but can also easily be programmed by hand. For the zeroth power it is just the vector $\small A_0 = [1, 0, 0, \ldots] $, up to the selected dimension. Then all the vectors are concatenated into a matrix $\small A = [A_0, A_1, A_2, \ldots, A_n] $. (Such matrices are called Carleman matrices, btw.)
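Since I can only sketch my Pari/GP routines here in principle, here is a minimal illustration of the construction in Python/numpy instead (the helper name `carleman` and the example function are of course just mine for this sketch, not the original routines):

```python
import numpy as np
from math import factorial

N = 16  # truncation: keep the first N series coefficients

def carleman(f, N):
    """Truncated Carleman matrix: column k holds the first N
    coefficients of the power series of f(x)^k."""
    A = np.zeros((N, N))
    col = np.zeros(N)
    col[0] = 1.0                       # f^0 = 1, the vector A_0
    A[:, 0] = col
    for k in range(1, N):
        col = np.convolve(col, f)[:N]  # multiply by f, truncate
        A[:, k] = col
    return A

# example function f(x) = exp(x) - 1, coefficients 0, 1, 1/2!, 1/3!, ...
f = np.array([0.0] + [1.0 / factorial(n) for n in range(1, N)])
A = carleman(f, N)

x = 0.1
V = x ** np.arange(N)   # V(x) = [1, x, x^2, ...]
Vf = V @ A              # approximates V(f(x))
print(Vf[1], np.exp(x) - 1)   # the two values agree to many digits
```

Entry $\small A[j,k]$ is the coefficient of $\small x^j$ in $\small f(x)^k$, so the dot product of $\small V(x)$ with column k gives $\small f(x)^k$ up to the truncation error.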
With a vector $V(x)$ which contains the consecutive powers of x (or of its current value), I then get by the call $ \small V(x) \cdot A $ the result $\small V(f(x)) $ to a certain approximation.
Function composition then reduces to matrix multiplication: $\small V(x) \cdot A \cdot B$ gives $\small V(g(f(x)))$ if $\small B$ is the Carleman matrix of the function $\small g(x)$; and iteration reduces to matrix powers: $\small V(f(f(x))) = V(x) \cdot A^2 $. In Pari/GP this can, to a certain simple extent, also be done with symbolic coefficients (but then not with big matrix sizes, say up to 64 x 64 at most).
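To illustrate composition and iteration numerically, a small Python sketch (again only an illustration of the principle, with names of my choosing; note that for functions with f(0)=0 the truncated Carleman matrices are lower triangular, so their products are exact up to the truncation order):

```python
import numpy as np
from math import factorial

N = 16

def carleman(f, N):
    """Column k = first N coefficients of f(x)^k."""
    A = np.zeros((N, N))
    col = np.zeros(N); col[0] = 1.0
    A[:, 0] = col
    for k in range(1, N):
        col = np.convolve(col, f)[:N]
        A[:, k] = col
    return A

f = np.array([0.0] + [1.0 / factorial(n) for n in range(1, N)])  # exp(x)-1
g = np.zeros(N); g[1], g[2] = 1.0, 1.0                           # x + x^2
A, B = carleman(f, N), carleman(g, N)

x = 0.1
V = x ** np.arange(N)
comp = (V @ A @ B)[1]   # ~ g(f(x)), composition as matrix product
it2  = (V @ A @ A)[1]   # ~ f(f(x)), iteration as matrix power A^2
print(comp, it2)
```

Both values match a direct evaluation of $\small g(f(x))$ resp. $\small f(f(x))$ to roughly the truncation accuracy.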
Unfortunately I don't have Mathematica, so I can't be helpful with that side, but I could post my couple of basic routines to show how this is done in principle.
There is a lot of fine-tuning involved, depending on the types of functions and so on, so I really do not know how far all this is meaningful for your current problem at all.
[added] With a small set of basic matrices and the concept of matrix decomposition, this formalism allows a very handy RPN-like environment for manipulating/analyzing power series. We have the basic algebraic operations at hand:
The operation of increment (or, when repeated, of addition) is equivalent to recentering a power series. It uses the Pascal matrix P and its powers as Carleman operator:
$ \qquad \small \begin{eqnarray} V(x) \cdot P^\tau &=& V(x+1) \\ V(x) \cdot (P^\tau)^{-1} &=& V(x-1) \\ V(x) \cdot (P^\tau)^y &=& V(x+y) \\ \end{eqnarray}$
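This is just the binomial theorem in matrix form; a quick numeric check (Python instead of Pari/GP, purely as illustration):

```python
import numpy as np
from math import comb

N = 12
# lower-triangular Pascal matrix, P[i,j] = C(i,j)
P = np.array([[comb(i, j) for j in range(N)] for i in range(N)], dtype=float)

x = 0.3
V = x ** np.arange(N)
inc = V @ P.T                 # V(x) . P^tau        = V(x+1)
dec = V @ np.linalg.inv(P.T)  # V(x) . (P^tau)^(-1) = V(x-1)
print(np.allclose(inc, (x + 1) ** np.arange(N)),
      np.allclose(dec, (x - 1) ** np.arange(N)))  # True True
```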
The operation of multiplication is equivalent to rescaling a power series; here a small prefix d indicates the use of a vector as a diagonal matrix:
$ \qquad \small \begin{eqnarray} V(x) \cdot dV(y) &=& V(x*y) \\ V(x) \cdot dV(y)^h &=& V(x*y^h) \\ \end{eqnarray}$
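The diagonal case is the simplest to check, since $\small (V(x) \cdot dV(y))_k = x^k y^k = (xy)^k$ holds exactly; a two-line illustration in Python:

```python
import numpy as np

N = 10
x, y, h = 1.7, 0.5, 3
V = x ** np.arange(N)
dV = np.diag(y ** np.arange(N, dtype=float))   # dV(y)

scaled = V @ dV                                # V(x*y)
powed = V @ np.linalg.matrix_power(dV, h)      # V(x*y^h)
print(np.allclose(scaled, (x * y) ** np.arange(N)))       # True
print(np.allclose(powed, (x * y ** h) ** np.arange(N)))   # True
```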
The operation of exponentiation basically uses the matrix of Bell polynomials, i.e. the factorially rescaled matrix of Stirling numbers of the second kind; taking logarithms uses its inverse, the correspondingly rescaled matrix of Stirling numbers of the first kind:
$ \qquad \small \begin{eqnarray} V(x) \cdot fS2F &=& V(\exp(x)-1) \\ V(x) \cdot fS1F &=& V(\log(1+x)) \\ \end{eqnarray}$
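Since $\small \log(1+(\exp(x)-1)) = x$, the two matrices must be mutually inverse (exactly so within the truncation, both being lower triangular). In a generic setting one can simply build them as Carleman matrices of the two series; a Python sketch of that check (helper name again just mine):

```python
import numpy as np
from math import factorial

N = 12

def carleman(f, N):
    """Column k = first N coefficients of f(x)^k."""
    A = np.zeros((N, N))
    col = np.zeros(N); col[0] = 1.0
    A[:, 0] = col
    for k in range(1, N):
        col = np.convolve(col, f)[:N]
        A[:, k] = col
    return A

# fS2F: Carleman matrix of exp(x)-1;  fS1F: Carleman matrix of log(1+x)
exp_m1 = np.array([0.0] + [1.0 / factorial(n) for n in range(1, N)])
log_1p = np.array([0.0] + [(-1) ** (n + 1) / n for n in range(1, N)])
fS2F = carleman(exp_m1, N)
fS1F = carleman(log_1p, N)

# composing the two series gives the identity, so the matrices are inverses
print(np.allclose(fS2F @ fS1F, np.eye(N)))  # True
```

Column k of fS2F indeed holds the values $\small S2(j,k)\,k!/j!$, e.g. column 1 is just the series of $\small \exp(x)-1$.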
One can then, just like on an RPN calculator, define the operation for the "square-root" Carleman matrix for $\small f(x) = \sqrt{x} $ by
$ \qquad \small \begin{eqnarray} V(x) &\cdot& {P^\tau}^{-1} &=& V(x-1) \\ V(x-1) &\cdot& fS1F &=& V(\log(x)) \\ V(\log(x)) &\cdot& dV(1/2) &=& V(\log(x)/2) \\ V(\log(x)/2) &\cdot& fS2F &=& V(\exp(\log(x)/2)-1) \\ V(\exp(\log(x)/2)-1) &\cdot& P^\tau &=& V(\exp(\log(x)/2)) \\ \end{eqnarray} $
all together
$\qquad \small(V(x) \cdot {P^\tau}^{-1}) \cdot (fS1F \cdot dV(1/2)\cdot fS2F \cdot P^\tau )= V(\sqrt x ) $
roughly like $\small x \circ \text{DEC } \circ \text{LOGPLUS } \circ \text{MUL } 1/2 \circ \text{EXPMINUS } \circ \text{INC } \equiv x \circ \text{SQRT } $
and one can check that extracting the middle part into a separate matrix $\small SQ = fS1F \cdot dV(1/2) \cdot fS2F $ provides the Carleman matrix with $\small V(x)\cdot SQ = V(\sqrt{1+x}-1) $, with the expected coefficients for that function.
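That check can be sketched numerically as well; the following Python illustration builds $\small SQ$ from the three factors and compares it against $\small \sqrt{1+x}-1$ directly (all names are mine, for illustration only):

```python
import numpy as np
from math import factorial

N = 12

def carleman(f, N):
    """Column k = first N coefficients of f(x)^k."""
    A = np.zeros((N, N))
    col = np.zeros(N); col[0] = 1.0
    A[:, 0] = col
    for k in range(1, N):
        col = np.convolve(col, f)[:N]
        A[:, k] = col
    return A

exp_m1 = np.array([0.0] + [1.0 / factorial(n) for n in range(1, N)])
log_1p = np.array([0.0] + [(-1) ** (n + 1) / n for n in range(1, N)])
fS2F = carleman(exp_m1, N)
fS1F = carleman(log_1p, N)
dV_half = np.diag(0.5 ** np.arange(N, dtype=float))

SQ = fS1F @ dV_half @ fS2F    # Carleman matrix of sqrt(1+x) - 1

x = 0.2
V = x ** np.arange(N)
print((V @ SQ)[1], np.sqrt(1 + x) - 1)   # agree to ~ truncation accuracy
```

Column 1 of $\small SQ$ reproduces the binomial series $\small \sqrt{1+x} = 1 + x/2 - x^2/8 + \ldots$, i.e. $\small SQ[1,1]=1/2$, $\small SQ[2,1]=-1/8$, as expected.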
So a set of basic matrix constants and basic procedures forms a toolbox for very flexible handling of composition and iteration of functions which allow a representation as power series.
Even more: if one needs fractional arguments, like the addition of a fractional y in the formula above, one can introduce the general method of fractional powers of these matrices using the matrix logarithm or diagonalization; especially the fractional power of P, for fractionally-often iterated increments, is easy to define and implement.
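For the Pascal matrix the fractional power even has a simple closed form, $\small (P^y)[i,j] = \binom{i}{j}\, y^{\,i-j}$ (P is the exponential of the nilpotent "creation" matrix L, hence $\small P^y = \exp(yL)$ for any y), which makes fractional increments easy to check; an illustrative Python sketch, assuming the same transpose convention as above:

```python
import numpy as np
from math import comb

N = 10

def pascal_power(y, N):
    """Fractional power of the lower-triangular Pascal matrix:
    (P^y)[i,j] = C(i,j) * y^(i-j), valid for arbitrary real y."""
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            M[i, j] = comb(i, j) * y ** (i - j)
    return M

x, y = 0.4, 0.25                       # a fractional "increment" by y
V = x ** np.arange(N)
W = V @ pascal_power(y, N).T           # V(x) . (P^tau)^y = V(x + y)
print(np.allclose(W, (x + y) ** np.arange(N)))  # True
```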
All the above gives only an overview of the idea; there are some caveats, since we deal with products of matrices which are ideally of infinite size. For instance, the partial product in the above formula for the square root of x, $\small {P^\tau}^{-1} \cdot fS1F$, cannot be taken out and made explicit (the associativity "breaks") due to occurring singularities. However, evaluating $\small V(x) \cdot {P^\tau}^{-1} = V(x-1) $ first allows one to proceed, which means nothing else than that for the definition of the power series for certain operations we essentially need the recentering.
[end additions]
For advanced handling of such problems there is material by R. P. Brent available online; he has analyzed the composition of power series and its algorithmic implementation in depth. I found, for instance, R. P. Brent, "Fast Algorithms for Manipulating Formal Power Series", Journal of the Association for Computing Machinery, Vol. 25, No. 4, October 1978, pp. 581-595, and "On the Complexity of Composition (...)", SIAM J. Comput., Vol. 9, No. 1, Feb. 1980, plus a handful of related articles; Brent's books should be present in many math department libraries.