I was reviewing the proof of the remainder estimate for a Taylor series expansion and I came across something I can't find an intuitive explanation for: if you have a function $f$ that is bounded on an interval $[a-s, a+s]$ and define $f_1(x) := \int_a^x f(t)\,\mathrm dt$ and $f_n(x) := \int_a^x f_{n-1}(t)\,\mathrm dt$, then $\lim_{n\to\infty}f_n(x) = 0$ for every $x$ in that interval.
Can anyone explain why or how this is the case on an intuitive level?
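For reference, the formal estimate itself isn't the problem. If I reconstruct it correctly, with $M$ a bound for $|f|$ on $[a-s, a+s]$ (the letter $M$ is just my notation here), induction on $n$ gives

$$|f_n(x)| \;=\; \left|\int_a^x f_{n-1}(t)\,\mathrm dt\right| \;\le\; \left|\int_a^x \frac{M\,|t-a|^{n-1}}{(n-1)!}\,\mathrm dt\right| \;=\; \frac{M\,|x-a|^{n}}{n!} \;\longrightarrow\; 0 \quad (n\to\infty),$$

since $n!$ eventually dominates $|x-a|^n$. What I'm after is a more intuitive picture of why repeated integration flattens the function out this way.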
Also, if I try this iteration with $f(x) = \cos(x)$, at each iteration I get $\sin(x)$ or $\cos(x)$ (up to sign) combined with part of its Taylor expansion, and each successive iteration gives a better approximation.
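To make that concrete (taking $a = 0$ here, which the setup above doesn't fix, so this is just an assumption for the example), the first few iterates come out as

$$f_1(x) = \sin x,\qquad f_2(x) = 1 - \cos x,\qquad f_3(x) = x - \sin x,\qquad f_4(x) = \cos x - \left(1 - \tfrac{x^2}{2}\right),$$

so each $f_n$ is, up to sign, the difference between $\sin x$ or $\cos x$ and an initial piece of its Taylor series, i.e. a Taylor remainder, which matches the claim that $f_n \to 0$.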