Suppose $x_0,x_1,\ldots,x_n$ are $n+1$ distinct numbers in the interval $[a,b]$ and $f\in C^{n+1}[a,b]$. Then for each $x$ in $[a,b]$, there is a number $\xi$ in $(a,b)$ such that
$f(x) = P(x) + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-x_0)(x-x_1)\cdots(x-x_n)$
where $P(x)$ is the Lagrange interpolating polynomial of degree at most $n$ with $f(x_k)=P(x_k)$ for each $k=0,1,\ldots,n$.
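(As a numerical sanity check of the theorem, here is a small sketch of my own, using $f=\sin$ so that $f^{(4)}=\sin$ is known exactly. The theorem predicts $\frac{(n+1)!\,(f(x)-P(x))}{\prod_k (x-x_k)} = f^{(n+1)}(\xi)$ for some $\xi\in(a,b)$, so that quantity must land in the range of $f^{(n+1)}$ on $[a,b]$.)

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        term = yk
        for j, xj in enumerate(xs):
            if j != k:
                term *= (x - xj) / (xk - xj)
        total += term
    return total

f = math.sin
xs = [0.0, 0.5, 1.0, 1.5]                 # n + 1 = 4 nodes, so n = 3
ys = [f(xk) for xk in xs]

x = 0.75
err = f(x) - lagrange_eval(xs, ys, x)     # f(x) - P(x)
node_prod = math.prod(x - xk for xk in xs)

# Solve the error formula for the value f^{(4)}(xi) it implies:
implied = err * math.factorial(4) / node_prod

# f^{(4)} = sin, which on [0, 1.5] ranges over [0, sin(1.5)],
# so the implied derivative value must lie in that interval.
assert 0.0 <= implied <= math.sin(1.5)
```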
I am trying to understand geometrically why the remainder term $f(x)-P(x)$ should have the form given above. I am looking for a conceptual argument similar to the following one for the Lagrange form of the Taylor remainder.

Let $R(x) = g(x) - g(0) - g^{(1)}(0)x - \frac{g^{(2)}(0)}{2}x^2 - \cdots - \frac{g^{(k)}(0)}{k!}x^k$ be the Taylor remainder of $g(x)$. For a fixed $h>0$, let $p(x) = \frac{R(h)}{h^{k+1}}x^{k+1}$. Then $R(0)=p(0),\ R^{(1)}(0)=p^{(1)}(0),\ \ldots,\ R^{(k)}(0)=p^{(k)}(0)$, and $p(h)=R(h)$. In addition, $p^{(k+1)}(x)$ is the constant $\frac{(k+1)!\,R(h)}{h^{k+1}}$, while $R^{(k+1)}=g^{(k+1)}$ since the Taylor polynomial has degree at most $k$. If $R^{(k+1)}(x)$ were always strictly greater than this constant, then since $R$ and $p$ agree on their initial conditions at $x=0$, we would expect geometrically that $R$ would exceed $p$ for $x>0$, contradicting $p(h)=R(h)$. Likewise, if $R^{(k+1)}(x)$ were always strictly less than $p^{(k+1)}$, we would expect $R$ to fall below $p$ for $x>0$, again contradicting $p(h)=R(h)$. Thus we expect $R^{(k+1)}(x)$ to take values both above and below the constant $p^{(k+1)}$, and hence, by continuity, the value $p^{(k+1)}$ itself: $g^{(k+1)}(\xi) = \frac{(k+1)!\,R(h)}{h^{k+1}}$ for some $\xi$, which is the Lagrange form of the Taylor remainder.

Of course, the standard formal argument would use the generalized form of Rolle's theorem, but I did not need Rolle's theorem to see why the Lagrange form of the Taylor remainder should be right. There should be a similar "geometric" argument motivating the error term for the Lagrange interpolation polynomial.
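(The Taylor-remainder argument above can also be checked numerically. In this sketch of mine I take $g=\exp$, whose $(k+1)$-st derivative is again $\exp$; the argument predicts that $\frac{(k+1)!\,R(h)}{h^{k+1}} = g^{(k+1)}(\xi)$ for some $\xi\in(0,h)$, so that quantity must fall in the range of $g^{(k+1)}$ on $[0,h]$.)

```python
import math

g = math.exp
k, h = 2, 1.0

# Degree-k Taylor polynomial of exp at 0: every derivative of exp at 0 is 1
taylor_h = sum(h**j / math.factorial(j) for j in range(k + 1))
R_h = g(h) - taylor_h                     # Taylor remainder R(h)

# The geometric argument says g^{(k+1)}(xi) equals the constant
# p^{(k+1)} = (k+1)! * R(h) / h**(k+1) for some xi in (0, h).
implied = math.factorial(k + 1) * R_h / h**(k + 1)

# g^{(k+1)} = exp, which on [0, 1] ranges over [1, e],
# so the implied derivative value must lie in that interval.
assert 1.0 <= implied <= math.e
```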