7

I am having trouble grokking why it is, assuming that the function is analytic everywhere (along with many other assumptions that I am, no doubt, naively making), that this is true:

$f(x,y)=f(x_0,y_0)+[f'_x(x_0,y_0)(x-x_0)+f'_y(x_0,y_0)(y-y_0)]+\frac{1}{2!}[f''_{xx}(x_0,y_0)(x-x_0)^2+2f''_{yx}(x_0,y_0)(x-x_0)(y-y_0)+f''_{yy}(x_0,y_0)(y-y_0)^2]+\ldots$

I am familiar with the one-variable Taylor series, and I intuitively see why the 'linear' multivariable terms should be as they are.

In short, I ask for a proof of this equality. If possible, it would be nice to have an answer free of unnecessarily compact notation (such as a table of partial derivatives).

As an auxiliary question, I see a direct analogy between the first two terms, $f(x,y)\approx f(x_0,y_0)+[f'_x(x_0,y_0)(x-x_0)+f'_y(x_0,y_0)(y-y_0)]$, and the total differential $f(x,y)-f(x_0,y_0)=\Delta f(x,y)=f'_x(x_0,y_0)\Delta x+f'_y(x_0,y_0)\Delta y$.

When $\Delta x$ and $\Delta y$ are not infinitesimally small, can I use the third term of the multivariable Taylor series to get closer to the actual change $\Delta f$?
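A quick numerical experiment (a minimal sketch; the test function $f(x,y)=e^x\sin y$ and the expansion point are my own arbitrary choices) suggests the answer is yes, but I would like to understand why:

```python
import numpy as np

# Compare first- and second-order Taylor approximations of
# f(x, y) = exp(x) * sin(y) around (x0, y0) = (0, 1).
def f(x, y):
    return np.exp(x) * np.sin(y)

x0, y0 = 0.0, 1.0
dx, dy = 0.3, -0.2            # finite (not infinitesimal) steps

# Partial derivatives of this particular f at (x0, y0), computed by hand.
fx  = np.exp(x0) * np.sin(y0)
fy  = np.exp(x0) * np.cos(y0)
fxx = np.exp(x0) * np.sin(y0)
fxy = np.exp(x0) * np.cos(y0)
fyy = -np.exp(x0) * np.sin(y0)

exact  = f(x0 + dx, y0 + dy) - f(x0, y0)
order1 = fx * dx + fy * dy
order2 = order1 + 0.5 * (fxx * dx**2 + 2 * fxy * dx * dy + fyy * dy**2)

print(exact, order1, order2)  # order2 lands much closer to exact than order1
```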

5 Answers

14

Let $\phi(\boldsymbol{r})$ be a scalar field; then $\boldsymbol{a} \cdot \nabla \phi$ gives the directional derivative of $\phi$ in the direction of $\boldsymbol{a}$. That is,

$\boldsymbol{a} \cdot \nabla \phi(\boldsymbol{r}) = \lim_{t\to 0} \frac{\phi(\boldsymbol{r} + \boldsymbol{a} t) - \phi(\boldsymbol{r})}{t}$

Now consider $\Phi(t) = \phi(\boldsymbol{r}_0 + \boldsymbol{a}t)$ for some finite $t$, and expand it in powers of $t$. This is a one-dimensional Taylor series.

$\Phi(t) = \Phi(0) + \Phi'(0)t + \frac{1}{2!} \Phi''(0) t^2 + \ldots$

To substitute back in $\Phi(t) = \phi(\boldsymbol{r}_0+\boldsymbol{a}t)$, we must compute derivatives of $\Phi$ in terms of $\phi$. Again, we resort to the basic definition of the derivative.

$\Phi'(0) = \lim_{t\to 0} \frac{\phi(\boldsymbol{r}_0+\boldsymbol{a}t) - \phi(\boldsymbol{r}_0)}{t} = \boldsymbol{a} \cdot \nabla \phi(\boldsymbol{r})\Big|_{\boldsymbol{r}=\boldsymbol{r}_0}$

And similarly for higher derivatives. This enables us to write,

$\phi(\boldsymbol{r}_0+\boldsymbol{a}t) = \phi(\boldsymbol{r}_0) + [\boldsymbol{a} \cdot \nabla \phi(\boldsymbol{r})] \Big|_{\boldsymbol{r}=\boldsymbol{r}_0} t + \frac{1}{2!} [\boldsymbol{a} \cdot \nabla][\boldsymbol{a} \cdot \nabla]\phi(\boldsymbol{r}) \Big|_{\boldsymbol{r}=\boldsymbol{r}_0} t^2 + \ldots$
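In two dimensions, writing $\boldsymbol{a}=(a_x,a_y)$, the quadratic operator expands as follows (assuming the mixed partials are continuous, so they commute):

$[\boldsymbol{a}\cdot\nabla][\boldsymbol{a}\cdot\nabla]\phi = a_x^2\,\phi_{xx} + 2\,a_x a_y\,\phi_{xy} + a_y^2\,\phi_{yy}$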

It is not difficult to show that this form reproduces the form of the original question. Take $t=1$ and let $\boldsymbol{a} = (x-x_0, y-y_0)$ and $\boldsymbol{r}_0 = (x_0, y_0)$. Thus, we have built multivariate Taylor series from the well-established case of a single variable, just by use of the directional derivative.
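As a quick sanity check (a sketch of my own; the test function $\phi(x,y)=x^2y+\sin y$ is an arbitrary choice, not from the question), one can verify symbolically that the one-dimensional expansion evaluated at $t=1$ agrees term by term with the multivariate one:

```python
import sympy as sp

x, y, x0, y0, t = sp.symbols('x y x0 y0 t')
ax, ay = x - x0, y - y0                      # a = (x - x0, y - y0), as above

phi = lambda u, v: u**2 * v + sp.sin(v)      # arbitrary test function (my choice)

# One-dimensional expansion of Phi(t) = phi(r0 + a t) around t = 0,
# truncated at second order and evaluated at t = 1.
Phi = phi(x0 + ax * t, y0 + ay * t)
one_dim = sum(Phi.diff(t, n).subs(t, 0) / sp.factorial(n) for n in range(3))

# Multivariate second-order Taylor terms built directly from partials at (x0, y0).
p = phi(x, y)
at0 = {x: x0, y: y0}
multi = (p.subs(at0)
         + p.diff(x).subs(at0) * ax + p.diff(y).subs(at0) * ay
         + sp.Rational(1, 2) * (p.diff(x, 2).subs(at0) * ax**2
                                + 2 * p.diff(x, y).subs(at0) * ax * ay
                                + p.diff(y, 2).subs(at0) * ay**2))

print(sp.simplify(one_dim - multi))          # prints 0: the expansions agree
```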

  • 0
    Concise and understandable - excellent answer, thank you. (2012-10-27)
2

Let $u \in \mathbb{R}^m, \, h \in \mathbb{R}^m, \, t \in \mathbb{R},$ and $F(t)=f(u+th).$ Suppose that $F$ can be expanded into the Taylor series $F(t)=\sum\limits_{n=0}^{\infty}{\frac{1}{n!}}F^{(n)}(0)t^n.\tag{*}$ Taylor's expansion for $f$ can be obtained from $({}^{*})$ by computing the derivatives of $F$ in terms of $f$ and then putting $t=1$.
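To make the differentiation step explicit (a short derivation, assuming $f$ is sufficiently smooth), the chain rule gives

$F'(t)=\sum\limits_{i=1}^{m} h_i\,\frac{\partial f}{\partial u_i}(u+th),$

and iterating, $F^{(n)}(0)=\big[(h\cdot\nabla)^{n} f\big](u)=d^{n}f(u).$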

For the case $m=2$ (two variables), putting $t=1$ gives $f(u+h)=\sum\limits_{n=0}^{\infty}{{\frac{1}{n!}}d^{n}f(u)},$ where $u=(x, \, y)$, $h=(dx,\, dy)$, and $d^{n}f(u)=\sum\limits_{k=0}^{n}{\binom{n}{k}}\frac{\partial^n{f}}{\partial{x}^k {}\partial{y}^{n-k}}dx^kdy^{n-k}.$
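At order $n=2$, for instance, this reads (again assuming continuous mixed partials)

$d^{2}f(u)=\frac{\partial^2 f}{\partial x^2}\,dx^2+2\,\frac{\partial^2 f}{\partial x\,\partial y}\,dx\,dy+\frac{\partial^2 f}{\partial y^2}\,dy^2,$

which, with $dx=x-x_0$ and $dy=y-y_0$, is exactly the bracketed second-order term in the question.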

  • 0
    because the sequence of partial derivatives you take to get an $n$th-order approximation could be anything you want. If there are two variables, then the number of distinct sequences becomes the binomial coefficient. Another way to think of it: in order to isolate the coefficients, we must take partial derivatives of $(y-y_0)^r$ and $(x-x_0)^{n-r}$, $r$ and $n-r$ times respectively, then divide by the factorials this creates. These factorial coefficients can be nicely factored using matrices. (2017-07-30)
0

If you know the definition of gradient vectors, you can actually get a more concise answer. You can check it out here: http://www.math.ucdenver.edu/~esulliva/Calculus3/Taylor.pdf

-3

Intuitively it's quite clear: the multivariable analog of the first derivative is the gradient, which gives exactly the second term evaluated at $R_0$. The second derivative generalizes to the Hessian, which is best represented in matrix form, and that gives the third term. The trick in deriving this is to define $s=R-R_0$, so $g(s)=f(R_0+s)$, and use the chain rule/mean value theorem as usual.
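In that notation the second-order expansion reads, with $H$ the Hessian matrix of second partial derivatives (a standard statement of the result, not specific to any one source):

$f(R)\approx f(R_0)+\nabla f(R_0)^{T}s+\frac{1}{2}\,s^{T}H(R_0)\,s,\qquad s=R-R_0.$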