3

I would like to show that if $f\colon\mathbb{R}^n\to \mathbb{R}$ is differentiable and $f(0)=0$ that there exists $g_i\colon\mathbb{R}^n\to\mathbb{R}$ such that $f(x) = \sum_{i=1}^n x^i g_i(x)$.

The book (Spivak) has it written like that, but I believe it's more appropriate as $x_i g_i(x)$ (i.e. the $x_i$ are the components and not powers... I think?)

I don't see why this should necessarily be true if we don't assume that $f$ is linear; $f(0)=0$ together with differentiability doesn't imply linearity.
For example, if $f(x,y) = x\cdot y$, I don't see how to split this into a sum of parts $g_1$ and $g_2$.

  • 2
    About your example: for the function $f(x_1,x_2)=x_1x_2$, a solution is $g_1(x_1,x_2)=x_2$ and $g_2(x_1,x_2)=0$. (2012-05-01)
  • 0
    You should try to put this in LaTeX. Meanwhile, it is fairly common to use superscripts as indices in any field related to differential geometry, going back to tensor conventions from physics over 100 years ago, I suppose. The hope in "raising and lowering of indices" in a tensor is that vectors and covectors are distinguished by sight. I gather this is one of his basic books, though. (2012-05-01)
  • 1
    Hmm, okay... I really didn't think tensor notation would appear in an analysis book, but regardless, it's there. Anyway, I don't see how Didier's solution is correct, as presumably the $x^i$ (i.e. $x_i$) are coefficients and not functions... right? I guess I'm hung up on linear transformations. Regardless, I don't see how my original statement is true... do you know a proof, or something towards one? The hint given is: if $h_x(t) := f(tx)$, then $f(x) = \int_0^1 h_x'(t)\,dt$. Sorry, I've just started learning TeX a few days ago. (2012-05-01)
  • 0
    The question doesn't even look appropriate... how is it that $f:\mathbb{R}^n \to \mathbb{R}$ can have components? (If it has components, then it can't be a function to $\mathbb{R}$.) (2012-07-12)

3 Answers

1

Recall what you know about polynomials. If $f(0) = 0$, then you can factor a copy of $x$ out of the polynomial to get $f(x) = x g(x)$; we have some idea about the "multiplicity" of a root, which corresponds to how many times $x$ divides $f(x)$.
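For instance (my own example), $f(x) = x^3 - 2x = x(x^2 - 2)$: the root at $0$ has multiplicity one, since $x$ divides $f(x)$ exactly once.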

If $f(x,y,z)$ is a polynomial in three variables with $f(0,0,0) = 0$, then if you expand it, every term has to have at least one copy of $x$, $y$, or $z$, and so you can group the terms together as $f(x,y,z) = x g(x,y,z) + y h(y,z) + z k(z)$.
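For concreteness, with a polynomial of my own choosing:
$$ f(x,y,z) = xy + x + z^2 = x(y + 1) + y \cdot 0 + z \cdot z, $$
so here $g(x,y,z) = y + 1$, $h(y,z) = 0$, and $k(z) = z$.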

This doesn't carry over to arbitrary continuous functions; consider $\sqrt{|x|}$ or even just $|x|$.
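To see concretely why $|x|$ fails (a detail not spelled out above): if $|x| = x\, g(x)$ for all $x$, then $g(x) = \operatorname{sgn}(x)$ for $x \neq 0$, which has no continuous extension to $x = 0$, so no continuous (let alone differentiable) $g$ can work.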

However, differentiable functions behave well, to "first-order". Many of the facts regarding roots of polynomials (as long as you only consider multiplicity 1) still apply to differentiable functions. For twice differentiable functions, you can consider the sorts of things that happen to "second-order", and so forth.

One of the most excellent general methods here is the Taylor series. To the first order, it says that you can write any differentiable function as a linear polynomial, plus a remainder term that goes to zero more rapidly than linearly.

(In fact, it's so useful that Taylor expansions are often used to prove things about polynomials; the usual way of writing a polynomial can itself be viewed as a Taylor series.)

I'll do the one-dimensional case for you. If $f(x)$ is a univariate function differentiable at zero, and $f(0) = 0$, then its first-order Taylor series (also known as "differential approximation") at zero shows we can write it as

$$ f(x) = f(0) + x f'(0) + x r(x) $$

where $r(x)$ is continuous at zero, and $r(0) = 0$. In particular, this means we can set $g(x) = f'(0) + r(x)$, and have an identity

$$ f(x) = x g(x). $$
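To fill in where such an $r$ comes from (one standard way to produce it): for $x \neq 0$ set
$$ r(x) = \frac{f(x)}{x} - f'(0), \qquad r(0) = 0. $$
Differentiability of $f$ at zero, together with $f(0) = 0$, says precisely that $\frac{f(x)}{x} \to f'(0)$ as $x \to 0$, so $r$ is continuous at zero with $r(0) = 0$, and rearranging gives $f(x) = x f'(0) + x r(x)$ for every $x$ (both sides vanish at $x = 0$).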

4

I'm looking at Spivak's Calculus on Manifolds; he is very fond of superscripts. The use of superscripts instead of subscripts is quite standard in some topics... the first place you are likely to see this as a preference for most authors is in the definition of the "differential" of a function, $$ df = \frac{\partial f}{\partial x^1} dx^1 + \cdots + \frac{\partial f}{\partial x^n} dx^n $$ As I said, there are no exponents in this expression.

  • 0
    That's funny, especially since Spivak seems to reject Einstein summation conventions in _A Comprehensive Introduction..._ (2012-05-01)
  • 0
    @Dylan, he's an enigma. What I am remembering is lectures by Chern; he had it all worked out, but the downside is that all the formulas looked exactly the same. The famous one is his expression for $dp$, where you need to figure out how to differentiate a point. (2012-05-01)
1

If $f$ is continuously differentiable, you can use Taylor's Theorem (here just the fundamental theorem of calculus applied to $t \mapsto f(tx)$, which is the hint mentioned in the comments) to get $$f(x) = f(0) + \int_0^1 \frac{\partial f(tx)}{\partial x}\, x \; dt.$$ Since $f(0)=0$, you can let $g_i(x) = \int_0^1 \frac{\partial f}{\partial x_i}(tx) \; dt$; then you will have $f(x) = \sum_{i=1}^n x_i g_i(x)$.
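As a sanity check on the example from the question, $f(x,y) = xy$: here $\frac{\partial f}{\partial x} = y$ and $\frac{\partial f}{\partial y} = x$, so
$$ g_1(x,y) = \int_0^1 ty \; dt = \frac{y}{2}, \qquad g_2(x,y) = \int_0^1 tx \; dt = \frac{x}{2}, $$
and indeed $x \cdot \frac{y}{2} + y \cdot \frac{x}{2} = xy$. (This differs from the decomposition given in the comments; the $g_i$ are not unique.)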

If $f$ is just differentiable, you can use the mean value theorem, which, in this case, gives: $$\forall x \in \mathbb{R}^n,\ \exists\, c \in [0,x] \ \text{such that} \ f(x) = \frac{\partial f(c)}{\partial x}\, x,$$ where $[0,x]$ denotes the segment from $0$ to $x$. While a little unsatisfactory, one can define $g(x) = \frac{\partial f(c)}{\partial x}$, where $c$ is chosen from the mean value theorem statement (which smacks of the axiom of choice!). However, now you need to do more work to get any useful properties of $g$.
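Read componentwise (just unpacking the displayed identity), this gives
$$ f(x) = \sum_{i=1}^n x_i \, \frac{\partial f}{\partial x_i}(c_x), $$
so one may take $g_i(x) = \frac{\partial f}{\partial x_i}(c_x)$, where $c_x$ is the point supplied by the mean value theorem for that particular $x$; nothing in this construction guarantees that $x \mapsto c_x$, and hence $g_i$, is even continuous.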

  • 0
    We're not given continuous differentiability ... only differentiability and $f(0)=0$. (2012-05-01)
  • 0
    I added some more detail, although not quite as nice. (2012-05-01)