147

What are your favorite applications of integration by parts?

(The answers can be as lowbrow or highbrow as you wish. I'd just like to get a bunch of these in one place!)

Thanks for your contributions, in advance!

  • 49
    It can also be a good career move. A (likely apocryphal) story goes: when Peter Lax was awarded the National Medal of Science, the other recipients (presumably non-mathematicians) asked him what he did to deserve the Medal. Lax responded: "I integrated by parts."2011-04-24
  • 3
    Great story, Willy.2011-04-28
  • 20
    Two more stories: 1. Supposedly when Laurent Schwartz received the Fields Medal (for his work on distributions, of course), someone present remarked, "So now they're giving the Fields Medal for integration by parts." 2. I believe I remember reading -- but have no idea where -- that someone once said that a really good analyst can do marvelous things using only the Cauchy-Schwarz inequality and integration by parts. I do think there's some truth to that.2011-10-11
  • 4
    More physics, but it's useful in the derivation of the [Euler-Lagrange equation](http://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation#Statement), which itself is very nice.2013-01-12
  • 3
    @WillieWong Your comment is quoted in the book "Physics from Symmetry" https://books.google.de/books?id=_vLLCQAAQBAJ&pg=PA256&lpg=PA256&dq=A+%28likely+apocryphal%29+story+goes:+when+Peter+Lax+was+awarded+the&source=bl&ots=TbPF0vhSni&sig=tnQ4yvjLZ2yso8ch_TnJ9NayylY&hl=de&sa=X&ei=QE2FVeyLAsWWsAHgwIXIBw&ved=0CCoQ6AEwAQ#v=onepage&q=A%20%28likely%20apocryphal%29%20story%20goes%3A%20when%20Peter%20Lax%20was%20awarded%20the&f=false2015-06-20
  • 0
    @WillieWong: I can't understand your comment clearly. What's the good career move ?2016-04-05
  • 0
    @ArkaKarmakar: It == "Integrating by parts".2016-04-08
  • 0
    @WillieWong: I may be slower than most of the users here, but I still can't understand how integrating by parts helped Peter Lax.2016-04-08
  • 1
    @ArkaKarmakar: Peter Lax intimated that the work that led to his being awarded the National Medal of Science was essentially integration by parts. Winning a National Medal of Science certainly helps your career as a mathematician.2016-04-08

19 Answers

131

I always liked the derivation of Taylor's formula with error term:

$$\begin{array}{rl} f(x) &= f(0) + \int_0^x f'(x-t) \,dt\\ &= f(0) + xf'(0) + \int_0^x tf''(x-t)\,dt\\ &= f(0) + xf'(0) + \frac{x^2}2f''(0) + \int_0^x \frac{t^2}2 f'''(x-t)\,dt \end{array}$$

and so on. Using the mean value theorem on the final term readily gives the Cauchy form for the remainder.
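This is easy to sanity-check numerically. A small sketch (my addition, not part of the original answer; the choice $f = \exp$, $x = 1$ is just for illustration, and `simpson` is an ad-hoc quadrature helper):

```python
import math

def simpson(g, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# f = exp, x = 1: f(0) = f'(0) = f''(0) = 1 and f''' = exp
x = 1.0
taylor2 = 1 + x + x ** 2 / 2
remainder = simpson(lambda t: t ** 2 / 2 * math.exp(x - t), 0, x)
assert abs(taylor2 + remainder - math.exp(x)) < 1e-9
```

The second-order Taylor polynomial plus the integral-form remainder reproduces $e$ to quadrature accuracy.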

  • 24
    The error term in integral form is just badass.2012-02-23
  • 0
    I'm certainly missing something obvious but don't we have: $\int_0^x f'(x-t) \,dt= f(x-t) \Big |_0^x=f(x-x)-f(x-0)=f(0)-f(x)$, which is wrong by a minus sign?2014-11-13
  • 7
    @JakobH Note that the integration variable $t$ has a minus sign, $f(x-t)$.2014-11-14
96

My favorite this week, since I learned it just yesterday: $n$ integrations by parts produces $$ \int_0^1 \frac{(-x\log x)^n}{n!}dx = (n+1)^{-(n+1)}.$$ Then summing on $n$ yields $$\int_0^1 x^{-x}\,dx = \sum_{n=1}^\infty n^{-n}.$$
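A quick numerical check of this identity (my own sketch, not from the answer; the midpoint rule is used because it never evaluates the integrand at the endpoints, where $x^{-x}$ needs a limit argument):

```python
# midpoint-rule quadrature of x^(-x) on (0, 1)
N = 100_000
integral = sum(((i + 0.5) / N) ** (-(i + 0.5) / N) for i in range(N)) / N
# the series terms decay like n^(-n), so 30 terms are far more than enough
series = sum(n ** -n for n in range(1, 30))
assert abs(integral - series) < 1e-6
```

Both sides come out to about $1.29129$.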

  • 0
    Also very cute. Thanks!2011-02-05
  • 13
    Sophomore's dream.2014-07-28
91

Let $f$ be a differentiable one-to-one function, and let $f^{-1}$ be its inverse. Then,

$$\int f(x) dx = x f(x) - \int x f'(x)dx = x f(x) - \int f^{-1}(f(x))f'(x)dx = x f(x) - \int f^{-1}(u) du \,.$$

Thus, if we know the integral of $f^{-1}$, we get the integral of $f$ for free.

BTW: This is the reason why integrals such as $\int \ln(x) dx \,;\, \int \arctan(x) dx \,; ...$ are always calculated using integration by parts.
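In definite form the identity reads $\int_a^b f = b\,f(b) - a\,f(a) - \int_{f(a)}^{f(b)} f^{-1}$, which is easy to check numerically. A sketch (my addition) with $f = \arctan$, $f^{-1} = \tan$; the `midpoint` helper is ad hoc:

```python
import math

def midpoint(g, a, b, n=50_000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# f = arctan on [0, 1], so f^(-1) = tan on [0, pi/4]
b = 1.0
lhs = midpoint(math.atan, 0.0, b)
rhs = b * math.atan(b) - midpoint(math.tan, 0.0, math.atan(b))
assert abs(lhs - rhs) < 1e-7
# closed form: ∫₀¹ arctan x dx = π/4 − (ln 2)/2
assert abs(lhs - (math.pi / 4 - math.log(2) / 2)) < 1e-7
```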

  • 3
    Among other things: this is one way to derive the indefinite integral of the Lambert function $W(x)$, the inverse of $x\exp\,x$.2011-10-11
  • 7
    I'd write the last integral as $\left.\int f^{-1}(u)\> du\right|_{u:=f(x)}$ or similar.2011-10-11
  • 19
    It's a very nice exercise to derive this identity _geometrically_, by considering both integrals as areas....2011-10-11
  • 0
    I'd enjoy this answer a lot more if we had some startpoints and/or endpoints on those integrals.2015-08-06
  • 1
    @goblin Any result which is true for the indefinite integrals becomes trivially true when the end points are added.... Why would you enjoy a particular case more than the general one? ;)2015-08-06
  • 0
    @GregMartin, would you mind shedding some light on how one can do that?..2017-10-17
  • 1
    @MrReality https://upload.wikimedia.org/wikipedia/commons/5/59/FunktionUmkehrIntegral2.svg2017-10-18
  • 0
    @N.S. thanks for sharing that!2017-10-18
85

Repeated integration by parts gives $$\int_0^\infty x^n e^{-x} dx=n!$$
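A numerical sanity check (my sketch, not part of the answer; the truncation at $x = 60$ and the `midpoint` helper are ad-hoc choices):

```python
import math

def midpoint(g, a, b, n=100_000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

n = 6
# the tail of x^6 e^(-x) beyond 60 is on the order of 1e-16, so truncation is safe
val = midpoint(lambda x: x ** n * math.exp(-x), 0.0, 60.0)
assert abs(val - math.factorial(n)) < 1e-6
```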

  • 51
    ...which is dual to $\sum_{n\ge0} x^n/n! = e^x$.2011-02-04
  • 3
    @Mitch, is this duality a consequence of any deeper facts? How does it generalize, if at all?2011-04-25
  • 5
    @Skatche: Excellent point, especially since [I asked exactly that question immediately after I posted that comment](http://math.stackexchange.com/questions/20441/factorial-and-exponential-dual-identities). So see that link for discussion.2011-04-25
45

High brow: Let $f(\theta)$ be a smooth function from the circle to $\mathbb{R}$. The Fourier coefficients of $f$ are given by $a_n = 1/(2 \pi) \int f(\theta) e^{-i n \theta} d \theta$.

Integrating by parts: $$a_n = \frac{1}{n} \frac{i}{2 \pi} \int f'(\theta) e^{- i n \theta} d \theta = \frac{1}{n^2} \frac{-1}{2 \pi} \int f''(\theta) e^{- i n \theta} d \theta = \cdots$$ $$\cdots = \frac{1}{n^k} \frac{i^k}{2 \pi} \int f^{(k)}(\theta) e^{- i n \theta} d \theta = O(1/n^k)$$ for any $k$.

Thus, if $f$ is smooth, its Fourier coefficients die off faster than $1/n^k$ for any $k$. More generally, if $f$ has $k$ continuous derivatives, then $a_n = O(1/n^k)$.
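One can watch this decay numerically. A sketch (my addition) using $f(\theta) = e^{\cos\theta}$, a smooth $2\pi$-periodic function; the sample count and the test frequencies are arbitrary choices:

```python
import cmath, math

def fourier_coeff(f, n, samples=4096):
    # Riemann-sum (DFT-style) approximation of (1/2π) ∫ f(θ) e^(-inθ) dθ
    return sum(f(2 * math.pi * k / samples) * cmath.exp(-1j * n * 2 * math.pi * k / samples)
               for k in range(samples)) / samples

f = lambda t: math.exp(math.cos(t))
a = [abs(fourier_coeff(f, n)) for n in (1, 2, 4, 8)]
assert a[0] > a[1] > a[2] > a[3]   # monotone decay
assert a[3] < 1e-6                 # already tiny at n = 8, as smoothness predicts
```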

  • 0
    Very nice! Thanks for this one.2011-02-05
31

As with Taylor's Theorem, the Euler-Maclaurin summation formula (with remainder) can be derived using repeated application of integration by parts.

Tom Apostol's paper "An Elementary View of Euler's Summation Formula" (American Mathematical Monthly 106 (5): 409–418, 1999) has a more in-depth discussion of this. See also Vito Lampret's "The Euler-Maclaurin and Taylor Formulas: Twin, Elementary Derivations" (Mathematics Magazine 74 (2): 109-122, 2001).

27

Highbrow: Derivation of the Euler-Lagrange equations describing how a physical system evolves through time from Hamilton's Least Action Principle.

Here's a very brief summary. Consider a very simple physical system consisting of a point mass moving under the force of gravity, and suppose you know the position $q$ of the point at two times $t_0$ and $t_f$. Possible trajectories of the particle as it moves from its starting point to its ending point correspond to curves $q(t)$ in $\mathbb{R}^3$.

One of these curves describes the physically-correct motion, wherein the particle moves in a parabolic arc from one point to the other. Many curves completely defy the laws of physics, e.g. the point zigs and zags like a UFO as it moves from one point to the other.

Hamilton's Principle gives a criterion for determining which curve is the physically correct trajectory; it is the curve $q(t)$ satisfying the variational principle

$$\min_q \int_{t_0}^{t_f} L(q, \dot{q}) dt$$ subject to the constraints $q(t_0) = q_0, q(t_f) = q_f$, where $L$ is a scalar-valued function known as the Lagrangian that measures the difference between the kinetic and potential energy of the system at a given moment of time. (Pedantry alert: despite being historically called the "least" action principle, really instead of minimizing we should be extremizing; i.e., all critical points of the above functional are physical trajectories, even those that are maxima or saddle points.)

It turns out that a curve $q$ satisfies the variational principle if and only if it is a solution to the ODE $$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,$$ roughly equivalent to the usual Newton's Second Law $ma-F=0$, and the key step in the proof of this equivalence is integration by parts. What is remarkable here is that we started with a boundary-value problem -- given two positions, how did we get from one to the other? -- and ended with an ODE, an initial-value problem -- given an initial position and velocity, how does the point move as we advance through time?
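The stationarity of the action can be seen in a toy computation (my sketch, not from the answer): for free fall with fixed endpoints, the physical parabola has a smaller discretized action than a wiggled path with the same endpoints. The Lagrangian $L = \tfrac12 m\dot q^2 - mgq$, the perturbation, and the crude forward-difference discretization are all my own choices:

```python
import math

def action(q, T=1.0, m=1.0, g=9.8, N=2000):
    # discretized S[q] = ∫₀^T (½ m q̇² − m g q) dt
    dt = T / N
    s = 0.0
    for i in range(N):
        t = i * dt
        qdot = (q(t + dt) - q(t)) / dt               # forward difference
        s += (0.5 * m * qdot ** 2 - m * g * q(t + dt / 2)) * dt
    return s

T, g = 1.0, 9.8
true_path = lambda t: 0.5 * g * t * (T - t)          # solves q̈ = −g with q(0) = q(T) = 0
wiggle = lambda t: true_path(t) + 0.1 * math.sin(math.pi * t / T)  # same endpoints
assert action(true_path) < action(wiggle)
```

For this Lagrangian the second variation is $\int \tfrac12 m\,\dot\delta^2\,dt > 0$, so every perturbation strictly increases the action.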

26

Perhaps not really an application, but the definition of the derivative of a distribution is based on partial integration:

if $u\in C^1(X)$ and $\phi\in C^\infty_c(X)$ is a test function, then

$\left<\partial_i u,\phi\right>=\int\phi\,\partial_i u=-\int u\,\partial_i\phi=-\left<u,\partial_i\phi\right>$ by partial integration.

Extending this, for a distribution $u$ we then define its derivative $\partial_i u$ by this formula.
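A numerical illustration (my sketch): for $u(x) = |x|$, whose distributional derivative is $\operatorname{sign}(x)$, the defining identity $-\int u\,\partial\phi = \int (\partial u)\,\phi$ can be checked against a concrete bump function. The bump's center $c$ and radius $r$ are arbitrary, chosen so the support straddles the kink at $0$:

```python
import math

c, r = 0.2, 0.5          # bump supported on (-0.3, 0.7), straddling x = 0

def phi(x):
    # standard smooth compactly supported bump
    s = (x - c) / r
    return math.exp(-1.0 / (1 - s * s)) if abs(s) < 1 else 0.0

def phi_prime(x):
    # exact derivative of the bump
    s = (x - c) / r
    return phi(x) * (-2 * s / (1 - s * s) ** 2) / r if abs(s) < 1 else 0.0

def midpoint(g, a, b, n=50_000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# u(x) = |x| is not differentiable at 0, but its weak derivative is sign(x)
lhs = -midpoint(lambda x: abs(x) * phi_prime(x), c - r, c + r)
rhs = midpoint(lambda x: math.copysign(1.0, x) * phi(x), c - r, c + r)
assert abs(lhs - rhs) < 1e-4
```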

  • 1
    I find "applications" like this intriguing. No need for any apologetic tone here. Thanks for the answer!2011-02-05
24

My favorite example is getting an asymptotic expansion: for example, suppose we want to compute $\int_x^\infty e^{-t^2}\cos(\beta t)dt$ for large values of $x$. Integrating by parts multiple times we end up with $$ \int_x^\infty e^{-t^2}\cos(\beta t)dt \sim e^{-x^2}\sum_{k=1}^\infty(-1)^n\frac{H_{k-1}(x)}{\beta^k} \begin{cases} \cos(\beta x) & k=2n \\ \sin(\beta x) & k=2n+1 \end{cases}$$ where the Hermite polynomials are given by $H_n(x) = (-1)^ne^{x^2}\frac{d^n}{dx^n}e^{-x^2}$.

This expansion follows mechanically from applying IBP multiple times and gives a nice asymptotic expansion (which is divergent as a power series).

23

Highbrow: Integration by parts can be used to compute (or verify) formal adjoints of differential operators. For instance, one can verify, and this was indeed the proof I saw, that the formal adjoint of the Dolbeault operator $\bar{\partial}$ on complex manifolds is $$\bar{\partial}^* = -* \bar{\partial} \,\,\, *, $$ where $*$ is the Hodge star operator, using integration by parts.

22

A lowbrow favorite of mine:

$$\int \frac{1}{x} dx = \frac{1}{x} \cdot x - \int x \cdot\left(-\frac{1}{x^2}\right) dx = 1 + \int \frac{1}{x} dx$$

Therefore, $1=0$.

A bit more highbrow, I like the use of partial integration to establish recursive formulas for integrals.

  • 0
    Hm, this example does not depend on integration by parts so much as it depends on not keeping track of the limits of integration.2011-02-04
  • 0
    It's true that the crux of the problem is not so much in the integration by parts, but if you integrate in a different way (what way, by the way?) you won't have that problem.2011-02-04
  • 22
    @Greg: Actually, it's not the limits of integration that matter here, but the *constant* of integration. $\int \frac{1}{x}\,dx$ is the entire family of antiderivatives, which is exactly the same as the family you get if you add $1$ to every member of the family.2011-02-04
  • 15
    Perhaps a more direct proof, using the same idea: $1 = \sin^2 x + \cos^2 x = \int \frac{d}{dx} (\sin^2 x + \cos^2 x)dx = \int (2 \sin x \cdot \cos x - 2 \cos x \cdot \sin x)dx = \int 0\, dx = 0$.2011-04-16
  • 0
    @Raskolnikov, "*but if you integrate in a different way (what way, by the way?) you won't have that problem*"$-$ what way(BTW)?2017-10-17
  • 0
    @Mr Reality: I've written that comment so long ago. I think I just meant with the antiderivative $\ln x$. That's all. Which is just knowing the relationship between $\ln x$ and $1/x$.2017-10-17
21

My favorite example of integration by parts (there are other nice tricks as well in this example but integration by parts starts it off) is this:

Let $I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^n(x) dx$.

$I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-1}(x) d(-\cos(x)) = -\sin^{n-1}(x) \cos(x) |_{0}^{\frac{\pi}{2}} + \int_{0}^{\frac{\pi}{2}} (n-1) \sin^{n-2}(x) \cos^2(x) dx$

The first expression on the right hand side is zero since $\sin(0) = 0$ and $\cos(\frac{\pi}{2}) = 0$.

Now rewrite $\cos^2(x) = 1 - \sin^2(x)$ to get

$I_n = (n-1) (\displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-2}(x) dx - \int_{0}^{\frac{\pi}{2}} \sin^{n}(x) dx) = (n-1) I_{n-2} - (n-1) I_n$.

Rearranging we get $n I_n = (n-1) I_{n-2}$, $I_n = \frac{n-1}{n}I_{n-2}$.

Using this recurrence we get $$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3} I_1$$

$$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} I_0$$

$I_1$ and $I_0$ can be directly evaluated to be $1$ and $\frac{\pi}{2}$ respectively and hence,

$$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3}$$

$$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} \frac{\pi}{2}$$
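The recurrence and the closed forms are easy to confirm numerically. A sketch (my addition; `I_direct` uses an ad-hoc midpoint rule):

```python
import math

def I_direct(n, samples=50_000):
    # midpoint quadrature of ∫₀^(π/2) sinⁿ x dx
    h = (math.pi / 2) / samples
    return sum(math.sin((i + 0.5) * h) ** n for i in range(samples)) * h

def I_recur(n):
    # I₀ = π/2, I₁ = 1, Iₙ = (n−1)/n · Iₙ₋₂
    val = math.pi / 2 if n % 2 == 0 else 1.0
    for k in range(2 if n % 2 == 0 else 3, n + 1, 2):
        val *= (k - 1) / k
    return val

for n in (4, 5, 10):
    assert abs(I_direct(n) - I_recur(n)) < 1e-7
```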

  • 1
    This is what is usually called a [reduction formula](http://en.wikipedia.org/wiki/Integration_by_reduction_formulae)2011-02-05
  • 11
    This is also called Wallis formula/product I believe.2011-02-05
  • 0
    @Aryabhata Yes. This would've been more interesting is he showed how to get it. It's not too hard.2012-02-23
  • 0
    @PeterT.off: Are you talking about the infinite version? He did show the finite version.2012-02-23
  • 0
    @Aryabhata I've never seen the finite Wallis product; I've always seen Wallis's *infinite* product. I guess it'd be better to at least hint at what $\dfrac{I_{2k+1}}{I_{2k}}$ is, and that it tends to 1.2012-02-23
  • 0
    @PeterT.off: Even the finite one is called Wallis product. Not just the infinite. For the question as asked, this answer is sufficient I guess and comments should be enough for anyone curious enough.2012-02-23
  • 0
    @Aryabhata I'm not saying the answer is not enough, just saying that when infinity comes into play, things get interesting.2012-02-23
  • 0
    @PeterT.off: I agree. It is one of my favourites! A proof is here: http://crypto.stanford.edu/pbc/notes/pi/wallis.xhtml2012-02-23
  • 0
    @Aryabhata If you're interested, I have proofs for the Poisson integral and Stirling's Formula via the Wallis product.2012-02-23
  • 0
    @PeterT.off: (And apologies to Sivaram), those I believe have already appeared on this site: http://math.stackexchange.com/questions/23814/how-best-to-explain-the-sqrt2-pi-n-term-in-stirlings2012-02-23
13

\begin{align}
\int_{-\infty}^{\infty}\frac{\sin^{2}(x)}{x^{2}}\,dx
&= \left.-\,\frac{\sin^{2}(x)}{x}\right\vert_{-\infty}^{\infty} + \int_{-\infty}^{\infty}\frac{2\sin(x)\cos(x)}{x}\,dx
= \int_{-\infty}^{\infty}\frac{\sin(2x)}{x}\,dx \\
&= \int_{-\infty}^{\infty}\frac{\sin(x)}{x}\,dx
\end{align}
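Both integrals equal the Dirichlet integral $\pi$, which a crude truncated quadrature confirms (my sketch, not part of the answer; the truncation $L$ and the point count are arbitrary, and the tails contribute only $O(1/L)$):

```python
import math

def midpoint(g, a, b, n):
    # simple midpoint rule; its sample points never land exactly on x = 0
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

L = 1000.0
val1 = midpoint(lambda x: (math.sin(x) / x) ** 2, -L, L, 400_000)
val2 = midpoint(lambda x: math.sin(x) / x, -L, L, 400_000)
assert abs(val1 - val2) < 1e-2
assert abs(val1 - math.pi) < 1e-2
```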

  • 0
    I am wondering about the last step in the integral. How can we change the 2x to x in the argument of the sine function? Does this have to do with the infinite limits?2018-10-07
  • 1
    @Saudman97 It's equivalent to the change of variables $\displaystyle t \equiv 2x$, under which $\displaystyle{\sin\left(2x\right) \over x}\,\mathrm{d}x$ goes over to $\displaystyle{\sin\left(t\right) \over t}\,\mathrm{d}t$ and $\displaystyle x \to \pm\infty \implies t \to \pm\infty$, respectively. As $\displaystyle x$ and $\displaystyle t$ are dummy variables, you can still use $\displaystyle x$ instead of $\displaystyle t$ in the last integral.2018-10-07
10

Integration by parts is how one discovers the adjoint of a differential operator, and it thus becomes the foundation for the marvelous spectral theory of differential operators. This has always seemed to me to be both elementary and profound at the same time.

9

This is one of many integration-by-parts applications/derivations I like. Here is one:

The Gamma Distribution

A random variable is said to have a gamma distribution with parameters $(\alpha,\lambda)$, $\lambda\gt 0$, $\alpha\gt 0$, if its density function is given by the following

$$ f(x)= \begin{cases} \frac{\lambda e^{-\lambda\:x}(\lambda x)^{\alpha-1}}{\Gamma(\alpha)}~~~\text{for }~x\ge 0 \\ \\ 0 \hspace{1.09in} {\text{for }}~x\lt 0 \end{cases} $$

where $\Gamma(\alpha)$, called the gamma function, is defined as

$$ \Gamma(\alpha) = \int_{0}^{\infty} \! e^{-y} y^{\alpha-1}\, \mathrm{d}y $$

Integrating $\Gamma(\alpha)$ by parts yields the following

$$ \begin{array}{ll} \Gamma(\alpha) &=\; -e^{-y} y^{\alpha-1} \Bigg|_{0}^{\infty}~+~\int_{0}^{\infty} \! e^{-y} (\alpha-1)y^{\alpha-2}\,\mathrm{d}y \\ \\ \;&=\; (\alpha-1) \int_{0}^{\infty} \! e^{-y} y^{\alpha-2}\,\mathrm{d}y ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1) \\ \\ \;&=\; (\alpha-1) \Gamma(\alpha-1) \end{array} $$

For integral values of $\alpha$, say $\alpha=n$, we obtain, by applying Equation ($1$) repeatedly,

\[ \begin{array}{llll} \Gamma(n)&=(n-1)\Gamma(n-1) \\ &=(n-1)(n-2)\Gamma(n-2) \\ &=\ldots \\ &=(n-1)(n-2)\ldots3~\cdot~2\Gamma(1) \end{array} \]

Since $\Gamma(1)=\int_{0}^{\infty} \! e^{-x}~\mathrm{d}x=1,$ it follows that, for integral values of n,

\[ \Gamma(n)=(n-1)! \]

Hope you enjoy reading $\ldots$ :)
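The functional equation $\Gamma(\alpha)=(\alpha-1)\Gamma(\alpha-1)$ also holds for non-integer $\alpha$, and it can be checked purely by quadrature (a sketch of my own; the truncation at $y=60$ and the choice $\alpha=3.5$ are arbitrary):

```python
import math

def gamma_quad(alpha, upper=60.0, n=100_000):
    # midpoint quadrature of ∫₀^∞ e^(-y) y^(α−1) dy, truncated at `upper`
    h = upper / n
    return sum(math.exp(-(i + 0.5) * h) * ((i + 0.5) * h) ** (alpha - 1)
               for i in range(n)) * h

alpha = 3.5
assert abs(gamma_quad(alpha) - (alpha - 1) * gamma_quad(alpha - 1)) < 1e-5
```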

8

Lowbrow: $\int\sin(x)\cos(x)dx=\sin^2x-\int\sin(x)\cos(x)dx+C$.

Finding the unknown integral again after integrating by parts is an interesting case. Solving the resulting equation immediately gives the result $\int\sin(x)\cos(x)dx=\dfrac12\sin^2x$

  • 2
    On the other hand, spotting $\frac12\sin(2x) = \sin(x)\cos(x)$ eliminates any difficulty with integration.2011-02-04
  • 4
    ...or use the substitution $t=\sin x$.2011-02-04
  • 9
    Like the waitress said, *"plus a constant!"* (e.g., see: http://preposterousuniverse.blogspot.com/2004/07/is-that-riemann-or-lebesgue-integral.html)2011-02-04
  • 0
    + C (that's it but I have to type more)2011-04-14
7

Lowbrow: $\int e^x\sin x\ dx$ and its ilk.

  • 13
    This can be done more efficiently using integration of complex functions: integrate $e^{x(1+i)}$, which is trivial, then take the imaginary part.2011-10-11
7

There are a couple of applications in PDEs that I am quite fond of. As well as verifying that the Laplace operator $-\Delta$ is positive on $L^2$, I like the application of integration by parts in the energy method to prove uniqueness.

Suppose $U$ is an open, bounded and connected subset of $\mathbb{R}^n$. Introduce the BVP \begin{equation*} -\Delta u=f~\text{in}~U \end{equation*} with prescribed boundary values $u=g$ on $\partial U$. Suppose $v\in C^2(\overline{U})$ is another solution of the same problem and set $w:=u-v$, so that $w$ satisfies the homogeneous problem $-\Delta w=0$ in $U$ with $w=0$ on $\partial U$. Then an application of integration by parts gives us \begin{equation*} 0=-\int_U w\Delta w\,dx=\int_U \nabla w\cdot \nabla w\,dx-\int_{\partial U}w\frac{\partial w}{\partial\nu}\,dS=\int_U|\nabla w|^2\,dx \end{equation*} with outward normal $\nu$ of the set $U$; the boundary term vanishes since $w=0$ there. Hence $\nabla w=0$, so $w$ is constant on the connected set $U$, and since $w=0$ on $\partial U$ we get $w\equiv 0$: the solution in $U$ is unique.
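The key identity has a faithful discrete analogue (my sketch, not from the answer): on a one-dimensional grid with $w$ vanishing at the boundary nodes, summation by parts turns $\sum_i w_i (\Delta_h w)_i$ into $-\sum_i |D_h w|_i^2$ exactly. The grid size and the random interior values are arbitrary:

```python
import random

random.seed(0)
N = 100
# w = 0 at the two "boundary" nodes, random values in the interior
w = [0.0] + [random.uniform(-1, 1) for _ in range(N - 1)] + [0.0]

# discrete version of ∫ w Δw = −∫ |∇w|² for w vanishing on the boundary
lhs = sum(w[i] * (w[i + 1] - 2 * w[i] + w[i - 1]) for i in range(1, N))
rhs = -sum((w[i + 1] - w[i]) ** 2 for i in range(N))
assert abs(lhs - rhs) < 1e-10
```

The identity is exact (up to floating-point rounding), with no quadrature error at all, which is the discrete shadow of the boundary term vanishing.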

5

Really simple but nice:

$\int \log (x) dx = \int 1 \cdot \log(x)dx = x \log(x) - \int x d(\log(x))=x (\log(x)-1) $

also:

$ \int \frac{\log^k(x)}{x}dx = \int \log^k(x)d \log(x)=\frac{\log^{k+1}(x)}{k+1} $
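Both antiderivatives are easy to confirm on a concrete interval (my sketch; the interval $[1,e]$ and the choice $k=2$ are arbitrary, and `midpoint` is an ad-hoc helper):

```python
import math

def midpoint(g, a, b, n=50_000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# ∫₁^e log x dx = [x(log x − 1)]₁^e = 1
assert abs(midpoint(math.log, 1.0, math.e) - 1.0) < 1e-7
# k = 2: ∫₁^e log²x / x dx = [log³x / 3]₁^e = 1/3
assert abs(midpoint(lambda x: math.log(x) ** 2 / x, 1.0, math.e) - 1 / 3) < 1e-7
```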