How do I solve the following PDE for its general solution? $ {\partial u \over \partial t} - k {\partial ^2 u \over \partial x^2} =0$ How do I show that the general solution of this equation has the form $u(x, t) = X(x)T(t) $? I tried Monge's method but couldn't get it. My textbook only deals with particular solutions using boundary conditions.
how to solve $ {\partial u \over \partial t} - k {\partial ^2 u \over \partial x^2} =0$
-
@Hans Lundmark: my answer already achieves that effect. – 2012-09-03
3 Answers
It is a classical problem. I recommend you consult a book on advanced engineering mathematics, where you can find all the details you need. Let's try to work it out.
$ \mathrm{PDE}\quad u_{t}(x,t) = k u_{xx}(x,t) $
$ \mathrm{B.C} \quad u(0,t) = 0 \,, \quad u(L,t)=0 \,.$
$ \mathrm{I.C} \quad u(x,0) = f(x) \,.$
Denote the above equations by $(S)$.
The technique is the method of separation of variables. It works as follows: assume the solution has the form $u(x,t)=X(x)T(t) \,.$ Compute the derivatives
$ u_t(x,t) = X(x)T'(t)\,, \quad u_{xx}(x,t) = X''(x)T(t)\,, $
then substitute back in the differential equation to get
$ X(x)T'(t) = k X''(x)T(t) \Rightarrow \frac{T'(t)}{kT(t)} = \frac{X''(x)}{X(x)} \,.$
It is clear from the last equation that the left-hand side depends only on $t$ and the right-hand side depends only on $x$. This means the equality can hold only if both sides are equal to the same constant. Therefore, we get
$ \frac{T'(t)}{kT(t)} = \frac{X''(x)}{X(x)}=-\lambda^2 \rightarrow (*) \,.$
The selection of $ -\lambda^2 $, and not $ \lambda^2 $, in the above equation is the only choice for which nontrivial solutions exist. It is clear that $(*)$ gives two distinct ordinary differential equations, namely
$ T'(t) +k\lambda^2 T(t) = 0 \rightarrow (1)\,,$ $ X''(x) + \lambda^2 X(x) = 0 \rightarrow (2) \,. $
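Both ODEs can be solved and checked symbolically; a minimal SymPy sketch (an illustration, assuming SymPy is available) for $(1)$ and $(2)$:

```python
import sympy as sp

t, x = sp.symbols('t x')
k, lam = sp.symbols('k lam', positive=True)
T = sp.Function('T')
X = sp.Function('X')

# ODE (1): T' + k*lam^2*T = 0  (first-order linear)
ode1 = sp.Eq(T(t).diff(t) + k * lam**2 * T(t), 0)
# ODE (2): X'' + lam^2*X = 0   (second-order linear, constant coefficients)
ode2 = sp.Eq(X(x).diff(x, 2) + lam**2 * X(x), 0)

sol1 = sp.dsolve(ode1, T(t))   # exponential decay in t
sol2 = sp.dsolve(ode2, X(x))   # combination of sin(lam*x) and cos(lam*x)
print(sol1)
print(sol2)
```

`checkodesol` can confirm both returned solutions satisfy their ODEs.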
Solving the first ordinary differential equation $(1)$ gives
$ T(t) = C {\rm e}^{-k\lambda^2 t} \rightarrow (**) \,, $
where $C$ is a constant. On the other hand, the function $ X(x) $ can be found by solving the second-order linear ordinary differential equation $(2)$,
$ X(x)= A \cos(\lambda x) + B \sin( \lambda x ) \rightarrow (***) \,, $
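As a sanity check, the product of $(**)$ and $(***)$ indeed satisfies the original PDE; a short SymPy verification (an illustration, not part of the derivation):

```python
import sympy as sp

x, t, k, lam, A, B = sp.symbols('x t k lam A B', real=True)

# Separated solution built from (**) and (***): u = X(x) * T(t)
X = A * sp.cos(lam * x) + B * sp.sin(lam * x)
T = sp.exp(-k * lam**2 * t)
u = X * T

# The residual u_t - k*u_xx should simplify to 0
residual = sp.simplify(sp.diff(u, t) - k * sp.diff(u, x, 2))
print(residual)  # 0
```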
where $A$ and $B$ are constants. Determining the constants $A$, $B$, and $\lambda$ depends on the homogeneous boundary conditions
$ u(0,t) = 0 \,,$ $ u(L,t)=0 \,.$
Substituting the above boundary conditions in $ u(x,t) = X(x)T(t) $ yields
$ X(0)T(t) = 0 \Rightarrow X(0) = 0\,. $ $ X(L)T(t) = 0 \Rightarrow X(L) = 0 \,. $
Using $X(0)=0$ into $(***)$ leads to
$ A = 0 \implies X(x) = B \sin(\lambda x ) \longrightarrow (\star) \,.$
Substituting the condition $ X(L) = 0 $ into $ (\star) $ gives $ B\sin(\lambda L) = 0 \implies B=0 \quad \mathrm{or}\quad \sin(\lambda L)=0 \,. $
Since $B=0$ gives the trivial solution $ u(x,t)=0 $, we have
$ \sin(\lambda L) = 0 \rightarrow \lambda_nL= n \pi \rightarrow \lambda_n= \frac{n \pi}{L}\,,\quad n=1,2,3,\dots \,. $
Since $ n = 0 $ gives the trivial solution, it is excluded. In view of the infinite number of eigenvalues, we write $ X_n(x) = \sin(\frac{n\pi}{L}x)\,, \quad T_n(t) = {\rm e}^{-k{(\frac{n\pi}{L})}^2\,t } \,. $
Setting the constants $B$ and $C$ to one, the functions
$ u_n(x,t) = X_n(x)T_n(t) = \sin(\frac{n\pi}{L}x)\, {\rm e}^{-k{(\frac{n\pi}{L})}^2\,t } \,, \quad n=1,2,3,\dots $
are called the fundamental solutions that satisfy the (PDE) and the given boundary conditions.
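The claim that each fundamental solution satisfies both the PDE and the boundary conditions can be verified symbolically, e.g. with SymPy:

```python
import sympy as sp

x, t, k, L = sp.symbols('x t k L', positive=True)
n = sp.symbols('n', integer=True, positive=True)

# Fundamental solution u_n = sin(n*pi*x/L) * exp(-k*(n*pi/L)^2 * t)
u_n = sp.sin(n * sp.pi * x / L) * sp.exp(-k * (n * sp.pi / L)**2 * t)

# PDE residual and both boundary values should all vanish
pde = sp.simplify(sp.diff(u_n, t) - k * sp.diff(u_n, x, 2))
print(pde)                           # 0
print(u_n.subs(x, 0))                # 0  since sin(0) = 0
print(sp.simplify(u_n.subs(x, L)))   # 0  since sin(n*pi) = 0 for integer n
```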
Recalling the superposition principle, a linear combination of the fundamental solutions also satisfies the given equation and the boundary conditions. Hence, we have
$ u(x,t) = \sum_{n=1}^{\infty} \alpha_n {\rm e}^{-k{(\frac{n\pi}{L})}^2 t }\sin(\frac{n\pi}{L}x)\, $
where the $\alpha_n$ are constants to be determined. To determine $ \alpha_n $, we appeal to the initial condition $u(x,0)=f(x)$; substituting it into the last equation, we get
$ u(x,0) = f(x) = \sum_{n=1}^{\infty} \alpha_n \sin(\frac{n\pi}{L}x) \,.$
Comparing the above equation with the Fourier sine series of a function, one can see that this series is nothing but the Fourier sine series of $f(x)$, which implies that
$ \alpha_n = \frac{2}{L}\int_{0}^{L} f(x) \sin(\frac{n\pi}{L}x)\,dx \,.$
Since the $\alpha_n$ have been determined, the particular solution $u(x,t)$ follows immediately,
$ u(x,t) = \sum_{n=1}^{\infty} \left( \frac{2}{L}\int_{0}^{L} f(x) \sin(\frac{n\pi}{L}x)\,dx \right) {\rm e}^{-k{(\frac{n\pi}{L})}^2 t }\sin(\frac{n\pi}{L}x) \,. $
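As a concrete sanity check (a sketch, with the assumed values $L=1$, $k=1/2$ and the hypothetical initial profile $f(x)=x(L-x)$, none of which are from the original), one can compute the $\alpha_n$ with SymPy and confirm that a truncated series reproduces $f$ at $t=0$:

```python
import sympy as sp

x, t = sp.symbols('x t', nonnegative=True)
n = sp.symbols('n', integer=True, positive=True)
L = 1                    # assumed length, for illustration only
k = sp.Rational(1, 2)    # assumed diffusivity
f = x * (L - x)          # hypothetical initial profile

# Fourier sine coefficients: alpha_n = (2/L) * int_0^L f(x) sin(n*pi*x/L) dx
alpha_n = sp.simplify(
    sp.Integer(2) / L * sp.integrate(f * sp.sin(n * sp.pi * x / L), (x, 0, L))
)

# Truncated series solution; at t = 0 it should reproduce f(x) closely
u = sum(
    (alpha_n * sp.exp(-k * (n * sp.pi / L)**2 * t)
     * sp.sin(n * sp.pi * x / L)).subs(n, m)
    for m in range(1, 20)
)
err = sp.Abs(u.subs({x: sp.Rational(1, 2), t: 0}) - f.subs(x, sp.Rational(1, 2)))
print(float(err) < 1e-3)  # True
```

The coefficients vanish for even $n$, as expected from the symmetry of $f$ about $x=L/2$.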
-
@MonkeyD.Luffy: Monge's method is for nonlinear second-order differential equations, while your PDE is the heat equation, which is a linear PDE. – 2012-09-02
I am not sure that my answer is an answer to the question. I assume that the OP would like to understand why the general solution of the heat equation is the one we all study in PDE courses, which is found by separation of variables. This question is far from trivial: we are given an equation and we conjecture that its solution(s) can be written as a product of two functions of two independent variables. Why? Well, the answer is never given in first courses on PDEs. And the answer is that there is no reason why this should be really true. As Evans writes in his book on Partial Differential Equations, it is useful to find particular solutions, for example the so-called fundamental solution. But there is no reason why this should be the most general solution of the given equation. For the heat equation, there is no universal uniqueness theorem. Evans shows that uniqueness holds among solutions that grow at most like $e^{x^2}$, but he refers to another book for the construction of infinitely many solutions of the heat equation that grow even faster than $e^{x^2}$.
In my opinion, the expression "general solution" should be carefully avoided when studying PDEs. Only for a very small number of equations does a complete classification of solutions exist. A lot depends on boundary conditions: an equation may have infinitely many solutions if a Neumann boundary condition is prescribed, and only a constant solution if a Dirichlet condition is prescribed.
-
1My "answer" does not criticize the use of separation of variables at all. I know that it is useful to construct explicit solutions to PDEs. The point is that in general we are not sure that *the* solution we are constructing is the *general* solution, as the OP was asking. I mean: it is a solution that was found by some *ansatz*, and only a uniqueness theorem can state that our ansatz was the only admissible ansatz. Unluckily, this kind of uniqueness theorems may not exist at all. – 2012-09-03
Case $1$: $\text{Re}(kt)\geq0$
Let $u(x,t)=X(x)T(t)$ ,
Then $X(x)T'(t)-kX''(x)T(t)=0$
$X(x)T'(t)=kX''(x)T(t)$
$\dfrac{T'(t)}{kT(t)}=\dfrac{X''(x)}{X(x)}=-(f(s))^2$
$\begin{cases}\dfrac{T'(t)}{T(t)}=-k(f(s))^2\\X''(x)+(f(s))^2X(x)=0\end{cases}$
$\begin{cases}T(t)=c_3(s)e^{-kt(f(s))^2}\\X(x)=\begin{cases}c_1(s)\sin(xf(s))+c_2(s)\cos(xf(s))&\text{when}~f(s)\neq0\\c_1x+c_2&\text{when}~f(s)=0\end{cases}\end{cases}$
$\therefore u(x,t)=C_1x+C_2+\int_sC_3(s)e^{-kt(f(s))^2}\sin(xf(s))~ds+\int_sC_4(s)e^{-kt(f(s))^2}\cos(xf(s))~ds$ or $C_1x+C_2+\sum_sC_3(s)e^{-kt(f(s))^2}\sin(xf(s))+\sum_sC_4(s)e^{-kt(f(s))^2}\cos(xf(s))$
Case $2$: $\text{Re}(kt)\leq0$
Let $u(x,t)=X(x)T(t)$ ,
Then $X(x)T'(t)-kX''(x)T(t)=0$
$X(x)T'(t)=kX''(x)T(t)$
$\dfrac{T'(t)}{kT(t)}=\dfrac{X''(x)}{X(x)}=(f(s))^2$
$\begin{cases}\dfrac{T'(t)}{T(t)}=k(f(s))^2\\X''(x)-(f(s))^2X(x)=0\end{cases}$
$\begin{cases}T(t)=c_3(s)e^{kt(f(s))^2}\\X(x)=\begin{cases}c_1(s)\sinh(xf(s))+c_2(s)\cosh(xf(s))&\text{when}~f(s)\neq0\\c_1x+c_2&\text{when}~f(s)=0\end{cases}\end{cases}$
$\therefore u(x,t)=C_1x+C_2+\int_sC_3(s)e^{kt(f(s))^2}\sinh(xf(s))~ds+\int_sC_4(s)e^{kt(f(s))^2}\cosh(xf(s))~ds$ or $C_1x+C_2+\sum_sC_3(s)e^{kt(f(s))^2}\sinh(xf(s))+\sum_sC_4(s)e^{kt(f(s))^2}\cosh(xf(s))$
Hence $u(x,t)=\begin{cases}C_1x+C_2+\int_sC_3(s)e^{-kt(f(s))^2}\sin(xf(s))~ds+\int_sC_4(s)e^{-kt(f(s))^2}\cos(xf(s))~ds&\text{when}~\text{Re}(kt)\geq0\\C_1x+C_2+\int_sC_3(s)e^{kt(f(s))^2}\sinh(xf(s))~ds+\int_sC_4(s)e^{kt(f(s))^2}\cosh(xf(s))~ds&\text{when}~\text{Re}(kt)\leq0\end{cases}$ or $\begin{cases}C_1x+C_2+\sum_sC_3(s)e^{-kt(f(s))^2}\sin(xf(s))+\sum_sC_4(s)e^{-kt(f(s))^2}\cos(xf(s))&\text{when}~\text{Re}(kt)\geq0\\C_1x+C_2+\sum_sC_3(s)e^{kt(f(s))^2}\sinh(xf(s))+\sum_sC_4(s)e^{kt(f(s))^2}\cosh(xf(s))&\text{when}~\text{Re}(kt)\leq0\end{cases}$
This is already the general solution of $\dfrac{\partial u}{\partial t}-k\dfrac{\partial^2u}{\partial x^2}=0$ . Note that when no I.C.s are given, the form of $f(s)$ can be chosen arbitrarily; but when I.C.s are given, the form of $f(s)$, and the choice between the integration kernel and the summation kernel, should be chosen wisely in order to accommodate the I.C.s and obtain the nicest form of the solution, especially when the number of I.C.s is more than two.
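One can spot-check the Case $1$ form with SymPy: each kernel term satisfies the heat equation for an arbitrary $f(s)$, so any superposition over $s$ (sum or integral) does as well. This is a sketch of the verification, not a proof of generality:

```python
import sympy as sp

x, t, k, s = sp.symbols('x t k s')
f = sp.Function('f')

# One Case-1 kernel term: e^{-k t f(s)^2} sin(x f(s));
# s enters only as a parameter, so x- and t-derivatives treat f(s) as a constant
u = sp.exp(-k * t * f(s)**2) * sp.sin(x * f(s))

# The heat-equation residual vanishes identically in s
res = sp.simplify(sp.diff(u, t) - k * sp.diff(u, x, 2))
print(res)  # 0
```

The same computation with `cos`, or with `sinh`/`cosh` and the opposite sign in the exponent (Case $2$), gives residual $0$ as well.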
Another method is the power series method.
Similar to PDE - solution with power series:
Let $u(x,t)=\sum\limits_{n=0}^\infty\dfrac{(x-a)^n}{n!}\dfrac{\partial^nu(a,t)}{\partial x^n}$ ,
Then $u(x,t)=\sum\limits_{n=0}^\infty\dfrac{(x-a)^{2n}}{(2n)!}\dfrac{\partial^{2n}u(a,t)}{\partial x^{2n}}+\sum\limits_{n=0}^\infty\dfrac{(x-a)^{2n+1}}{(2n+1)!}\dfrac{\partial^{2n+1}u(a,t)}{\partial x^{2n+1}}=\sum\limits_{n=0}^\infty\dfrac{(x-a)^{2n}}{k^n(2n)!}\dfrac{\partial^nu(a,t)}{\partial t^n}+\sum\limits_{n=0}^\infty\dfrac{(x-a)^{2n+1}}{k^n(2n+1)!}\dfrac{\partial^{n+1}u(a,t)}{\partial t^n\,\partial x}=\sum\limits_{n=0}^\infty\dfrac{f^{(n)}(t)(x-a)^{2n}}{k^n(2n)!}+\sum\limits_{n=0}^\infty\dfrac{g^{(n)}(t)(x-a)^{2n+1}}{k^n(2n+1)!}$
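A quick consistency check of the series form (with the assumed choices $k=1$, $a=0$, $f(t)=e^t$, $g\equiv 0$, all picked for illustration): the even part of the series then sums in closed form to $\sum_n e^t\,x^{2n}/(2n)! = e^t\cosh x$, which indeed solves the heat equation:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Closed form of the even series with k=1, a=0, f(t)=exp(t), g=0
u = sp.exp(t) * sp.cosh(x)

# Heat-equation residual u_t - u_xx should vanish
res = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))
print(res)  # 0
```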