I'm sure you are all familiar with partial fraction decomposition, but I seem to be having trouble understanding the way it works. If we have a fraction f(x)/[g(x)h(x)], it seems only logical that it can be "split up" into A/g(x) + B/h(x) for some A and B, because there must be some A and B that make this work, right? But I don't see why this wouldn't work when we have f(x)/[g(x)]^2. Couldn't we just split it like A/g(x) + B/g(x) for some A and B? I am missing something, but I don't know what. Could someone please enlighten me as to why my reasoning is incorrect? Thank you.
The existence of partial fraction decompositions
-
Related: https://math.stackexchange.com/questions/743055/ – 2018-11-26
6 Answers
There is a simple closed formula for the partial fraction decomposition:
Let $X$ be an indeterminate. For any complex number $a$, any nonnegative integer $k$, and any rational fraction $f\in\mathbb C(X)$ defined at $a$, let $ \mathbb T_a^k(f):=\sum_{j=0}^k\frac{f^{(j)}(a)}{j!}(X-a)^j $ be the degree at most $k$ Taylor approximation of $f$ at $a$. (We have used the fact that $f'$ is defined at $a$ if $f$ is.)
Let $N,D\in\mathbb C[X]$ be polynomials. Assume $ D(X)=\big(X-a_1\big)^{m_1}\cdots\big(X-a_r\big)^{m_r}, $ where the $a_j$ are distinct and the $m_j$ positive. Suppose also $N(a_j)\neq0$ for all $j$. Let $f$ be the rational fraction $N/D$, and let $ \mathbb P_{a_j}(f):=\mathbb T_{a_j}^{m_j-1}\Big(f(X)\big(X-a_j\big)^{m_j}\Big)\big(X-a_j\big)^{-m_j} $ be the polar part of $f$ at $a_j$.
Claim A. We have $ f(X)=Q(X)+\sum_{j=1}^r\ \mathbb P_{a_j}(f)\tag1 $ for a unique polynomial $Q$.
If $m_j=1$ for all $j$, we get Lagrange's Interpolation Formula $ f(X)=Q(X)+\sum_{j=1}^r\ \frac{N(a_j)}{X-a_j}\ \prod_{k\not=j}\ \frac{1}{a_j-a_k}\quad. $
If $\deg N < \deg D$, then $Q=0$. Otherwise, putting $q:=\deg N-\deg D$, we have $ Q(X^{-1})=\mathbb T_0^q\Big(f(X^{-1})X^q\Big)X^{-q}. $ The above expressions for $f(X)$ are called partial fraction decomposition.
Proof of Claim A. Define $Q$ by $(1)$. Then $Q$ is a priori a rational fraction, and it suffices to show that it is in fact a polynomial. By the Fundamental Theorem of Algebra, it is enough to check that $Q$ is defined everywhere, and, to this end, we only need to verify that $f-\mathbb P_{a_j}(f)$ is defined at $a_j$. This will follow from Claim B below applied to $(X-a_j)^{m_j}(f-\mathbb P_{a_j}(f))$.
Claim B. If $a$ is a complex number and $g\in\mathbb C(X)$ a rational fraction defined at $a$ such that $\mathbb T_a^{k-1}(g)=0$, then $(X-a)^{-k}g(X)$ is defined at $a$.
Proof of Claim B. We can assume that $g$ is nonzero. Then we can write $g$ as $(X-a)^nh(X)$ with $h$ defined and nonzero at $a$, and it suffices to show $n\ge k$. This inequality follows from the easy fact that, for $j=1,\dots,n$, the rational fraction $g^{(j)}$ is of the form $(X-a)^{n-j}h_j(X)$ with $h_j$ defined and nonzero at $a$. In particular $g^{(n)}(a)\neq0$, while the hypothesis $\mathbb T_a^{k-1}(g)=0$ means $g^{(j)}(a)=0$ for $0\le j\le k-1$; hence $n\ge k$.
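To see the formula in action, here is a small SymPy sketch (my own illustration, not part of the answer): it computes the polar parts with the Taylor formula above for an example fraction of my choosing and checks that what is left over is a polynomial $Q$.

```python
from sympy import symbols, diff, factorial, simplify, cancel

X = symbols('X')

def taylor(phi, a, k):
    """Degree-at-most-k Taylor approximation T_a^k(phi) of phi at a."""
    return sum(diff(phi, X, j).subs(X, a) / factorial(j) * (X - a)**j
               for j in range(k + 1))

def polar_part(f, a, m):
    """Polar part P_a(f) of f at a pole a of multiplicity m."""
    g = cancel(f * (X - a)**m)   # remove the pole at a, then Taylor-expand there
    return taylor(g, a, m - 1) / (X - a)**m

# Example (mine): f = X^3 / ((X-1)^2 (X+2)), poles at 1 (order 2) and -2 (order 1)
f = X**3 / ((X - 1)**2 * (X + 2))

P1 = polar_part(f, 1, 2)
P2 = polar_part(f, -2, 1)
Q = simplify(f - P1 - P2)        # Claim A: this is a polynomial (here Q = 1)
print(P1, P2, Q)
```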
Another way of looking at Arturo's comments about unique factorization: the key to the standard partial-fraction decomposition is the existence of the Euclidean Algorithm for polynomials, along with the standard-but-unspoken assumption that $g(x)$ and $h(x)$ in your initial decomposition have no factor in common - that is, that their $\gcd$ is 1.

If we can find a partial-fraction decomposition for $1\over g(x)h(x)$ as ${A(x)\over g(x)}+{B(x)\over h(x)}$, then we can certainly find one for $f(x)\over g(x)h(x)$ for any polynomial $f$; just multiply by $f(x)$ in that decomposition. Likewise, if we can find a partial-fraction decomposition for $f(x)\over g(x)h(x)$ for any polynomial $f$, then we can certainly find one for the special case $f(x)\equiv 1$.

But saying that ${1\over g(x) h(x)} = {A(x)\over g(x)}+{B(x)\over h(x)}$ is the same as saying that ${A(x)h(x) + B(x)g(x) \over g(x)h(x)} = {1\over g(x)h(x)}$, or in other words saying that $A(x)h(x) + B(x)g(x) \equiv 1$, and the existence of polynomials $A$ and $B$ with these properties is exactly what the (extended) Euclidean algorithm provides.

It doesn't work for your case of $f(x)\over g^2(x)$ because $\gcd(g(x),g(x)) = g(x) \neq 1$, so there can't be any $A(x)$ and $B(x)$ satisfying $A(x)g(x)+B(x)g(x) \equiv 1$.
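To make the argument concrete, here is a minimal SymPy sketch (my choice of tool; the polynomials $g$ and $h$ are just an example): the extended Euclidean algorithm supplies the Bezout identity, which is exactly the partial-fraction decomposition of $1\over g(x)h(x)$.

```python
from sympy import symbols, gcdex, simplify

x = symbols('x')
g = x**2 + 1              # example factors, chosen coprime
h = x - 3

# Extended Euclidean algorithm: gcdex(h, g) returns (A, B, d) with A*h + B*g = d
A, B, d = gcdex(h, g, x)
assert simplify(A*h + B*g - 1) == 0   # Bezout: A*h + B*g = 1, so gcd(g, h) = 1

# Then 1/(g*h) = A/g + B/h; multiplying through by any f handles f/(g*h)
print(simplify(1/(g*h) - (A/g + B/h)))   # 0
```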
-
@Bill: Yeah, your answer appeared while I was working on mine; I do think (hope!) the added details help, though... – 2011-06-27
No, in general you cannot split it that way. For example, $\frac{x}{(x+1)^2}$ cannot be written as $\frac{A}{x+1} + \frac{B}{x+1}$ with $A$ and $B$ constants: you would just get $\frac{A}{x+1} + \frac{B}{x+1} = \frac{A+B}{x+1},$ with $A+B$ constant, and this is certainly not equal to $\frac{x}{(x+1)^2}$.
If you have $\frac{A}{g(x)} + \frac{B}{g(x)}$ then the answer is just $\frac{A+B}{g(x)},$ just like with fractions, you have $\frac{A}{n} + \frac{B}{n} = \frac{A+B}{n}.$ So unless $\frac{f(x)}{(g(x))^2}$ can be simplified by cancelling one of the factors of $g(x)$, you have no hope of writing it as a sum of two fractions with $g(x)$ in the denominator.
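For illustration (using SymPy; the check is mine, not part of the answer), the correct decomposition of the example above needs both $(x+1)$ and $(x+1)^2$ in the denominators:

```python
from sympy import symbols, apart

x = symbols('x')
print(apart(x / (x + 1)**2, x))   # -> 1/(x + 1) - 1/(x + 1)**2
```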
(Somewhat hidden in the above is the fact that polynomials also have unique factorization, so if $\frac{f(x)}{g(x)} = \frac{h(x)}{k(x)}$ then $f(x)k(x) = g(x)h(x)$. If $f(x)$ and $g(x)$ have no common factors and $h(x)$ and $k(x)$ have no common factors, then the equality means that you must have $g(x)=k(x)$ and $f(x) = h(x)$ up to multiplication by constants; again, just as with an equality of fractions $\frac{a}{b} = \frac{c}{d}$ where $a,b,c,d$ are integers, $a$ and $b$ have no common factors, and $c$ and $d$ have no common factors: up to sign you must have $a=c$ and $b=d$.)
-
Unique factorization plays no role above. $\rm\:f/g^2 = h/g \iff f = g\:h\:$ is true in *any* ring where $\rm\:g\:$ is a unit. – 2011-06-27
HINT $\rm\displaystyle\qquad \frac{1}{g\:h}\ =\ \frac{a}g\ +\ \frac{b}h\ \iff\ 1\ =\ a\ h\: +\: b\ g\ \iff\ 1\: =\: \gcd(g,h)$
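A concrete instance of the hint (the example is mine): take $g = x$ and $h = x-1$, so $\gcd(g,h)=1$ with $a=-1$, $b=1$:
$ (-1)(x-1) + (1)\,x \;=\; 1 \quad\Longleftrightarrow\quad \frac{1}{x(x-1)} \;=\; \frac{-1}{x} + \frac{1}{x-1}. $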
Well, of course you could split it up as $ \frac{f(x)}{(g(x))^2} = \frac{A}{g(x)}+\frac{B}{g(x)} $ (with non-constant $A$ and $B$), but what you find wouldn't be very useful, since any $A$ and $B$ that satisfy $ A + B = \frac{f(x)}{g(x)} $ are a solution. If solving that were easy, you would simply divide by $g(x)$ directly instead of looking for partial fractions (and the solutions to that are never unique anyway).
Instead, by splitting it up as $ \frac{f(x)}{(g(x))^2} = \frac{A}{g(x)}+\frac{B}{(g(x))^2} $ you get the solutions of $ A\,g(x) + B = f(x), $ which determines $A$ and $B$ uniquely for polynomials $f,g$ once you require $\deg B < \deg g$ (they are just the quotient and remainder on dividing $f$ by $g$).
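A worked instance (my numbers): take $f(x)=x$ and $g(x)=x+1$. Then $A\,g(x)+B=f(x)$ reads $A(x+1)+B=x$, forcing $A=1$ and $B=-1$, so
$ \frac{x}{(x+1)^2} \;=\; \frac{1}{x+1} \;-\; \frac{1}{(x+1)^2}. $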
If you have $\frac {f(x)}{g(x)^2h(x)}$ the standard partial fraction decomposition would be:
$\frac {a(x)}{g(x)} + \frac{b(x)}{g(x)^2} + \frac {c(x)}{h(x)} + d(x)$
The assumption would be that $g(x)$ and $h(x)$ are coprime, and most applications I've seen have them irreducible. Then the degrees of $a(x)$ and $b(x)$ are less than the degree of $g(x)$ and the degree of $c(x)$ is less than the degree of $h(x)$.
Of course the fractions involving $g(x)$ can be put over a common denominator of $g(x)^2$, but the point of the standard decomposition is that the degrees of the numerators are less than the degree of $g(x)$ and not that of $g(x)^2$.
Similar observations apply if there is more than one squared term in the denominator, or if higher powers are involved.
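As a quick sanity check of this shape (my own example, computed with SymPy rather than by hand):

```python
from sympy import symbols, apart

x = symbols('x')
g = x**2 + 1                 # irreducible quadratic
h = x - 2                    # coprime to g
f = x**5 + 1                 # numerator of degree >= deg(g**2 * h)

# Expect a(x)/g + b(x)/g**2 + c/h + d(x), with deg a, deg b < 2 and deg c < 1
print(apart(f / (g**2 * h), x))
```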