
When using Gaussian elimination to solve a system of linear equations, we can add a multiple of one equation to another and obtain an equivalent system.

In other words, if we have a system of three linear equations $L_1$, $L_2$ and $L_3$, we can multiply $L_1$ by a nonzero constant, add it to $L_2$, and replace $L_2$ by the result.

Why is that valid?

  • @Akram Please do not write comments as answers. (2012-09-24)

4 Answers

3

If we have two equations $A$ and $B$, multiplying $A$ by $k$ and adding it to $B$ gives the equation $kA+B$. Conversely, if we have $A$ and $kA+B$, then subtracting $kA$ from $kA+B$ gives $B$. This means that the system with equations $A$ and $B$ and the system with equations $A$ and $kA+B$ are equivalent. Note that this argument works even when $k=0$, in which case there is no change.

For example, if $A$ is $a_1x_1+\dots+a_nx_n=c$ and $B$ is $b_1x_1+\dots+b_nx_n=d$, then $kA+B$ is simply $(ka_1+b_1)x_1+\dots+(ka_n+b_n)x_n=kc+d$.
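To see this with concrete numbers (an arbitrary illustrative system, not one from the question): take $A$ to be $x_1+x_2=3$, take $B$ to be $2x_1+3x_2=8$, and take $k=-2$. Then $kA+B$ is $x_2=2$, and the system $\{A,\,B\}$ and the system $\{A,\,kA+B\}$, i.e. $x_1+x_2=3$ together with $x_2=2$, both have exactly one solution, $(x_1,x_2)=(1,2)$.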

  • "If we have two equations $A$ and $B$, multiplying $A$ by $k$ and adding it to $B$ gives the equation $kA+B$. Conversely, if we have $A$ and $kA+B$, then subtracting $kA$ from $kA+B$ gives $B$. This means that the system with equations $A$ and $B$ and the system with equations $A$ and $kA+B$ are equivalent." WHY??? (2012-09-23)
1

Say $x$ is such that $f(x) = g(x) = 0$, i.e. $x$ solves the equations $f$ and $g$. Let $h = f + \lambda g$ be a linear combination of the two equations. Notice that $h(x) = 0 + \lambda \cdot 0 = 0$. This proves that every solution of $f, g$ is also a solution of $h, g$. Finally, notice that $f = h - \lambda g$ and thus, by the same argument, every solution of $h, g$ is also a solution of $f, g$.
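A quick sanity check of this argument (a sketch using sympy; the particular $f$, $g$, and $\lambda$ below are arbitrary choices of mine, not from the answer):

```python
from sympy import symbols, solve

x, y = symbols('x y')

f = x - 2*y + 1      # the equation f = 0, i.e. x - 2y = -1
g = 3*x + y - 11     # the equation g = 0, i.e. 3x + y = 11
lam = 5              # any fixed scalar works
h = f + lam*g        # the linear combination h = f + lambda*g

# Both systems have exactly the same solution set.
print(solve([f, g], [x, y]))   # {x: 3, y: 2}
print(solve([h, g], [x, y]))   # {x: 3, y: 2}
```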

  • By definition, systems of equations are equivalent if they have the same roots. That is what I prove. You'll have to point out what exactly is unclear. (2012-09-24)
0

If $f(x) = a$ and $g(x) = b$, then $cg(x) = c \cdot b$, and $$f(x) + cg(x) = a + cg(x) = a + c\cdot b,$$ by nothing more than substitution. As to equivalence, this follows from the fact that you can just "re-subtract" the equation $cg(x) = cb$ again.

  • Instead of adding $cg(x) = cb$, simply subtract $cg(x)=cb$. The result will once again be the equation $f(x)=a$. (2012-09-23)
0

Let's consider the system $ \begin{pmatrix} 1 & 1 \\ 2 & 3\end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 5 \\ 8\end{pmatrix}.$

The shortcut step in Gaussian elimination tells you to subtract 2 times the first row from the second, yielding $ \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 5 \\ -2\end{pmatrix}.$

It's not so hard to see that this system is equivalent, but what are we actually doing behind the handwaving?

What's really happening is that we're left-multiplying both sides of the equation by a new matrix, call it $E$:

$\begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & 3\end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} \begin{pmatrix} 5 \\ 8\end{pmatrix}.$

You might notice that in this case the new matrix $E$ is lower triangular, and when we carry out the Gaussian elimination step, the resulting matrix $EA$ is upper triangular.
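Here is a quick NumPy sketch of that step (the variable names are mine): left-multiplying by the elimination matrix $E$ produces an upper triangular system with the same solution.

```python
import numpy as np

A = np.array([[1., 1.],
              [2., 3.]])
b = np.array([5., 8.])
E = np.array([[ 1., 0.],
              [-2., 1.]])   # subtract 2 times row 1 from row 2

print(E @ A)   # [[1. 1.] [0. 1.]]  -- upper triangular
print(E @ b)   # [ 5. -2.]
print(np.linalg.solve(A, b))          # [ 7. -2.]
print(np.linalg.solve(E @ A, E @ b))  # [ 7. -2.]  -- same solution
```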

In the $2 \times 2$ case there is a single sub-diagonal entry in any matrix, and you may notice that the Gaussian elimination requires only a single "step". If you extend this to $3 \times 3$ matrices, you will find that you need three elimination steps, and there happen to be three sub-diagonal entries in any $3\times 3$ matrix. This is not a coincidence. It turns out, in fact, that Gaussian elimination is innately linked to LU decomposition: the factorization of a full-rank square matrix (possibly after row permutations) into the product of a lower and an upper triangular matrix. This is very useful, because triangular matrices are spectacularly easy to invert.
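To make the link concrete (a sketch; $E$ is the elimination matrix from the display above, and the LU factor $L$ turns out to be its inverse):

```python
import numpy as np

A = np.array([[1., 1.],
              [2., 3.]])
E = np.array([[ 1., 0.],
              [-2., 1.]])

U = E @ A                # upper triangular result of elimination
L = np.linalg.inv(E)     # [[1. 0.] [2. 1.]]  -- lower triangular
print(L @ U)             # reconstructs A
```

(Library routines such as scipy.linalg.lu compute this factorization directly, though they may also permute rows for numerical stability.)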

The shorthand method for Gaussian elimination gives you the system $EAx = Eb$, where the product $EA$ is the upper triangular matrix $U$.

But writing $L = E^{-1}$, so that $A = LU$, we could just as easily obtain $Ax=b \Longrightarrow LUx = b \Longrightarrow Ux = L^{-1}b$, which is exactly the same system, since $L^{-1} = E$.

Which form you choose comes down to whether you're trying to invert $A$ or just solve $Ax=b$. For solving $Ax=b$, the shorthand Gaussian elimination turns out to be easiest.
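For instance, here is a minimal sketch of the two triangular solves, using scipy.linalg.solve_triangular with the $L$ and $U$ from above:

```python
import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[1., 0.],
              [2., 1.]])
U = np.array([[1., 1.],
              [0., 1.]])
b = np.array([5., 8.])

y = solve_triangular(L, b, lower=True)    # forward substitution: solve Ly = b
x = solve_triangular(U, y, lower=False)   # back substitution:    solve Ux = y
print(x)   # [ 7. -2.]
```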