
Here's my Homework Problem:

We can generalize the least squares method to other polynomial curves. To find the quadratic equation $y=a x^2+b x+c$ that best fits the points $(-1, -3)$, $(0, 0)$, $(1, -1)$, and $(2, 1)$, we first write the matrix equation $AX=B$ that would result if a quadratic equation satisfied by all four points did indeed exist. (The third equation in this system would correspond to $x=1$ and $y=-1$: $a+b+c = -1$.) We proceed by writing the normal system $A^T A X=A^T B$.

Use elementary row operations to find the quadratic equation that best fits the given four points. Enter the exact value of $y(1)$ on the quadratic regression curve.

So far I have the solvable normal system $A^T A X = A^T B$:

$ \left( \begin{array}{rrr} 18 & 8 & 6 \\ 8 & 6 & 2 \\ 6 & 2 & 4 \end{array} \right) \left( \begin{matrix} a \\ b \\ c \end{matrix} \right) = \left( \begin{array}{r} 0 \\ 4 \\ -3 \end{array} \right) $
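The normal system can be assembled directly from the four data points; here is a quick NumPy sketch (the variable names are my own):

```python
import numpy as np

# Data points (x, y) from the problem statement.
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([-3.0, 0.0, -1.0, 1.0])

# Design matrix for y = a*x^2 + b*x + c: columns are x^2, x, 1.
A = np.column_stack([x**2, x, np.ones_like(x)])

# Normal system A^T A X = A^T B.
AtA = A.T @ A
AtB = A.T @ y
print(AtA)
print(AtB)
```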

Normally,

$ \left( \begin{matrix} a & b \\ c & d \end{matrix} \right)^{-1} = \frac{1}{a d - b c} \left( \begin{array}{rr} d & -b \\ -c & a \end{array} \right) \> . $

How does this formula generalize from $2 \times 2$ matrices to $3 \times 3$?

  • Note that you don't have to compute the inverse to solve the system; you can use row reduction instead. (2011-04-23)
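The comment's suggestion can be checked numerically: solving the normal system directly never requires forming an inverse. A NumPy sketch (variable names are my own), built from the raw data points:

```python
import numpy as np

# The four data points; rebuild the design matrix.
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([-3.0, 0.0, -1.0, 1.0])
A = np.column_stack([x**2, x, np.ones_like(x)])

# Solve the normal system A^T A X = A^T B directly
# (LU factorization under the hood, no explicit inverse).
a, b, c = np.linalg.solve(A.T @ A, A.T @ y)

# Value of the regression quadratic at x = 1.
y1 = a + b + c  # 1/20, up to floating point
```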

1 Answer


The "translation" is given through the use of the adjugate matrix (Cramer's Rule); you can see the result in Wikipedia's page on matrix inverses.
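Concretely, for any invertible $n \times n$ matrix $A$ the $2 \times 2$ rule becomes

$$A^{-1} = \frac{1}{\det A}\,\operatorname{adj}(A),$$

where $\operatorname{adj}(A)$, the adjugate, is the transpose of the matrix of cofactors. For $n=2$ this reduces to exactly the formula quoted in the question, since the adjugate of $\left(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\right)$ is $\left(\begin{smallmatrix} d & -b \\ -c & a\end{smallmatrix}\right)$. For $n=3$ it requires computing nine $2 \times 2$ determinants, which is why row reduction is usually the faster route.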

But it's simpler to find the inverse using elementary row operations: put your matrix on the left and the identity on the right, and row reduce until the left side is the identity; what you then have on the right is the inverse:

$\begin{align*}
\left(\begin{array}{rrr|rrr} 18 & 8 & 6 & 1 & 0 & 0\\ 8 & 6 & 2 & 0 & 1 & 0\\ 6 & 2 & 4 & 0 & 0 & 1 \end{array}\right)
&\to \left(\begin{array}{rrr|rrr} 0 & 2 & -6 & 1 & 0 & -3\\ 8 & 6 & 2 & 0 & 1 & 0\\ 6 & 2 & 4 & 0 & 0 & 1 \end{array}\right) \\
&\to \left(\begin{array}{rrr|rrr} 0 & 2 & -6 & 1 & 0 & -3\\ 2 & 4 & -2 & 0 & 1 & -1\\ 6 & 2 & 4 & 0 & 0 & 1 \end{array}\right)\\
&\to \left(\begin{array}{rrr|rrr} 0 & 2 & -6 & 1 & 0 & -3\\ 2 & 4 & -2 & 0 & 1 & -1\\ 0 & -10 & 10 & 0 & -3 & 4 \end{array}\right) \\
&\to \left(\begin{array}{rrr|rrr} 0 & 10 & -30 & 5 & 0 & -15\\ 2 & 4 & -2 & 0 & 1 & -1\\ 0 & -10 & 10 & 0 & -3 & 4 \end{array}\right)\\
&\to \left(\begin{array}{rrr|rrr} 0 & 0 & -20 & 5 & -3 & -11\\ 2 & 4 & -2 & 0 & 1 & -1\\ 0 & -10 & 10 & 0 & -3 & 4 \end{array}\right) \\
&\to \left(\begin{array}{rrr|rrr} 2 & 4 & -2 & 0 & 1 & -1\\ 0 & -10 & 10 & 0 & -3 & 4\\ 0 & 0 & -20 & 5 & -3 & -11 \end{array}\right)\\
&\to \left(\begin{array}{rrr|rrr} 20 & 40 & -20 & 0 & 10 & -10\\ 0 & -20 & 20 & 0 & -6 & 8\\ 0 & 0 & -20 & 5 & -3 & -11 \end{array}\right) \\
&\to \left(\begin{array}{rrr|rrr} 20 & 40 & 0 & -5 & 13 & 1\\ 0 & -20 & 0 & 5 & -9 & -3\\ 0 & 0 & -20 & 5 & -3 & -11 \end{array}\right)\\
&\to \left(\begin{array}{rrr|rrr} 20 & 0 & 0 & 5 & -5 & -5\\ 0 & -20 & 0 & 5 & -9 & -3\\ 0 & 0 & -20 & 5 & -3 & -11 \end{array}\right) \\
&\to \left(\begin{array}{rrr|rrr} 1 & 0 & 0 & \frac{1}{4} & -\frac{1}{4} & -\frac{1}{4}\\ 0 & 1 & 0 & -\frac{1}{4} & \frac{9}{20} & \frac{3}{20}\\ 0 & 0 & 1 & -\frac{1}{4} & \frac{3}{20} & \frac{11}{20} \end{array}\right)
\end{align*}$

So the inverse of your matrix is $\left(\begin{array}{rrr} \frac{1}{4} & -\frac{1}{4} & -\frac{1}{4}\\ -\frac{1}{4} & \frac{9}{20} & \frac{3}{20}\\ -\frac{1}{4} & \frac{3}{20} & \frac{11}{20} \end{array}\right).$
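A quick sanity check of this inverse in NumPy (a sketch; the variable names are my own): the product of the original matrix with the claimed inverse should be the $3 \times 3$ identity.

```python
import numpy as np

# The coefficient matrix A^T A from the question.
M = np.array([[18.0, 8.0, 6.0],
              [8.0, 6.0, 2.0],
              [6.0, 2.0, 4.0]])

# The inverse obtained above by row reduction.
M_inv = np.array([[ 1/4, -1/4,  -1/4],
                  [-1/4,  9/20,  3/20],
                  [-1/4,  3/20, 11/20]])

# M @ M_inv should be the identity matrix.
check = M @ M_inv
print(check)
```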

  • @J.M.: We learned two other techniques: one using multivariable calculus, where you use the first partial derivatives to minimize the sum of the squares, and another using the summations of $x^2$, $x$, $n$, $xy$, and $y$. But my professor specifically asked for this problem to be solved using the matrix method. (2011-04-24)