I'm watching an MIT lecture on linear approximations: https://youtu.be/BSAA0akmPEU?t=28m30s.

The lecturer states that if a quadratic term arises while computing a linear approximation, we should drop it. The justification he gave was that, since we've been excluding quadratic terms all along, we should also drop any that appear during the computation. However, I don't find this answer satisfying or substantive.
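
To make concrete what "a quadratic term arising during the computation" looks like (a sketch of my own, not necessarily the lecture's example): if we approximate a product of two functions near $x = 0$ by multiplying their individual linear approximations, a quadratic cross term appears:

$$(1 + ax)(1 + bx) = 1 + (a + b)x + ab\,x^2 \approx 1 + (a + b)x,$$

and the $ab\,x^2$ term is the one being dropped to keep the result linear.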

Also, wouldn't the quadratic term improve our approximation?

I'd greatly appreciate it if someone could explain the reasoning behind why we drop quadratic terms (if they arise) during the computation of linear approximations.

Thank you.

  • Yes, a quadratic term improves your approximation (if you're doing the approximating sensibly), but then it's not a linear one any more, so you lose the nice things about linear functions. You can add as many terms as you like; hell, you can go all the way to the full Taylor series (if it has one) to get a perfect approximation, but that's not what "linear" means. (A worked comparison follows these comments.) (2017-01-10)
  • @AdamHughes I see. What benefits do you lose by including the quadratic term? (2017-01-10)
  • Anything depending on the linearity? Ease of computing things like intercepts and other function properties, but the **entire** field of linear algebra is the biggest one, I would say. (2017-01-10)
  • @AdamHughes OK, so you lose the simplicity that made linear approximations useful in the first place; this makes sense. Thank you. (2017-01-10)
  • My pleasure, glad I could help! (2017-01-10)
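
To quantify the trade-off described in the comments above, here is a small sketch (my own numbers, not from the lecture) comparing the linear and quadratic approximations of $e^x$ near $x = 0$:

$$e^x \approx 1 + x \quad\text{(linear)}, \qquad e^x \approx 1 + x + \tfrac{x^2}{2} \quad\text{(quadratic)}.$$

At $x = 0.1$, the true value is $e^{0.1} \approx 1.10517$; the linear approximation gives $1.1$ (error about $5 \times 10^{-3}$), while the quadratic gives $1.105$ (error about $1.7 \times 10^{-4}$). So keeping the quadratic term does improve accuracy, but the result is no longer a linear function, which is exactly the simplicity being traded away.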
