3

I am a complete newbie when it comes to advanced mathematics, and am trying to learn calculus on my own. I wanted to know - is it possible to calculate the first derivative if you don't know the function that created a curve, but you DO have all of the points along the curve?

Edit: I created the curve using a cubic Spline interpolation

If so, can you point me to a place where I can learn how this would be accomplished?

Thanks!!

  • 0
    If you used (cubic) splines to define the curve then you _do have_ (the expression defining) the function. (2017-01-12)

4 Answers

2

If you have the curve, then geometrically, that is all you need to find a derivative value at a given point. You could estimate the direction of the tangent line at a given $x=a$. The slope of that tangent line is the value of $f'(a)$.

If you have a table of values, say you know $f(2.9), f(3), f(3.1)$, etc., but have no information about intermediate points such as $f(3.05)$, then you can still estimate $f'(3)$ by calculating the average rate of change over the smallest interval available in the data. For example, $f'(3) \approx \frac{f(3.1) - f(3)}{0.1}$ or $f'(3) \approx \frac{f(3) - f(2.9)}{0.1}$. A better estimate can often be had by averaging those two, which gives $f'(3) \approx \frac{f(3.1) - f(2.9)}{0.2}$.
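These difference quotients are easy to compute directly. Here's a minimal Python sketch using a hypothetical table of values (generated from $f(x) = x^2$, whose exact derivative at $3$ is $6$, so the estimates can be judged):

```python
# Hypothetical table of values; in practice these come from your data.
# Here we generate them from f(x) = x**2 so the estimates can be
# checked against the exact derivative f'(3) = 6.
xs = [2.9, 3.0, 3.1]
ys = [x * x for x in xs]

h = xs[1] - xs[0]  # spacing between tabulated points (0.1)

forward = (ys[2] - ys[1]) / h        # uses f(3) and f(3.1)
backward = (ys[1] - ys[0]) / h       # uses f(2.9) and f(3)
central = (ys[2] - ys[0]) / (2 * h)  # average of the two estimates

print(forward, backward, central)  # roughly 6.1, 5.9, 6.0
```

For smooth functions the central difference is usually the most accurate of the three, because the leading error terms of the forward and backward estimates cancel when averaged.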

Hope this helps!

1

Look into polynomial interpolation. This method gives a polynomial that passes through the required points; differentiate that polynomial to get the derivative you want.

EDIT: Indeed you may differentiate your spline polynomials to get the derivative you want.
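For instance, each segment of a cubic spline is a polynomial of the form $s(x) = a + b(x - x_i) + c(x - x_i)^2 + d(x - x_i)^3$ on $[x_i, x_{i+1}]$, so its derivative is simply $s'(x) = b + 2c(x - x_i) + 3d(x - x_i)^2$. A small Python sketch (the $(a, b, c, d)$ storage convention is an assumption; check how your particular spline implementation stores its coefficients):

```python
def spline_segment_derivative(coeffs, xi, x):
    """Derivative of one cubic-spline segment at x.

    Assumes the segment is stored as
        s(x) = a + b*(x - xi) + c*(x - xi)**2 + d*(x - xi)**3,
    a common convention -- but verify against your own spline code.
    """
    a, b, c, d = coeffs
    t = x - xi
    return b + 2 * c * t + 3 * d * t * t

# Example segment s(x) = 1 + 2(x - 0) - (x - 0)^2 + 0.5(x - 0)^3:
# s'(0.5) = 2 + 2*(-1)*0.5 + 3*0.5*0.25 = 1.375
print(spline_segment_derivative((1.0, 2.0, -1.0, 0.5), 0.0, 0.5))  # 1.375
```

To evaluate the derivative of the whole spline at some $x$, first find the segment whose interval contains $x$, then apply this to that segment's coefficients.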

  • 0
    @TheNerd, "I actually created this curve using a cubic spline formula" - you should have said that to begin with; of course you can differentiate the cubic polynomials that comprise your spline... (2012-08-22)
0

One way I know requires the assumption that the function is nice enough, i.e., analytic on an interval where you have values available. Then you can write the Taylor series expansion of the function around the point at which you are trying to approximate the derivative. Evaluate this expansion at the given points, then find the linear combination of those values that best approximates the first derivative.

More generally, let's assume we have values at points $x_1 < \ldots < x_n$ which are not very far from each other, and we want to approximate $f^{(k)}(x_0)$ where $x_0$ is inside the interval $(x_1, x_n)$. The Taylor expansion of $f$ around $x_0$ is

$f(x) = \sum_{i=0}^\infty f^{(i)}(x_0) \frac{(x - x_0)^i}{i!}.$

Evaluate at $x_1, \ldots, x_n$ to get $n$ equations:

$f(x_k) = \sum_{i=0}^\infty f^{(i)}(x_0)\frac{(x_k - x_0)^i}{i!}, \quad k = 1, 2, \ldots, n.$

Let $h = \max_k\left|x_k - x_0\right|$ (or use the rougher estimate $h = x_n - x_1$). The above equation can then be written as

$f(x_k) = \sum_{i=0}^{n - 1} f^{(i)}(x_0)\frac{(x_k - x_0)^i}{i!} + O(h^n), \quad k = 1, 2, \ldots, n.$

Ignoring the $O(h^n)$, we get a system of linear equations:

$f(x_k) = \sum_{i=0}^{n - 1} \hat f^{(i)}(x_0)\frac{(x_k - x_0)^i}{i!}, \quad k = 1, 2, \ldots, n$

where $\hat f$ is an approximation of $f$. You can write this as a matrix-vector equation

$ \begin{pmatrix} 1 & \frac{x_1 - x_0}{1!} & \frac{(x_1 - x_0)^2}{2!} & \ldots & \frac{(x_1 - x_0)^{n-1}}{(n-1)!} \\ 1 & \frac{x_2 - x_0}{1!} & \frac{(x_2 - x_0)^2}{2!} & \ldots & \frac{(x_2 - x_0)^{n-1}}{(n-1)!} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \frac{x_n - x_0}{1!} & \frac{(x_n - x_0)^2}{2!} & \ldots & \frac{(x_n - x_0)^{n-1}}{(n-1)!} \\ \end{pmatrix} \begin{pmatrix} \hat f(x_0) \\ \hat f'(x_0) \\ \vdots \\ \hat f^{(n-1)}(x_0) \end{pmatrix} = \begin{pmatrix} f(x_1) \\ f(x_2) \\ \vdots \\ f(x_n) \end{pmatrix}. $

Inverting the matrix on the left gives you formulas for approximating $f^{(k)}$ that are at least order $n - k$ accurate.
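As a concrete sketch (Python with NumPy), the weights for approximating $f^{(k)}(x_0)$ fall out of this system directly: row $k$ of the inverse matrix gives the coefficients to apply to the values $f(x_1), \ldots, f(x_n)$. With three equally spaced points this recovers the classic central-difference stencils:

```python
import numpy as np
from math import factorial

def fd_weights(xs, x0, k):
    """Weights w such that f^(k)(x0) ~= sum_j w[j] * f(xs[j]),
    obtained by inverting the Taylor-expansion matrix above."""
    n = len(xs)
    A = np.array([[(x - x0) ** i / factorial(i) for i in range(n)]
                  for x in xs])
    # fhat = A^{-1} f, so row k of A^{-1} picks out the k-th entry
    # of (f(x0), f'(x0), ..., f^(n-1)(x0)).
    return np.linalg.inv(A)[k]

# Three points with spacing h = 0.1 around x0 = 0:
w1 = fd_weights([-0.1, 0.0, 0.1], 0.0, 1)  # ~[-5, 0, 5]: (f(x+h) - f(x-h)) / (2h)
w2 = fd_weights([-0.1, 0.0, 0.1], 0.0, 2)  # ~[100, -200, 100]: (f(x-h) - 2f(x) + f(x+h)) / h^2
print(w1, w2)
```

The same routine works for unevenly spaced points, which is where this matrix formulation really pays off over memorized stencil formulas.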

I think $\hat f$ is actually the Lagrange polynomial that passes through all the given points if you impose $\hat f^{(k)} = 0$ for $k \ge n$. However, the matrix-vector equation is much easier to use (at least in my opinion).

0

If you know all of the points along the curve then you have what is called a parametrisation. For example, the parabola $y = x^2$ can be parametrised by $t \mapsto (t,t^2)$, meaning that for a fixed $t$, say $t=2$, you get the point on the curve $(2,2^2) = (2,4)$. Likewise, the unit circle, with equation $x^2 + y^2 = 1$, can be parametrised by $t \mapsto (\cos t, \sin t)$, meaning that for each value of $t$, say $t = 0$, you get the point on the curve $(\cos 0, \sin 0) = (1,0)$.

Let's say your parametrisation is given by $t \mapsto (x(t),y(t))$, where $x$ and $y$ are just two functions. When you mention the "first derivative", I assume you mean $dy/dx$. Using something called the "chain rule", we have:

$\frac{dy}{dx} = \frac{dy}{dt}\div\frac{dx}{dt} = \frac{dy}{dt} \times \frac{dt}{dx}.$

In the case of the circle, for $\sin t \neq 0$, we have:

$\frac{dy}{dx} = (\cos t) \times \left(\frac{1}{-\sin t}\right) = -\cot t.$
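As a quick numerical check of this formula, here is a minimal Python sketch:

```python
from math import cos, sin, tan, pi

def dy_dx(t):
    """Slope of the unit circle (cos t, sin t) via the chain rule:
    dy/dx = (dy/dt) / (dx/dt) = cos(t) / (-sin(t)) = -cot(t)."""
    return cos(t) / (-sin(t))

# At t = pi/4 the point is (sqrt(2)/2, sqrt(2)/2), and the tangent
# slope should be -cot(pi/4) = -1.
print(dy_dx(pi / 4))       # approximately -1.0
print(-1.0 / tan(pi / 4))  # same value, written as -cot(t)
```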