
The data are for the model $T(t) = T_{s} - (T_{s}-T_{0})e^{-\alpha t}$, where $T_0$ is the temperature measured at time 0, and $T_{s}$ is the temperature at time $t=\infty$, or the environment temperature. $T_{s}$ and $\alpha$ are parameters to be determined.

How can I fit my data to this model? I'm trying to solve for $T_{s}$ via $T_{s}=(T_{0}T_{2}-T_{1}^{2})/(T_{0}+T_{2}-2T_{1})$, where $T_{1}$ and $T_{2}$ are measurements at times $\Delta t$ and $2\Delta t$, respectively.

However, the resulting estimates vary wildly across the data set.
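For illustration, here is a minimal Python sketch of this three-point estimator on made-up data ($T_s=25$, $T_0=90$, $\alpha=0.05$ are arbitrary values chosen for the example), showing why the results scatter:

```python
import numpy as np

def three_point_Ts(T0, T1, T2):
    # T_s estimate from three equally spaced samples: (T_s - T_i) is a
    # geometric sequence, so (T_s - T1)^2 = (T_s - T0)(T_s - T2).
    return (T0 * T2 - T1**2) / (T0 + T2 - 2 * T1)

# Made-up cooling curve: T_s = 25, T_0 = 90, alpha = 0.05, noise sd = 0.2.
rng = np.random.default_rng(0)
t = np.arange(0, 300, 10.0)
T = 25 - (25 - 90) * np.exp(-0.05 * t) + rng.normal(0, 0.2, t.size)

# Apply the estimator to every consecutive triple of samples.  As the
# curve flattens, the denominator T0 + T2 - 2*T1 approaches zero and the
# measurement noise is amplified enormously -- hence the wild variation.
print(three_point_Ts(T[:-2], T[1:-1], T[2:]))
```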

Shall I try gradient descent for the parameters?

  • If you use gradient descent, what cost function are you going to minimize? (2012-11-15)
  • I think $T_s$ was actually supposed to be the temperature at time $t = \infty$ (at steady state). Perhaps there was an incorrect edit. (2012-11-15)
  • Yes, $t=\infty$, indeed. The environment is assumed to be a reservoir whose capacity is large enough to keep its temperature stable. (2012-11-15)

3 Answers


Gradient descent might be overkill.

For convenience, translate the temperature scale so that $T_0=0$ (i.e. work with $T-T_0$); the model becomes

$$T(t)=T_s(1-e^{-\alpha t}).$$

You want to minimize

$$E=\sum_i(T_i-T_s(1-e^{-\alpha t_i}))^2.$$

For a fixed value of $\alpha$, setting $\partial E/\partial T_{s}=0$ gives the least-squares estimate of $T_{s}$:

$$\hat T_s(\alpha)=\frac{\sum_iT_i(1-e^{-\alpha t_i})}{\sum_i(1-e^{-\alpha t_i})^2},$$

from which you deduce

$$\hat E(\alpha)=\sum_i(T_i-\hat T_s(1-e^{-\alpha t_i}))^2.$$

The optimal $\alpha$ is found by unidimensional optimization.
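For instance, a minimal Python sketch of this profiling approach (synthetic data with true values $T_s=65$, $\alpha=0.05$; `scipy.optimize.minimize_scalar` performs the one-dimensional search):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic data, already translated so that T(0) = 0.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 200.0, 50)
T = 65 * (1 - np.exp(-0.05 * t)) + rng.normal(0, 0.2, t.size)

def Ts_hat(alpha):
    # Least-squares estimate of T_s for a fixed alpha.
    g = 1 - np.exp(-alpha * t)
    return np.sum(T * g) / np.sum(g * g)

def E_hat(alpha):
    # Profiled sum of squared residuals: a function of alpha alone.
    g = 1 - np.exp(-alpha * t)
    return np.sum((T - Ts_hat(alpha) * g) ** 2)

res = minimize_scalar(E_hat, bounds=(1e-6, 1.0), method='bounded')
print(res.x, Ts_hat(res.x))  # should recover roughly 0.05 and 65
```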


This is a non-linear regression problem. Usually one solves it with an iterative procedure that starts from guessed values of the parameters; the Levenberg–Marquardt algorithm is commonly used.
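For example, SciPy's `curve_fit` uses Levenberg–Marquardt by default when no bounds are given; a sketch on synthetic data (the parameter values are made up), treating $T_0$ as measured so that only $T_s$ and $\alpha$ are fitted:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic cooling data; in practice t, T are your measurements.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 200.0, 50)
T = 25 - (25 - 90) * np.exp(-0.05 * t) + rng.normal(0, 0.2, t.size)
T0 = T[0]  # measured initial temperature

def model(t, Ts, alpha):
    return Ts - (Ts - T0) * np.exp(-alpha * t)

# Initial guess: Ts near the last sample, a few time constants over the
# record length for alpha.  method='lm' is Levenberg-Marquardt.
p0 = [T[-1], 3.0 / t[-1]]
(Ts, alpha), cov = curve_fit(model, t, T, p0=p0, method='lm')
print(Ts, alpha)  # roughly 25 and 0.05
```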

A non-conventional approach (not iterative, no initial guess required) is described in the paper: https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales

The case of the exponential model is treated on page 17.

The notation corresponds to: $x=t\quad;\quad y=T\quad;\quad a=T_s\quad;\quad b=-(T_s-T_0)\quad;\quad c=-\alpha$

The calculation is very simple (a copy of the relevant formulas and a numerical example appeared in the images below):

[Two images from the paper: the regression formulas for the exponential case and a worked numerical example.]
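In case the images are unavailable, here is my reading of the method on page 17, sketched in Python (the function name and the synthetic test are mine, not from the paper); it needs no initial guess:

```python
import numpy as np

def fit_exponential(x, y):
    # Fit y = a + b*exp(c*x) by the integral method of the paper.
    # x must be sorted in increasing order.
    S = np.zeros_like(y)
    S[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
    dx, dy = x - x[0], y - y[0]
    # First 2x2 linear system: its second unknown is the estimate of c.
    M1 = np.array([[np.sum(dx * dx), np.sum(dx * S)],
                   [np.sum(dx * S),  np.sum(S * S)]])
    v1 = np.array([np.sum(dx * dy), np.sum(S * dy)])
    c = np.linalg.solve(M1, v1)[1]
    # Second system: ordinary linear regression on exp(c*x) gives a, b.
    th = np.exp(c * x)
    M2 = np.array([[len(x), np.sum(th)],
                   [np.sum(th), np.sum(th * th)]])
    v2 = np.array([np.sum(y), np.sum(th * y)])
    a, b = np.linalg.solve(M2, v2)
    return a, b, c  # T_s = a, alpha = -c, T_0 = a + b

# Quick check on noisy synthetic data (T_s = 25, T_0 = 90, alpha = 0.05):
rng = np.random.default_rng(3)
t = np.linspace(0.0, 100.0, 40)
T = 25 - (25 - 90) * np.exp(-0.05 * t) + rng.normal(0, 0.2, t.size)
a, b, c = fit_exponential(t, T)
print(a, -c)  # roughly 25 and 0.05
```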


The model is nonlinear, but one of the parameters, $T_{s}$, enters linearly, which means we can 'remove' it.

Start with a crisp set of definitions: a set of $m$ measurements $\left\{ t_{k}, T_{k} \right\}_{k=1}^{m}$. The trial function, as pointed out by @Yves Daoust, is $$ T(t) = T_{s} \left( 1 - e^{-\alpha t}\right). $$ The $2$-norm minimum solution is defined as $$ \left( T_{s}, \alpha \right)_{LS} = \left\{ \left( T_{s}, \alpha \right) \in \mathbb{R}_{+}^{2} \colon r^{2} \left( T_{s}, \alpha \right) = \sum_{k=1}^{m} \left( T_{k} - T(t_{k}) \right)^{2} \text{ is minimized} \right\}. $$

The minimization criterion $$ \frac{\partial} {\partial T_{s}} r^{2} = 0 $$ leads to $$ T_{s}^{*}(\alpha) = \frac{\sum T_{k} \left( 1 - e^{-\alpha t_{k}} \right)} {\sum \left( 1 - e^{-\alpha t_{k}} \right)^{2}}. $$

Now the total error can be written in terms of the remaining parameter $\alpha$: $$ r^{2}\left( T_{s}^{*}, \alpha \right) = r_{*}^{2} ( \alpha ) = \sum_{k=1}^{m} \left( T_{k} - T_{s}^{*}(\alpha) \left( 1 - e^{-\alpha t_{k}} \right) \right)^{2}. $$

This function is an absolute joy to minimize. It decreases monotonically to the lone minimum, then increases monotonically.
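Because of this unimodality, a derivative-free bracketing method is guaranteed to find the minimum. A minimal golden-section sketch (my own, not from the answer), applicable to $r_{*}^{2}$ or to `E_hat` from the first answer's sketch:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    # Minimize a unimodal function f on [a, b] without derivatives.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:            # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                  # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# e.g. alpha = golden_section_min(E_hat, 1e-6, 1.0)
```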