2

I am looking for a method to solve (if it is possible) the following problem.

Find the best increasing function $f: \mathbb{R} \rightarrow \mathbb{R}$ fitting the conditions

$$f(x_i)^2 - f(x_j)^2 = c_{ij}, \tag{1}$$ in terms of the $\ell^2$ norm. Assume that $x_i, x_j, c_{ij}$, with $i,j \in \Omega \subset \mathbb{N}$, are known.

A candidate tool to tackle this is isotonic regression, but that requires knowing the values of $f(x_i), ~\forall i \in \Omega$, which is not the case here. Moreover, the method would have to be adapted to take the constraints into account.

I have also tried to rewrite the problem as a functional optimization problem by assuming a differentiable function $f$ with $f'(x) \geq 0$, and then applying a tool such as the Euler-Lagrange equation, but I got stuck in the formulation of the problem.

Instead of $(1)$, the problem can be generalized to arbitrary conditions $$g_n\left(f(x_i),f(x_j), f(x_k),\ldots\right) = 0,\qquad n=1,\ldots,N,$$ and I am curious to know what kind of mathematical tools can be employed for such problems.

I appreciate any help. Thanks in advance!

  • 1
    So the set of values for the $c_{ij}$ is considered as given? It may be that there are choices of these $c_{ij}$ for which no such $f$ exists. – 2017-02-06
  • 0
    This is a very interesting problem. For the sake of clarity: you start with a set of $n$ values, each corresponding to a pair of integers $i,j$, forming our set of $c_{i,j}$. Before thinking about their fitting function: can the values of $i$ or $j$ coincide across different $c_{i,j}$ terms? Also, considering the range of $i,j$, do the $c_{i,j}$ terms cover all possible pairs of $i,j$, or only some of them? For example, could we have an initial configuration of $n=5$ values, labeled $c_{1,2}$, $c_{3,4}$, $c_{4,5}$, $c_{1,3}$, $c_{2,6}$? – 2017-02-07
  • 0
    @Anatoly: sorry, but I do not agree; this is not an interesting problem. It is just a minor twist on the classical problem of finding the best approximation in the $\ell^2$ norm, especially since the OP did not really mention what *best* (approximation) means here. – 2017-02-07
  • 0
    I am not talking about any absolute truth (is there any?), just giving my opinion. I doubt this question can really be useful to anyone, so I used my voting power and gave it a (-1), that is all. – 2017-02-08
  • 0
    I am not claiming any jedi power; "not useful" (in my opinion) is one of the possible reasons for downvoting a question (try to put your mouse over the down arrow). No offense intended. – 2017-02-08
  • 0
    @AlexSilva Can you edit your question to show what you mean by best? – 2017-02-09
  • 0
    So minimizing $\sum_{ij} \left( f(x_i)^2 - f(x_j)^2 - c_{ij} \right)^2$? – 2017-02-09

1 Answer

3

We might as well assume the $x_i$ are increasing; otherwise we can relabel them so that they are. Since you only care about the value of $f$ at these points, we might as well define $f$ to be linear between the $x_i$. If we start counting $i$ from $1$, we have defined $f$ by $f(x_1)$ and the slopes $m_i=\frac {f(x_{i+1})-f(x_i)}{x_{i+1}-x_i}$.

You now have a constrained multidimensional optimization problem with error function $\sum (c_{ij}-(f(x_i)^2-f(x_j)^2))^2$, where the constraints are $m_i \ge 0$. You can feed it to your favorite minimizer. It seems the only special feature is choosing where $f$ crosses $0$, which might induce some strange behavior in your error function. If that seems to be a problem, you could remove the constraints and instead add penalty terms $Am_i^2$ whenever $m_i \lt 0$. If you make $A$ large, this has a similar effect to the constraints, but everything remains differentiable. You will then find where $f(x)$ should cross zero, and can then impose that and refit.
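A minimal sketch of this parameterization, using made-up data and `scipy.optimize.minimize` with bound constraints standing in for "your favorite minimizer" (the data, starting point, and variable names here are all illustrative assumptions, not part of the original problem):

```python
# Parameterize f by f(x_1) and the nonnegative slopes m_i of a
# piecewise-linear interpolant, then minimize the squared residuals
# of the pairwise constraints c_ij = f(x_i)^2 - f(x_j)^2.
import numpy as np
from scipy.optimize import minimize

x = np.array([0.0, 1.0, 2.0, 3.0])        # known abscissas, assumed sorted
pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]  # index pairs (i, j) with known c_ij
c = np.array([-3.0, -8.0, -12.0, -7.0])   # made-up, consistent with f(x) = x + 1

def f_values(params):
    """Recover f(x_1), ..., f(x_n) from (f(x_1), m_1, ..., m_{n-1})."""
    f1, m = params[0], params[1:]
    return f1 + np.concatenate(([0.0], np.cumsum(m * np.diff(x))))

def error(params):
    fv = f_values(params)
    resid = [fv[i] ** 2 - fv[j] ** 2 - cij for (i, j), cij in zip(pairs, c)]
    return np.sum(np.square(resid))

# f(x_1) is free; each slope is constrained to be nonnegative,
# which guarantees the fitted f is nondecreasing.
bounds = [(None, None)] + [(0.0, None)] * (len(x) - 1)
x0 = np.array([0.8, 1.2, 0.9, 1.1])       # arbitrary feasible starting point
res = minimize(error, x0, bounds=bounds)
fv = f_values(res.x)                       # fitted values f(x_1), ..., f(x_n)
```

With data this clean the residual drops to essentially zero; for inconsistent $c_{ij}$ the same setup simply returns the least-squares compromise.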

  • 0
    Such routines are in any numerical analysis text. I like the discussion in [Numerical Recipes](http://numerical.recipes/); obsolete versions are free online. I agree it is not clear that $f$ needs to cross zero. It just seems that this gives some more freedom, so it may allow a better fit. The minimizers do well when the error is a nice continuous function of the parameters, and this seems like it may be a pitfall. – 2017-02-07
  • 0
    Chapter 10 of Numerical Recipes has a nice discussion of function minimization. Code is available in Fortran and C, ready to download. – 2017-02-07
  • 0
    I do not see anything wrong in Ross Millikan's approach (so (+1) back). By setting $y_i=f(x_i)^2$ this turns into a simple minimization problem, with the extra constraint that $f$ has to be increasing. It is not very interesting to solve this problem by hand, and plenty of dedicated software packages can solve it really fast. Once one gets the optimal $y_i$s, one also gets many fitting functions by polynomial interpolation. – 2017-02-07
  • 0
    Nothing in the problem depends on the values of $f$ between the $x_i$, except that $f$ has to be increasing. In my original answer I suggested linear interpolation between the points. That gives the $m_i$ as the variables being fitted, and making sure each is nonnegative guarantees that $f$ is increasing. – 2017-02-08
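The substitution $y_i = f(x_i)^2$ suggested in the comments can also be sketched directly: each constraint becomes the linear equation $y_i - y_j = c_{ij}$, and writing $y_{k+1} = y_k + d_k$ with $d_k \ge 0$ (which makes $y$, and hence a nonnegative $f = \sqrt{y}$, increasing) turns the fit into a nonnegative least-squares problem. The data here are made up, and $y_1$ is a free constant since it cancels from every difference:

```python
# Linear reformulation: y_i - y_j = -(d_i + ... + d_{j-1}) for i < j,
# so the fit is a nonnegative least-squares problem A @ d ≈ c.
import numpy as np
from scipy.optimize import nnls

n = 4
pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]  # index pairs (i, j) with i < j
c = np.array([-3.0, -8.0, -12.0, -7.0])   # made-up, consistent with y = (1, 4, 9, 16)

# Row for pair (i, j) sums the increments d_i, ..., d_{j-1} with sign -1.
A = np.zeros((len(pairs), n - 1))
for row, (i, j) in enumerate(pairs):
    A[row, i:j] = -1.0

d, resid = nnls(A, c)                      # nonnegative increments, residual norm
y = np.concatenate(([0.0], np.cumsum(d)))  # y_i up to the free constant y_1
```

This handles the monotonicity constraint exactly, at the cost of assuming $f \ge 0$ so that $f = \sqrt{y}$ is well defined.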