
Let $P_0,\ldots, P_k\in \mathbb{R}^2$ be a set of points. Furthermore, let $\epsilon > 0$.

Now I am trying to find non-trivial lower and upper bounds for

$$ \sum_{i=1}^k ||Q_i-Q_{i-1}||^2\quad \text{subject to}\quad ||P_i-Q_i||\leq \epsilon\quad\text{for all} \quad i=0,\ldots,k, $$

where $Q_0,\ldots,Q_k\in\mathbb{R}^2$ range over all admissible positions.


I know there is the trivial bound $$ \sum_{i=1}^k \max\left(||P_i-P_{i-1}||-2\epsilon,\,0\right)^2\leq\sum_{i=1}^k ||Q_i-Q_{i-1}||^2\leq \sum_{i=1}^k \left(||P_i-P_{i-1}||+2\epsilon\right)^2 $$ (using the triangle inequality; the terms of the lower bound must be clamped to zero, since $Q_{i-1}$ and $Q_i$ may coincide whenever $||P_i-P_{i-1}||\leq 2\epsilon$).
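These trivial bounds are at least cheap to evaluate. A minimal sketch, assuming Python with NumPy (the helper name `trivial_bounds` is mine):

```python
import numpy as np

def trivial_bounds(P, eps):
    """Trivial bounds on sum_i ||Q_i - Q_{i-1}||^2 via the triangle
    inequality, for points P of shape (k+1, 2) and tolerance eps."""
    P = np.asarray(P, dtype=float)
    d = np.linalg.norm(np.diff(P, axis=0), axis=1)     # ||P_i - P_{i-1}||
    lower = np.sum(np.maximum(d - 2 * eps, 0.0) ** 2)  # clamped at zero
    upper = np.sum((d + 2 * eps) ** 2)
    return lower, upper
```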

However, for most configurations of the $P_i$, these bounds are rather weak. Do you know a better approach for getting a tight bound on that sum? Of course I could always use some numerical optimization, but then I would have to trouble myself with bracketing the minimum and the maximum, which I would rather avoid. I would much prefer a clean geometric or analytical solution.

Thanks for your thoughts.

P.S.: I am having some trouble tagging this question with the right field, as it seems so elementary. You are welcome to set better tags than I did...


Edit 1

Two thoughts:

  • You can show that either $P_i=Q_i$ or $||P_i-Q_i||=\epsilon$ holds when minimizing or maximizing the sum of squares.
  • You can write the problem as a quadratically constrained quadratic program (QCQP). However, this seems like a lot of trouble for such a small problem... (a minimal numerical sketch of this route follows below).
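For what it's worth, here is a minimal sketch of that numerical route, assuming Python with NumPy/SciPy; the helper name `numeric_bounds`, the SLSQP method, and the starting point are my choices, and a local solver certifies nothing, especially for the maximum:

```python
import numpy as np
from scipy.optimize import minimize

def numeric_bounds(P, eps):
    """Numerically estimate min and max of sum_i ||Q_i - Q_{i-1}||^2
    subject to ||P_i - Q_i|| <= eps, starting from Q = P.
    SLSQP is a local solver, so the result is not a certified bound."""
    P = np.asarray(P, dtype=float)
    n = len(P)  # n = k + 1 points

    def objective(x, sign):
        Q = x.reshape(n, 2)
        return sign * np.sum(np.sum(np.diff(Q, axis=0) ** 2, axis=1))

    # One disk constraint per point: eps^2 - ||P_i - Q_i||^2 >= 0.
    cons = [{"type": "ineq",
             "fun": lambda x, i=i: eps ** 2 - np.sum((x.reshape(n, 2)[i] - P[i]) ** 2)}
            for i in range(n)]

    lo = minimize(objective, P.ravel(), args=(+1.0,),
                  constraints=cons, method="SLSQP")
    hi = minimize(objective, P.ravel(), args=(-1.0,),
                  constraints=cons, method="SLSQP")
    return lo.fun, -hi.fun
```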
  • Hey, welcome to StackExchange. I think the tags fit the question well. As a lower/upper bound you could use the (rather trivial) optimal solution when you minimize $\sum_{i=1}^k ||Q_i-Q_{i-1}||$ with the same boundary conditions, but maybe someone here can find a better bound. (2011-06-30)
  • Well, I honestly do not see that minimizing $S_2:=\sum_{i=1}^k||Q_i-Q_{i-1}||$ is less complicated than minimizing $S_1:=\sum_{i=1}^k||Q_i-Q_{i-1}||^2$. Furthermore, I only understand $S_1\leq S_2$, meaning this will only give me an upper bound, but no lower bound... Am I missing something? (2011-06-30)
  • Maybe you want to draw an image of the situation; then you will see how to choose the $Q_i$ such that $S_2$ is minimized. The bounds are given if you use the same $Q_i$ in the formula for $S_1$. Then the optimal $Q_i$'s for the maximal and minimal sum will make $S_1$ even bigger/smaller. (2011-06-30)
  • I still have problems following your line of thought. For the first argument: of course I have a conjecture from looking at different images of the situation. However, I have learned the hard way not to trust images; it is too easy to overlook something. Thus I am looking for a *proof* of the bounds - and even with a clear idea of what I expect, I do not seem to be able to write down a formal argument. I am open to suggestions, though. (2011-06-30)
  • For your second argument: suppose all the $P_i$ are on one line. Then the minimum of $S_2$ clearly requires that the $Q_i$ also lie on that line, and that minimum is totally independent of the positions of $Q_1,\ldots, Q_{k-1}$. However, minimizing $S_1$ is certainly not independent of those, as one would always try to reduce the variation of the summands. Therefore, your second argument without further additions is not valid in this case. (2011-06-30)
  • I don't think the minimum of $S_2$ is independent of the position of $Q_1,\ldots,Q_{k-1}$; maybe I did not understand the question. Your situation forces the $Q_i$ to all be equal to $P_0$, but this also minimizes $S_1$, because the variation of the summands will be $0$ too. My idea was just not to minimize the sum of the squares of the variations but the sum of the variations, which is easier. (2011-06-30)
  • [Sorry, I hit the *move to chat* link by mistake.] Certainly, $Q_i=P_0$ would be a solution in the case where $||P_i-P_0||\leq \epsilon$ for all $i$. However, this may (and in my application will) not be true all the time. Under this assumption your reasoning is sound; without that assumption, however, I fear it is not. (2011-06-30)
  • Now I understand what you mean, and I agree, but it's just a heuristic that gives you a bound. Clearly it is not optimal. (2011-06-30)
  • Have you tried applying Lagrange multipliers and seeing if it gets you anywhere? (2011-06-30)
  • @Rahul: I did. However, as far as I can see, I get so many critical points that eliminating them would be worth another question... (2011-07-01)

1 Answer


The first thing you could do is bound pairs of adjacent edges, instead of just the edges one by one.

That is, let $d_i = \lVert Q_{i-1} - Q_i\rVert^2 + \lVert Q_i - Q_{i+1}\rVert^2$, so that your objective function is precisely $\frac{1}{2}\big( \lVert Q_0 - Q_1\rVert^2 + \sum_{i=1}^{k-1} d_i + \lVert Q_{k-1} - Q_k\rVert^2 \big)$. It should be possible to find good bounds on $d_i$ in terms of the positions $P_{i-1}$, $P_i$, and $P_{i+1}$. I'll come back and fill this out when I have time.
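For completeness, the bookkeeping behind this decomposition: each interior edge $\lVert Q_{i-1} - Q_i\rVert^2$ (for $2 \le i \le k-1$) appears in exactly two of the $d_i$, while the two end edges appear only once, so

$$ \sum_{i=1}^{k-1} d_i = 2\sum_{i=1}^{k} \lVert Q_{i-1} - Q_i\rVert^2 - \lVert Q_0 - Q_1\rVert^2 - \lVert Q_{k-1} - Q_k\rVert^2, $$

and adding the two end terms back and halving recovers $\sum_{i=1}^{k} \lVert Q_{i-1} - Q_i\rVert^2$.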

By the way, it's not true that either $P_i = Q_i$ or $\lVert P_i - Q_i\rVert = \epsilon$ at the optimum. For $k = 2$ with $\epsilon \le 1$, let $P_0 = (-1, 0)$, $P_2 = (1,0)$, and choose any $P_1$ at a distance strictly between $0$ and $\epsilon$ from the origin. The minimum is always attained at $Q_0 = (-1+\epsilon,0)$, $Q_1 = (0,0)$, $Q_2 = (1-\epsilon,0)$, so $0 < \lVert P_1 - Q_1\rVert < \epsilon$. For the maximum, on the other hand, I think it's true that $\lVert P_i - Q_i\rVert = \epsilon$, because you're maximizing a function that's convex with respect to each $Q_i$, and a convex function attains its maximum over a disk on the boundary.
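A quick numerical sanity check of this counterexample, as a standalone sketch assuming Python with NumPy/SciPy; $\epsilon = 0.5$ and $P_1 = (0, 0.3)$ are arbitrary illustrative choices, and SLSQP only finds a local optimum:

```python
import numpy as np
from scipy.optimize import minimize

# Check the k = 2 counterexample: at the minimizer, Q_1 satisfies
# neither P_1 = Q_1 nor ||P_1 - Q_1|| = eps.
eps = 0.5                                             # illustrative choice
P = np.array([[-1.0, 0.0], [0.0, 0.3], [1.0, 0.0]])   # P_1 inside the eps-disk

obj = lambda x: np.sum(np.sum(np.diff(x.reshape(3, 2), axis=0) ** 2, axis=1))
cons = [{"type": "ineq",
         "fun": lambda x, i=i: eps ** 2 - np.sum((x.reshape(3, 2)[i] - P[i]) ** 2)}
        for i in range(3)]

res = minimize(obj, P.ravel(), constraints=cons, method="SLSQP")
Q = res.x.reshape(3, 2)
print(Q)                            # expect roughly (-0.5, 0), (0, 0), (0.5, 0)
print(np.linalg.norm(P[1] - Q[1]))  # expect ~0.3: strictly between 0 and eps
```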