
Given this function and an initial point, find the next root:

$$ \begin{align} f(t) = {} & -L \\ & + A \sin(\Theta_1 + \omega_1 t) \\ & + B \cos(\Theta_1 + \omega_1 t) \\ & - (P_x + V_x t) \sin(\Theta_2 + \omega_2 t) \\ & + (P_y + V_y t) \cos(\Theta_2 + \omega_2 t) \end{align} $$

(where everything is known except for $t$).

Starting at $t_0$, I'm trying to find the "next" root $t_1$; that is, $t_1$ is the only root in the interval $(t_0, t_1]$. $t_0$ can be a previous root, but it doesn't have to be.

This is for an automated system (collision detection; the equation above describes the motion of a point relative to a line), so I can't eyeball the function or manually give hints, and not finding an answer would be very bad. I also don't want to miss any roots, so I don't want to seed random guesses and build brackets from them: a bracket only appears if two guesses happen to straddle an odd-multiplicity root, and even-multiplicity roots can't be caught that way at all. I can, however, make the simplifying assumption that if two roots are within some epsilon of each other, they count as the same root as far as I'm concerned.

Here's the algorithm I have right now:

I define a "window of interest": a min and max $t$ value between which I want to find roots. There might be many roots inside this window or none; it's just a way of slicing the function into something more manageable. Within that window I find a maximum value for the second derivative. Then I use that bound, an evaluation of the first derivative, and an evaluation of the function itself at the current $t$ to construct a quadratic bound that underestimates the distance to the next root: a "safe" step forward, guaranteed not to skip over a root. I advance my iterate by that amount, and rinse and repeat (this is called "conservative advancement" in the literature I have). If I get stuck because the function and the first derivative are both 0, I step forward by an epsilon and try again. Once I get "close enough" to a zero (again, to some epsilon), I switch to Newton's method to refine the root.
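
To make the second-derivative bound concrete: differentiating each term twice and bounding the sines and cosines by 1, one conservative bound over a window $[a, b]$ (writing $R_x = \max_{t \in [a,b]} |P_x + V_x t|$ and similarly $R_y$) is

$$ \max_{[a,b]} |f''| \le (|A| + |B|)\,\omega_1^2 + 2(|V_x| + |V_y|)\,|\omega_2| + (R_x + R_y)\,\omega_2^2. $$

And here's a minimal Python sketch of the loop itself, just to pin down what I mean (names like `safe_step` are illustrative, not a definitive implementation; `f` and `df` are callables for $f$ and $f'$, and `M` is the bound above). The safe step is the positive root of $\tfrac{1}{2} M h^2 + |f'(t)|\,h - |f(t)| = 0$: by the Taylor remainder bound, any root of $f$ must be at least that far away.

```python
import math

def safe_step(f_t, df_t, M, eps=1e-12):
    """Largest h such that f cannot reach zero on [t, t+h] given |f''| <= M:
    the positive root of (M/2) h^2 + |f'(t)| h - |f(t)| = 0."""
    a, b, c = 0.5 * M, abs(df_t), abs(f_t)
    if a <= eps:                                   # nearly linear case
        return c / b if b > eps else eps
    h = (-b + math.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)
    return max(h, eps)                             # epsilon floor: never stall

def next_root(f, df, M, t0, t_max, tol=1e-9, eps=1e-12):
    """Conservative advancement from t0, then a Newton polish near a zero."""
    t = t0 + eps                                   # step off t0 in case it's a root
    while t <= t_max:
        ft = f(t)
        if abs(ft) < tol:                          # close enough: switch to Newton
            for _ in range(50):
                dft = df(t)
                if abs(dft) < eps:                 # derivative vanished; accept t
                    break
                step = ft / dft
                t -= step
                ft = f(t)
                if abs(step) < tol:
                    break
            return t
        t += safe_step(ft, df(t), M, eps)
    return None                                    # no root in the window
```

Each advance costs one evaluation of $f$ and $f'$, and the epsilon floor is exactly the "step forward by an epsilon" fallback described above.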

My Question:

The whole algorithm above works okay, but it's neither very performant nor very clever. The conservative advancement, especially when I have to resort to walking forward by an epsilon, is really, really slow (it can take hundreds of function evaluations to reach a point where I can switch over to Newton's method). And when I do switch over, I have no real way of checking whether Newton's method will converge to the root I'm interested in.

Are there any cleverer techniques I could use instead of what I'm doing? The function itself is a sum of sinusoids, so I thought I could do something clever with the frequencies of the individual sinusoids to bracket at least the local minima and maxima, and from those maybe the basins of attraction for Newton's method, but in the end I couldn't figure out anything that was actually useful. Any ideas, or literature people could point me to, would be great.

Update:

Thinking about it a bit more, I feel like it should be possible to split the function into intervals using the angular velocities (the $\omega$ terms), such that each interval contains no more than one local extremum (some sort of pigeonhole-principle idea). If an interval has an even-multiplicity root, it is the only root in that interval and we know it's the local extremum. Odd-multiplicity roots might be more easily bracketed in this context, too (I think; I haven't really worked it out yet).
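
To experiment with that, here's a rough Python sketch (the grid spacing $\pi / (k \max(|\omega_1|, |\omega_2|))$ with fudge factor $k$ is a heuristic guess, not a proven pigeonhole bound): sample $f'$ on a grid that's fine relative to the fastest angular velocity, and bracket the extrema of $f$ wherever $f'$ changes sign.

```python
import numpy as np

def bracket_extrema(df, t_min, t_max, w1, w2, k=4):
    """Bracket zeros of f' (extrema of f) by sampling on a grid whose spacing
    is a fraction of the shortest sinusoid period; k is a heuristic margin."""
    dt = np.pi / (k * max(abs(w1), abs(w2)))
    ts = np.arange(t_min, t_max + dt, dt)
    vals = np.array([df(t) for t in ts])
    flips = np.nonzero(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    return [(ts[i], ts[i + 1]) for i in flips]     # each pair brackets f' = 0
```

Refining each bracket (bisection on $f'$) splits the window into monotone pieces: a sign change of $f$ across a piece then gives a guaranteed bracket for a simple root, and an extremum where $|f|$ is within epsilon of zero flags a candidate even-multiplicity root.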

Update 2:

Wolfram Alpha seems to be able to find analytic roots for $A\sin(Bt) + C\sin(Dt)$. Does anyone have any idea how it does that? That's not a result I've been able to derive on my own.

  • Just out of curiosity: if this equation describes the motion of a point and a line, is there any way that you could set it up differently, as a system of ODEs, where the collision event is described by the root of an equation? There are event detection algorithms for numerical integrators that work well in such circumstances. (2012-02-06)
  • I'm not all that familiar with ODEs (beyond that course in college :P). The collision event _is_ described by the root of this function, if that helps. I'm pretty familiar with collision detection algorithms in general, but most try to sidestep actually forming and solving the equation of motion, because it often doesn't permit a closed-form solution and you get into hairy numerical issues like the ones I'm trying to solve here :) (2012-02-06)
  • Is there a relation between $\omega_1$ and $\omega_2$? (2012-02-07)
  • $\omega_2 = \omega_3 - \omega_1$. I actually calculate $\omega_2$ with that expression, but it's not all that useful here because $\omega_3$ and $\omega_1$ are entirely independent. So, short answer: no, not really. (2012-02-07)
  • I don't think the current route completely avoids "hairy numerical issues" either; linear combinations of trigonometric functions are particularly challenging for nonlinear equation solvers precisely because they are quite oscillatory, such that a slight change in starting point can have the underlying iterative method converging to vastly varying values. As for an analytic approach, I am always skeptical about closed-form solutions for transcendental equations of appreciable complexity... (2012-02-09)
  • I definitely don't think there's a general analytic solution. The conservative advancement lets me reliably sneak up on a root (it can't suffer from cycling issues, or overshoot one root for another, for instance), even if it's slow. (2012-02-09)
  • Per your update #2: I expect W|A should only be able to solve that when $B$ and $D$ are linearly dependent over $\mathbb{Q}$, for which the solutions are inverse trigonometric functions of algebraics. (2012-02-10)
  • @anon That sounds interesting. Can you point me in the right direction as far as links online, and/or write it up in an answer (I'm guessing a comment wouldn't be big enough)? (2012-02-10)
  • If $B/D\in\mathbb{Q}$ then we may write $D=\frac{p}{q}B$. We can do the variable change $u=\frac{1}{q}Bt$ and then have $$A\sin(qu)+C\sin(pu)=0.$$ Split $\sin$ into complex exponentials and write $v=e^{iu}$; we now have a polynomial in $v$. (A worked sketch of this reduction follows after the comment thread.) (2012-02-10)
  • @anon So I've got something like $Av^q - Av^{-q} + Cv^p - Cv^{-p} = 0$. But I'm not sure how to get from there to an analytic expression for the roots in $v$. Even after factoring out $v^{-q}$ or similar to make all the powers positive, the degree of the polynomial is (potentially) really large, and I don't see anything interesting you can say about $v^{n}$ to help simplify it. (2012-02-10)
  • Polynomials have roots as (more or less analytic) hypergeometric functions of their coefficients. I'm not sure if there's any practical implementation of this; you'd have to search around. (2012-02-10)
  • @anon Okay, we've left the realm of mathematics I'm even remotely familiar with :) Can you point me to a link or a book? From Wikipedia, it sounds like "hypergeometric function" corresponds to a single function, so numerically calculating it is feasible (as it is with trig functions). But I can't find any reference to how it relates to the roots of polynomials. (2012-02-11)
  • See the last paragraph of [this section](http://en.wikipedia.org/wiki/Polynomial#Solving_polynomial_equations) of Wikipedia's *Polynomial* entry. You'll have to reference-chase if you're interested (honestly, I think there's only a handful of references on the planet, and you're unlikely to get your hands on them, or even understand them if you did), but there is virtually no work that I'm aware of putting this to general use in actual computer algorithms. You'll have to settle for approximations if you're looking for numerically purposable methods. (2012-02-11)
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/2460/discussion-between-jay-lemmon-and-anon). (2012-02-11)
  • Jay: I'm aware of the extended-discussions warning, but nobody really bothers with that around here. :) (Though a lot of the time, yes, you can find me in chat.) (2012-02-12)
  • @anon Ah, I wasn't sure if I should or not :P So yeah, I think it's interesting mathematics: basically, extract out the transcendental components and you're left with a polynomial to solve. But solving high-degree polynomials is notoriously ill-conditioned, so I'm not sure it's a tractable approach at the end of the day, all other issues aside. But I appreciate the help here at least :) (2012-02-12)
  • They are ill-conditioned in general, but are they just as ill-conditioned with the special form of polynomial here? (I.e., two distinct nonzero coefficient values, exactly four terms with nonzero coefficients.) (2012-02-12)
  • @anon Ah, that's an interesting idea. I'm not sure, actually. (2012-02-13)
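
Following up on the reduction sketched in the comments above, here's a small numerical sketch of it (the values $A = 1$, $C = 0.8$, $B = 2$, $D = 3$ are hypothetical, chosen so that $B/D = q/p$ with $q = 2$, $p = 3$). Multiplying $A(v^q - v^{-q}) + C(v^p - v^{-p}) = 0$ through by $v^{\max(p,q)}$ gives a four-term polynomial; its roots on the unit circle correspond to real $u$, and then $t = qu/B$:

```python
import numpy as np

# Hypothetical example values: solve A*sin(B*t) + C*sin(D*t) = 0 where
# B/D is rational. Here B/D = 2/3, so q = 2, p = 3 and u = B*t/q.
A, C = 1.0, 0.8
B = 2.0
p, q = 3, 2
m = max(p, q)

# Multiply A*(v^q - v^-q) + C*(v^p - v^-p) = 0 by v^m, with v = exp(i*u).
deg = 2 * m
coeffs = np.zeros(deg + 1, dtype=complex)     # np.roots wants highest degree first
coeffs[deg - (m + q)] += A
coeffs[deg - (m - q)] -= A
coeffs[deg - (m + p)] += C
coeffs[deg - (m - p)] -= C

v = np.roots(coeffs)
v = v[np.isclose(np.abs(v), 1.0, atol=1e-8)]  # |v| = 1 <=> u is real
t = q * np.angle(v) / B                       # roots within one period
print(np.sort(t))
```

Whether a four-term polynomial like this stays well-conditioned as $p + q$ grows is exactly the open question in the last couple of comments.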

0 Answers