0

Let's say we have a smooth function $f:\mathbb{R}^{1000000} \rightarrow \mathbb{R}$ that we want to minimize using a method from numerical optimization. Which method should we choose? Is the conjugate gradient method the best choice? Which methods work better than others for minimizing high-dimensional problems?

Thank you very much for your time!

  • 2
    The answer is "it depends". The basic (linear) conjugate gradient algorithm only applies when the function $f$ is a quadratic form, and it works best when the problem is sparse. Is that the case here? There are also nonlinear conjugate gradient methods that work with numerical derivatives and may work well for you (see the sketch after these comments). Randomized algorithms like [simulated annealing](http://en.wikipedia.org/wiki/Simulated_annealing) can also work well in high dimensions. Can you give any more information about the problem? 2012-07-21
  • 2
    I would echo @ChrisTaylor's remarks. The form of $f$ matters too. Do you have an explicit derivative? Is $f$ separable in some way, is it convex, etc.? 2012-07-21
  • 0
    I don't have any details on the form of $f$. I was just wondering whether there are preferred algorithms for functions of high dimension; it isn't a question about anything in particular. 2012-07-22
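
To make the nonlinear conjugate gradient suggestion in the comments concrete, here is a minimal sketch using SciPy's `minimize`. The Rosenbrock-style objective is only a stand-in for the unknown $f$, and the dimension is scaled down from $10^6$ so the example runs quickly; L-BFGS-B is included because limited-memory quasi-Newton methods are a common default for large smooth unconstrained problems.

```python
import numpy as np
from scipy.optimize import minimize

n = 1_000  # stand-in dimension; the same code works for much larger n

def f(x):
    # Rosenbrock-style smooth test function (a placeholder for the real objective)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def grad_f(x):
    # Analytic gradient; if omitted, SciPy falls back to finite differences,
    # which costs O(n) extra function evaluations per gradient step.
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return g

x0 = np.zeros(n)

# Nonlinear conjugate gradient (Polak-Ribiere variant in SciPy)
res_cg = minimize(f, x0, jac=grad_f, method='CG', options={'maxiter': 1000})

# Limited-memory BFGS: stores only a few n-vectors instead of an n x n Hessian
res_lbfgs = minimize(f, x0, jac=grad_f, method='L-BFGS-B')

print(res_cg.fun, res_lbfgs.fun)
```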

1 Answer

0

I don't know much about it, but for Support Vector Machine-type problems people use the KKT conditions to pass to the dual problem, which makes the optimization dimension independent. You could read about this and see whether similar ideas apply in your case; see, for example, http://en.wikipedia.org/wiki/Support_vector_machine
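
To illustrate what "dimension independent" means here, below is a small sketch with synthetic data, using scikit-learn's `SVC` as an off-the-shelf dual solver: the dual SVM optimizes over one variable per training sample, so the feature dimension $d$ only enters through the precomputed Gram matrix of dot products.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
m, d = 200, 100_000      # few samples, very high-dimensional features (scaled down from 10^6)
X = rng.standard_normal((m, d))
y = np.sign(X[:, 0])     # synthetic labels depending on the first coordinate only

# The only place d enters is this one-off computation of the m x m Gram matrix.
K = X @ X.T

# The dual QP solved inside fit() has m variables (one per sample), not d.
clf = SVC(kernel='precomputed', C=1.0)
clf.fit(K, y)
print(clf.n_support_)    # number of support vectors per class
```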

  • 0
    Just to be clear, "dimension independent" means that the solution time does not depend on the dimension in some sense, but you still have to compute the dot products or the chosen kernel. 2012-07-21
  • 0
    The asker didn't say anything about constraints, so neither the KKT conditions nor duality apply. 2012-07-22