
I currently use a Halton sequence to choose parameter sets for a prognostic model (e.g. using metabolic rate and protein content parameters to predict growth rate).

From my understanding, both a Halton sequence and a Latin hypercube can be used to sample parameter space evenly.

I am reviewing a paper where the author uses a Latin hypercube in the same context that I am using a Halton sequence.

How are these approaches related? Are there conditions under which one would be more appropriate?
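For concreteness, here is a minimal sketch of how the two designs could be generated side by side, assuming SciPy ≥ 1.7 is available (its `scipy.stats.qmc` module implements both); the dimensions, sample sizes, and seed are illustrative choices, not taken from the model described above:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical 2-D parameter space (e.g. metabolic rate x protein content),
# mapped to the unit square. Both samplers spread points evenly, but in
# structurally different ways.
n, d = 64, 2

halton = qmc.Halton(d=d, scramble=False)  # deterministic low-discrepancy sequence
lhs = qmc.LatinHypercube(d=d, seed=0)     # one randomized stratified design

halton_pts = halton.random(n)             # shape (n, d), points in [0, 1)
lhs_pts = lhs.random(n)

# A Halton sequence is extensible: drawing more points refines the same
# sequence, whereas a size-(n + m) Latin hypercube must be rebuilt from scratch.
more_halton = halton.random(16)
```

One structural difference this exposes: the Latin hypercube guarantees exactly one point in each of the `n` equal-width strata along every axis, while the Halton sequence makes no per-axis stratification guarantee but can be extended incrementally.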

  • They're both "low-discrepancy sampling methods", but the two algorithms look different to me; experimentation would probably be needed to see which of LH and Halton (and other sequences like Sobol and Niederreiter) would be best for your application. – 2011-04-25
  • Like J.M. says, it depends. What type of problem are you using them to solve? Numerical integration? Optimization/search? One drawback of Latin hypercube sampling is the inability to perform incremental sampling. If you're analyzing error in terms of the discrepancy of the samples, it makes sense to choose the method with the lowest discrepancy measure (I'm not sure what the discrepancy of a Latin hypercube is, but I have a feeling Halton beats it). – 2011-11-29
  • Another drawback of both techniques arises whenever you want to look at multi-point correlations: neither technique lets the points cluster as much as a truly random selection would. I believe the Halton sequence does better. – 2011-11-29
  • @dls my problem is that I want to minimize the number of samples required to estimate a multivariate likelihood surface. – 2011-11-29
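Following up on the discrepancy comparison suggested in the comments, SciPy's `qmc.discrepancy` (centered-L2 by default) gives a quick, if rough, numerical check; the sample size and seed here are arbitrary choices of mine, so treat this as a sketch rather than a benchmark:

```python
import numpy as np
from scipy.stats import qmc

n, d = 128, 2
rng = np.random.default_rng(0)

halton_pts = qmc.Halton(d=d, scramble=False).random(n)
lhs_pts = qmc.LatinHypercube(d=d, seed=0).random(n)
iid_pts = rng.random((n, d))  # plain Monte Carlo baseline

# Centered-L2 discrepancy: smaller values indicate more uniform
# coverage of the unit hypercube [0, 1)^d.
d_halton = qmc.discrepancy(halton_pts)
d_lhs = qmc.discrepancy(lhs_pts)
d_iid = qmc.discrepancy(iid_pts)
print(d_halton, d_lhs, d_iid)
```

For a low-dimensional problem like this, the Halton sequence typically shows a markedly lower discrepancy than an i.i.d. uniform sample of the same size; how it compares to the Latin hypercube depends on `n`, `d`, and the randomization.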

2 Answers