
I currently use a Halton sequence to choose parameter sets for a prognostic model (e.g. using metabolic rate and protein content parameters to predict growth rate).

From my understanding, both a Halton sequence and a Latin Hypercube can be used to evenly sample parameter space.

I am reviewing a paper where the author uses a Latin hypercube in the same context that I am using a Halton sequence.

How are these approaches related? Are there conditions under which one would be more appropriate?
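For concreteness, both samplers are available in SciPy's `scipy.stats.qmc` module; the sketch below just generates a design with each (the parameter count, sample size, and bounds are illustrative, not taken from my model):

```python
from scipy.stats import qmc

d = 3    # number of model parameters (e.g. metabolic rate, protein content, ...)
n = 128  # number of parameter sets to evaluate

# Halton: a deterministic low-discrepancy sequence (scrambling is optional)
x_halton = qmc.Halton(d=d, scramble=True, seed=0).random(n)

# Latin hypercube: each 1-D margin is stratified into n equal bins
x_lhs = qmc.LatinHypercube(d=d, seed=0).random(n)

# Both return points in [0, 1]^d; rescale to physical parameter ranges
lower, upper = [0.1, 0.0, 1.0], [2.0, 5.0, 10.0]  # hypothetical bounds
params = qmc.scale(x_halton, lower, upper)
```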

  • They're both "low discrepancy sampling methods", but the two algorithms look different to me... experimentation would probably be needed to see which of LH and Halton (and other sequences like Sobol and Niederreiter) would be best for your application. (2011-04-25)
  • Like J.M. says, it depends. What type of problem are you using them to solve? Numerical integration? Optimization/search? One drawback of Latin hypercube is the inability to perform incremental sampling. If you're analyzing error in terms of the discrepancy of the samples, it makes sense to choose the method with the lowest discrepancy measure (not sure what the discrepancy of a Latin hypercube is, but I have a feeling Halton beats it). (2011-11-29)
  • Another drawback of both techniques shows up whenever you want to look at multi-point correlations -- neither technique lets the points cluster as much as a truly random selection would. I believe the Halton sequence does better. (2011-11-29)
  • @dls my problem is that I want to minimize the number of samples required to estimate a multivariate likelihood surface. (2011-11-29)
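Following up on the discrepancy comparison suggested in the comments: SciPy's `qmc.discrepancy` computes the centered L2 discrepancy of a point set, so the two designs can be compared empirically for a given n and d (a sketch for one configuration, not a general proof of which method wins):

```python
from scipy.stats import qmc

n, d = 256, 2
x_halton = qmc.Halton(d=d, scramble=False).random(n)
x_lhs = qmc.LatinHypercube(d=d, seed=1).random(n)

# Centered L2 discrepancy: smaller means more uniform coverage of [0, 1]^d
disc_halton = qmc.discrepancy(x_halton)
disc_lhs = qmc.discrepancy(x_lhs)
print(f"Halton: {disc_halton:.3e}  LHS: {disc_lhs:.3e}")
```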

2 Answers


I am not aware of any theoretical results which allow a comparison to be made (unless you can compare discrepancy measures). I spent a good amount of time digging through the literature in the context of numerical integration, though I'm not an expert. I wasn't looking specifically for an answer to your question, but it was always in the back of my mind. Here are two papers which make experimental comparisons between sampling methods. The first would probably be of most interest to you.

L. P. Swiler, R. Slepoy, A. A. Giunta, Evaluation of sampling methods in constructing response surface approximations, Sandia National Laboratories.

Saliby, E., Pacheco, F., An empirical evaluation of sampling methods in risk analysis simulation: quasi-Monte Carlo, descriptive sampling, and Latin hypercube sampling, Proceedings of the 2002 Winter Simulation Conference.

  • Thank you for helping me find the relevant literature. (2011-12-05)

Both methods were originally created to reduce the variance of Monte Carlo (MC) integration, and low-discrepancy sequences (LDS, like Halton) are usually superior. Both are also popular in design of experiments and global optimization, but the better properties of LDS are mainly proven for multidimensional integration by MC. How many parameters do you have? The standard Halton sequence works well only for roughly d < 10; in higher dimensions the components built from the larger prime bases become strongly correlated, so scrambling or a different sequence is needed. LHS has no such limit on the number of dimensions, but usually gives less accuracy; only in 1D is LHS best. A second advantage of LDS is that the point count can be extended. This is useful if you find the accuracy insufficient and need more points: LDS then allows full re-use of the earlier results.
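That extensibility can be demonstrated directly: an unscrambled Halton engine continued for more points reproduces its earlier points exactly, whereas a larger Latin hypercube is an entirely new design (a sketch using `scipy.stats.qmc`):

```python
import numpy as np
from scipy.stats import qmc

# Draw 64 points, then extend the same deterministic sequence by 64 more
engine = qmc.Halton(d=2, scramble=False)
first = engine.random(64)
more = engine.random(64)

# A fresh 128-point Halton set contains the original 64 points unchanged,
# so model runs on the first batch can be fully re-used
fresh = qmc.Halton(d=2, scramble=False).random(128)
assert np.allclose(np.vstack([first, more]), fresh)
```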