
I have come across an Excel model that estimates the lognormal distribution parameters by both maximum likelihood estimation and least squares estimation. What are the pros and cons of each method?

  • Might be a better question for CrossValidated. Or at least include more detail here. You have some data $x_1\ldots x_n$ and you're assuming they're independent log-normal samples with unknown mean and scale and trying to estimate the parameters? I've never heard of estimating distribution parameters by least squares. Least squares of what? Are we, like, curve fitting to a histogram or something? If it's a regression with log-normal errors or something like that, then that's a different story. (2017-02-08)
  • @spaceisdarkgreen : I agree that stats (dot) stackexchange (dot) com might do better with this than the present location of the question will. Note that "observations in a sample" rather than "samples" is the more standard term. And having "never heard of estimating distribution parameters by least squares" only means you haven't heard a whole lot. (2017-02-08)
  • @MichaelHardy Well I'll cop to not having heard a whole lot. It just wasn't clear to me what squares we'd be minimizing in the case of distribution fitting. The differences between an empirical estimate of the distribution and the true distribution (a least squaresy version of a KS stat, or curve fitting some density estimate?)? Tried googling and couldn't find anything. I realize this isn't my question but would you mind righting my confusion? (2017-02-08)
  • @MichaelHardy Like "regression method" on here? https://en.wikipedia.org/wiki/Probability_distribution_fitting#Techniques_of_fitting (2017-02-08)
  • @spaceisdarkgreen : Suppose $X_1,\ldots,X_n\sim\text{ i.i.d. } N(\mu,\sigma^2)$ and you want to estimate $\mu.$ In that case least squares coincides exactly with maximum likelihood: the value of $\mu$ that minimizes $(X_1-\mu)^2 + \cdots + (X_n-\mu)^2$ is the same as the maximum-likelihood estimate of $\mu$. And the usual use of least squares in the most usual sort of linear regression problems coincides with maximum likelihood when the errors are i.i.d. $N(0,\sigma^2).$ (2017-02-08)
  • $\ldots\,$but I haven't heard of least squares being used with the lognormal distribution. However, that may also be because I don't know much about that. (2017-02-08)
  • "Lostwanderer": I think you should try posting this to stats (dot) stackexchange (dot) com. (2017-02-08)
  • @MichaelHardy gotcha, thanks. I suppose a distribution fit to i.i.d. data is just a regression of $x_i$ against $i$ with an assumption of no conditional dependence on $i$. (2017-02-08)
  • @spaceisdarkgreen : Instead of "against $i$," I would call it "against $1$." (2017-02-08)
  • Thanks everyone. Just to clarify: the Excel model uses least squares estimation to fit the empirical cumulative distribution of a sample... I hope this is clearer... (2017-02-08)
  • @MichaelHardy : I thought least squares coincides with maximum likelihood only for the normal distribution? In the Excel model I noticed that MLE and LSE give entirely different results. (2017-02-08)
  • @lostwanderer : That they coincide for the normal distribution is what I said. As for getting different results from maximum likelihood and least squares, please be explicit about what this "Excel model" is. (2017-02-08)
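A minimal numerical check of the coincidence noted in the comments: for i.i.d. normal observations, the $\mu$ that minimizes $(X_1-\mu)^2+\cdots+(X_n-\mu)^2$ is the sample mean, which is also the maximum-likelihood estimate. This sketch assumes NumPy and SciPy are available; the seed and sample size are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)

# Least squares: numerically minimize the sum of squared deviations in mu.
res = minimize_scalar(lambda m: ((x - m) ** 2).sum())
mu_ls = res.x

# Maximum likelihood for N(mu, sigma^2): closed form, the sample mean.
mu_mle = x.mean()

print(mu_ls, mu_mle)  # the two estimates agree up to optimizer tolerance
```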
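For the lognormal case the thread is asking about, the two procedures can be sketched as follows. This assumes the Excel model's "least squares fit to the empirical cumulative distribution" is the standard probability-plot regression (the "regression method" linked above): sort the data, compute plotting positions $p_i=(i-0.5)/n$, and regress $\log x_{(i)}$ on the normal quantiles $\Phi^{-1}(p_i)$, so the intercept estimates $\mu$ and the slope estimates $\sigma$. Names, seed, and sample size are illustrative, not from the question.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# --- Maximum likelihood: closed form for the lognormal ---
logs = np.log(data)
mu_mle = logs.mean()
sigma_mle = logs.std(ddof=0)

# --- Least squares on the empirical CDF (probability-plot regression) ---
x = np.sort(data)
n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n   # plotting positions for the ECDF
z = norm.ppf(p)                       # theoretical standard-normal quantiles
# Fit log x_(i) = mu + sigma * z_i by ordinary least squares.
sigma_ls, mu_ls = np.polyfit(z, np.log(x), 1)

print(mu_mle, sigma_mle, mu_ls, sigma_ls)
```

On clean simulated data the two fits land close together; they are different estimators, though, and can diverge noticeably on skewed or contaminated samples, which would explain the "entirely different results" reported for the Excel model.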

0 Answers