I'm sure this is a simple problem, but I'm not quite sure if I'm doing this right.
Let me preface this by saying that I am a computer-science person, and I will need a tractable solution that I can program.
In most programming languages there is a function rand()
that returns a real number between 0 and 1, drawn from a uniform distribution.
However, I am trying to create a galaxy of stars, so the stars should be denser toward the center and sparser in the outer reaches of the galaxy.
I also want to set a maximum density toward the center and a minimum density at the outer edge: for example, 5% density at the outer edge and 60% density at the core. In between, the density would follow some scaling of the CDF of the normal distribution (I will take and use the equation from the Wikipedia article).
So here is what I was thinking of doing: take the difference between the core density $c$ and the edge density $e$, and use it to rescale the normal CDF as
$f(x, e, c) = [\Phi(x) \cdot (c-e)] + e$,
where we have the following:
$0 \leq e < c \leq 1$
$0 \leq x \leq 1$
$\displaystyle \Phi(x) = \frac{1}{2}\left[1+\mathrm{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$
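To make the idea concrete, here is a minimal sketch in Python of what I mean. It assumes Python's math.erf for the error function and placeholder values for $\mu$ and $\sigma$ (picking those is exactly what I'm unsure about); the function and variable names are just for illustration:

```python
import math

def normal_cdf(x, mu, sigma):
    """Normal CDF Phi(x), written in terms of the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def density(x, edge_density, core_density, mu, sigma):
    """f(x, e, c) = Phi(x) * (c - e) + e, so the result stays between e and c."""
    return normal_cdf(x, mu, sigma) * (core_density - edge_density) + edge_density

# Example: 5% density at the outer edge, 60% at the core.
# mu and sigma are placeholders; choosing them is what I'm stuck on.
mu, sigma = 0.5, 0.15
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, density(x, edge_density=0.05, core_density=0.60, mu=mu, sigma=sigma))
```

With this formula, $f$ comes out close to $e$ when $x$ is near 0 and close to $c$ when $x$ is near 1, since $\Phi$ is increasing.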
But this is where I get stuck. I'm not sure which values I should use for $\sigma$ and $\mu$. I also know that $\mathrm{erf}$ is the error function, but is there a simple, non-integral way to compute it?
Or is there a better way to do this?