
I need to estimate $\pi$ using the following integration:

$\int_{0}^{1} \!\sqrt{1-x^2} \ dx$

using Monte Carlo.

Any help would be greatly appreciated. Please note that I'm a student trying to learn this material, so if you can, please be patient and explain in depth.

  • Perhaps see `http://math.stackexchange.com/questions/1635250/` for more on Monte Carlo integration. (2016-02-01)

4 Answers

7

Generate a sequence $U_1,U_2,\ldots$ of independent uniform$[0,1]$ random variables. Let $Y_i = f(U_i)$, where $f(x)=\sqrt{1-x^2}$, $0 \leq x \leq 1$. Then, for sufficiently large $n$, $ \frac{\sum\nolimits_{i = 1}^n Y_i}{n} \approx \int_0^1 f(x)\,{\rm d}x = \int_0^1 \sqrt{1 - x^2}\,{\rm d}x = \frac{\pi}{4}. $
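For concreteness, here is a minimal sketch of this estimator in Python (my own illustration, not part of the answer; the name `mc_pi` is made up):

```python
import random

def mc_pi(n, seed=0):
    """Estimate pi as 4 times the sample mean of f(U) = sqrt(1 - U^2), U ~ uniform[0,1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()             # U_i ~ uniform[0,1]
        total += (1 - u * u) ** 0.5  # Y_i = f(U_i)
    return 4 * total / n             # Ybar_n estimates pi/4

print(mc_pi(1_000_000))  # should land close to 3.1416
```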

EDIT: Elaborating.

Suppose that $U_1,U_2,\ldots$ is a sequence of independent uniform$[0,1]$ random variables, and $f$ is an integrable function on $[0,1]$, that is, $\int_0^1 {|f(x)|\,{\rm d}x} < \infty$. Then, the (finite) integral $\int_0^1 {f(x)\,{\rm d}x}$ can be approximated as follows. Let $Y_i = f(U_i)$, so the $Y_i$ are independent and identically distributed random variables, with mean (expectation) $\mu$ given by $ \mu = {\rm E}[Y_1] = {\rm E}[f(U_1)] = \int_0^1 {f(x)\,{\rm d}x}. $ By the strong law of large numbers, the average $\bar Y_n = \frac{{\sum\nolimits_{i = 1}^n {Y_i } }}{n}$ converges, with probability $1$, to the expectation $\mu$ as $n \to \infty$. That is, with probability $1$, $\bar Y_n \to \int_0^1 {f(x)\,{\rm d}x}$ as $n \to \infty$.

To get a probabilistic error bound, suppose further that $f$ is square-integrable on $[0,1]$, that is $ \int_0^1 {f^2 (x)\,{\rm d}x} < \infty $. Then, the $Y_i$ have finite variance, $\sigma^2$, given by $ \sigma^2 = {\rm Var}[Y_1] = {\rm E}[Y_1^2] - {\rm E}^2{[Y_1]} = {\rm E}[f^2{(U_1)}] - {\rm E}^2{[f(U_1)]} = \int_0^1 {f^2 (x) \,{\rm d}x} - \bigg[\int_0^1 {f(x)\,{\rm d}x} \bigg]^2 . $ By linearity of expectation, the average $\bar Y_n $ has expectation $ {\rm E}[\bar Y_n] = \mu. $ Since the $Y_i$ are independent, $\bar Y_n $ has variance $ {\rm Var}[\bar Y_n] = {\rm Var}\bigg[\frac{{Y_1 + \cdots + Y_n }}{n}\bigg] = \frac{1}{{n^2 }}{\rm Var}[Y_1 + \cdots + Y_n ] = \frac{n}{{n^2 }}{\rm Var}[Y_1 ] = \frac{{\sigma ^2 }}{n}. $ By Chebyshev's inequality, for any given $\varepsilon > 0$, $ {\rm P}\big[\big|\bar Y_n - {\rm E}[\bar Y_n]\big| \geq \varepsilon \big] \leq \frac{{{\rm Var}[\bar Y_n]}}{{\varepsilon ^2 }}, $ so $ {\rm P}\big[\big|\bar Y_n - \mu \big| \geq \varepsilon \big] \leq \frac{{\sigma^2}}{{n \varepsilon ^2 }}, $ and hence $ {\rm P}\bigg[\bigg|\bar Y_n - \int_0^1 {f(x)\,{\rm d}x} \bigg| \geq \varepsilon \bigg] \leq \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \int_0^1 {f^2 (x) \,{\rm d}x} - \bigg[\int_0^1 {f(x)\,{\rm d}x} \bigg]^2 \bigg \rbrace. $ So if $n$ is sufficiently large, with high probability the absolute difference between $\bar Y_n$ and $\int_0^1 {f(x)\,{\rm d}x}$ will be smaller than $\varepsilon$.

Returning to your specific question, letting $f(x)=\sqrt{1-x^2}$ thus gives $ {\rm P}\Big[\Big|\bar Y_n - \frac{\pi }{4} \Big| \geq \varepsilon \Big] \leq \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \int_0^1 {(1 - x^2) \,{\rm d}x} - \frac{\pi^2 }{16} \bigg \rbrace = \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \frac{2}{3} - \frac{\pi^2 }{16} \bigg \rbrace < \frac{1}{{20n\varepsilon ^2 }}, $ where $\bar Y_n = \frac{{\sum\nolimits_{i = 1}^n {\sqrt {1 - U_i^2 } } }}{n}$.
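As a quick numerical sanity check on this bound (my own addition; `error_bound` is a made-up helper): with $n = 10000$ samples and $\varepsilon = 0.01$, the right-hand side is already about $0.05$.

```python
def error_bound(n, eps):
    """Right-hand side of the bound P[|Ybar_n - pi/4| >= eps] < 1/(20*n*eps**2)."""
    return 1 / (20 * n * eps ** 2)

print(error_bound(10_000, 0.01))  # about 0.05
```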

  • wow.. awesome Shai... thanks! (2011-01-27)
3

Let's also elaborate on Ross Millikan's answer, adapted to the case $f(x)=\sqrt{1-x^2}$, $0 \leq x \leq 1$. Suppose that $(X_1,Y_1),(X_2,Y_2),\ldots$ is a sequence of independent uniform vectors on $[0,1] \times [0,1]$, so that for each $i$, $X_i$ and $Y_i$ are independent uniform$[0,1]$ random variables. Define $Z_i$ as follows: $Z_i = 1$ if $X_i^2 + Y_i^2 \leq 1$, $Z_i = 0$ if $X_i^2 + Y_i^2 > 1$, so the $Z_i$ are independent and identically distributed random variables, with mean $\mu$ given by $ \mu = {\rm E}[Z_1] = {\rm P}[X_1^2 + Y_1^2 \leq 1] = {\rm P}\big[(X_1,Y_1) \in \lbrace (x,y) \in [0,1]^2 : x^2+y^2 \leq 1\rbrace \big] = \frac{\pi }{4}, $ where the last equality follows from ${\rm P}[(X_1,Y_1) \in A] = {\rm area}A$ ($A \subset [0,1]^2$).

By the strong law of large numbers, the average $\bar Z_n = \frac{{\sum\nolimits_{i = 1}^n {Z_i } }}{n}$ converges, with probability $1$, to the expectation $\mu$ as $n \to \infty$. That is, with probability $1$, $\bar Z_n \to \frac{\pi }{4}$ as $n \to \infty$.

To get a probabilistic error bound, note first that the $Z_i$ have variance $\sigma^2$ given by $ \sigma^2 = {\rm Var}[Z_1] = {\rm E}[Z_1^2] - {\rm E}^2{[Z_1]} = \frac{\pi }{4} - \Big(\frac{\pi }{4}\Big)^2 = \frac{\pi }{4} \Big(1 - \frac{\pi }{4}\Big) < \frac{10}{59}. $ The average $\bar Z_n $ has expectation ${\rm E}[\bar Z_n] = \mu$ and variance ${\rm Var}[\bar Z_n] = \sigma^2 / n$; hence, by Chebyshev's inequality, for any given $\varepsilon > 0$, $ {\rm P}\big[\big|\bar Z_n - \mu \big| \geq \varepsilon \big] \leq \frac{{\sigma^2}}{{n \varepsilon ^2 }}, $ and so $ {\rm P}\bigg[\bigg|\bar Z_n - \frac{\pi }{4} \bigg| \geq \varepsilon \bigg] < \frac{{10}}{{59n\varepsilon ^2 }}. $
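A minimal sketch of this hit-or-miss scheme (again my own illustration; `mc_pi_hitmiss` is a made-up name):

```python
import random

def mc_pi_hitmiss(n, seed=0):
    """Estimate pi as 4 times the fraction of uniform points in [0,1]^2 inside the quarter disc."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()  # (X_i, Y_i) uniform on [0,1]^2
        if x * x + y * y <= 1:             # Z_i = 1
            hits += 1
    return 4 * hits / n                    # Zbar_n estimates pi/4

print(mc_pi_hitmiss(1_000_000))  # should land close to 3.1416
```

Note that this estimator has a larger variance than the sample-mean estimator of the first answer ($\pi/4 \, (1-\pi/4) \approx 0.169$ versus $2/3 - \pi^2/16 \approx 0.050$), so for the same $n$ it is typically less accurate.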

  • I'll definitely have to review my math!! (2011-01-28)
2

No longer applicable, as the integral in the question has since been corrected: I don't understand how this integral gets you $\pi$; if you pull the $x$ out, you get $\int_0^1 x\sqrt{10}\,dx=\frac{\sqrt{10}}{2}$.

The general idea of a Monte Carlo integration of $\int_0^1 f(x)\,dx$ is to take random pairs $(x,y)$ with $0\le x\le 1$ and $0\le y \le y_{max}$, where $y_{max}$ is an upper bound for $f$ on $[0,1]$, and check how many of the pairs satisfy $y \le f(x)$. The integral is then the number of pairs under the curve, divided by the number of trials, multiplied by the area of the box ($1 \times y_{max}$).
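The recipe above might look like this in Python (a sketch under the stated assumptions, with made-up names; `y_max` must bound $f$ on $[0,1]$):

```python
import random

def mc_integral(f, y_max, n, seed=0):
    """Hit-or-miss estimate of the integral of f over [0,1], assuming 0 <= f(x) <= y_max."""
    rng = random.Random(seed)
    under = 0
    for _ in range(n):
        x = rng.random()
        y = rng.random() * y_max  # uniform on [0, y_max]
        if y <= f(x):             # pair falls under the curve
            under += 1
    return y_max * under / n      # hit fraction times box area (1 * y_max)

# For f(x) = sqrt(1 - x^2) with y_max = 1, four times the integral estimates pi.
print(4 * mc_integral(lambda x: (1 - x * x) ** 0.5, 1.0, 1_000_000))
```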

  • @Ross It was actually C#; I also posted the source code below in case anyone else needs it. (2011-01-26)
1

C# implementation. Thanks to all who contributed: Ross, Esteban, Shai!

    using System;

    class Program
    {
        static void Main(string[] args)
        {
            const int n = 1000000; // the higher, the more precise
            var rd = new Random();
            double sum = 0;
            for (int i = 0; i < n; i++)
            {
                double x = rd.NextDouble();
                sum += Math.Sqrt(1 - x * x); // f(x) = sqrt(1 - x^2)
            }
            Console.WriteLine("Pi={0}", (sum / n) * 4); // mean estimates pi/4
            Console.Read();
        }
    }
  • Google-searchable: sqrt(1-x^2) dx pi estimate integration algorithm (2011-01-26)