The aim is to estimate the error on a stochastic event rate. I read out an event counter once per second; every black $1$ is a counted event (new events over time, see the plot below).

During the measurement I am estimating the event rate, so as more statistics are accumulated, the mean event rate (red) should asymptotically become more accurate.

[Plot dyn_mean1: per-second event counts (black) and running mean event rate (red)]

As one can see, the mean value still oscillates around the true value of $0.5$

[Plot dyn_mean2: running mean event rate over a longer measurement]

even after an order of magnitude more events have been collected.

Practical question: how can one calculate the number of events needed to estimate the mean value to within a given maximum error ($0.5 \pm \sigma$)? The error should fall as $1/\sqrt{N}$.
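My rough sketch of the standard estimate, assuming the per-second readouts are independent Bernoulli trials with success probability $p \approx 0.5$ (an assumption on my part, not something established above): the standard error of the running mean after $N$ readouts is

$$\sigma_{\bar x} = \sqrt{\frac{p(1-p)}{N}} \quad\Longrightarrow\quad N \gtrsim \frac{p(1-p)}{\sigma^2},$$

so, for example, $p = 0.5$ and a target error of $\sigma = 0.01$ would give $N \gtrsim 0.25/10^{-4} = 2500$ readouts. Is this the right way to think about it?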

Theoretical question: can this oscillation be described analytically? Can you suggest further reading?

The events are radiation counts, so they are uncorrelated; can the Poisson distribution be applied here?
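A minimal simulation sketch of what I mean (the Poisson rate of $0.5$, the seed, and all names here are illustrative assumptions, not my actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 0.5           # assumed true event rate (events per second); illustrative only
n_seconds = 100_000  # length of the simulated measurement

# Per-second counts drawn from a Poisson distribution with the assumed rate.
counts = rng.poisson(rate, size=n_seconds)

# Running estimate of the rate after each one-second readout.
n = np.arange(1, n_seconds + 1)
running_mean = np.cumsum(counts) / n

# For Poisson counts the variance equals the rate, so the expected
# 1-sigma error of the running mean is sqrt(rate / N).
expected_sigma = np.sqrt(rate / n)

# Fraction of readouts at which the estimate deviates from the true rate
# by more than one expected sigma (roughly 32% once N is large).
outside = np.mean(np.abs(running_mean - rate) > expected_sigma)
print(f"final estimate: {running_mean[-1]:.4f}")
print(f"fraction of readouts outside 1 sigma: {outside:.2f}")
```

Plotting `running_mean` against `n` reproduces the kind of shrinking oscillation shown in the plots above.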

Addendum: as an idealized first approximation, suppose every 10th readout is non-zero:

[Plot dyn_mean_reg1: running mean for the idealized, strictly periodic case]

Maybe this kind of curve is superimposed on the more realistic example above; are there techniques for decomposing the measured curve into such components?
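For this strictly periodic idealization (assuming literally every 10th readout is a $1$ and the rest are $0$), the running mean after $n$ readouts is

$$\bar x_n = \frac{\lfloor n/10 \rfloor}{n} = \frac{1}{10} + O\!\left(\frac{1}{n}\right),$$

i.e. a sawtooth whose deviation decays like $1/n$, faster than the $1/\sqrt{n}$ decay of the statistical error in the random case.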

  • I think you need to describe your sampling/measurement process more clearly. But it is not typical to get any hard bounds on sample size (number of events) to achieve a *guaranteed* maximal error; some probabilistic statement is usually involved. (2011-08-17)
  • [tag:experimental-mathematics] sorta kinda means something else... :) (2011-08-17)
  • Is it more understandable now? (2011-08-17)

1 Answer