Consider an arbitrary discrete probability distribution with sample space $\Omega$ and let $\omega\subseteq\Omega$ be an event. Let $n$ denote the number of independent trials of an experiment that are performed and let $\operatorname{f}(n)$ be the number of times $\omega$ occurs during those $n$ trials.
It is my understanding that $\operatorname{P}(\omega)=\lim_{n\to\infty}\operatorname{f}(n)/n$. Is $\operatorname{f}$ essentially "pure randomness"? What I mean is, we can't be certain what value $\operatorname{f}$ will yield when evaluated at $n$. I'm used to a function giving me the same number every time I evaluate it at the same input, but that isn't the case here, is it? Does it even make sense for $\operatorname{f}$ to exist, philosophically?
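To make my confusion concrete, here is a quick simulation I ran (a fair coin, so $\operatorname{P}(\omega)=0.5$ is my own illustrative choice; the function name `f` mirrors the $\operatorname{f}$ above). Each call to `f` with the same $n$ gives a *different* answer, yet the ratio still seems to settle near $0.5$:

```python
import random

def f(n, p=0.5):
    """Count occurrences of the event omega in n independent trials.
    Here omega is 'heads' of a fair coin, i.e. a Bernoulli(p) success."""
    return sum(random.random() < p for _ in range(n))

# f(n) is not deterministic: repeated evaluation at the same n varies,
# but f(n)/n appears to approach 0.5 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, f(n) / n)
```

So empirically $\operatorname{f}(n)/n$ stabilizes, even though $\operatorname{f}(n)$ itself is unpredictable, and that is exactly what I am struggling to reconcile with the usual notion of a function and a limit.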
If "pure randomness" determines the value of $\operatorname{f}(n)$, in the sense that we can never be $100\%$ certain about what value it will yield, how do we define "pure randomness"?
Since $\operatorname{f}$ is not an ordinary function like those in calculus, how do we define the convergence in $\operatorname{P}(\omega)=\lim_{n\to\infty}\operatorname{f}(n)/n$? Does the usual $\epsilon$-$\delta$ style definition (or rather the $\epsilon$-$N$ definition for sequences) apply here as well? How rigorous is this definition, generally speaking?
In addition to that, how do we define probability for continuous probability distributions in a more rigorous way?
