I was introduced to the characterization of probability distributions as follows: given a random variable $X$, we determine the behaviour of its probability distribution and then write e.g. $X\sim\text{Uniform}$ or $X\sim\text{Binomial}(n,p)$.
The exact definitions are rather descriptive: for a random variable $X$ with values in $\mathbb{R}$, we say $X\sim\text{Uniform}(a,b)$ if its density function $f_X$ is given by $$f_X(x)=\frac{1}{b-a}\chi_{\left[a,b\right]}(x).$$ If $X$ is discrete, however, a different definition is given, and only for $X$ taking values in a finite set.
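For concreteness, the discrete counterpart I have in mind is presumably the standard one (the symbol $S$ for the finite range is my own notation):

```latex
% Discrete uniform distribution on a finite set S (standard definition;
% the symbol S is introduced here only for illustration):
% X ~ Uniform(S) if its probability mass function satisfies
\[
  p_X(x) \;=\; \mathbb{P}(X = x) \;=\; \frac{1}{\lvert S\rvert}\,\chi_{S}(x),
  \qquad S \subset \mathbb{R} \ \text{finite}.
\]
```

Note that this is a probability mass function rather than a density, so the two definitions live in formally different settings.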
So, I wondered: the definitions always depend on a random variable $X$ (at least on its range). However, we then write $X\sim \dots$, suggesting an equivalence relation and hence independence from the particular setting. Is there a mathematical object for each important class of distributions against which we can check a given distribution for equivalence?