I'm currently studying basic statistics, and I don't really understand the definition of a random sample in the book I'm reading (Introduction to Probability and Statistics: Principles and Applications for Engineering and the Computing Sciences, 4th Edition).
The book defines random samples as follows:
[...] the term "random sample" is used in three different but closely related ways in applied statistics. It may refer to the objects selected for study, to the random variables associated with the objects to be selected, or to the numerical values assumed by those variables.
A random sample of size $n$ from the distribution of $X$ is a collection of $n$ independent random variables, each with the same distribution as $X$.
[...] The objects selected generate $n$ numbers $x_1,x_2,x_3,\ldots,x_n$ which are the observed values of the random variables $X_1,X_2,X_3,\ldots,X_n$.
At this point, I don't really see the point of thinking of a random sample as a collection of random variables, considering that the data gathered for an experiment is a set of constants.
A statistic is defined by the book as "a random variable whose numerical value can be determined from a random sample". As an example, the sample mean of a random sample $X_1, \ldots, X_n$ (a collection of random variables) is defined as $$ \bar{X} = \frac{X_1 + \cdots + X_n}{n} $$ which, being a function of random variables, is itself a random variable. This is in contrast to $$ \bar{x} = \frac{x_1 + \cdots + x_n}{n} $$ where $x_1, \ldots, x_n$ are constant numbers (e.g. the observed values $\{1, 2, 3\}$), which makes $\bar{x}$ a single number.
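To make the distinction concrete for myself, here is a small Python sketch (the variable names and the normal distribution are just my own illustration, not from the book): first computing the observed value $\bar{x}$ from fixed numbers, then drawing fresh samples repeatedly, which is how I picture the sample mean being treated as a random variable.

```python
import random

# Observed sample: fixed numbers x_1, ..., x_n (constants, not random variables)
x = [1, 2, 3]
x_bar = sum(x) / len(x)  # a single constant number
print(x_bar)  # -> 2.0

# Treating the sample as random variables: each repetition of the
# experiment draws a new sample of size n, so the computed mean varies.
random.seed(0)
n = 3
for _ in range(3):
    sample = [random.gauss(0, 1) for _ in range(n)]
    print(sum(sample) / n)  # a different value on each repetition
```

Each run of the loop plays the role of repeating the whole experiment, so the printed means differ from repetition to repetition even though any single observed mean is a constant.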
Now, since $X_1, \ldots, X_n$ are random variables but each assumes a single, known value ($X_i$ takes the value $x_i$), $\bar{X}$ is also a random variable that assumes a known value, namely $(x_1 + \cdots + x_n)/n$. (Please correct me if I'm wrong here.)
Here's my confusion. I think I've understood the purpose of statistics and estimators: to estimate population parameters using the characteristics of a sample. However, I don't see the reason for thinking of a random sample as a set of random variables, as opposed to the (in my eyes) more "natural" view of it as simply a set of numerical constants, or for thinking of a statistic such as the sample mean, median, or range as a random variable rather than a single, constant numerical value.
This is my first post here, so the question probably has problems with structure, length, and clarity, but what I'm essentially asking for is some justification for thinking of (a) a random sample as a set of random variables as opposed to a set of numbers, and (b) a statistic as a random variable rather than a constant.