You are correct that the so-called frequentist view of probability
is based on the idea of a repeatable experiment. For example, the
idea that a coin has probability 1/2 of showing Heads is interpreted
to mean that, over the long run, the proportion of tosses showing Heads
converges to 1/2. (A formal statement of this is the Law of Large Numbers.)
Notice that in my example both Heads and Tails get repeated many times.
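Here is a quick simulation sketch of that long-run behavior, using only the fair-coin probability 1/2 from the example; the particular toss counts are arbitrary choices for illustration:

```python
import random

random.seed(42)

def proportion_of_heads(num_tosses):
    """Toss a fair coin num_tosses times; return the fraction of Heads."""
    heads = 0
    for _ in range(num_tosses):
        heads += random.random() < 0.5  # True counts as 1 head
    return heads / num_tosses

# The proportion drifts toward 1/2 as the number of tosses grows,
# illustrating the Law of Large Numbers empirically.
for n in (100, 10_000, 1_000_000):
    print(n, proportion_of_heads(n))
```

In any such run, both Heads and Tails occur many times, which is exactly the "repeatable experiment" picture.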
Also, if you measure the heights of a large number of people
from a particular population,
you may conclude that about 95% of them have heights between 60 and 75
inches. You might write this as $P(60 \le X \le 75) = .95,$ where $X$
represents the height of a randomly chosen person. You might measure
heights to the nearest inch or to the nearest tenth of an inch, but
either way you will get a lot of tied heights in a very large sample.
And in a huge population it would hardly matter for practical purposes
if someone happened to get measured twice.
(If you could measure to any desired degree of accuracy, you would never
get exactly the same result twice, but that would hardly be of
practical value in understanding average heights of people or what fraction
of the people are more than six feet tall.)
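The height example can be sketched the same way. The mean and standard deviation below (67.5 and 3.83 inches) are hypothetical values chosen so that 60 and 75 sit roughly 1.96 standard deviations from the mean of a Normal model, putting about 95% of the population in $[60, 75]$:

```python
import random

random.seed(0)

# Hypothetical Normal model of heights (inches); these parameters are
# chosen for illustration, not taken from real population data.
MEAN, SD = 67.5, 3.83

# Estimate P(60 <= X <= 75) as a long-run relative frequency.
heights = [random.gauss(MEAN, SD) for _ in range(100_000)]
fraction = sum(60 <= h <= 75 for h in heights) / len(heights)
print(f"estimated P(60 <= X <= 75) = {fraction:.3f}")
```

The estimate is itself a relative frequency over many repetitions, which is the frequentist reading of $P(60 \le X \le 75) = .95$.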
In practice, of course, we never actually toss a coin an infinite number
of times or measure heights of an infinite number of people. But
imagining we might do an experiment an infinite number of times
makes it easier to deal with some theoretical matters.
If you have a small population, then it is important to decide whether
you are allowed to select a given element of the population more than once.
There are somewhat different probability models depending on whether
sampling is with replacement or without replacement. For many purposes,
the difference isn't important unless the sample size $n$ is more than
10% of the population size $N.$
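The with/without-replacement distinction can be made concrete by comparing the exact sampling distributions: without replacement the number of "successes" in the sample is hypergeometric, with replacement it is binomial. The population size and success count below are hypothetical numbers picked to illustrate the 10% rule of thumb:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    # Without replacement: exactly k successes when drawing n items
    # from a population of N containing K successes.
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def binom_pmf(k, n, p):
    # With replacement: each draw independently succeeds with probability p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

N, K = 100, 40          # hypothetical: population of 100 with 40 successes
p = K / N

# Compare the two models for a sample that is 5% vs 30% of the population.
for n in (5, 30):
    diff = max(abs(hypergeom_pmf(k, N, K, n) - binom_pmf(k, n, p))
               for k in range(n + 1))
    print(f"n = {n}: max difference between the pmfs = {diff:.4f}")
```

For the small sample the two models nearly agree, while for the sample that is 30% of the population the gap is noticeably larger, matching the rule of thumb above.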
I understand that this is a "fuzzy" and intuitive answer, but I hope
it helps for now as you get started studying probability and statistics.