You're partially right, but it's worth understanding why we talk about 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 as not being a random outcome. You mentioned entropy in passing, and that's at the root of the matter: specifically, the so-called algorithmic entropy (or information content, or Kolmogorov complexity, or... - it goes by a lot of names!) of the sequence.
In short, it's true that your process is equally likely to produce 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 as it is to produce 9, 4, 3, 3, 6, 2, 10, 7, 5, 9 - but there are a lot more sequences that 'look like' the latter than the former. For instance, there are 10! = 3,628,800 sequences that consist of the numbers 1..10 in some order, but only 10 sequences that consist of ten repetitions of a single number, so you're hundreds of thousands of times more likely (362,880 times, to be exact) to get some permutation of 1..10 - that is, to get each number exactly once - than to get one number ten times. But the permutations are still only a small fraction of the $10^{10}$ equally likely sequences - about 0.04% of them - so the odds of hitting any permutation at all are still quite small; and so on for other 'patterned' classes of outcome.
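If you want to check the counting yourself, the arithmetic is quick to reproduce (the numbers 10 sides and 10 rolls come straight from the example above):

```python
from math import factorial

SIDES = 10  # possible values per roll, as in the example
ROLLS = 10  # length of the sequence

total = SIDES ** ROLLS           # 10^10 equally likely sequences
constant = SIDES                 # 'all 1s', 'all 2s', ..., 'all 10s'
permutations = factorial(SIDES)  # sequences using each of 1..10 exactly once

# How many times more likely a permutation is than a constant sequence,
# and what fraction of all sequences the permutations make up.
print(permutations // constant)   # 362880
print(permutations / total)       # ~0.00036, i.e. about 0.04%
```

Each individual sequence has probability $10^{-10}$; the asymmetry comes entirely from how many sequences fall into each class.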
In general, mathematics formalizes this principle by the aforementioned notion of information content: informally, the notion is 'how easy is it to distinguish this sequence from other sequences?' The 'all 5s' sequence is easy to pick out of a crowd, whereas the 9, 4, 3, 3, ... sequence is a lot more complicated - it has essentially no description that's shorter than just giving the sequence. We can formalize the idea by asking for the smallest program that outputs the given sequence; by this metric, it turns out that almost all sequences are random in the formal sense that any program outputting the sequence must be roughly as long as the sequence itself, and so results like 'all 5s' are so rare that they deserve scrutiny even if they're exactly as likely as any other specific sequence.