This ties back to an SO question where someone wanted to improve the performance of an algorithm, and part of the answer was "test the most likely situation first". The question involved checking whether a number contained one or more of a particular subset of base-10 digits, and so the answer involved first testing that each digit wasn't in the subset, because that was the more likely case.
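Roughly, the shape of that check looks like the sketch below; the function name, the choice of Python, and the digit subset are my own illustration, not the actual code from that question.

    def has_special_digit(n, special="379"):
        """Illustrative sketch: does the decimal form of n contain any
        digit from `special`?  The subset '379' is invented for the example."""
        for ch in str(n):
            if ch not in special:   # the likely case, tested first
                continue
            return True             # the rare case: a subset digit was found
        return False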
This got me thinking. Over the infinite set of all natural numbers written in base 10, is any single digit more prevalent than any other? And a corollary: is the answer different for a different number base (such as binary, octal, or hexadecimal)?
The common-sense answer to both questions would be "no". However, a simple exercise with binary numbers demonstrates that, when every number up to a fixed bit width is written out without leading zeros, the digit 1 is more common than the digit 0:
    0 1 10 11                                  // maximum value of 2 bits; there have been 4 1s and 2 0s
    100 101 110 111                            // maximum value of 3 bits; 12 1s and 6 0s have occurred
    1000 1001 1010 1011 1100 1101 1110 1111    // maximum value of 4 bits; 32 1s and 18 0s have occurred
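Those tallies are easy to sanity-check (and to push out to larger widths) with a brute-force count; this is just my own quick sketch, not anything from the original question:

    def bit_counts(max_bits):
        """Count the 1s and 0s in the binary representations (written
        without leading zeros) of every number from 0 up to the
        max_bits-bit maximum."""
        ones = zeros = 0
        for n in range(2 ** max_bits):   # 0 .. 2**max_bits - 1
            bits = bin(n)[2:]            # bin(5) -> '0b101', so strip the '0b'
            ones += bits.count('1')
            zeros += bits.count('0')
        return ones, zeros

    for width in (2, 3, 4):
        print(width, bit_counts(width))  # -> (4, 2), (12, 6), (32, 18)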
I know enough about math to know the common-sense answer is often wrong; for instance, transfinite numbers show that "infinite" sets can have differing cardinalities. There are also pseudo-rules that look like a sure thing until the problem space becomes large; you can already see, just through four bits, that while 1 is more common than 0, the digit 0 gains ground as the bit width increases. So, I thought I'd ask and see if anyone else could prove or disprove it in the general case.
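For what it's worth, here is the same brute-force count taken out to wider numbers so the trend is visible; the 0-to-1 ratio is just my own choice of measure, and this obviously isn't a proof either way:

    # Watch the ratio of 0s to 1s as the bit width grows.  Brute force and
    # purely empirical -- it shows the trend, not the limit.
    for width in range(2, 17):
        ones = zeros = 0
        for n in range(2 ** width):
            bits = bin(n)[2:]            # binary form without leading zeros
            ones += bits.count('1')
            zeros += bits.count('0')
        print(f"{width:2d} bits: {zeros} zeros / {ones} ones = {zeros / ones:.4f}")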