Why don't points in a Cantor set of lower Hausdorff dimension require less storage space (fewer encoding bits) than points in a Cantor set of larger Hausdorff dimension?
If two Cantor sets have different Hausdorff dimensions, then encoding a set's points (in a computer file) should require less storage space for the set with the lower dimension, just as encoding points in the plane requires double the space compared to encoding the same number of points on a line.
So, storage space should depend on the dimension of the set containing the points.
But each point in one Cantor set corresponds to a point in the other set, so they should require the same storage space to encode.
The Cantor set has a tree structure, so encoding a specific point in the tree should not depend on the Hausdorff dimension: a binary string is all that's needed (0 for branching left, 1 for branching right).
The only difference when encoding the same point on different Cantor sets should be writing down the dimension, but that data is written only once per set and has no effect on the storage space of each individual point.
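A minimal sketch of this idea, assuming a self-similar Cantor set where each interval splits into two copies scaled by a contraction `ratio` (the function names and the `ratio` parameter are mine, for illustration): the same bit string addresses a point in either set, so the per-point encoding cost is identical, and only the dimension (determined by `ratio`) differs between the sets.

```python
import math

def cantor_point(bits, ratio):
    """Left endpoint of the interval reached by following `bits`
    (0 = left, 1 = right) in a self-similar Cantor set where each
    interval splits into two copies scaled by `ratio`.
    ratio = 1/3 gives the classic middle-thirds Cantor set."""
    x, length = 0.0, 1.0
    for b in bits:
        if b == '1':                  # right child starts at the far end
            x += (1.0 - ratio) * length
        length *= ratio               # both children are `ratio` times shorter
    return x

def hausdorff_dim(ratio):
    """Similarity dimension log 2 / log(1/ratio) of such a set."""
    return math.log(2) / math.log(1.0 / ratio)

# The same 5-bit path addresses a point in either set:
path = '10110'
p_thirds = cantor_point(path, 1/3)    # set of dimension ~0.631
p_tenths = cantor_point(path, 1/10)   # set of dimension ~0.301
```

Both points cost 5 bits to encode; the `ratio` (and hence the dimension) is needed only once, to decode the whole set.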
*(image: a binary tree with 5 levels of branching and 32 leaves)*
^In this image, storing all 32 points at the base of the tree requires 32 strings of 5 bits each, no matter what the Hausdorff dimension is.
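The count above can be checked directly (a throwaway sketch; the names are illustrative):

```python
from itertools import product

# Every point at the base of a depth-5 binary tree has a 5-bit address.
leaves = [''.join(p) for p in product('01', repeat=5)]
total_bits = sum(len(code) for code in leaves)
# 32 codes of 5 bits each: 160 bits in total, independent of the
# contraction ratio, and hence of the Hausdorff dimension.
```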
If it were an image, I can see that the lower-dimensional set would occupy a smaller percentage of the pixels, but that's because information is lost below the pixel size. A similar case is when the coordinates are stored in integer format: at lower dimensions there are more blank spaces, but locating the points requires higher precision to avoid losing information, so the storage space saved by not encoding blank spaces is compensated by the extra storage needed for precision.
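The precision tradeoff can be estimated with a rough back-of-the-envelope sketch (my own estimate, assuming the self-similar set with contraction `ratio` as before): at depth n the intervals have width ratio**n, so telling them apart by coordinate needs about n·log2(1/ratio) fixed-point bits, which grows as the set gets thinner, while the tree-path encoding always needs exactly n bits.

```python
import math

def coord_bits_needed(depth, ratio):
    """Fixed-point bits needed to distinguish the level-`depth`
    intervals, each of width ratio**depth (a rough estimate)."""
    return math.ceil(depth * math.log2(1.0 / ratio))

# Tree-path encoding: always `depth` bits per point.
# Coordinate encoding: more bits the thinner (lower-dimensional) the set:
#   depth 5, ratio 1/3  -> 8 bits per coordinate
#   depth 5, ratio 1/10 -> 17 bits per coordinate
```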
(I'm a programmer, and not a mathematician, so please, avoid using too much symbolism if possible).