
From what I understand of Shannon entropy, when dealing with a random variable we want to quantify, essentially, how far from uniformly distributed that random variable is.

In the case of an RV with two possible events (a coin flip, for example), entropy has the nice property that its range is the interval $[0, 1]$.

However, if we deal with more than two events (for example, a die-roll RV with outcomes from $1$ through $6$), then the range of our entropy function becomes $[0, \log_2 6]$, i.e. $[0, -\log_2\big(\frac{1}{6}\big)]$.

Would it make sense (and still be "correct") to adjust the base of the $\log$, in this case to $\log_6$, to match the number of events in the RV so as to keep the range of our Entropy function as the interval $[0, 1]$?

Is this something that's commonly done, or are there reasons not to do it?
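To make the question concrete, here is a small sketch (the `entropy` helper is my own, not from any particular library) comparing the usual base-2 entropy of a fair die with the base-6 version, which lands exactly at $1$:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution in the given log base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_die = [1 / 6] * 6
print(entropy(fair_die))           # log2(6) ≈ 2.585, the top of [0, log2(6)]
print(entropy(fair_die, base=6))   # exactly 1.0, the top of [0, 1]
```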

1 Answer


You shouldn't do that. We want entropy to be an invariant of conjugacy — in your case, conjugacy by measurable maps with measurable inverses that take distributions to distributions. If we started doing what you suggest, we would lose this "universal" nature of entropy.

Of course, you could do that in each specific problem, but really it is only a matter of multiplying by a constant, and you would be changing a universal convention thus confusing everybody. :)
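Since changing the base of the logarithm only multiplies entropy by a constant (by the change-of-base formula $\log_6 x = \log_2 x / \log_2 6$), the rescaling carries no new information. A quick check, again using a hand-rolled `entropy` helper:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution in the given log base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A biased die: base-6 entropy is just the base-2 entropy
# divided by the constant log2(6), for any distribution.
p = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]
h2 = entropy(p, base=2)
h6 = entropy(p, base=6)
print(abs(h6 - h2 / math.log2(6)) < 1e-12)  # True
```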

A second (less important) reason is that the choice of sample space is itself up to us. In your die example, you could just as well consider, say, the particular subset of outcomes where two rolls sum to at most $8$, and accordingly you would need to rescale the entropy yet again.