I am using the standard Shannon entropy formula, $H(X) = -\sum_{i} p_i \log p_i$, to calculate the entropy of a system at different states. The system has a different number of possible outcomes at each state; in other words, the alphabet of the discrete random variable has a different size at each state. The maximum entropy at each state is $\log N$, where $N$ is the number of possible outcomes, so for every state I get an entropy value in a different range ($[0, \log N]$). Is it reasonable to (linearly) map the entropy values to a common range (e.g. $[0,1]$) so that I can compute the entropy difference between states, or even the average entropy over all states?
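To make the mapping concrete, here is a minimal Python sketch of what I have in mind; `entropy` and `normalized_entropy` are just names I made up, and it assumes the outcome probabilities have already been estimated:

```python
import numpy as np

def entropy(p, base=2):
    """Shannon entropy H(X) = -sum_i p_i log(p_i) of a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention 0 * log(0) = 0, so zero-probability outcomes drop out
    return -np.sum(p * np.log(p)) / np.log(base)

def normalized_entropy(p, base=2):
    """Entropy divided by its maximum, log(N), so the result lies in [0, 1]."""
    n = len(p)  # N = alphabet size, including outcomes with zero probability
    if n <= 1:
        return 0.0  # a single-outcome alphabet carries no uncertainty
    return entropy(p, base) / (np.log(n) / np.log(base))
```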
To give a simple but illustrative example, let's say that someone can play a number of games each day. On Monday she can play basketball or football, so $N=2$ for Monday, and on Tuesday basketball, tennis, or cricket, so $N=3$ for Tuesday. If you then collect data about the actual outcomes for several weeks and calculate the entropy for each day, 'Monday's entropy' will lie in the range $[0, \log 2]$ and 'Tuesday's entropy' in the range $[0, \log 3]$. What would be the best way to compare these values? Is linearly mapping them to a common range correct?
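Applied to this example, the comparison I want would look something like the sketch below (the observation lists are made-up data, and `normalized_entropy_from_counts` is again a name of my own invention):

```python
from collections import Counter
import numpy as np

def normalized_entropy_from_counts(observations, alphabet_size, base=2):
    """Empirical entropy of the observed outcomes, scaled by log(alphabet_size)."""
    counts = np.array(list(Counter(observations).values()), dtype=float)
    p = counts / counts.sum()                     # empirical outcome probabilities
    h = -np.sum(p * np.log(p)) / np.log(base)     # Shannon entropy of the sample
    h_max = np.log(alphabet_size) / np.log(base)  # maximum entropy, log N
    return h / h_max                              # mapped into [0, 1]

# Hypothetical outcomes recorded over five weeks:
mondays  = ["basketball", "football", "basketball", "basketball", "football"]
tuesdays = ["tennis", "cricket", "basketball", "tennis", "tennis"]

print(normalized_entropy_from_counts(mondays,  alphabet_size=2))  # ~0.97
print(normalized_entropy_from_counts(tuesdays, alphabet_size=3))  # ~0.86
```

Both values now live in $[0,1]$, which is what makes me think they can be subtracted or averaged directly.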