Converting 16-bit integer to 8-bit integer?

I found a question that asks for a method to copy a 16-bit sign-and-magnitude integer to an 8-bit sign-and-magnitude integer. Is that even possible? Could someone please explain how to do this?
2 Answers
An 8-bit integer whose first bit is reserved for the sign has 7 magnitude bits, so it can encode at most $2^7$ different magnitudes; presumably we would use them for the integers from 0 to 127. More importantly, if we chose, say, 500 different integers, we could not encode all of them in 8 bits and preserve all the information.

With 15 magnitude bits, one can encode up to $2^{15}$ different things.
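To put numbers on this: $2^7 = 128$ magnitudes fit in 8 bits versus $2^{15} = 32768$ in 16 bits, so an 8-bit sign-and-magnitude integer covers only $[-127, 127]$ of the 16-bit range $[-32767, 32767]$; most 16-bit values have no 8-bit counterpart at all.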
Instead of belaboring that, consider the alternative: suppose one could store any 16-bit integer in an 8-bit integer. Take the maximum value storeable in a 16-bit integer and 'condense' it into an 8-bit integer. If we then take that 8-bit integer to be the lower half of a 16-bit integer, the upper half is still free, so we could count strictly higher before hitting the limit - another 'maximum storeable value' of a 16-bit integer. But these two 'maximum storeable values' must be equal, which is a contradiction.
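A minimal C sketch of the information loss (the value 500 is just an illustration, and these machine types are 2's complement rather than sign-and-magnitude, but the counting argument is the same):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int16_t wide = 500;            /* does not fit in 8 bits */

    /* Narrowing keeps only the low 8 bits; for out-of-range signed
     * values this is implementation-defined in standard C, but it
     * wraps modulo 256 on virtually every 2's-complement platform. */
    int8_t narrow = (int8_t)wide;

    int16_t back = narrow;         /* widening cannot restore the value */

    printf("%d -> %d -> %d\n", wide, narrow, back);  /* 500 -> -12 -> -12 */
    return 0;
}
```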
Clearly this is impossible in general, as explained by @mixedmath. However, it's quite reasonable to convert 16-bit values that 'fit' into 8 bits, e.g. 1, -34, 127, etc.
The way to do this may be less than obvious if by 'sign and magnitude' you mean 2's complement (Wikipedia). For example:
- 1 (decimal) = 0001 (16-bit hex) = 01 (8-bit hex)
- 127 = 007F = 7F
- -1 = FFFF = FF
- -128 = FF80 = 80 (This one always surprises me...)
So the conversion method is easy - simply truncate to the low 8 bits, provided the value actually fits in the 8-bit range. Such useful properties of 2's complement are why it is used.
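A sketch in C (the helper name `to_int8` and its range check are my own illustration, not part of the answer): the truncation itself is just a cast, and comparing against `INT8_MIN`/`INT8_MAX` first tells you whether the 16-bit value fits:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Copy a 16-bit 2's-complement value into an 8-bit one,
 * reporting whether the value actually fits. */
static bool to_int8(int16_t in, int8_t *out) {
    if (in < INT8_MIN || in > INT8_MAX)
        return false;       /* would not survive truncation */
    *out = (int8_t)in;      /* value fits, so the cast is exact */
    return true;
}

int main(void) {
    const int16_t samples[] = { 1, 127, -1, -128, 500 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        int8_t v;
        if (to_int8(samples[i], &v))
            printf("%6d -> 0x%02X\n", samples[i], (unsigned)(uint8_t)v);
        else
            printf("%6d does not fit in 8 bits\n", samples[i]);
    }
    return 0;
}
```

This prints the hex values from the list above (01, 7F, FF, 80) and rejects 500. For a genuine sign-and-magnitude format the same idea applies: copy the sign bit and check that the magnitude fits in 7 bits.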