2

Suppose you wanted to write the number 100000. If you type it in ASCII, this would take 6 characters (which is 6 bytes). However, if you represent it as unsigned binary, you can write it out using 4 bytes.

(from http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html)

My question: $\lceil \log_2 100{,}000 \rceil = 17$. So that means I need 17 bits to represent 100,000 in binary, which requires only 3 bytes. So why does it say 4 bytes?
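For concreteness, here is one way to check that arithmetic (a small C sketch; the loop and variable names are just for illustration):

```c
#include <stdio.h>

int main(void) {
    unsigned long n = 100000;
    int bits = 0;

    /* Count the bits needed: shift right until nothing is left. */
    for (unsigned long v = n; v != 0; v >>= 1)
        bits++;

    /* Round up to whole bytes. */
    int bytes = (bits + 7) / 8;

    printf("%lu needs %d bits = %d bytes\n", n, bits, bytes);  /* 17 bits = 3 bytes */
    return 0;
}
```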

3 Answers

2

This is more of a computer science/engineering question than a math question.

Look at http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Data/unsigned.html. It asks you to "assume that a typical unsigned int uses 32 bits of memory." Programming languages and processors usually store integers in a fixed, power-of-two number of bytes (1, 2, 4, or 8), not in the minimum number of bytes each particular value would need.
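To see this directly, a minimal C sketch (assuming, as the linked notes do, a platform where `unsigned int` is 32 bits):

```c
#include <stdio.h>

int main(void) {
    unsigned int n = 100000;

    /* The value occupies sizeof(unsigned int) bytes no matter how
       small it is; on a typical 32-bit-int platform this prints 4. */
    printf("sizeof(unsigned int) = %zu bytes\n", sizeof n);
    printf("n = %u\n", n);
    return 0;
}
```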

2

You can, in fact, write it out using three bytes. My current project uses 3-byte integers extensively, to save memory in an embedded system.
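As a sketch of the idea (in C; the packing scheme and the helper names pack24/unpack24 are one possible choice for illustration, not the poster's actual code), a value below $2^{24}$ can be stored in a 3-byte array and reassembled when needed:

```c
#include <stdio.h>
#include <stdint.h>

/* Store the low 24 bits of v in 3 bytes, least significant byte first. */
static void pack24(uint8_t out[3], uint32_t v) {
    out[0] = (uint8_t)(v & 0xFF);
    out[1] = (uint8_t)((v >> 8) & 0xFF);
    out[2] = (uint8_t)((v >> 16) & 0xFF);
}

/* Reassemble the 3 bytes into an ordinary 32-bit integer. */
static uint32_t unpack24(const uint8_t in[3]) {
    return (uint32_t)in[0] | ((uint32_t)in[1] << 8) | ((uint32_t)in[2] << 16);
}

int main(void) {
    uint8_t bytes[3];
    pack24(bytes, 100000);            /* 100000 < 2^24, so it fits */
    printf("%u\n", unpack24(bytes));  /* prints 100000 */
    return 0;
}
```

The trade-off is that the processor still does its arithmetic at its native width; the 3-byte form only saves storage.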

  • 0
    Thanks for the link. I used to work with microprocessors but not microcontrollers, so I usually thought in terms of bigger word sizes. I also understand that some microcontrollers use word sizes that are not multiples of 8 (like some versions of the PIC, which use 12-bit words, if I'm not mistaken). 2012-01-13
0

As Joey tells you, the reason is that numbers are usually stored in the data type "integer", which (almost) always comes in a 32-bit variant. The processor is tailor-made to add/subtract/multiply integers of exactly this size; otherwise you would need on the order of $32 \times 32 \times (\text{number of operations})$ different circuits, one for every combination of operand widths, which is a huge waste of space.
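A small illustration of that fixed width (a C sketch using the exact-width type from <stdint.h>): unsigned arithmetic happens modulo $2^{32}$ regardless of how small the operands are.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The hardware works on the full 32-bit width, so unsigned
       arithmetic wraps around modulo 2^32. */
    uint32_t big = 4294967295u;   /* 2^32 - 1, the largest 32-bit value */
    printf("%u\n", big + 1u);     /* prints 0: the sum wrapped around */
    return 0;
}
```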

  • 0
    Well, if you read the question, you will see that in THIS instance, an int is exactly 32 bits = 4 bytes. No need to make things complicated. 2012-01-15