
I know there are already threads that talk about Excess notation - but they don't really clarify anything for me... What is Excess notation exactly (at a machine level)? And why, and how, would we ever use it?

Here is the page from my textbook that is supposed to explain it: Excess System page from Foundations of Computer Science, Forouzan

As I understand it, Excess notation takes N bits available in memory, let's say 4: [1][2][3][4], and then allows you to - magically - store 2^(N-1)-1 bit-units inside those 4 (N) bits? Which makes no sense, because it physically can't happen. So, if N=4: 2^(4-1)-1 = 7.

Which means you can store 7 four-bit units in 4 bits? ([1][2][3][4])×7 → [1][2][3][4]? That's physically impossible, right?

But, putting that aside, you can now take the smallest of these 7 four-bit units, which is "0000", and assign it the decimal value -[2^(N-1)-1], or -7... which makes "0001" = decimal -6, "0010" = decimal -5, and so on - because Excess notation is sequential/uniform. Even though these binary values in no way represent the corresponding decimal values.

So I figured maybe it's a kind of encoding where "7" is a fixed value, but that can't be true either, because you can shift the range: "0000" doesn't have to be "-[2^(N-1)-1]"; you can add or subtract from it to represent whichever number you please, and somehow this will make perfect sense in machine operations as a binary representation? You can go from having "0000", "0001", "0010" represent "-7", "-6", "-5" to having them represent "0", "1", "2" again. Maybe it uses a combination of binary addition/subtraction and two's complement, but then how would the machine remember where relative 0 is?

I am obviously very confused by this, and Googling "Excess notation" only brought up one relevant thread, "math.stackexchange.com/how-to-actually-use-excess-n-representation-in-binaries", which gives examples of the representation but does not explain how it is possible. I also watched a video, "Binary 10: Excess notation", where "1000" is the smallest instead of "0000". That didn't explain how it is possible either. I attempted to read the Wikipedia page on Excess-3, but it is very cluttered and ambiguous as well.

I'd appreciate any help with this, really. Maybe it's something obvious that I'm just not seeing.

My questions are: 1) How can you store 2^(N-1)-1 possible combinations of N bits in a memory space that only has N bits available to begin with? 2) How can you assign a value to a binary sequence without encoding it? And if it is being encoded, how can you just shift it about as you please?

  • $n$ bits of entropy enable you to distinguish between up to $2^n-1$ objects by assigning each of them a unique binary sequence of length $n$. I don't see any other interpretation of what you've said that makes sense. (By the way, an application of this class of ideas: you can simulate rolling dice of a shape you don't have by using dice you do have. For example, to roll a d8, you can roll 2d6, read the pair as a two-digit base-6 number (a uniform result from 0 to 35), try again if you got 32 or above, and otherwise take the remainder after division by 8 as your roll.) (2017-02-23)
  • What do you mean by entropy? Sorry. (2017-02-23)
  • You can replace "entropy" with "information"; it's the same concept, just a bit more specific. (Also, I stupidly forgot that you can use all 0s, so actually $n$ bits let you distinguish between $2^n$ objects.) (2017-02-23)
  • So, it's not a matter of memory only having N bits available; rather, it is that we use N bits, e.g. 8 bits (a byte), to store 8 different permutations? And then my follow-up question: how will the machine make sense of these? Because these do not calculate to their corresponding decimal values, and they do not follow binary arithmetic. (2017-02-23)
  • Let's say I have 4 bits with which to distinguish exponents in my weird homebrew floating-point system. I could use a signed-integer scheme: the simple way is to take one bit for the sign and three bits to distinguish between absolute values. That lets you represent -7,-6,-5,...,5,6,7 (0 gets two representations in this system). (2017-02-23)
  • (Cont.) IEEE-style floating point doesn't do this. Instead, it just shifts everything to start at 0. In the process you get 16 representable exponents (we had 15 before, because 0 was double-counted). To interpret them, it decides more or less arbitrarily to shift down by 7, so that we have -7,-6,...,8 as our possible exponents. (This is built into how the arithmetic routines like + and * work.) We could have shifted by 8 instead, to get -8,-7,...,7 as our possible exponents; this shift is the "bias". (2017-02-23)
  • **But if 0 = "1000", for example, how will the machine interpret it?** With unsigned binary, the interpretation is straightforward: "1000" = 8, and binary arithmetic still works because "1000"-"0001" = "0111", i.e. 8-1 = 7. With signed binary it also still works: you just work with the first N-1 bits from the right, while the Nth bit represents the sign. One's and two's complement can be converted using logic operators, and once they are converted, binary arithmetic works again. But this system is completely arbitrary, so how is it read? (2017-02-23)
  • It's built into the arithmetic operations. You'd have to look into their implementations if you wanted the details, but they will necessarily involve both the exponent and the mantissa. (2017-02-23)
  • That said, the system isn't arbitrary: you can *think* of the operation as "take the binary expansion as an unsigned integer and subtract 1023 from that unsigned integer to get the exponent". But the internal implementation in hardware is likely to be more complicated. (2017-02-23)
  • Thank you for your time and your assistance :) (2017-02-23)
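The biased-exponent idea from the comments above can be checked directly. This is a minimal sketch (the function names are illustrative) that uses Python's `struct` module to read out the 11-bit exponent field of an IEEE-754 double, whose bias is 1023:

```python
# Inspect the biased exponent of an IEEE-754 double (bias 1023).
import struct

def biased_exponent_bits(x: float) -> int:
    """Return the raw 11-bit exponent field of a double."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return (bits >> 52) & 0x7FF  # bits 52..62 are the exponent field

def actual_exponent(x: float) -> int:
    """Subtract the bias (1023) to recover the true exponent."""
    return biased_exponent_bits(x) - 1023

# 8.0 = 1.0 * 2**3, so the stored field is 3 + 1023 = 1026.
print(biased_exponent_bits(8.0))  # 1026
print(actual_exponent(8.0))       # 3
```

The stored field is always an unsigned integer; the machine "remembers where relative 0 is" only because the subtraction of 1023 is baked into the interpretation, exactly as the comments say.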

1 Answer


2^(m-1)-1 is a fixed offset, called the bias, that is used to store and retrieve values from a set of negative-to-positive decimal integers (SET A) while handling them as a set of positive, unsigned binary integers (SET B) with a bit allocation of m. The number of values in the set is 2^m; thus, if m=3 the set has 8 possible values in total. This is true for both SET A and SET B.

If m=3...

SET A: (-3)(-2)(-1)(0)(1)(2)(3)(4) 8 total values.

INDEX: (0)(1)(2)(3)(4)(5)(6)(7) 8 total values.

SET B: ("000")("001")("010")("011")("100")("101")("110")("111") 8 total values.

The values in SET A (the negative-to-positive integers) are defined by (index value) - bias, e.g. index 0 - bias 3 = -3.

The bias number is also the index of the zero value in the negative-positive integer set.

Which means, if we invert the equation, we can find the index via (SET A value)+bias.

SET A ranges from -(2^(m-1)-1) (the lowest limit) to +(2^m)/2 = +2^(m-1) (the highest limit).

Using this method, of translating one set of numbers to another, negative integers can be handled by computers in binary.
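The translation described above can be sketched in a few lines. This is an illustrative sketch, not part of the original answer; the function names and the `m` parameter are my own:

```python
# Excess (biased) representation: signed value <-> unsigned index.
def bias(m: int) -> int:
    """The bias for m bits, as defined above: 2^(m-1) - 1."""
    return 2 ** (m - 1) - 1

def excess_encode(value: int, m: int) -> int:
    """SET A value -> unsigned index: index = value + bias."""
    index = value + bias(m)
    assert 0 <= index < 2 ** m, "value out of range for m bits"
    return index

def excess_decode(index: int, m: int) -> int:
    """Unsigned index -> SET A value: value = index - bias."""
    return index - bias(m)

# With m = 3 (bias = 3): -3 maps to index 0 ("000"), 0 to index 3 ("011").
print(excess_encode(-3, 3), excess_encode(0, 3))  # 0 3
print(format(excess_encode(-1, 3), "03b"))        # 010
print(excess_decode(2, 3))                        # -1
```

Nothing here requires the machine to "know" about negative numbers: only unsigned patterns are stored, and the bias is applied by whatever routine interprets them.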

Example 1:

Suppose we want to store -1 as an unsigned, 3-bit binary integer. As long as the computer can recall or calculate the bias, which is 3 in this case, it can store and find -1 as the binary sequence "010" at index 2, because (-1)+3=2, and the 3-bit binary value for 2 is "010".

If the computer is then told to look up the 3-bit binary value "010", which has value 2, it applies the bias inversely to find the negative-to-positive integer it represents: 2-3=(-1).

Example 2:

Suppose we want to store -5 as an unsigned, 8-bit binary integer.

Bias: (2^(8-1)-1)=127.

Lowest limit: -(2^(8-1)-1) = -127, so 127 negative values.

Highest limit: positive ((2^8)/2)=128, so 128 positive values.

Zero value: 127, so 1 zero value.

Set range: (2^8) = 256 = 127 + 128 + 1.

Apply the bias to -5: -5 + 127 = 122; index 122 corresponds to the 8-bit binary value "01111010" (decimal 122).
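Example 2 can be verified with a short calculation (a sketch using the answer's bias formula; the variable names are my own):

```python
# Verify Example 2: storing -5 in 8 bits with bias 2^(8-1) - 1 = 127.
m = 8
bias = 2 ** (m - 1) - 1              # 127
index = -5 + bias                    # encode: -5 + 127 = 122
print(index, format(index, "08b"))   # 122 01111010
print(index - bias)                  # decode: back to -5
```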