Re: Understanding HP-16C integer division
Message #13 Posted by Dale Reed on 16 Oct 2012, 10:50 p.m.,
in response to message #12 by Marcus von Cube, Germany
Marcus has hit the nail on the head. Neil, look at your own 1's complement and 2's complement bit patterns. In a "simple binary adder", the 1's complement case implies that -0 + 1 = 0 !!! (The bit pattern wraps from 1111 to 0000, kind of like 999 + 1 = (1)000, using a 3-digit decimal adder.) In 2's complement, the "simple binary adder" correctly says that -1 + 1 = 0 (same bit patterns). This behavior of 2's complement addition/subtraction close to zero is what's important, and keeps the hardware simple and fast.
Notice, however, what happens with 2's complement at the other end: 7 + 1 = -8 (in 4 bits). You simply MUST be aware of the word length and representation. I've seen any number of cases of code where the programmer was using an 8-bit signed (2's complement) integer and wrote an expression like:
IF (x = 200) THEN ..... END_IF;
There are two problems here. First, the Editor/Compiler in Question is not smart enough to even warn about this, even though it "knows" the Type of "x". Second, and more important, the Implementation in Question treats "200" as a 32-bit signed integer constant, and it promotes the x operand to a 32-bit signed integer before it performs the operation. x is always promoted by "sign extension" -- taking "bit 7" (the high-order bit of the 8-bit integer) and copying it out to the new bits of higher order, thus maintaining its arithmetic value. So the comparison is comparing x (which is always in the range -128 .. 127) to 200, which will never evaluate true.
The first edition of Kernighan and Ritchie's "The C Programming Language" kind of "punted" (and told the truth about the varied implementations of the time): the only thing you can infer about integer types from the language definition alone (not knowing the details of the implementation) is that a "long int" is "at least as long" as an "int", and an "int" is "at least as long" as a "short int". Ever since, people have debated in standards committees (hence ANSI C) and written (fairly confusing, in some implementations) header files and compiler options to pin down the sizes of these integers.
When the industrial Programmable Controller language standard IEC 61131-3 was written, they cast it in stone: SINT (short signed integer) = 8 bits, INT = 16 bits, DINT (double integer) = 32 bits, and LINT (long integer) = 64 bits. These are all 2's complement signed, to get back on topic....