A while back, when the HP-30S came out, there was some discussion about its internal workings. I posted a long item on the newsgroup; you can use Microsoft BASIC to play around with binary representations. Here is what I posted:
I think I can show that the 30S is indeed using binary arithmetic internally.
Fire up one of the old Microsoft GWBASIC versions and use the following little program to
see the behavior of internal binary number representations. Some of the behavior which
would otherwise seem peculiar derives directly from conversions from binary to decimal,
and vice versa.
10 a$ = MKD$(.1)
15 GOSUB 100
20 FOR I = LEN(a$) TO 1 STEP -1
30 PRINT RIGHT$("0" + HEX$(ASC(MID$(a$, I))), 2);
40 NEXT I
50 END
100 GOTO 130
110 MID$(a$, 1) = CHR$(0) + CHR$(0)  ' zero the two low-order mantissa bytes
120 MID$(a$, 3) = CHR$(&HF9)         ' ...F8 plus the carried bit = ...F9: the round-up
130 PRINT
140 PRINT CVD(a$);
150 PRINT " ";
200 RETURN
The Microsoft Basics use binary arithmetic internally, and there are two floating point
precisions available, single precision and double precision. The single precision uses 4
bytes, 1 for the exponent and its sign, and 3 for the mantissa and its sign. There were
some changes when QuickBasic came out, but in GWBASIC, since the mantissa is normalized,
its most significant bit (MSB) need not be stored (in a normalized mantissa it is always
a one; the special case of floating point zero is represented as an all zero exponent and
all zero mantissa). The bit actually stored in the place of the mantissa's MSB is the
sign of the mantissa. Double precision uses 8 bytes, 1 for the exponent and its sign, and
7 for the mantissa and its sign. Thus, single precision means 24 bits of mantissa
precision, and double precision means 56 bits.
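As a cross-check on the layout just described, here is a short Python sketch (mine, not part of the original post) that decodes an 8-byte MKD$ string under the Microsoft Binary Format assumptions above: seven mantissa bytes stored least significant first, a byte-8 exponent biased by 128, and the mantissa's sign bit stored where the implied leading 1 would be.

```python
from fractions import Fraction

def mbf_double_to_fraction(b: bytes) -> Fraction:
    """Decode an 8-byte Microsoft Binary Format double (as produced by
    GWBASIC's MKD$) into an exact Fraction.  Bytes 0-6 hold the mantissa,
    least significant byte first; byte 7 is the exponent, biased by 128."""
    exponent = b[7]
    if exponent == 0:                  # all-zero exponent means the value 0
        return Fraction(0)
    sign = -1 if b[6] & 0x80 else 1    # sign bit sits where the mantissa MSB would be
    mantissa = b[6] | 0x80             # restore the implied leading 1
    for byte in reversed(b[:6]):       # append the remaining bytes, high to low
        mantissa = mantissa * 256 + byte
    # mantissa is now a 56-bit integer; the value is 0.mantissa * 2^(exponent-128)
    return sign * Fraction(mantissa, 2 ** 56) * Fraction(2) ** (exponent - 128)

# The single-precision .1 stored in a double, hex 7D4CCCCD00000000 in the
# program's (reversed) output order, is these bytes in memory order:
single_tenth = bytes([0, 0, 0, 0, 0xCD, 0xCC, 0x4C, 0x7D])
print(float(mbf_double_to_fraction(single_tenth)))   # 0.10000000149011612
```

Feeding it the bytes behind the dump 7D4CCCCD00000000 recovers the familiar single-precision .1, exactly 0.100000001490116119384765625.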
Run the program and see the following result:
.1000000014901161 7D4CCCCD00000000
The first value is the result of putting a single precision value .1 in a double
precision variable. The second item, a hex string, is the internal representation of the
number. Notice that the last 4 bytes are all zeroes, which is what happens when you store
a single precision value in a double precision variable.
The quantity in parentheses in line 10 is .1 as a single precision quantity. Now place
a # after the .1, so line 10 becomes:
10 a$ = MKD$(.1#)
Now we are putting a double precision .1 in the double precision variable. By the way,
there is no explicit double precision variable being used here; the MKD$ function creates
a temporary double precision variable and then converts it to a string which is moved to
the string variable a$ for display.
Run the program and see the following:
.1 7D4CCCCCCCCCCCCD
Now we see the internal double precision (56 bit mantissa) representation of .1. Notice
that .1 converted to binary has a repeating mantissa, CCCCCCCC..., and that the last
nibble has been rounded up.
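The repeating CCCC... is easy to reproduce: normalized, .1 is 0.8 * 2^-3, and spinning out hex digits of 0.8 with exact rational arithmetic (a quick Python aside, not in the original post) yields the same digit forever. GWBASIC rounds at the 56th bit, which is what turns the final ...CC into ...CD.

```python
from fractions import Fraction

# .1 normalized is 0.8 * 2^-3; generate hex digits of the mantissa 0.8
x = Fraction(8, 10)
digits = []
for _ in range(12):
    x *= 16
    digits.append(int(x))   # next hex digit of the expansion
    x -= int(x)             # keep only the fractional part
print(''.join('%X' % d for d in digits))   # CCCCCCCCCCCC
```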
Now put the value Joe Horn mentions in another posting as the value the 30S gets for
SQRT(2), namely 1.41421 35623 71514 85402 13689, in line 10, thus:
10 a$=MKD$(1.4142135623715148540213689#)
Run the program and get the following:
1.414213562371515 813504F333F8FFFF
Notice all the trailing F's. If this binary quantity were rounded up by adding only one
bit in the least significant place, we would have 813504F333F90000. Let's see if we can
coax the program into showing us what the floating point value of that binary (represented
as hex here for space saving) string would be. For this we need lines 110 and 120.
Replace line 100 with: 100 REM
Run the program and see:
1.414213562371515 813504F333F90000
GWBASIC converts the string 813504F333F90000 (in effect a 40 bit mantissa: the mantissa
field holds 56 bits, but the last 16 are all zeroes) into the numeric value 1.414213562371515
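The same demonstration can be made without GWBASIC at all. The mantissa here, B504F333F9, is exactly sqrt(2) truncated to 40 significant bits (the 81 is the biased exponent, and B5 appears as 35 in the dump because its top bit was replaced by the sign). A Python check (mine, not part of the original post):

```python
import math

# floor(sqrt(2) * 2^39) is sqrt(2) truncated to 40 significant bits;
# isqrt(2^79) computes it exactly, since sqrt(2^79) = 2^39 * sqrt(2)
m = math.isqrt(2 ** 79)
assert m == 0xB504F333F9

# The exact decimal expansion of m / 2^39 reproduces Joe Horn's digits
print(m * 10 ** 20 // 2 ** 39)   # 141421356237151485402
```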
If the y to the x function on the 30S is used to calculate 2 to the .5, which is also
equal to SQRT(2), we get 1.41421356237309503445232. At this point, I can't use the
GWBASIC program any longer for the next step, since BASIC only has 56 bits in double
precision. Fortunately, Mathematica comes to the rescue. Converting this floating point
value to hexadecimal, we get: B504F333F9DE640000 (without exponent here).
Plainly, this is a mantissa which has been truncated to 56 bits.
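This, too, can be checked directly: B504F333F9DE64 is exactly sqrt(2) truncated to 56 significant bits, and its exact decimal expansion begins with the digits the 30S returns for 2 to the .5 (again a Python aside, not part of the original post):

```python
import math

# floor(sqrt(2) * 2^55) is sqrt(2) truncated to 56 significant bits,
# since sqrt(2^111) = 2^55 * sqrt(2)
m = math.isqrt(2 ** 111)
assert m == 0xB504F333F9DE64

# First 21 significant digits of the exact value m / 2^55
print(m * 10 ** 20 // 2 ** 55)   # 141421356237309503445
```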
If two 12 digit numbers are multiplied on the 30S, such as 123456789123 * 987654321999,
we get 121932631357450082816877, which is exactly the correct result. Similarly for
division, addition, and subtraction.
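Python's integers are exact at any size, so the claim is easy to confirm (my aside, not in the original post):

```python
# Exact 24-digit product of the two 12-digit factors from the text
product = 123456789123 * 987654321999
print(product)               # 121932631357450082816877
print(len(str(product)))     # 24
```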
Having reached this point, I remembered the properties of the math coprocessors that used
to be available as separate devices before they were incorporated into the Pentium CPUs.
There were 3 levels of precision available: 24 bit single precision, 53 bit double
precision, and 80 bit double extended precision (a 64 bit mantissa). It seemed possible
that the 30S has a modern CPU with a built-in math coprocessor. Those processors provided
math functions such as sqrt, sin, cos, log, exp, etc., as well as the 4 basic arithmetic
functions +, -, *, /. The basic four could be done in all 3 precisions, but as I recall
the higher functions were not carried out to the full extended precision.
So, I thought, let's check some of the other functions. Uh-oh, Ln(2) returns a result
correct to 24 digits, more than is possible with 56 bit arithmetic.
But, Exp(10) gives 22026.465794806716075982, correct to 17 digits, which converts to a
hexadecimal mantissa of 2B053B9F2A0ABF000000 which is a truncated 56 bit value.
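A similar check works here (a Python sketch using the standard decimal module, not part of the original post): reading 2B053B9F2A0ABF as an integer scaled by 2^-39 reproduces the quoted decimal value, and comparing it with a high-precision e^10 shows it sits below the true value by just under one unit in the 56th mantissa bit, i.e. truncated rather than rounded.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50                    # enough digits for exact work below
m = 0x2B053B9F2A0ABF                      # claimed mantissa, trailing zeros dropped
value = Decimal(m) / Decimal(2) ** 39     # exact: only 44 significant digits
e10 = Decimal(10).exp()                   # e^10 to 50 digits for comparison
ulp = Decimal(2) ** -41                   # one unit in the last of the 56 mantissa bits
print(value)
assert str(value).startswith('22026.4657948067160759')
assert 0 < e10 - value < ulp              # below e^10 by less than one ulp: truncated
```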
(By the way, a binary mantissa of N bits is equivalent to a decimal number of
N * Ln(2)/Ln(10) = N/3.32193 digits, so 80 bits give 80/3.32193 = 24.08 digits,
40 bits give 12.04 decimal digits, and 56 bits give 16.86 digits.)
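The conversion can be written out in code (my aside, not in the original post); the 53 and 64 bit entries are the IEEE double and double extended mantissa widths:

```python
import math

def decimal_digits(bits: int) -> float:
    """Decimal digits carried by a binary mantissa of the given width."""
    return bits / math.log2(10)    # log2(10) = Ln(10)/Ln(2) = 3.32193...

for bits in (24, 40, 53, 56, 64, 80):
    print(bits, round(decimal_digits(bits), 2))   # e.g. 80 -> 24.08, 40 -> 12.04
```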
Sin(60) (degrees, that is) gives 19 correct digits. Darn.
So, to summarize:
We can often tell how many bits of binary arithmetic are being used by converting a
decimal result back to binary (hex), and seeing if all the bits past a certain point are
truncated. This shows that some of the error seen in decimal results is directly
attributable to truncation error in the binary result, which when converted to decimal is
not recognizable by simple inspection as truncation any more.
The basic four arithmetic functions seem to use 80 bit arithmetic.
SQRT seems to truncate to 40 bits (12 digits).
The built-in keyboard constant Pi is given to 24 digits.
Other functions don't seem to consistently give a 56 bit truncated result.
Joe Horn is quite right that the calculator will accept 13 digits on input.